'New Technologies in Hospital Information Systems' is launched by the European Telematics Applications Project 'Healthcare Advanced Networked System Architecture' (HANSA) with the support of the GMDS WG Hospital Information Systems and the GMDS FA Medical Informatics. It contains 28 high-quality papers dealing with architectural concepts, models and developments for hospital information systems. The book is organized in seven sections: Reference Architectures, Modelling and Applications, The Distributed Healthcare Environment, Intranet Solutions, Object Orientation, Networked Solutions, and Standards and Applications. The HANSA project is based upon the European Pre-standard for Healthcare Information System Architecture, which has been drawn up by CEN/TC 251 PT01-13. The editors felt that this standard would have a major impact on future developments in hospital information systems; the standard is therefore included in its entirety as an appendix.
Managing Complexity is the first book that clearly defines the concept of Complexity, explains how Complexity can be measured and tuned, and describes the seven key features of Complex Systems: 1. Connectivity 2. Autonomy 3. Emergence 4. Nonequilibrium 5. Non-linearity 6. Self-organisation 7. Co-evolution. The thesis of the book is that the complexity of the environment in which we work and live offers new opportunities, and that the best strategy for surviving and prospering under conditions of complexity is to develop adaptability to perpetually changing conditions. An effective method for designing adaptability into business processes using multi-agent technology is presented and illustrated by several extensive examples, including adaptive, real-time scheduling of taxis, sea-going tankers, road transport, supply chains, railway trains, production processes and swarms of small space satellites. Additional case studies include adaptive servicing of the International Space Station; adaptive processing of design changes of large structures such as the wings of the largest airliner in the world; and dynamic data mining, knowledge discovery and distributed semantic processing. Finally, the book provides a foretaste of the next generation of complex issues, notably the Internet of Things, Smart Cities, Digital Enterprises and Smart Logistics.
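The multi-agent adaptability idea described above can be hinted at with a toy sketch: agents bid for incoming tasks and the least-loaded agent wins each one. This is a simplified, hypothetical illustration of one common multi-agent allocation pattern (a greedy auction), not the book's actual method; all names here are invented.

```python
# Toy contract-net-style allocation: each task is auctioned and the agent
# with the lowest current load "wins" it. Illustrative only; this is not
# the multi-agent scheduling method presented in the book.

def allocate(tasks, agents):
    """Greedily assign (task, effort) pairs to the least-loaded agent."""
    assignment = {}
    load = {a: 0 for a in agents}          # current workload per agent
    for task, effort in tasks:
        winner = min(agents, key=lambda a: load[a])  # lowest bid = lowest load
        assignment[task] = winner
        load[winner] += effort
    return assignment, load

assignment, load = allocate([("t1", 3), ("t2", 2), ("t3", 1)], ["a", "b"])
print(assignment)  # {'t1': 'a', 't2': 'b', 't3': 'b'}
```

Because each task is re-auctioned as it arrives, the allocation adapts automatically when loads shift, which is the essence of the adaptability argument.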
To solve performance problems in modern computing infrastructures, often comprising thousands of servers running hundreds of applications spanning multiple tiers, you need tools that go beyond mere reporting. You need tools that enable performance analysis of application workflow across the entire enterprise. That's what PDQ (Pretty Damn Quick) provides. PDQ is an open-source performance analyzer based on the paradigm of queues. Queues are ubiquitous in every computing environment as buffers, and since any application architecture can be represented as a circuit of queueing delays, PDQ is a natural fit for analyzing system performance. Building on the success of the first edition, this considerably expanded second edition now comprises four parts. Part I contains the foundational concepts, as well as a new first chapter that explains the central role of queues in successful performance analysis. Part II provides the basics of queueing theory in a highly intelligible style for the non-mathematician, requiring little more than high-school algebra. Part III presents many practical examples of how PDQ can be applied. The PDQ manual has been relegated to an appendix in Part IV, along with solutions to the exercises contained in each chapter. Throughout, the Perl code listings have been newly formatted to improve readability. The PDQ code and updates to the PDQ manual are available from the author's web site at www.perfdynamics.com.
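The queueing paradigm behind PDQ can be illustrated with the textbook M/M/1 formulas. The sketch below is independent of the PDQ library itself (the `mm1_metrics` helper is invented for illustration); it computes the standard open-queue metrics that queueing analyzers build on.

```python
# Minimal M/M/1 open-queue metrics, illustrating the queueing paradigm
# behind performance analyzers like PDQ. Independent sketch, not PDQ itself.

def mm1_metrics(arrival_rate: float, service_time: float) -> dict:
    """Return utilization, mean residence time, and mean number in system
    for an M/M/1 queue (arrivals per second, service time in seconds)."""
    rho = arrival_rate * service_time            # utilization; must be < 1
    if rho >= 1.0:
        raise ValueError("queue is unstable: utilization >= 1")
    residence = service_time / (1.0 - rho)       # mean time in system
    in_system = rho / (1.0 - rho)                # mean number in system
    return {"utilization": rho, "residence_time": residence, "in_system": in_system}

m = mm1_metrics(arrival_rate=0.5, service_time=1.0)
print(m)  # utilization 0.5, residence_time 2.0, in_system 1.0
```

Note the nonlinearity these formulas expose: at 50% utilization the residence time is already double the service time, and it diverges as utilization approaches 1, which is why mere throughput reporting misses looming bottlenecks.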
Modern embedded systems require high performance, low cost and low power consumption. Such systems typically consist of a heterogeneous collection of processors, specialized memory subsystems, and partially programmable or fixed-function components. This heterogeneity, coupled with issues such as hardware/software partitioning, mapping, scheduling, etc., leads to a large number of design possibilities, making performance debugging and validation of such systems a difficult problem. Embedded systems are used to control safety-critical applications such as flight control, automotive electronics and healthcare monitoring. Clearly, developing reliable software/systems for such applications is of utmost importance. This book describes a host of debugging and verification methods which can help to achieve this goal.
"Discrete-Time Linear Systems: Theory and Design with Applications" combines system theory and design in order to show the importance of system theory and its role in system design. The book focuses on system theory (including optimal state feedback and optimal state estimation) and system design (with applications to feedback control systems and wireless transceivers, plus system identification and channel estimation).
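The discrete-time state-space models and state feedback the book treats can be sketched in a few lines: the system x[k+1] = A x[k] + B u[k] is simulated under a feedback law u[k] = -K x[k]. The matrices and gain below are illustrative choices (a double-integrator-like plant with a hand-picked stabilizing gain), not examples taken from the book.

```python
# Simulate x[k+1] = A x[k] + B u[k] with state feedback u[k] = -K x[k].
# Plain-list 2-state example; matrices are illustrative, not from the book.

def simulate(A, B, K, x0, steps):
    """Evolve a 2-state discrete-time linear system under scalar state feedback."""
    x = list(x0)
    trajectory = [tuple(x)]
    for _ in range(steps):
        u = -(K[0] * x[0] + K[1] * x[1])                 # feedback law
        x = [A[0][0] * x[0] + A[0][1] * x[1] + B[0] * u,
             A[1][0] * x[0] + A[1][1] * x[1] + B[1] * u]
        trajectory.append(tuple(x))
    return trajectory

A = [[1.0, 1.0], [0.0, 1.0]]   # double-integrator-like plant
B = [0.0, 1.0]
K = [0.5, 1.0]                 # hand-picked stabilizing gain
traj = simulate(A, B, K, x0=[1.0, 0.0], steps=20)
print(traj[-1])  # state has decayed close to the origin
```

With this gain the closed-loop eigenvalues have magnitude about 0.71, so the state contracts by roughly that factor each step; choosing K optimally (e.g. via the discrete-time LQR the book's "optimal state feedback" refers to) is the design problem the theory addresses.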
This book serves as a practical guide for practicing engineers who need to design embedded systems for high-speed data acquisition and control systems. A minimum amount of theory is presented, along with a review of analog and digital electronics, followed by detailed explanations of essential topics in hardware design and software development. The discussion of hardware focuses on microcontroller design (ARM microcontrollers and FPGAs), techniques of embedded design, high-speed data acquisition (DAQ) and control systems. Coverage of software development includes main programming techniques, culminating in the study of real-time operating systems. All concepts are introduced in a manner that is highly accessible to practicing engineers, leading to the practical implementation of an embedded board that can be used in various industrial fields as a control system and high-speed data acquisition system.
Computing performance was important when hardware was still expensive, because hardware had to be put to the best use. Later this criterion was no longer critical, since hardware had become inexpensive. Meanwhile, however, people have realized that performance again plays a significant role, because of the major drain on system resources involved in developing complex applications. This book distinguishes between three levels of performance optimization: the system level, the application level and the business-process level. On each, optimizations can be achieved and cost-cutting potential can be identified. The book presents the relevant theoretical background and measuring methods as well as proposed solutions. An evaluation of network monitors and checklists rounds out the work.
The innovation process of open source software is driven in great part by end-users; this aspect of open source software therefore remains significant beyond the realm of traditional software development. Open Source Software Dynamics, Processes, and Applications is a multidisciplinary collection of research and approaches on the applications and processes of open source software. Highlighting the development processes performed by software programmers, the motivations of its participants, and the legal and economic issues that have been raised, this book is essential for scholars, students, and practitioners in the fields of software engineering and management as well as sociology.
Systems for Online Transaction Processing (OLTP) and Online Analytical Processing (OLAP) are currently separate. The potential of the latest technologies and changes in operational and analytical applications over the last decade have given rise to the unification of these systems, which can be of benefit for both workloads. Research and industry have reacted and prototypes of hybrid database systems are now appearing. Benchmarks are the standard method for evaluating, comparing and supporting the development of new database systems. Because of the separation of OLTP and OLAP systems, existing benchmarks are only focused on one or the other. With the rise of hybrid database systems, benchmarks to assess these systems will be needed as well. Based on the examination of existing benchmarks, a new benchmark for hybrid database systems is introduced in this book. It is furthermore used to determine the effect of adding OLAP to an OLTP workload and is applied to analyze the impact of typically used optimizations in the historically separate OLTP and OLAP domains in mixed-workload scenarios.
Jack Ganssle has been forming the careers of embedded engineers for 20+ years. He has done this with four books, over 500 articles, a weekly column, and continuous lecturing. Technology moves fast, and since the first edition of this best-selling classic much has changed. The new edition reflects the author's new and ever-evolving philosophy in the face of new technology and realities.
The demand for large-scale dependable systems, such as Air Traffic Management, industrial plants and space systems, is attracting the efforts of many world-leading European companies and SMEs in the area, and is expected to increase in the near future. The adoption of Off-The-Shelf (OTS) items plays a key role in such a scenario. OTS items allow mastering complexity and reducing costs and time-to-market; however, achieving these goals while ensuring dependability requirements at the same time is challenging. The CRITICAL STEP project establishes a strategic collaboration between academic and industrial partners, and proposes a framework to support the development of dependable, OTS-based, critical systems. The book introduces methods and tools adopted by the critical systems industry, and surveys key achievements of the CRITICAL STEP project along four directions: fault injection tools, V&V of critical systems, runtime monitoring and evaluation techniques, and security assessment.
This book presents cutting-edge research contributions that address various aspects of network design, optimization, implementation, and application of cognitive radio technologies. It demonstrates how to make better utilization of the available spectrum, cognitive radios and spectrum access to achieve effective spectrum sharing between licensed and unlicensed users. The book provides academics and researchers essential information on current developments and future trends in cognitive radios for possible integration with the upcoming 5G networks. In addition, it includes a brief introduction to cognitive radio networks for newcomers to the field.
Fundamental Problems in Computing is in honor of Professor Daniel J. Rosenkrantz, a distinguished researcher in Computer Science. Professor Rosenkrantz has made seminal contributions to many subareas of Computer Science including formal languages and compilers, automata theory, algorithms, database systems, very large scale integrated systems, fault-tolerant computing and discrete dynamical systems. For many years, Professor Rosenkrantz served as the Editor-in-Chief of the Journal of the Association for Computing Machinery (JACM), a very prestigious archival journal in Computer Science. His contributions to Computer Science have earned him many awards including the Fellowship from ACM and the ACM SIGMOD Contributions Award.
SystemVerilog is a rich set of extensions to the IEEE 1364-2001 Verilog Hardware Description Language (Verilog HDL). These extensions address two major aspects of HDL-based design: first, modeling very large designs with concise, accurate, and intuitive code; and second, writing high-level test programs to efficiently and effectively verify these large designs. The first edition of this book addressed the first aspect of the SystemVerilog extensions to Verilog. Important modeling features were presented, such as two-state data types, enumerated types, user-defined types, structures, unions, and interfaces. Emphasis was placed on the proper usage of these enhancements for simulation and synthesis.
The creation and consumption of content, especially visual content, is ingrained into our modern world. This book contains a collection of texts centered on the evaluation of image retrieval systems. To enable reproducible evaluation, we must create standardized benchmarks and evaluation methodologies. The individual chapters in this book highlight major issues and challenges in evaluating image retrieval systems and describe various initiatives that provide researchers with the necessary evaluation resources. In particular they describe activities within ImageCLEF, an initiative to evaluate cross-language image retrieval systems which has been running as part of the Cross Language Evaluation Forum (CLEF) since 2003. To this end, the editors collected contributions from a range of people: those involved directly with ImageCLEF, such as the organizers of specific image retrieval or annotation tasks; participants who have developed techniques to tackle the challenges set forth by the organizers; and people from industry and academia involved with image retrieval and evaluation generally. Mostly written for researchers in academia and industry, the book stresses the importance of combining textual and visual information - a multimodal approach - for effective retrieval. It provides the reader with clear ideas about information retrieval and its evaluation in contexts and domains such as healthcare, robot vision, press photography, and the Web.
This book presents intuitive explanations of the principles and applications of power system resiliency, as well as a number of straightforward and practical methods for the impact analysis of risk events on power system operations. It also describes the challenges of modelling distribution networks, optimal scheduling, multi-stage planning, deliberate attacks, cyber-physical systems and SCADA-based smart grids, and how to overcome these challenges. Further, it highlights the resiliency issues using various methods, including strengthening the system against high-impact, low-frequency events and the fast recovery of the system properties. A large number of specialists have collaborated to provide innovative solutions and research in power systems resiliency. They discuss the fundamentals and contemporary materials of power systems resiliency, theoretical and practical issues, as well as current issues and methods for controlling the risk of attacks and other threats to AC power systems. The book includes theoretical research, significant results, case studies, and practical implementation processes to offer insights into electric power engineering and energy systems. Showing how systems should respond in case of malicious attacks, and helping readers to decide on the best approaches, this book is essential reading for electrical engineers, researchers and specialists. The book is also useful as a reference for undergraduate and graduate students studying the resiliency and reliability of power systems.
With the rise of mobile and wireless technologies, more sustainable networks are necessary to support communication. These next-generation networks can now be utilized to extend the growing era of the Internet of Things. Enabling Technologies and Architectures for Next-Generation Networking Capabilities is an essential reference source that explores the latest research and trends in large-scale 5G technologies deployment, software-defined networking, and other emerging network technologies. Featuring research on topics such as data management, heterogeneous networks, and spectrum sensing, this book is ideally designed for computer engineers, technology developers, network administrators and researchers, professionals, and graduate-level students seeking coverage on current and future network technologies.
The 2008 TUB-SJTU joint workshop on Autonomous Systems Self-Organization, Management, and Control was held on October 6, 2008 at Shanghai Jiao Tong University, Shanghai, China. The workshop, sponsored by Shanghai Jiao Tong University and the Technical University of Berlin, brought together scientists and researchers from both universities to present and discuss the latest progress on autonomous systems and their applications in diverse areas. Autonomous systems are designed to integrate machines, computing, sensing, and software to create intelligent systems capable of interacting with the complexities of the real world. Autonomous systems represent the physical embodiment of machine intelligence. Topics of interest include, but are not limited to: theory and modeling for autonomous systems; organization of autonomous systems; learning and perception; complex systems; multi-agent systems; robotics and control; and applications of autonomous systems.
This book presents the technical program of the International Embedded Systems Symposium (IESS) 2009. Timely topics, techniques and trends in embedded system design are covered by the chapters in this volume, including modelling, simulation, verification, test, scheduling, platforms and processors. Particular emphasis is placed on automotive systems and wireless sensor networks. Sets of actual case studies in the area of embedded system design are also included. Over recent years, embedded systems have gained an enormous amount of processing power and functionality and now enter numerous application areas, due to the fact that many of the formerly external components can now be integrated into a single System-on-Chip. This tendency has resulted in a dramatic reduction in the size and cost of embedded systems. As a unique technology, the design of embedded systems is an essential element of many innovations. Embedded systems meet their performance goals, including real-time constraints, through a combination of special-purpose hardware and software components tailored to the system requirements. Both the development of new features and the reuse of existing intellectual property components are essential to keeping up with ever more demanding customer requirements. Furthermore, design complexities are steadily growing with an increasing number of components that have to cooperate properly. Embedded system designers have to cope with multiple goals and constraints simultaneously, including timing, power, reliability, dependability, maintenance, packaging and, last but not least, price.
This book aims to deconstruct ethnography to alert systems designers, and other stakeholders, to the issues presented by new approaches that move beyond the studies of 'work' and 'work practice' within the social sciences (in particular anthropology and sociology). The theoretical and methodological apparatus of the social sciences distort the social and cultural world as lived in and understood by ordinary members, whose common-sense understandings shape the actual milieu into which systems are placed and used. In Deconstructing Ethnography the authors show how 'new' calls are returning systems design to 'old' and problematic ways of understanding the social. They argue that systems design can be appropriately grounded in the social through the ordinary methods that members use to order their actions and interactions. This work is written for post-graduate students and researchers alike, as well as design practitioners who have an interest in bringing the social to bear on design in a systematic rather than a piecemeal way. This is not a 'how to' book, but instead elaborates the foundations upon which the social can be systematically built into the design of ubiquitous and interactive systems.
Requirements Management has proven to hold enormous potential for the optimization of development projects over the last few years. Especially in the climate of an increasingly competitive market, Requirements Management helps in carrying out developments faster, cheaper and with higher quality. This book focuses on the interfaces of Requirements Management to the other disciplines of Systems Engineering, for example Project Management, Change Management, and Configuration and Version Management. To this end, an introduction to Requirements Management and Requirements Development is given, along with a short sketch of Systems Engineering, and the necessary inputs and resulting outputs of Requirements Management are explained. Using these flows of information, it is shown how Requirements Management can support and optimize the other project disciplines, and how important a functioning Requirements Management process therefore is for all areas of development.
The most significant articles from each of the fields represented at the 1992 conference on Work with Display Units are presented in this volume.
This book is dedicated to Prof. Dr. Heinz Gerhäuser on the occasion of his retirement both from the position of Executive Director of the Fraunhofer Institute for Integrated Circuits IIS and from the Endowed Chair of Information Technologies with a Focus on Communication Electronics (LIKE) at the Friedrich-Alexander-Universität Erlangen-Nürnberg. Heinz Gerhäuser's vision and entrepreneurial spirit have made the Fraunhofer IIS one of the most successful and renowned German research institutions. He has been Director of the Fraunhofer IIS since 1993, and under his leadership it has grown to become the largest of Germany's 60 Fraunhofer Institutes, a position it retains to this day, currently employing over 730 staff. Likely his most important scientific as well as application-related contribution was his pivotal role in the development of the mp3 format, which would later become a worldwide success. The contributions to this Festschrift were written by both Fraunhofer IIS staff and external project team members in appreciation of Prof. Dr. Gerhäuser's lifetime academic achievements and his inspiring leadership at the Fraunhofer IIS. The papers reflect the broad spectrum of the institute's research activities and are grouped into sections on circuits, information systems, visual computing, and audio and multimedia. They provide academic and industrial researchers in fields like signal processing, sensor networks, microelectronics, and integrated circuits with an up-to-date overview of research results that have a huge potential for cutting-edge industrial applications.
For courses in structured systems analysis and design. Prioritising the practical over the technical, Modern Systems Analysis and Design presents the concepts, skills, methodologies, techniques, tools, and perspectives essential for systems analysts to develop information systems. The authors assume students have taken an introductory course on computer systems and have experience designing programs in at least one programming language. By drawing on the systems development life cycle, the authors provide a conceptual and systematic framework while progressing through topics logically. The 9th edition has been completely revised to adapt to the changing environment for systems development, with a renewed focus on agile methodologies.