"The healthcare industry in the United States consumes roughly 20% of the gross national product per year. This huge expenditure not only represents a large portion of the country's collective interests, but also an enormous amount of medical information. Information intensive healthcare enterprises have unique issues related to the collection, disbursement, and integration of various data within the healthcare system.Information Systems and Healthcare Enterprises provides insight on the challenges arising from the adaptation of information systems to the healthcare industry, including development, design, usage, adoption, expansion, and compliance with industry regulations. Highlighting the role of healthcare information systems in fighting healthcare fraud and the role of information technology and vendors, this book will be a highly valued addition to academic, medical, and health science libraries."
"The Supply of ConceptS" achieves a major breakthrough in the general theory of systems. It unfolds a theory of everything that steps beyond Physics' theory of the same name. The author unites all knowledge by including not only the natural but also the philosophical and theological universes of discourse. The general systems model presented here resembles an organizational flow chart that represents conceptual positions within any type of system and shows how the parts are connected hierarchically for communication and control. Analyzing many types of systems in various branches of learned discourse, the model demonstrates how any system type manages to maintain itself true to type. The concepts thus generated form a network that serves as a storehouse for the supply of concepts in learned discourse. Partial to the use of analogies, Irving Silverman presents his thesis in an easy-to-read style, explaining a way of thinking that he has found useful. This book will be of particular interest to the specialist in systems theory, philosophy, linguistics, and the social sciences. Irving Silverman applies his general systems model to 22 system types and presents rationales for these analyses. He provides the reader with a method, and a way to apply that method; a theory of knowledge derived from the method; and a practical outlook based on a comprehensive approach. Chapters include: Minding the Storehouse; Standing Together; The Cognitive Contract; The Ecological Contract; The Social Contract; The Semantic Terrain.
This graduate-level textbook elucidates low-risk and fail-safe systems in mathematical detail. It addresses, in particular, problems where mission-critical performance is paramount, such as in aircraft, missiles, nuclear reactors and weapons, submarines, and many other types of systems where "failure" can result in overwhelming loss of life and property. The book is divided into four parts: Fundamentals, Electronics, Software, and Dangerous Goods. The first part on Fundamentals addresses general concepts of system safety engineering that are applicable to any type of system. The second part, Electronics, addresses the detection and correction of electronic hazards. In particular, the Bent Pin Problem, Sneak Circuit Problem, and related electrical problems are discussed with mathematical precision. The third part on Software addresses predicting software failure rates as well as detecting and correcting deep software logical flaws (called defects). The fourth part on Dangerous Goods presents solutions to three typical industrial chemical problems faced by the system safety engineer during the design, storage, and disposal phases of the dangerous goods life cycle.
This book surveys the recent development of maintenance theory, advanced maintenance techniques with shock and damage models, and their applications in computer systems dealing with efficiency problems. It also equips readers to handle multiple forms of maintenance, informs maintenance policies, and explores comparative methods for several different kinds of maintenance. Further, it discusses shock and damage modelling as an important failure mechanism for reliability systems, and extensively explores the degradation processes, failure modes, and maintenance characteristics of modern, highly complex systems, especially key mechanical systems designed for specific tasks.
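One of the simplest shock and damage models of the kind the book treats can be sketched as a short simulation: shocks arrive at random, each adds a random amount of damage, and the unit fails once cumulative damage crosses a threshold. The sketch below is only illustrative and is not taken from the book; the shock rate, mean damage per shock, and failure threshold are invented values.

```python
# Hedged sketch of a cumulative damage (shock) model: shocks arrive at random,
# each shock adds a random amount of damage, and the unit fails once the total
# damage exceeds a threshold. All parameters are illustrative assumptions.
import random

random.seed(1)

def time_to_failure(shock_rate=0.5, mean_damage=1.0, threshold=10.0):
    """Simulate one lifetime: exponential inter-arrival times between shocks,
    exponentially distributed damage per shock."""
    t, damage = 0.0, 0.0
    while damage <= threshold:
        t += random.expovariate(shock_rate)            # wait for the next shock
        damage += random.expovariate(1.0 / mean_damage)
    return t

lifetimes = [time_to_failure() for _ in range(10_000)]
print(f"Estimated mean time to failure: {sum(lifetimes) / len(lifetimes):.2f}")
```

Maintenance policies of the kind the book compares can then be evaluated against such a model, for example by replacing the unit preventively once damage exceeds a lower threshold or after a fixed number of shocks.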
Applying TQM to systems engineering can reduce costs while simultaneously improving product quality. This guide to proactive systems engineering shows how to develop and optimize a practical approach, while highlighting the pitfalls and potentials involved.
This book covers reliability assessment and prediction of new technologies such as next generation networks that use cloud computing, Network Function Virtualization (NFV), Software Defined Networking (SDN), Next Generation Transport, Evolving Wireless Systems, Digital VoIP Telephony, and reliability testing techniques specific to Next Generation Networks (NGN). The book introduces each technology to the reader first, followed by advanced reliability techniques applicable to both hardware and software reliability analysis. It covers methodologies that can predict reliability from component failure rates up to system-level downtimes. The book's goal is to familiarize the reader with the analytical techniques, tools and methods necessary for analyzing very complex networks built on very different technologies. It lets readers quickly learn the technologies behind currently evolving NGNs and apply advanced Markov modeling and Software Reliability Engineering (SRE) techniques for assessing their operational reliability. The book covers reliability analysis of advanced networks and provides basic mathematical tools, analysis techniques and methodology for reliability and quality assessment; develops Markov and software engineering models to predict reliability; and covers both hardware and software reliability for next generation technologies.
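As a hedged illustration of the kind of Markov availability arithmetic on which such reliability predictions rest (this is not an example from the book; the failure and repair rates below are invented), a two-state up/down Markov model yields a steady-state availability of mu / (lambda + mu):

```python
# Minimal two-state Markov availability sketch (illustrative only).
# States: UP and DOWN; lam = failure rate, mu = repair rate, both per hour.
# The rates below are hypothetical, not values taken from the book.

lam = 1.0 / 8760.0   # assume one failure per year on average
mu = 1.0 / 4.0       # assume a 4-hour mean time to repair

availability = mu / (lam + mu)                 # steady-state availability
annual_downtime_hours = (1.0 - availability) * 8760.0

print(f"Availability: {availability:.6f}")
print(f"Expected downtime: {annual_downtime_hours:.2f} hours per year")
```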
The main objective of pervasive computing systems is to create environments where computers become invisible by being seamlessly integrated and connected into our everyday environment, where such embedded computers can then provide information and exercise intelligent control when needed, but without being obtrusive. Pervasive computing and intelligent multimedia technologies are becoming increasingly important to the modern way of living. However, many of their potential applications have not yet been fully realized. Intelligent multimedia allows dynamic selection, composition and presentation of the most appropriate multimedia content based on user preferences. A variety of applications of pervasive computing and intelligent multimedia are being developed for all walks of personal and business life. Pervasive computing (often synonymously called ubiquitous computing, palpable computing or ambient intelligence) is an emerging field of research that brings in revolutionary paradigms for computing models in the 21st century. Pervasive computing is the trend towards increasingly ubiquitous connected computing devices in the environment, a trend being brought about by a convergence of advanced electronic - and particularly, wireless - technologies and the Internet. Recent advances in pervasive computers, networks, telecommunications and information technology, along with the proliferation of multimedia mobile devices - such as laptops, iPods, personal digital assistants (PDAs) and cellular telephones - have further stimulated the development of intelligent pervasive multimedia applications. These key technologies are creating a multimedia revolution that will have significant impact across a wide spectrum of consumer, business, healthcare and governmental domains.
This book provides a comprehensive presentation of the most advanced research results and technological developments enabling understanding, qualifying and mitigating the soft-error effect in advanced electronics. It covers the fundamental physical mechanisms of radiation-induced soft errors; the various steps that lead to a system failure; the modelling and simulation of soft errors at various levels (including physical, electrical, netlist, event-driven, RTL, and system-level modelling and simulation); hardware fault injection; accelerated radiation testing and natural-environment testing; soft-error-oriented test structures; and process-level, device-level, cell-level, circuit-level, architectural-level, software-level and system-level soft-error mitigation techniques. These most recent advances are presented by academia and industry experts in reliability, fault tolerance, EDA, processor, SoC and system design, and in particular by experts from industries that have faced the soft-error impact in terms of product reliability and related business issues and were at the forefront of the countermeasures taken at multiple levels to mitigate the soft-error effects at a cost acceptable for commercial products. In this fast-moving field, where the impact on ground-level electronics is very recent and its severity is steadily increasing at each new process node, affecting one industry sector after another (as an example, the Automotive Electronics Council has come to publish qualification requirements on soft errors), research, technology developments and industrial practices have evolved very quickly, outdating the most recent books on the subject, published in 2004.
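For orientation, soft-error rates are commonly budgeted in FIT (one failure per 10^9 device-hours), and the contributions of individual structures add up to a system-level rate. The sketch below is a hedged illustration with invented per-component values, not data from the book:

```python
# Hedged illustration of combining soft-error rates expressed in FIT.
# 1 FIT = 1 failure per 1e9 device-hours. The component values are hypothetical.

component_fit = {
    "sram_array": 500.0,
    "flip_flops": 120.0,
    "combinational_logic": 30.0,
}

total_fit = sum(component_fit.values())   # rates add across contributing structures
mtbf_hours = 1e9 / total_fit              # mean time between soft errors, one device
mtbf_years = mtbf_hours / 8760.0

print(f"Total soft-error rate: {total_fit:.0f} FIT")
print(f"Mean time between soft errors: {mtbf_years:.1f} years per device")
```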
The VLSI 2010 Annual Symposium will present extended versions of the best papers presented at the ISVLSI 2010 conference. The areas covered by the papers include, among others: Emerging Trends in VLSI, Nanoelectronics, Molecular, Biological and Quantum Computing, MEMS, VLSI Circuits and Systems, Field-Programmable and Reconfigurable Systems, System-Level Design, System-on-a-Chip Design, Application-Specific Low-Power VLSI System Design, System Issues in Complexity, Low Power, Heat Dissipation, Power Awareness in VLSI Design, Test and Verification, Mixed-Signal Design and Analysis, Electrical/Packaging Co-Design, Physical Design, and Intellectual Property Creation and Sharing.
This book covers the important aspects involved in making cognitive radio devices portable, mobile and green, while also extending their service life. At the same time, it presents a variety of established theories and practices concerning cognitive radio from academia and industry. Cognitive radio can be utilized as a backbone communication medium for wireless devices. To effectively achieve its commercial application, various aspects of quality of service and energy management need to be addressed. The topics covered in the book include energy management and quality of service provisioning at Layer 2 of the protocol stack from the perspectives of medium access control, spectrum selection, and self-coexistence for cognitive radio networks.
Whether you're new to systems analysis or have "been there, done that" and seen it all, but especially if you want to ponder the significance of information systems analysis in the scheme of the universe, this book is for you. The author brings a unique perspective to the problems of computer system analysis.
Healthcare Informatics: Improving Efficiency and Productivity examines the complexities involved in managing resources in our healthcare system and explains how management theory and informatics applications can increase efficiencies in various functional areas of healthcare services. Delving into data and project management and advanced analytics, this book details and provides supporting evidence for the strategic concepts that are critical to achieving successful healthcare information technology (HIT), information management, and electronic health record (EHR) applications. These include the vital importance of involving nursing staff in rollouts, engaging physicians early in any process, and developing an organizational culture more receptive to digital information and systems adoption. "We owe it to ourselves and future generations to do all we can to make our healthcare systems work smarter, be more effective, and reach more people. The power to know is at our fingertips; we need only embrace it." - From the foreword by James H. Goodnight, PhD, CEO, SAS. Bridging the gap from theory to practice, the book discusses actual informatics applications that have been incorporated by various healthcare organizations and the corresponding management strategies that led to their successful employment. Offering a wealth of detail, it describes several working projects, including: a computerized physician order entry (CPOE) system project at a North Carolina hospital; e-commerce self-service patient check-in at a New Jersey hospital; the informatics project that turned a healthcare system's paper-based resources into digital assets; projects at one hospital that helped reduce excess length of stay, improve patient safety, and improve efficiency with an ADE alert system; and a healthcare system's use of algorithms to identify patients at risk for hepatitis. Offering the guidance that healthcare specialists need to make use of various informatics platforms, this book provides the motivation and the proven methods that can be adapted and applied to any number of staff, patient, or regulatory concerns.
'New Technologies in Hospital Information Systems' is launched by the European Telematics Applications Project 'Healthcare Advanced Networked System Architecture' (HANSA) with support of the GMDS WG Hospital Information Systems and the GMDS FA Medical Informatics. It contains 28 high quality papers dealing with architectural concepts, models and developments for hospital information systems. The book has been organized in seven sections: Reference Architectures, Modelling and Applications, The Distributed Healthcare Environment, Intranet Solutions, Object Orientation, Networked Solutions and Standards and Applications. The HANSA project is based upon the European Pre-standard for Healthcare Information System Architecture which has been drawn up by CEN/TC 251 PT01-13. The editors felt that this standard will have a major impact on future developments for hospital information systems. Therefore the standard is completely included as an appendix.
Managing Complexity is the first book that clearly defines the concept of Complexity, explains how Complexity can be measured and tuned, and describes the seven key features of Complex Systems: 1. Connectivity 2. Autonomy 3. Emergence 4. Nonequilibrium 5. Non-linearity 6. Self-organisation 7. Co-evolution. The thesis of the book is that the complexity of the environment in which we work and live offers new opportunities and that the best strategy for surviving and prospering under conditions of complexity is to develop adaptability to perpetually changing conditions. An effective method for designing adaptability into business processes using multi-agent technology is presented and illustrated by several extensive examples, including adaptive, real-time scheduling of taxis, sea-going tankers, road transport, supply chains, railway trains, production processes and swarms of small space satellites. Additional case studies include adaptive servicing of the International Space Station; adaptive processing of design changes of large structures such as wings of the largest airliner in the world; and dynamic data mining, knowledge discovery and distributed semantic processing. Finally, the book provides a foretaste of the next generation of complex issues, notably the Internet of Things, Smart Cities, Digital Enterprises and Smart Logistics.
"Discrete-Time Linear Systems: Theory and Design with Applications "combines system theory and design in order to show the importance of system theory and its role in system design. The book focuses on system theory (including optimal state feedback and optimal state estimation) and system design (with applications to feedback control systems and wireless transceivers, plus system identification and channel estimation).
To solve performance problems in modern computing infrastructures, often comprising thousands of servers running hundreds of applications and spanning multiple tiers, you need tools that go beyond mere reporting. You need tools that enable performance analysis of application workflow across the entire enterprise. That's what PDQ (Pretty Damn Quick) provides. PDQ is an open-source performance analyzer based on the paradigm of queues. Queues are ubiquitous in every computing environment as buffers, and since any application architecture can be represented as a circuit of queueing delays, PDQ is a natural fit for analyzing system performance. Building on the success of the first edition, this considerably expanded second edition now comprises four parts. Part I contains the foundational concepts, as well as a new first chapter that explains the central role of queues in successful performance analysis. Part II provides the basics of queueing theory in a highly intelligible style for the non-mathematician; little more than high-school algebra is required. Part III presents many practical examples of how PDQ can be applied. The PDQ manual has been relegated to an appendix in Part IV, along with solutions to the exercises contained in each chapter. Throughout, the Perl code listings have been newly formatted to improve readability. The PDQ code and updates to the PDQ manual are available from the author's web site at www.perfdynamics.com.
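The queueing idea behind PDQ can be previewed with the textbook open M/M/1 relationships; the sketch below is not the PDQ library API (the book's appendix documents that), only the underlying formulas, with an invented arrival rate and service demand:

```python
# Hedged illustration of the open M/M/1 queue relationships that PDQ builds on.
# This is not the PDQ API; the arrival rate and service demand are invented.

arrival_rate = 0.75     # requests per second (assumption)
service_demand = 1.0    # seconds of service per request (assumption)

utilization = arrival_rate * service_demand            # rho = lambda * S
assert utilization < 1.0, "open queue is unstable once rho >= 1"

residence_time = service_demand / (1.0 - utilization)  # R = S / (1 - rho)
queue_length = utilization / (1.0 - utilization)       # N = rho / (1 - rho)

print(f"Utilization: {utilization:.2f}")
print(f"Residence time: {residence_time:.2f} s, mean queue length: {queue_length:.2f}")
```

In a full PDQ model, each tier of the application would be represented as such a queueing node, and the whole circuit of nodes is solved together.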
The innovative process of open source software is led in great part by its end-users; this aspect of open source software therefore remains significant beyond the realm of traditional software development. Open Source Software Dynamics, Processes, and Applications is a multidisciplinary collection of research and approaches on the applications and processes of open source software. Highlighting the development processes performed by software programmers, the motivations of its participants, and the legal and economic issues that have been raised, this book is essential for scholars, students, and practitioners in the fields of software engineering and management as well as sociology.
This book serves as a practical guide for practicing engineers who need to design embedded systems for high-speed data acquisition and control systems. A minimum amount of theory is presented, along with a review of analog and digital electronics, followed by detailed explanations of essential topics in hardware design and software development. The discussion of hardware focuses on microcontroller design (ARM microcontrollers and FPGAs), techniques of embedded design, high-speed data acquisition (DAQ) and control systems. Coverage of software development includes the main programming techniques, culminating in the study of real-time operating systems. All concepts are introduced in a manner that is highly accessible to practicing engineers and lead to the practical implementation of an embedded board that can be used in various industrial fields as a control system and high-speed data acquisition system.
This book presents cutting-edge research contributions that address various aspects of network design, optimization, implementation, and application of cognitive radio technologies. It demonstrates how to make better utilization of the available spectrum, cognitive radios and spectrum access to achieve effective spectrum sharing between licensed and unlicensed users. The book provides academics and researchers essential information on current developments and future trends in cognitive radios for possible integration with the upcoming 5G networks. In addition, it includes a brief introduction to cognitive radio networks for newcomers to the field.
Computing performance was important in the days when hardware was still expensive, because hardware had to be put to the best use. Later on this criterion was no longer critical, since hardware had become inexpensive. Meanwhile, however, people have realized that performance again plays a significant role, because of the major drain on system resources involved in developing complex applications. This book distinguishes between three levels of performance optimization: the system level, the application level and the business process level. On each level, optimizations can be achieved and cost-cutting potentials can be identified. The book presents the relevant theoretical background and measuring methods as well as proposed solutions. An evaluation of network monitors and checklists rounds out the work.
Systems for Online Transaction Processing (OLTP) and Online Analytical Processing (OLAP) are currently separate. The potential of the latest technologies and changes in operational and analytical applications over the last decade have given rise to the unification of these systems, which can be of benefit for both workloads. Research and industry have reacted and prototypes of hybrid database systems are now appearing. Benchmarks are the standard method for evaluating, comparing and supporting the development of new database systems. Because of the separation of OLTP and OLAP systems, existing benchmarks are only focused on one or the other. With the rise of hybrid database systems, benchmarks to assess these systems will be needed as well. Based on the examination of existing benchmarks, a new benchmark for hybrid database systems is introduced in this book. It is furthermore used to determine the effect of adding OLAP to an OLTP workload and is applied to analyze the impact of typically used optimizations in the historically separate OLTP and OLAP domains in mixed-workload scenarios.
The demand for large-scale dependable systems, such as Air Traffic Management, industrial plants and space systems, is attracting the efforts of many world-leading European companies and SMEs in the area, and is expected to increase in the near future. The adoption of Off-The-Shelf (OTS) items plays a key role in such a scenario. OTS items allow mastering complexity and reducing costs and time-to-market; however, achieving these goals while ensuring dependability requirements at the same time is challenging. The CRITICAL STEP project establishes a strategic collaboration between academic and industrial partners, and proposes a framework to support the development of dependable, OTS-based, critical systems. The book introduces methods and tools adopted by the critical systems industry, and surveys key achievements of the CRITICAL STEP project along four directions: fault injection tools, V&V of critical systems, runtime monitoring and evaluation techniques, and security assessment.