The volume contains the latest research on software reliability assessment, testing, quality management, inventory management, mathematical modeling, analysis using soft computing techniques, and management analytics. It links researcher and practitioner perspectives from different branches of engineering and management, and from around the world, to give a bird's-eye view of the topics. The interdisciplinarity of engineering and management research is widely recognized and is especially significant given the fast-changing dynamics of today's times. With insights from the volume, companies looking to drive decision making gain actionable insight at every level and for every role, using key indicators to generate mobile-enabled scorecards, time-series analyses using charts, and dashboards. At the same time, the book provides scholars with a platform to derive maximum utility in the area by subscribing to the idea of managing business through performance and business analytics.
From the Foreword: "Getting CPS dependability right is essential to forming a solid foundation for a world that increasingly depends on such systems. This book represents the cutting edge of what we know about rigorous ways to ensure that our CPS designs are trustworthy. I recommend it to anyone who wants to get a deep look at these concepts that will form a cornerstone for future CPS designs." --Phil Koopman, Carnegie Mellon University, Pittsburgh, Pennsylvania, USA. Trustworthy Cyber-Physical Systems Engineering provides practitioners and researchers with a comprehensive introduction to the area of trustworthy Cyber-Physical Systems (CPS) engineering. Topics in this book address questions such as: What does having a trustworthy CPS actually mean for something as pervasive as a global-scale CPS? How does CPS trustworthiness map onto existing knowledge, and where do we need to know more? How can we mathematically prove timeliness, correctness, and other essential properties for systems that may be adaptive and even self-healing? How can we better represent the physical reality underlying real-world numeric quantities in the computing system? How can we establish, reason about, and ensure trust between CPS components that are designed, installed, maintained, and operated by different organizations, and which may never have been intended to work together? Featuring contributions from leading international experts, the book contains sixteen self-contained chapters that analyze the challenges in developing trustworthy CPS and identify important issues in developing engineering methods for CPS. The book addresses various issues contributing to trustworthiness, complemented by contributions on TCPS roadmapping, taxonomy, and standardization, as well as experience in deploying advanced systems engineering methods in industry.
Specific approaches to ensuring trustworthiness, namely, proof and refinement, are covered, as well as engineering methods for dealing with hybrid aspects.
Collected together in this book are ten state-of-the-art expository articles on the most important topics in optimization, written by leading experts in the field. The book therefore provides a primary reference for those performing research in some area of optimization, or for those who have some basic knowledge of optimization techniques but wish to learn the most up-to-date and efficient algorithms for particular classes of problems. The first sections of each chapter are expository and therefore accessible to master's-level graduate students. However, the chapters also contain advanced material on current topics of interest to researchers. For instance, there are chapters that describe the polynomial-time linear programming algorithms of Khachiyan and Karmarkar, and the techniques used to solve combinatorial and integer programming problems an order of magnitude larger than was possible just a few years ago. Overall, a comprehensive yet lively and up-to-date discussion of the state of the art in optimization is presented in this book.
System-on-chip designs have evolved from fairly simple single-core, single-memory designs to complex heterogeneous multicore SoC architectures consisting of a large number of IP blocks on the same silicon. To meet the high computational demands posed by the latest consumer electronic devices, most current systems are based on this paradigm, which represents a real revolution in many aspects of computing. The attraction of multicore processing for power reduction is compelling. By splitting a set of tasks among multiple processor cores, the operating frequency necessary for each core can be reduced, allowing the voltage on each core to be reduced as well. Because dynamic power is proportional to the frequency and to the square of the voltage, the gain is substantial, even though more cores may be running. As more and more cores are integrated into these designs to share the ever-increasing processing load, the main challenges lie in an efficient memory hierarchy, a scalable system interconnect, new programming paradigms, and an efficient integration methodology for connecting such heterogeneous cores into a single system capable of leveraging their individual flexibility. Current design methods tend toward mixed HW/SW co-designs targeting multicore systems-on-chip for specific applications. To decide on the lowest-cost mix of cores, designers must iteratively map the device's functionality to a particular HW/SW partition and target architectures. In addition, to connect the heterogeneous cores, the architecture requires high-performance, complex communication architectures and efficient communication protocols, such as a hierarchical bus, point-to-point connections, or a Network-on-Chip. Software development also becomes far more complex due to the difficulties in breaking a single processing task into multiple parts that can be processed separately and then reassembled later.
This reflects the fact that certain processor jobs cannot be easily parallelized to run concurrently on multiple processing cores and that load balancing between processing cores - especially heterogeneous cores - is very difficult.
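The dynamic-power argument above can be made concrete with a short sketch. All the constants below (switched capacitance, baseline frequency and voltage) are illustrative assumptions, as is the simplifying assumption that supply voltage scales linearly with frequency:

```python
def dynamic_power(c, freq, volt):
    """Dynamic power of one core: P = C * f * V^2."""
    return c * freq * volt ** 2

C = 1e-9   # effective switched capacitance in farads (assumed)
F = 2e9    # single-core operating frequency in Hz (assumed)
V = 1.2    # single-core supply voltage in volts (assumed)

single_core = dynamic_power(C, F, V)

# Split the same workload across 4 cores: each core needs 1/4 the
# frequency and (assuming voltage scales linearly with frequency)
# 1/4 the voltage as well.
n = 4
multi_core = n * dynamic_power(C, F / n, V / n)

print(f"single core: {single_core:.3f} W")
print(f"{n} cores:    {multi_core:.3f} W")
print(f"ratio:       {single_core / multi_core:.0f}x")
```

Under these idealized assumptions the total power drops by a factor of n squared (16x for four cores); real designs see smaller gains because voltage cannot scale all the way down with frequency.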
Control system design is a challenging task for practicing engineers. It requires knowledge of different engineering fields, a good understanding of technical specifications, and good communication skills. The current book introduces the reader to practical control system design, bridging the gap between theory and practice. The control design techniques presented in the book are all model based, considering the needs and possibilities of practicing engineers. Classical control design techniques are reviewed, and methods are presented for verifying the robustness of the design. It is shown how the designed control algorithm can be implemented in real-time and tested, fulfilling different safety requirements. Good design practices and a systematic software development process are emphasized in the book, in accordance with the generic standard IEC 61508. The book is mainly addressed to practicing control and embedded software engineers - working in research and development - as well as graduate students who are faced with the challenge of designing control systems and implementing them in real-time.
This book presents an examination of the middleware that can be used to configure and operate heterogeneous node platforms and sensor networks. The middleware requirements for a range of application scenarios are compared and analysed. The text then defines a middleware architecture that has been integrated in an approach demonstrated live in a refinery. Features: presents a thorough introduction to the major concepts behind wireless sensor networks (WSNs); reviews the various application scenarios and existing middleware solutions for WSNs; discusses the middleware mechanisms necessary for heterogeneous WSNs; provides a detailed examination of a platform-agnostic middleware architecture, including important implementation details; investigates the programming paradigms for WSNs, and for heterogeneous sensor networks in general; describes the results of extensive experimentation and testing, demonstrating that the generic architecture is viable for implementation on multiple platforms.
This book bridges fundamental gaps between control theory and formal methods. Although it focuses on discrete-time linear and piecewise affine systems, it also provides general frameworks for abstraction, analysis, and control of more general models. The book is self-contained, and while some mathematical knowledge is necessary, readers are not expected to have a background in formal methods or control theory. It rigorously defines concepts from formal methods, such as transition systems, temporal logics, model checking and synthesis. It then links these to the infinite state dynamical systems through abstractions that are intuitive and only require basic convex-analysis and control-theory terminology, which is provided in the appendix. Several examples and illustrations help readers understand and visualize the concepts introduced throughout the book.
The variety and abundance of qualitative characteristics of agricultural products have been the main reasons for the development of different types of non-destructive testing (NDT) methods. Quality control of these products is one of the most important tasks in manufacturing processes. The use of control and automation has become more widespread, and new approaches provide opportunities for competition in production through new technologies. Applications of Image Processing and Soft Computing Systems in Agriculture examines applications of artificial intelligence in agriculture and the main uses of shape analysis on agricultural products, such as relationships between form and genetics, adaptation, product characteristics, and product sorting. Additionally, it provides insights developed through computer vision techniques. Highlighting such topics as deep learning, agribusiness, and augmented reality, it is designed for academicians, researchers, agricultural practitioners, and industry professionals.
Information Systems Development: Business Systems and Services: Modeling and Development is the collected proceedings of the 19th International Conference on Information Systems Development, held in Prague, Czech Republic, August 25-27, 2010. It follows the tradition of previous conferences in the series in exploring the connections between industry, research, and education. These proceedings represent ongoing reflections within the academic community on established information systems topics and emerging concepts, approaches, and ideas. It is hoped that the papers herein contribute towards disseminating research and improving practice.
The healthcare industry in the United States consumes roughly 20% of the gross national product per year. This huge expenditure not only represents a large portion of the country's collective interests, but also an enormous amount of medical information. Information-intensive healthcare enterprises have unique issues related to the collection, disbursement, and integration of various data within the healthcare system. Information Systems and Healthcare Enterprises provides insight on the challenges arising from the adaptation of information systems to the healthcare industry, including development, design, usage, adoption, expansion, and compliance with industry regulations. Highlighting the role of healthcare information systems in fighting healthcare fraud and the role of information technology and vendors, this book will be a highly valued addition to academic, medical, and health science libraries.
"The Supply of Concepts" achieves a major breakthrough in the general theory of systems. It unfolds a theory of everything that steps beyond physics' theory of the same name. The author unites all knowledge by including not only the natural but also the philosophical and theological universes of discourse. The general systems model presented here resembles an organizational flow chart that represents conceptual positions within any type of system and shows how the parts are connected hierarchically for communication and control. Analyzing many types of systems in various branches of learned discourse, the model demonstrates how any system type manages to maintain itself true to type. The concepts thus generated form a network that serves as a storehouse for the supply of concepts in learned discourse. Partial to the use of analogies, Irving Silverman presents his thesis in an easy-to-read style, explaining a way of thinking that he has found useful. This book will be of particular interest to specialists in systems theory, philosophy, linguistics, and the social sciences. Irving Silverman applies his general systems model to 22 system types and presents rationales for these analyses. He provides the reader with a method, and a way to apply that method; a theory of knowledge derived from the method; and a practical outlook based on a comprehensive approach. Chapters include: Minding the Storehouse; Standing Together; The Cognitive Contract; The Ecological Contract; The Social Contract; The Semantic Terrain.
This book surveys the recent development of maintenance theory, advanced maintenance techniques with shock and damage models, and their applications in computer systems dealing with efficiency problems. It also equips readers to handle multiple kinds of maintenance, informs maintenance policies, and explores comparative methods for several different kinds of maintenance. Further, it discusses shock and damage modelling as an important failure mechanism for reliability systems, and extensively explores the degradation processes, failure modes, and maintenance characteristics of modern, highly complex systems, especially some key mechanical systems designed for specific tasks.
Applying TQM to systems engineering can reduce costs while simultaneously improving product quality. This guide to proactive systems engineering shows how to develop and optimize a practical approach, while highlighting the pitfalls and potentials involved.
This book covers reliability assessment and prediction of new technologies such as next generation networks that use cloud computing, Network Function Virtualization (NFV), Software Defined Networking (SDN), next generation transport, evolving wireless systems, and digital VoIP telephony, as well as reliability testing techniques specific to Next Generation Networks (NGN). The book introduces each technology to the reader first, followed by advanced reliability techniques applicable to both hardware and software reliability analysis. It covers methodologies that can predict reliability from component failure rates to system-level downtimes. The book's goal is to familiarize the reader with the analytical techniques, tools, and methods necessary for analyzing very complex networks built on very different technologies. The book lets readers quickly learn the technologies behind currently evolving NGN and apply advanced Markov modeling and Software Reliability Engineering (SRE) techniques to assess their operational reliability. Covers reliability analysis of advanced networks and provides basic mathematical tools, analysis techniques, and methodology for reliability and quality assessment; develops Markov and software engineering models to predict reliability; covers both hardware and software reliability for next generation technologies.
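As a flavor of the Markov modeling mentioned above, consider the simplest case: a two-state (up/down) Markov availability model, where a component fails at rate lambda and is repaired at rate mu, giving steady-state availability A = mu / (lambda + mu). The rates below are illustrative assumptions, not values from the book:

```python
def steady_state_availability(failure_rate, repair_rate):
    """Steady-state availability of a two-state (up/down) Markov chain:
    A = mu / (lambda + mu)."""
    return repair_rate / (failure_rate + repair_rate)

lam = 1 / 8760.0   # assumed: one failure per year, expressed per hour
mu = 1 / 4.0       # assumed: four-hour mean time to repair

A = steady_state_availability(lam, mu)
downtime_min_per_year = (1 - A) * 8760 * 60

print(f"availability: {A:.6f}")
print(f"expected downtime: {downtime_min_per_year:.1f} min/year")
```

Real NGN models chain many such states (covering partial failures, fail-over, and repair queues) into larger Markov chains, but the steady-state calculation follows the same principle.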
The main objective of pervasive computing systems is to create environments where computers become invisible by being seamlessly integrated and connected into our everyday environment, where such embedded computers can then provide information and exercise intelligent control when needed, but without being obtrusive. Pervasive computing and intelligent multimedia technologies are becoming increasingly important to the modern way of living. However, many of their potential applications have not yet been fully realized. Intelligent multimedia allows dynamic selection, composition and presentation of the most appropriate multimedia content based on user preferences. A variety of applications of pervasive computing and intelligent multimedia are being developed for all walks of personal and business life. Pervasive computing (often synonymously called ubiquitous computing, palpable computing or ambient intelligence) is an emerging field of research that brings in revolutionary paradigms for computing models in the 21st century. Pervasive computing is the trend towards increasingly ubiquitous connected computing devices in the environment, a trend being brought about by a convergence of advanced electronic - and particularly, wireless - technologies and the Internet. Recent advances in pervasive computers, networks, telecommunications and information technology, along with the proliferation of multimedia mobile devices - such as laptops, iPods, personal digital assistants (PDAs) and cellular telephones - have further stimulated the development of intelligent pervasive multimedia applications. These key technologies are creating a multimedia revolution that will have significant impact across a wide spectrum of consumer, business, healthcare and governmental domains.
This book provides a comprehensive presentation of the most advanced research results and technological developments enabling the understanding, qualification, and mitigation of soft errors in advanced electronics. It covers the fundamental physical mechanisms of radiation-induced soft errors; the various steps that lead to a system failure; the modelling and simulation of soft errors at various levels (including physical, electrical, netlist, event-driven, RTL, and system-level modelling and simulation); hardware fault injection; accelerated radiation testing and natural environment testing; soft-error-oriented test structures; and process-level, device-level, cell-level, circuit-level, architectural-level, software-level, and system-level soft error mitigation techniques. These advances are presented by academia and industry experts in reliability, fault tolerance, EDA, processor, SoC, and system design - in particular, experts from industries that have faced the soft error impact in terms of product reliability and related business issues, and who were at the forefront of the countermeasures taken by these companies at multiple levels to mitigate soft error effects at a cost acceptable for commercial products. In this fast-moving field, where the impact on ground-level electronics is very recent and its severity is steadily increasing at each new process node, affecting one industry sector after another (for example, the Automotive Electronics Council now publishes qualification requirements on soft errors), research, technology developments, and industrial practices have evolved very quickly, outdating the most recent books, published in 2004.
VLSI 2010 Annual Symposium will present extended versions of the best papers presented at the ISVLSI 2010 conference. The areas covered by the papers include, among others: emerging trends in VLSI; nanoelectronics; molecular, biological and quantum computing; MEMS; VLSI circuits and systems; field-programmable and reconfigurable systems; system-level design; system-on-a-chip design; application-specific low-power VLSI system design; system issues in complexity, low power, heat dissipation, and power awareness in VLSI design; test and verification; mixed-signal design and analysis; electrical/packaging co-design; physical design; and intellectual property creation and sharing.
This book covers the important aspects involved in making cognitive radio devices portable, mobile and green, while also extending their service life. At the same time, it presents a variety of established theories and practices concerning cognitive radio from academia and industry. Cognitive radio can be utilized as a backbone communication medium for wireless devices. To effectively achieve its commercial application, various aspects of quality of service and energy management need to be addressed. The topics covered in the book include energy management and quality of service provisioning at Layer 2 of the protocol stack from the perspectives of medium access control, spectrum selection, and self-coexistence for cognitive radio networks.
Whether you're new to systems analysis or have been there, done that, and seen it all - but especially if you want to ponder the significance of information systems analysis in the scheme of the universe - this book is for you. The author brings a unique perspective to the problems of computer system analysis.
Healthcare Informatics: Improving Efficiency and Productivity examines the complexities involved in managing resources in our healthcare system and explains how management theory and informatics applications can increase efficiencies in various functional areas of healthcare services. Delving into data and project management and advanced analytics, this book details and provides supporting evidence for the strategic concepts that are critical to achieving successful healthcare information technology (HIT), information management, and electronic health record (EHR) applications. This includes the vital importance of involving nursing staff in rollouts, engaging physicians early in any process, and developing an organizational culture more receptive to the adoption of digital information and systems. "We owe it to ourselves and future generations to do all we can to make our healthcare systems work smarter, be more effective, and reach more people. The power to know is at our fingertips; we need only embrace it." - From the foreword by James H. Goodnight, PhD, CEO, SAS. Bridging the gap from theory to practice, the book discusses actual informatics applications that have been incorporated by various healthcare organizations and the corresponding management strategies that led to their successful employment.
Offering a wealth of detail, it describes several working projects, including: a computerized physician order entry (CPOE) system project at a North Carolina hospital; e-commerce self-service patient check-in at a New Jersey hospital; the informatics project that turned a healthcare system's paper-based resources into digital assets; projects at one hospital that helped reduce excess length of stay, improved patient safety, and improved efficiency with an ADE alert system; and a healthcare system's use of algorithms to identify patients at risk for hepatitis. Offering the guidance that healthcare specialists need to make use of various informatics platforms, this book provides the motivation and the proven methods that can be adapted and applied to any number of staff, patient, or regulatory concerns.