Our fascination with new technologies is based on the assumption that more powerful automation will overcome human limitations and make our systems 'faster, better, cheaper,' resulting in simple, easy tasks for people. But how do new technology and more powerful automation change our work? Research in Cognitive Systems Engineering (CSE) looks at the intersection of people, technology, and work. What it has found is not stories of simplification through more automation, but stories of complexity and adaptation. When work changed through new technology, practitioners had to cope with new complexities and tighter constraints. They adapted their strategies and the artifacts to work around difficulties and accomplish their goals as responsible agents. The surprise was that new powers had transformed work, creating new roles, new decisions, and new vulnerabilities. Ironically, more autonomous machines have created the requirement for more sophisticated forms of coordination across people, and across people and machines, to adapt to new demands and pressures. This book synthesizes these emergent patterns through stories about coordination and mis-coordination, resilience and brittleness, affordance and clumsiness in a variety of settings, from a hospital intensive care unit, to a nuclear power control room, to a space shuttle control center. The stories reveal how new demands make work difficult, how people at work adapt but get trapped by complexity, and how people at a distance from work oversimplify their perceptions of the complexities, squeezing practitioners. The authors explore how CSE observes at the intersection of people, technology, and work, how CSE abstracts patterns behind the surface details and wide variations, and how CSE discovers promising new directions to help people cope with complexities. The stories of CSE show that one key to well-adapted work is the ability to be prepared to be surprised. Are you ready?
Nothing has been more prolific over the past century than human/machine interaction. Automobiles, telephones, computers, manufacturing machines, robots, office equipment, machines large and small; all affect the very essence of our daily lives. However, this interaction has not always been efficient or easy and has at times turned fairly hazardous. Cognitive Systems Engineering (CSE) seeks to improve this situation by the careful study of human/machine interaction as the meaningful behavior of a unified system. Written by pioneers in the development of CSE, Joint Cognitive Systems: Foundations of Cognitive Systems Engineering offers a principled approach to studying human work with complex technology. The authors use a top-down, functional approach and emphasize a proactive (coping) perspective on work that overcomes the limitations of the structural human information processing view. They describe a conceptual framework for analysis with concrete theories and methods for joint system modeling that can be applied across the spectrum of single human/machine systems, social/technical systems, and whole organizations. The book explores both current and potential applications of CSE illustrated by examples. Understanding the complexities and functions of the human/machine interaction is critical to designing safe, highly functional, and efficient technological systems. This is a critical reference for students, designers, and engineers in a wide variety of disciplines.
The Unified Modeling Language is rapidly gaining acceptance as the mechanism of choice to model complex software systems at various steps of their specification and design, using a number of orthogonal views that illustrate use cases, class diagrams and even detailed state machine-based behaviors of objects. Topics include UML and the Real-time/Embedded Domain, with chapters on the role of UML in software development and on UML and Real-Time Systems.
Recognized as a "Recommended" title by Choice for their November 2020 issue. Choice is a publishing unit at the Association of College & Research Libraries (ACR&L), a division of the American Library Association. Choice has been the acknowledged leader in the provision of objective, high-quality evaluations of nonfiction academic writing. Presenting a fundamental definition of resilience, the book examines the concept of resilience as it relates to space system design. The book establishes the required definitions, relates its place to existing state-of-the-art systems engineering practices, and explains the process and mathematical tools used to achieve a resilient design. It discusses a variety of potential threats and their impact upon a space system. By providing multiple, real-world examples to illustrate the application of the design methodology, the book covers the necessary techniques and tools, while guiding the reader through the entirety of the process. The book begins with space systems basics to ensure the reader is versed in the functions and components of the system prior to diving into the details of resilience. However, the text does not assume that the reader has an extensive background in the subject matter of resilience. This book is aimed at engineers and architects in the areas of aerospace, space systems, and space communications.
Calculation is the main function of a computer. The central unit is responsible for executing programs, and the microprocessor is its integrated form. Since the announcement of its commercial launch in 1971, this component has not stopped breaking records in terms of computing power, price reduction, and integration of functions (basic calculation functions, storage with integrated controllers). It is present today in most electronic devices. Knowing its internal mechanisms and its programming is essential for the electronics engineer and the computer scientist who want to understand and master the operation of a computer and advanced programming concepts. This first volume focuses on the first generations of microprocessors, that is, those that handle integers in 4- and 8-bit formats. The first chapter presents the calculation function and reviews the memory function. The next is devoted to notions of computation models and architecture. The concept of the bus is then presented. Chapters 4 and 5 address the internal organization and operation of the microprocessor, first from the hardware and then from the software point of view. The function call mechanism, both conventional and interrupt-driven, is detailed in a separate chapter. The book ends with a presentation of the architectures of the first microcomputers to give a historical perspective. The material is presented as exhaustively as possible, with examples drawn from current and older technologies that illustrate and make the theoretical concepts accessible. Where appropriate, each chapter ends with corrected exercises and a bibliography. A list of acronyms and an index appear at the end of the book.
Classical Feedback Control with Nonlinear Multi-Loop Systems describes the design of high-performance feedback control systems, emphasizing the frequency-domain approach widely used in practical engineering. It presents design methods for high-order nonlinear single- and multi-loop controllers with efficient analog and digital implementations. Bode integrals are employed to estimate the available system performance and to determine the ideal frequency responses that maximize the disturbance rejection and feedback bandwidth. Nonlinear dynamic compensators provide global stability and improve transient responses. This book serves as a unique text for an advanced course in control system engineering, and as a valuable reference for practicing engineers competing in today's industrial environment.
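As a generic illustration of the kind of constraint the Bode integrals mentioned above impose (a textbook statement, not a formula quoted from this book): for a stable closed loop whose open-loop transfer function L(s) has at least two more poles than zeros, the sensitivity S = 1/(1 + L) obeys the classical Bode sensitivity integral, so pushing |S| down in one frequency band necessarily pushes it up elsewhere.

    % Classical Bode sensitivity integral (relative degree of L(s) >= 2;
    % p_k are the unstable open-loop poles in the right half-plane):
    \[
      \int_{0}^{\infty} \ln \lvert S(j\omega) \rvert \, d\omega
      \;=\; \pi \sum_{k} \operatorname{Re}(p_k),
      \qquad
      S(s) = \frac{1}{1 + L(s)} .
    \]
    % The integral is zero for a stable open loop: attenuation at some
    % frequencies must be paid for by amplification at others.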
Become a more effective decision-maker, communicator, and manager by using the valuable techniques described in this unique book. It's designed to help you break away from the constraints of the technologist's "analytical/scientific" viewpoint and employ broader organizational and personal perspectives that strengthen your decision-making ability and leadership skills. "Decision-Making for Technology Executives" shows you how to utilize this multiple-perspective approach to problem-solving and systems development in real-world situations outside the laboratory. You learn how this three-dimensional approach has been applied successfully to a wide spectrum of complex systems tasks: from system forecasting to technology assessment, from industrial catastrophes to facility siting decisions, from corporate strategy to acquisition. Through valuable case studies, such as the Exxon Valdez and Bhopal accidents, you learn lessons on improving technology and risk assessment, forecasting, and crisis management. And through ready-to-implement, practical guidelines you see how to become a more effective decision-maker and manager, while improving communication between technologists and others involved in the decision process. A one-of-a-kind look at the multiple perspective concept, this guide helps to increase your understanding of complex sociotechnical systems, boost the technologist's effectiveness as an executive, and improve technological risk management, forecasting, and planning.
Engineering systems are an important element of the world economy. Each year billions of dollars are spent to develop, manufacture, operate, and maintain various types of engineering systems around the globe. The reliability and usability of these systems have become important because of their increasing complexity, sophistication, and non-specialist users. Global competition and other factors are forcing manufacturers to produce highly reliable and usable engineering systems. Along with examples and solutions, this book integrates engineering systems reliability and usability into a single volume for those individuals who are directly or indirectly concerned with these areas.
SystemC provides a robust set of extensions to the C++ language that enables rapid development of complex models of hardware and software systems. The authors focus on practical use of the language for modeling real systems, showing: a step-by-step build-up of syntax; code examples for each concept; over 8,000 lines of downloadable code examples; updates to reflect the SystemC standard, IEEE 1666; why features are as they are; many resource references; and how SystemC fits into an ESL methodology. This new edition of an industry best seller is updated to reflect the standardization of SystemC as IEEE 1666 and other improvements based on feedback from readers of the first edition. The wide-ranging feedback also includes suggestions from editors of the Japanese and Korean language translations, professors and students, and computer engineers from a broad industrial and geographical spectrum, all of whom have successfully used the first edition. New chapters have been added on the SystemC Verification Library, transaction-level modeling, and proposed changes to the current SystemC standard. David Black and Jack Donovan, well-known consultants in the EDA industry, have teamed with Bill Bunton and Anna Keist, experienced SystemC modeling engineers, to write the second edition of this highly popular classic. As a team the authors bring over 100 years of ASIC and system design experience together to make a very readable introduction to SystemC.
Thinking: A Guide to Systems Engineering Problem-Solving focuses on articulating ways of thinking in today's world of systems and systems engineering. It also explores how the old masters made the advances they made, hundreds of years ago. Taken together, these considerations represent new ways of problem solving and new pathways to answers for modern times. Special areas of interest include types of intelligence, attributes of superior thinkers, systems architecting, corporate standouts, barriers to thinking, and innovative companies and universities. This book provides an overview of more than a dozen ways of thinking, including Inductive Thinking, Deductive Thinking, Reductionist Thinking, Out-of-the-Box Thinking, Systems Thinking, Design Thinking, Disruptive Thinking, Lateral Thinking, Critical Thinking, Fast and Slow Thinking, and Breakthrough Thinking. With these thinking skills, the reader is better able to tackle and solve new and varied types of problems. Features: proposes new approaches to problem solving for the systems engineer; compares and contrasts various types of Systems Thinking; articulates the thinking attributes of the great masters as well as selected modern systems engineers; offers chapter-by-chapter thinking exercises for consideration and testing; and suggests a "top dozen" for today's systems engineers.
This book explores the application of breakthrough technologies to improve transportation performance. Transportation systems represent the "blood vessels" of a society, in which people and goods travel. They also influence people's lives and affect the liveability and sustainability of our cities. The book shows how emergent technologies are able to monitor the condition of a structure in real time in order to schedule maintenance activities at the right moment and so reduce the disturbance to users. This book is a valuable resource for those involved in research and development in this field. Part I discusses the context of transportation systems, highlighting the major issues and challenges, the importance of understanding the human factors that can affect maintenance operations, and the main goals in terms of safety standards. Part II focuses on process-oriented innovations in transportation systems; this section stresses the importance of including design parameters in planning, offers a comparison between risk-based and condition-based maintenance, and shows applications of emergent technologies. Part III reflects on technically oriented innovations, discussing the importance of studying the physical phenomena behind transportation system failures and problems. It then introduces the general trend of collecting and analyzing big data, using real-world cases to evaluate the positive and negative aspects of adopting extensive smart sensors for gathering information on the health of assets. The last part (IV) explores cultural and behavioural changes and new knowledge management methods, proposing novel forms of maintenance and vocational training, and introduces the need for radically new visions in transportation for managing unexpected events. The continuous evolution of the maintenance field suggests that this compendium of state-of-the-art applications will not be the only one; the authors are planning a collection of cutting-edge examples of transportation systems that can assist researchers, practitioners, and students in understanding the complex and multidisciplinary environment of maintenance engineering applied to the transport sector.
Standardization of hardware description languages and the availability of synthesis tools have brought about a remarkable increase in the productivity of hardware designers. Yet design verification methods and tools lag behind and have difficulty in dealing with the increasing design complexity. This may get worse because more complex systems are now constructed by (re)using Intellectual Property blocks developed by third parties. To verify such designs, abstract models of the blocks and the system must be developed, with separate concerns, such as interface communication, functionality, and timing, that can be verified in an almost independent fashion. Standard hardware description languages such as VHDL and Verilog are inspired by procedural 'imperative' programming languages in which function and timing are inherently intertwined in the statements of the language. Furthermore, they are not conceived to state the intent of the design in a simple declarative way that contains provisions for design choices, for stating assumptions on the environment, and for indicating uncertainty in system timing. Hierarchical Annotated Action Diagrams: An Interface-Oriented Specification and Verification Method presents a description methodology, inspired by Timing Diagrams and Process Algebras, called Hierarchical Annotated Action Diagrams (HADD). It is suitable for specifying systems with complex interface behaviors that govern the global system behavior. A HADD specification can be converted into a behavioral real-time model in VHDL and used to verify the surrounding logic, such as interface transducers. Also, function can be conservatively abstracted away and the interactions between interconnected devices can be verified using Constraint Logic Programming based on Relational Interval Arithmetic. Hierarchical Annotated Action Diagrams: An Interface-Oriented Specification and Verification Method is of interest to readers who are involved in defining methods and tools for system-level design specification and verification. The techniques for interface compatibility verification can be used by practicing designers without any more sophisticated tool than a calculator.
A comprehensive introduction to reliability and availability modeling, analysis, and design at the system, hardware, and software levels. Reliability of Computer Systems and Networks presents the fundamentals of reliability and availability analysis for various computer hardware, software, and networked systems. Reliability and availability as major objectives in system design are the focus. Various redundancy and fault-tolerant techniques, as well as error-correcting coding techniques, are treated. The author proposes a high-level design approach based on apportioning the reliability and availability goals to subsystems and provides various techniques for achieving these subsystem goals. The next step is an efficient, exact optimization approach based on upper and lower bounds to minimize the number of feasible candidates. The most readily applied methods of analysis are utilized, and design techniques are derived from basic principles. Analytical simplifications and approximations are developed to validate the results of computer models used for large-scale complex problems.
Reliability of Computer Systems and Networks offers in-depth and up-to-date coverage of reliability and availability for students with a focus on important applications areas, computer systems, and networks. Professionals in systems and reliability design, as well as computer architecture, will find it a highly useful reference.
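To make the apportionment idea above concrete, here is a small generic worked example (an illustration under the usual series-system independence assumption, not a calculation taken from the book): the reliability of n independent subsystems in series is the product of the subsystem reliabilities, so a system-level goal can be split into equal per-subsystem goals.

    % Series system of n independent subsystems; equal apportionment of a
    % system goal R_sys = 0.99 across n = 3 subsystems:
    \[
      R_{\text{sys}} = \prod_{i=1}^{n} R_i ,
      \qquad
      R_i = R_{\text{sys}}^{1/n} = 0.99^{1/3} \approx 0.9967 .
    \]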
This book presents practical guidelines for university research and administration. It uses a project management framework within a systems perspective to provide strategies for planning, scheduling, allocating resources, tracking, reporting, and controlling university-based research projects and programs. Project Management for Scholarly Researchers: Systems, Innovation, and Technologies covers the technical and human aspects of research management. It discusses federal requirements and compliance issues, in addition to offering advice on proper research lab management and faculty mentoring. It explains the hierarchy of needs of researchers to help readers identify their own needs for their research enterprises. This book provides rigorous treatment and guidance for all engineering fields and related business disciplines, as well as all management and humanities fields.
In recent years, a considerable amount of effort has been devoted, both in industry and academia, to the development, validation and verification of critical systems, i.e. those systems whose malfunctions or failures reach a critical level in terms of both risk to human life and economic impact. Certifications of Critical Systems - The CECRIS Experience documents the main insights on cost-effective verification and validation processes that were gained during work in the European Research Project CECRIS (Certification of Critical Systems). The objective of the research was to tackle the challenges of certification by focusing on those aspects that turn out to be more difficult or important for current and future critical systems industry: the effective use of methodologies, processes and tools. Starting from both the scientific and industrial state-of-the-art methodologies for system development and the impact of their usage on the verification, validation and certification of critical systems, the project aimed at developing strategies and techniques, supported by automatic or semi-automatic tools and methods, for these activities, setting guidelines to support engineers during the planning of the verification and validation phases. Topics covered include: Safety Assessment, Reliability Analysis, Critical Systems and Applications, Functional Safety, Dependability Validation, Dependable Software Systems, Embedded Systems, System Certification.
This book provides an essential update for experienced data processing professionals, transaction managers and database specialists who are seeking system solutions beyond the confines of traditional approaches. It provides practical advice on how to manage complex transactions and share distributed databases on client servers and the Internet. Based on extensive research in over 100 companies in the USA, Europe, Japan and the UK, topics covered include:
* the challenge of global transaction requirements within an expanding business perspective
* how to handle long transactions and their constituent elements
* possible benefits from object-oriented solutions
* the contribution of knowledge engineering in transaction management
* the Internet, the World Wide Web and transaction handling
* systems software and transaction-processing monitors
* OSF/1 and the Encina transaction monitor
* active data transfers and remote procedure calls
* serialization in a transaction environment
* transaction locks, two-phase commit and deadlocks
* improving transaction-oriented database management
* the successful development of an increasingly complex transaction environment.
This book focuses on core functionalities for wireless real-time multi-hop networking with TDMA (time-division multiple access) and their integration into a flexible, versatile, fully operational, self-contained communication system. The use of wireless real-time communication technologies for the flexible networking of sensors, actuators, and controllers is a crucial building block for future production and control systems. WirelessHART and ISA 100.11a, two technologies that have been developed predominantly for industrial use, are currently available. However, a closer analysis of these approaches reveals certain deficits. Current research on wireless real-time communication systems shows potential to remove these limitations, resulting in flexible, versatile, and robust solutions that can be implemented on today's low-cost and resource-constrained hardware platforms. Unlike other books on wireless communication, this book presents protocols located at the MAC layer and above that build on the physical (PHY) layer of standard wireless communication technologies.
You know how to code in Elixir; now learn to think in it. Learn to design libraries with intelligent layers that shape the right data structures, flow from one function into the next, and present the right APIs. Embrace the same OTP that's kept our telephone systems reliable and fast for over 30 years. Move beyond understanding the OTP functions to knowing what's happening under the hood, and why that matters. Using that knowledge, instinctively know how to design systems that deliver fast and resilient services to your users, all with an Elixir focus. Elixir is gaining mindshare as the programming language you can use to keep your software running forever, even in the face of unexpected errors and an ever-growing need to use more processors. This power comes from an effective programming language, an excellent foundation for concurrency, and its inheritance of a battle-tested framework called the OTP. If you're using frameworks like Phoenix or Nerves, you're already experiencing the features that make Elixir an excellent language for today's demands. This book shows you how to go beyond simple programming to designing, and that means building the right layers. Embrace those data structures that work best in functional programs and use them to build functions that perform and compose well, layer by layer, across processes. Test your code at the right place using the right techniques. Layer your code into pieces that are easy to understand and heal themselves when errors strike. Of all Elixir's boons, the most important one is that it guides us to design our programs in a way that benefits most from the architecture they run on. The experts do it, and now you can learn to design programs that do the same. What You Need: Elixir Version 1.7 or greater.
Upon its initial publication, the Handbook of Circuits and Filters broke new ground. It quickly became the resource for comprehensive coverage of issues and practical information that can be put to immediate use. Not content to rest on his laurels, editor Wai-kai Chen divided the second edition into volumes, making the information easily accessible and digestible. In the third edition, these volumes have been revised, updated, and expanded so that they continue to provide solid coverage of standard practices and enlightened perspectives on new and emerging techniques. Feedback, Nonlinear, and Distributed Circuits draws together international contributors who discuss feedback amplifier theory and then move on to explore feedback amplifier configurations. They develop Bode's feedback theory as an example of general feedback theory. The coverage then moves on to the importance of complementing numerical analysis with qualitative analysis to get a global picture of a circuit's performance. After reviewing a wide range of approximation techniques and circuit design styles for discrete and monolithic circuits, the book presents a comprehensive description of the use of piecewise-linear methods in modeling, analysis, and structural properties of nonlinear circuits, highlighting the advantages of these methods. It describes circuit modeling in the frequency domain of uniform MTL based on the Telegrapher's equations and covers frequency- and time-domain experimental characterization techniques for uniform and nonuniform multiconductor structures. This volume will undoubtedly take its place as the engineer's first choice in looking for solutions to problems encountered in the analysis and behavior predictions of circuits and filters.
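For reference, the Telegrapher's equations mentioned above are the standard per-unit-length transmission-line relations (a textbook statement, not notation taken from this volume), where R, L, G, and C are the series resistance, series inductance, shunt conductance, and shunt capacitance per unit length:

    % Line voltage v(x,t) and current i(x,t) along position x:
    \[
      \frac{\partial v(x,t)}{\partial x} = -R\, i(x,t) - L\, \frac{\partial i(x,t)}{\partial t},
      \qquad
      \frac{\partial i(x,t)}{\partial x} = -G\, v(x,t) - C\, \frac{\partial v(x,t)}{\partial t}.
    \]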
In spite of their importance and potential societal impact, there is currently no comprehensive source of information about vehicular ad hoc networks (VANETs). Cohesively integrating the state of the art in this emerging field, Vehicular Networks: From Theory to Practice elucidates many issues involved in vehicular networking, including traffic engineering, human factors studies, and novel computer science research. Divided into six broad sections, the book begins with an overview of traffic engineering issues, such as traffic monitoring and traffic flow modeling. It then introduces governmental and industrial efforts in the United States and Europe to set standards and perform field tests on the feasibility of vehicular networks. After highlighting innovative applications enabled by vehicular networks, the book discusses several networking-related issues, including routing and localization. The following section focuses on simulation, which is currently the primary method for evaluating vehicular networking systems. The final part explores the extent and impact of driver distraction with in-vehicle displays. Encompassing both introductory and advanced concepts, this guide covers the various areas that impact the design of applications for vehicular networks. It details key research challenges, offers guidance on developing future standards, and supplies valuable information on existing experimental studies.
This book presents state-of-the-art work on searchable storage in cloud computing. It introduces and presents new schemes for exploring and exploiting searchable storage via cost-efficient semantic hashing computation. Specifically, the contents of this book include basic hashing structures (Bloom filters, locality-sensitive hashing, cuckoo hashing), semantic storage systems, and searchable namespaces, which support multiple applications such as cloud backups, exact and approximate queries, and image analytics. Readers will find the searchable techniques appealing for their ease of use and simplicity. More importantly, all of the structures and techniques described have actually been implemented to support real-world applications, some of which offer open-source code for public use. Readers with basic knowledge of data structures and computer systems will gain solid background, new insights, and implementation experience.
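As a rough, generic illustration of one of the hashing structures listed above (a minimal sketch, not code from the book; the bit-array size, hash count, and key strings are arbitrary choices), a Bloom filter answers membership queries with possible false positives but no false negatives:

    import hashlib

    class BloomFilter:
        # Minimal Bloom filter: fast membership tests with false positives,
        # never false negatives.
        def __init__(self, num_bits=1024, num_hashes=4):
            self.num_bits = num_bits
            self.num_hashes = num_hashes
            self.bits = [False] * num_bits

        def _positions(self, item):
            # Derive k bit positions from salted SHA-256 digests of the item.
            for salt in range(self.num_hashes):
                digest = hashlib.sha256(f"{salt}:{item}".encode()).hexdigest()
                yield int(digest, 16) % self.num_bits

        def add(self, item):
            for pos in self._positions(item):
                self.bits[pos] = True

        def might_contain(self, item):
            # True means "possibly present"; False means "definitely absent".
            return all(self.bits[pos] for pos in self._positions(item))

    bf = BloomFilter()
    bf.add("cloud-backup-chunk-42")
    print(bf.might_contain("cloud-backup-chunk-42"))  # True
    print(bf.might_contain("some-other-key"))         # almost certainly False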
Classical FORTRAN: Programming for Engineering and Scientific Applications, Second Edition teaches how to write programs in the Classical dialect of FORTRAN, the original and still most widely recognized language for numerical computing. This edition retains the conversational style of the original, along with its simple, carefully chosen subset language and its focus on floating-point calculations. New to the Second Edition Additional case study on file I/O More about CPU timing on Pentium processors More about the g77 compiler and Linux With numerous updates and revisions throughout, this second edition continues to use case studies and examples to introduce the language elements and design skills needed to write graceful, correct, and efficient programs for real engineering and scientific applications. After reading this book, students will know what statements to use and where as well as why to avoid the others, helping them become expert FORTRAN programmers.
Develops a comprehensive, global model for contextually based processing systems and offers a new perspective on global information systems operation. Helping to advance a valuable paradigm shift in the next generation and processing of knowledge, Introduction to Contextual Processing: Theory and Applications provides a comprehensive model for constructing a contextually based processing system. It explores the components of this system, the interactions of the components, key mathematical foundations behind the model, and new concepts necessary for operating the system. After defining the key dimensions of a model for contextual processing, the book discusses how data is used to develop a semantic model for contexts as well as language-driven context-specific processing actions. It then applies rigorous mathematical methods to contexts, examines basic sensor data fusion theory and applies it to the contextual fusion of information, and describes the means to distribute contextual information. The authors also illustrate a new type of data repository model to manage contextual data, before concluding with the requirements of contextual security in a global environment. This seminal work presents an integrated framework for the design and operation of the next generation of IT processing. It guides the way for developing advanced IT systems and offers new models and concepts that can support advanced semantic web and cloud computing capabilities at a global scale.