Nothing has been more prolific over the past century than human/machine interaction. Automobiles, telephones, computers, manufacturing machines, robots, office equipment: machines large and small affect the very essence of our daily lives. However, this interaction has not always been efficient or easy, and it has at times proved hazardous. Cognitive Systems Engineering (CSE) seeks to improve this situation by the careful study of human/machine interaction as the meaningful behavior of a unified system. Written by pioneers in the development of CSE, Joint Cognitive Systems: Foundations of Cognitive Systems Engineering offers a principled approach to studying human work with complex technology. The authors use a top-down, functional approach and emphasize a proactive (coping) perspective on work that overcomes the limitations of the structural human information processing view. They describe a conceptual framework for analysis with concrete theories and methods for joint system modeling that can be applied across the spectrum of single human/machine systems, social/technical systems, and whole organizations. The book explores both current and potential applications of CSE, illustrated by examples. Understanding the complexities and functions of human/machine interaction is critical to designing safe, highly functional, and efficient technological systems. This is an essential reference for students, designers, and engineers in a wide variety of disciplines.
This classic reference work is a comprehensive guide to the design, evaluation, and use of reliable computer systems. It includes case studies of reliable systems from manufacturers, such as Tandem, Stratus, IBM, and Digital. It covers special systems such as the Galileo Orbiter fault protection system and AT&T telephone switching system processors.
Recognized as a "Recommended" title by Choice for their November 2020 issue. Choice is a publishing unit at the Association of College & Research Libraries (ACR&L), a division of the American Library Association. Choice has been the acknowledged leader in the provision of objective, high-quality evaluations of nonfiction academic writing. Presenting a fundamental definition of resilience, the book examines the concept of resilience as it relates to space system design. The book establishes the required definitions, relates its place to existing state-of-the-art systems engineering practices, and explains the process and mathematical tools used to achieve a resilient design. It discusses a variety of potential threats and their impact upon a space system. By providing multiple, real-world examples to illustrate the application of the design methodology, the book covers the necessary techniques and tools, while guiding the reader through the entirety of the process. The book begins with space systems basics to ensure the reader is versed in the functions and components of the system prior to diving into the details of resilience. However, the text does not assume that the reader has an extensive background in the subject matter of resilience. This book is aimed at engineers and architects in the areas of aerospace, space systems, and space communications.
SystemC provides a robust set of extensions to the C++ language that enables rapid development of complex models of hardware and software systems. The authors focus on practical use of the language for modeling real systems, showing: a step-by-step build-up of syntax; code examples for each concept; over 8,000 lines of downloadable code examples; updates to reflect the SystemC standard, IEEE 1666; why features are as they are; many resource references; and how SystemC fits into an ESL methodology. This new edition of an industry best seller has been updated to reflect the standardization of SystemC as IEEE 1666 and other improvements based on feedback from readers of the first edition. That wide-ranging feedback includes suggestions from the editors of the Japanese and Korean translations, professors and students, and computer engineers from a broad industrial and geographical spectrum, all of whom have successfully used the first edition. New chapters have been added on the SystemC Verification Library, transaction-level modeling, and proposed changes to the current SystemC standard. David Black and Jack Donovan, well-known consultants in the EDA industry, have teamed with Bill Bunton and Anna Keist, experienced SystemC modeling engineers, to write the second edition of this highly popular classic. As a team the authors bring over 100 years of ASIC and system design experience together to make a very readable introduction to SystemC.
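As a rough, hedged illustration of the modeling style described above (a minimal sketch, not an example from the book), the following SystemC module implements a clocked 8-bit counter; the module and signal names are invented.

    #include <systemc.h>

    // Minimal SystemC sketch: a clocked 8-bit counter (illustrative only).
    SC_MODULE(Counter) {
        sc_in<bool> clk;              // clock input
        sc_in<bool> reset;            // synchronous reset
        sc_out<sc_uint<8> > count;    // current count value

        void tick() {
            if (reset.read())
                count.write(0);
            else
                count.write(count.read() + 1);
        }

        SC_CTOR(Counter) {
            SC_METHOD(tick);
            sensitive << clk.pos();   // evaluate on each rising clock edge
        }
    };

    int sc_main(int, char*[]) {
        sc_clock clk("clk", 10, SC_NS);          // 10 ns clock period
        sc_signal<bool> reset;
        sc_signal<sc_uint<8> > count;

        Counter counter("counter");
        counter.clk(clk);
        counter.reset(reset);
        counter.count(count);

        reset = true;
        sc_start(20, SC_NS);                     // hold reset briefly
        reset = false;
        sc_start(100, SC_NS);                    // let the counter run

        std::cout << "count = " << count.read() << std::endl;
        return 0;
    }

The separation of ports and sensitivity from the process body is the basic SystemC idiom that a full ESL methodology elaborates on.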
Over the last ten years, the ARM architecture has become one of the most pervasive architectures in the world, with more than 2 billion ARM-based processors embedded in products ranging from cell phones to automotive braking systems. A world-wide community of ARM developers in semiconductor and product design companies includes software developers, system designers, and hardware engineers. To date no book has directly addressed their need to develop the system and software for an ARM-based system. This text fills that gap.
Zuse's textbook on software measurement provides basic principles as well as theoretical and practical guidelines for the use of numerous kinds of software measures. It is written to enable scientists, teachers, practitioners, and students to define the basic terminology of software measurement and to contribute to theory building. The textbook considers, among other things, the qualitative and numerical models behind software measures. It explains step by step the importance of qualitative properties, the meaning of scale types, the foundations of the validation of measures, the foundations of prediction models, the models behind the Function Point method and the COCOMO model, and the qualitative assumptions of object-oriented measures. For the application of software measures in practice, more than two hundred software measures covering the software life cycle are described in detail (object-oriented measures included). The enclosed CD contains a selection of more than 1,600 literature references and a small demo version of ZD-MIS (the Zuse/Drabe Measurement Information System).
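To give a concrete though hedged sense of the prediction models the book examines, the basic COCOMO effort equation estimates development effort in person-months from delivered source size; the organic-mode coefficients below are the commonly cited values and are shown purely as an illustration, not as an excerpt from the book.

    \[ E = a \cdot (\mathrm{KLOC})^{b}, \qquad \text{e.g. } a = 2.4,\ b = 1.05 \ \text{(organic mode)} \]

Under these coefficients, a 32 KLOC project would be estimated at roughly 2.4 x 32^1.05, or about 91 person-months.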
I Foundations.- Who Needs Avatars, and What For? Design and Implementation of Natural-Language Systems: An Introduction.- The Significance of Natural-Language Dialogue Systems in Internet Business.- Technical Foundations of Natural-Language Dialogue Systems.- A Quantum Leap for Dialogue Systems.- II E-Business and Avatars.- "Unfortunately, I Did Not Understand Your Input": Quality Criteria for Online Testing of Bots.- Strategies for Dialogue Management Systems: Automating Customer Communication in the Internet Contact Channel.- Attractive, Fast, Smart: Online Marketing with Avatars.- Avatars and Website Usability.- Support Chat and Avatars as Means of Personal Customer Care on the World Wide Web.- Cor@, the Avatar of Deutsche Bank: A Case Study from the Contractor's Perspective.- PIA, the Virtual Shopping Guide: A Case Study from Club Bertelsmann.- A Virtual Advisor for Yello: Selection, Implementation, and Operation of an Avatar.- III Marketing and Avatars.- A Little More Human, Perhaps? Virtual Characters at the Point of Sale.- Robert T-Online: A Career Between Reality and Cyberspace.- Robert T-Online: A Universal Brand Ambassador.- Avatars and Entertainment.- It's Time for a Strike! The Election Campaign of a Digital Presidential Candidate.- IV Outlook.- More Than Just a Pretty Face: Embodied Conversational Interface Agents.- With Hands and Feet: The Role of Nonverbal Communication in Adding Emotion to Dialogue Management Systems.- Virtualization and Personalization: Technology Trends Are Making Avatars an Innovative Human-Machine Interface.
Engineering systems are an important element of the world economy. Each year billions of dollars are spent to develop, manufacture, operate, and maintain various types of engineering systems around the globe. The reliability and usability of these systems have become important because of their increasing complexity, sophistication, and non-specialist users. Global competition and other factors are forcing manufacturers to produce highly reliable and usable engineering systems. Along with examples and solutions, this book integrates engineering systems reliability and usability into a single volume for those who are directly or indirectly concerned with these areas.
Thinking: A Guide to Systems Engineering Problem-Solving focuses upon articulating ways of thinking in today's world of systems and systems engineering. It also explores how the old masters made the advances they made, hundreds of years ago. Taken together, these considerations represent new ways of problem solving and new pathways to answers for modern times. Special areas of interest include types of intelligence, attributes of superior thinkers, systems architecting, corporate standouts, barriers to thinking, and innovative companies and universities. This book provides an overview of more than a dozen ways of thinking, including: Inductive Thinking, Deductive Thinking, Reductionist Thinking, Out-of-the-Box Thinking, Systems Thinking, Design Thinking, Disruptive Thinking, Lateral Thinking, Critical Thinking, Fast and Slow Thinking, and Breakthrough Thinking. With these thinking skills, the reader is better able to tackle and solve new and varied types of problems. Features: proposes new approaches to problem solving for the systems engineer; compares and contrasts various types of systems thinking; articulates the thinking attributes of the great masters as well as of selected modern systems engineers; offers chapter-by-chapter thinking exercises for consideration and testing; and suggests a "top dozen" for today's systems engineers.
Standardization of hardware description languages and the availability of synthesis tools have brought about a remarkable increase in the productivity of hardware designers. Yet design verification methods and tools lag behind and have difficulty in dealing with the increasing design complexity. This may get worse because more complex systems are now constructed by (re)using Intellectual Property blocks developed by third parties. To verify such designs, abstract models of the blocks and the system must be developed, with separate concerns, such as interface communication, functionality, and timing, that can be verified in an almost independent fashion. Standard Hardware Description Languages such as VHDL and Verilog are inspired by 'procedural imperative' programming languages in which function and timing are inherently intertwined in the statements of the language. Furthermore, they are not conceived to state the intent of the design in a simple declarative way that contains provisions for design choices, for stating assumptions on the environment, and for indicating uncertainty in system timing. Hierarchical Annotated Action Diagrams: An Interface-Oriented Specification and Verification Method presents a description methodology that was inspired by Timing Diagrams and Process Algebras, the so-called Hierarchical Annotated Action Diagrams (HADDs). It is suitable for specifying systems with complex interface behaviors that govern the global system behavior. A HADD specification can be converted into a behavioral real-time model in VHDL and used to verify the surrounding logic, such as interface transducers. Also, function can be conservatively abstracted away and the interactions between interconnected devices can be verified using Constraint Logic Programming based on Relational Interval Arithmetic. Hierarchical Annotated Action Diagrams: An Interface-Oriented Specification and Verification Method is of interest to readers who are involved in defining methods and tools for system-level design specification and verification. The techniques for interface compatibility verification can be used by practicing designers, without any tool more sophisticated than a calculator.
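As a loose, hedged illustration of the interval-style reasoning mentioned above (a generic sketch in C++, not the book's tool chain or notation), the snippet below intersects two annotated timing windows to decide whether the constraints they represent can be met simultaneously; all names and numbers are invented.

    #include <algorithm>
    #include <iostream>
    #include <optional>

    // Illustrative interval type for timing windows, in nanoseconds.
    struct Interval {
        double lo;
        double hi;
    };

    // Intersect two timing constraints; an empty result means they are incompatible.
    std::optional<Interval> intersect(const Interval& a, const Interval& b) {
        Interval r{std::max(a.lo, b.lo), std::min(a.hi, b.hi)};
        if (r.lo > r.hi) return std::nullopt;
        return r;
    }

    int main() {
        // Hypothetical constraints: data is valid 5-20 ns after a request,
        // while the receiving device samples 12-30 ns after the same request.
        Interval data_valid{5.0, 20.0};
        Interval sample_window{12.0, 30.0};

        if (auto ok = intersect(data_valid, sample_window)) {
            std::cout << "Compatible in [" << ok->lo << ", " << ok->hi << "] ns\n";
        } else {
            std::cout << "Timing constraints are incompatible\n";
        }
        return 0;
    }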
This book explores the application of breakthrough technologies to improve transportation performance. Transportation systems represent the "blood vessels" of a society, in which people and goods travel. They also influence people's lives and affect the liveability and sustainability of our cities. The book shows how emergent technologies are able to monitor the condition of the structure in real time in order to schedule the right moment for maintenance activities and so reduce the disturbance to users. This book is a valuable resource for those involved in research and development in this field. Part I discusses the context of transportation systems, highlighting the major issues and challenges, the importance of understanding the human factors that could affect maintenance operations, and the main goals in terms of safety standards. Part II focuses on process-oriented innovations in transportation systems; this section stresses the importance of including design parameters in the planning, offering a comparison between risk-based and condition-based maintenance and, lastly, showing applications of emergent technologies. Part III goes on to reflect on the technical-oriented innovations, discussing the importance of studying the physical phenomena that are behind transportation system failures and problems. It then introduces the general trend of collecting and analyzing big data, using real-world cases to evaluate the positive and negative aspects of adopting extensive smart sensors for gathering information on the health of the assets. The last part (IV) explores cultural and behavioural changes, and new knowledge management methods, proposing novel forms of maintenance and vocational training, and introduces the need for radical new visions in transportation for managing unexpected events. The continuous evolution of maintenance fields suggests that this compendium of "state-of-the-art" applications will not be the only one; the authors are planning a collection of cutting-edge examples of transportation systems that can assist researchers and practitioners as well as students in the process of understanding the complex and multidisciplinary environment of maintenance engineering applied to the transport sector.
The TransNav 2011 Symposium, held at the Gdynia Maritime University, Poland, in June 2011, brought together a wide range of participants from all over the world. The program offered a variety of contributions, allowing many aspects of navigational safety to be examined from different points of view. Topics presented and discussed at the Symposium were: navigation, safety at sea, sea transportation, education of navigators and simulator-based training, sea traffic engineering, ship's manoeuvrability, integrated systems, electronic chart systems, satellite, radio-navigation and anti-collision systems, and many others. This book is part of a series of six volumes, provides an overview of Transport Systems and Processes, and is addressed to scientists and professionals involved in research and development of navigation, safety of navigation, and sea transportation.
Martin Fowler's guide to reworking bad code into well-structured code. Refactoring improves the design of existing code, enhances software maintainability, and makes existing code easier to understand. An original signatory of the Agile Manifesto and a software development thought leader, Martin Fowler provides a catalog of refactorings that explains why you should refactor, how to recognize code that needs refactoring, and how to actually do it successfully, no matter what language you use. Refactoring principles: understand the process and general principles of refactoring. Code smells: recognize "bad smells" in code that signal opportunities to refactor. Application improvement: quickly apply useful refactorings to make a program easier to comprehend and change. Building tests: writing good tests increases a programmer's effectiveness. Moving features: an important part of refactoring is moving elements between contexts. Data structures: a collection of refactorings to organize data, which plays an important role in programs. Conditional logic: use refactorings to make conditional sections easier to understand. APIs: modules and their functions are the building blocks of our software, and APIs are the joints that we use to plug them together. Inheritance: it is both very useful and easy to misuse, and it's often hard to see the misuse until it's in the rear-view mirror; refactorings can fix the misuse. Examples are written in JavaScript, but you shouldn't find it difficult to adapt the refactorings to whatever language you are currently using, as they look mostly the same in different languages. "Whenever you read [Refactoring], it's time to read it again. And if you haven't read it yet, please do before writing another line of code." -David Heinemeier Hansson, Creator of Ruby on Rails, Founder & CTO at Basecamp "Any fool can write code that a computer can understand. Good programmers write code that humans can understand." -M. Fowler (1999)
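As a small, hedged illustration of one refactoring from the catalog, Extract Function, rendered here in C++ rather than the book's JavaScript, a computation buried inside a printing routine is given a name of its own; the example and its names are invented, not taken from the book.

    #include <iostream>
    #include <string>
    #include <vector>

    struct LineItem {
        std::string name;
        double price;
        int quantity;
    };

    // Before the refactoring, printInvoice computed the total inline, burying the intent.
    // After "Extract Function", the calculation has a name of its own.
    double calculateTotal(const std::vector<LineItem>& items) {
        double total = 0.0;
        for (const auto& item : items) {
            total += item.price * item.quantity;
        }
        return total;
    }

    void printInvoice(const std::string& customer, const std::vector<LineItem>& items) {
        std::cout << "Invoice for " << customer << "\n";
        for (const auto& item : items) {
            std::cout << "  " << item.name << " x" << item.quantity << "\n";
        }
        std::cout << "Total: " << calculateTotal(items) << "\n";  // intent is now explicit
    }

    int main() {
        printInvoice("Ada", {{"keyboard", 45.0, 2}, {"monitor", 180.0, 1}});
        return 0;
    }

The behavior is unchanged; the gain is that the invoice routine now states what it does rather than how.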
A comprehensive introduction to reliability and availability modeling, analysis, and design at the system, hardware, and software levels. Reliability of Computer Systems and Networks presents the fundamentals of reliability and availability analysis for various computer hardware, software, and networked systems. Reliability and availability as major objectives in system design are the focus. Various redundancy and fault-tolerant techniques, as well as error-correcting coding techniques, are treated. The author proposes a high-level design approach based on apportioning the reliability and availability goals to subsystems and provides various techniques for achieving these subsystem goals. The next step is an efficient, exact optimization approach based on upper and lower bounds to minimize the number of feasible candidates. The most readily applied methods for analysis are utilized, and design techniques are derived from basic principles. Analytical simplifications and approximations are developed to validate the results of computer models used for large-scale complex problems.
Reliability of Computer Systems and Networks offers in-depth and up-to-date coverage of reliability and availability for students, with a focus on important application areas, computer systems, and networks. Professionals in systems and reliability design, as well as in computer architecture, will find it a highly useful reference.
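As a brief illustration of the arithmetic such reliability texts build on (standard formulas, not excerpts from the book), steady-state availability follows from the mean time to failure and the mean time to repair, and simple parallel redundancy raises system reliability:

    \[ A = \frac{\mathrm{MTTF}}{\mathrm{MTTF} + \mathrm{MTTR}}, \qquad R_{\text{parallel}} = 1 - \prod_{i=1}^{n} \bigl(1 - R_i\bigr) \]

For example, two independent units each with reliability 0.9 operating in parallel give a system reliability of 1 - (0.1)^2 = 0.99.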
This book provides an essential update for experienced data processing professionals, transaction managers and database specialists who are seeking system solutions beyond the confines of traditional approaches. It provides practical advice on how to manage complex transactions and share distributed databases across client/server systems and the Internet. Based on extensive research in over 100 companies in the USA, Europe, Japan and the UK, topics covered include: * the challenge of global transaction requirements within an expanding business perspective * how to handle long transactions and their constituent elements * possible benefits from object-oriented solutions * the contribution of knowledge engineering in transaction management * the Internet, the World Wide Web and transaction handling * systems software and transaction-processing monitors * OSF/1 and the Encina transaction monitor * active data transfers and remote procedure calls * serialization in a transaction environment * transaction locks, two-phase commit and deadlocks * improving transaction-oriented database management * the successful development of an increasingly complex transaction environment.
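The two-phase commit mentioned in the list above can be sketched compactly; the following C++ outline (a simplified illustration under invented names, not code from any product the book covers) shows the coordinator's vote-then-decide structure.

    #include <iostream>
    #include <string>
    #include <vector>

    // Simplified participant: can it commit, plus hooks to apply the decision.
    struct Participant {
        std::string name;
        bool can_commit;                       // stands in for a real prepare() call
        void commit() { std::cout << name << ": commit\n"; }
        void abort()  { std::cout << name << ": abort\n"; }
    };

    // Phase 1: collect votes. Phase 2: broadcast the unanimous decision.
    bool twoPhaseCommit(std::vector<Participant>& participants) {
        bool all_yes = true;
        for (const auto& p : participants) {   // phase 1: prepare / voting
            if (!p.can_commit) { all_yes = false; break; }
        }
        for (auto& p : participants) {         // phase 2: commit or abort everywhere
            all_yes ? p.commit() : p.abort();
        }
        return all_yes;
    }

    int main() {
        std::vector<Participant> dbs = {{"orders", true}, {"inventory", true}};
        std::cout << (twoPhaseCommit(dbs) ? "transaction committed\n"
                                          : "transaction aborted\n");
        return 0;
    }

A production coordinator would also log its decision durably before phase 2 so that it can recover consistently after a crash.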
VHDL Coding Styles and Methodologies provides an in-depth study of the VHDL language rules, coding styles, and methodologies. This book clearly distinguishes good from poor coding methodologies using an easy-to-remember symbology notation along with a rationale for each guideline. The VHDL concepts, rules and styles are demonstrated using complete, compilable, and simulatable examples, which are also supplied on the accompanying disk. VHDL Coding Styles and Methodologies provides practical applications of VHDL and techniques that are current in the industry. It explains how to apply the VHDL guidelines using several complete examples. The 'learning by example' teaching approach, along with an in-depth presentation of the language rules application methodology, provides the necessary knowledge to create digital hardware designs and models that are readable, maintainable, predictable, and efficient. VHDL Coding Styles and Methodologies is intended for both college students and design engineers. It provides a practical approach to learning VHDL. Combining methodologies and coding styles along with VHDL rules leads the reader in the right direction from the beginning.
In recent years, a considerable amount of effort has been devoted, both in industry and academia, to the development, validation and verification of critical systems, i.e. those systems whose malfunctions or failures are critical both in terms of risk to human life and in terms of economic impact. Certifications of Critical Systems - The CECRIS Experience documents the main insights on Cost Effective Verification and Validation processes that were gained during work in the European Research Project CECRIS (Certification of Critical Systems). The objective of the research was to tackle the challenges of certification by focusing on those aspects that turn out to be most difficult and important for the current and future critical systems industry: the effective use of methodologies, processes and tools. Starting from both the scientific and the industrial state-of-the-art methodologies for system development, and from the impact of their usage on the verification, validation, and certification of critical systems, the project aimed at developing strategies and techniques, supported by automatic or semi-automatic tools and methods, for these activities, and at setting guidelines to support engineers during the planning of the verification and validation phases. Topics covered include: Safety Assessment, Reliability Analysis, Critical Systems and Applications, Functional Safety, Dependability Validation, Dependable Software Systems, Embedded Systems, System Certification.
Hybrid Intelligent Techniques for Pattern Analysis and Understanding outlines the latest research on the development and application of synergistic approaches to pattern analysis in real-world scenarios. An invaluable resource for lecturers, researchers, and graduate students in computer science and engineering, this book covers a diverse range of hybrid intelligent techniques, including image segmentation, character recognition, human behavioral analysis, hyperspectral data processing, and medical image analysis.
This book focuses on core functionalities for wireless real-time multi-hop networking with TDMA (time-division multiple access) and their integration into a flexible, versatile, fully operational, self-contained communication system. The use of wireless real-time communication technologies for the flexible networking of sensors, actuators, and controllers is a crucial building block for future production and control systems. WirelessHART and ISA 100.11a, two technologies that have been developed predominantly for industrial use, are currently available. However, a closer analysis of these approaches reveals certain deficits. Current research on wireless real-time communication systems shows potential to remove these limitations, resulting in flexible, versatile, and robust solutions that can be implemented on today's low-cost and resource-constrained hardware platforms. Unlike other books on wireless communication, this book presents protocols located at the MAC layer and above that build on the physical (PHY) layer of standard wireless communication technologies.
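To give a rough, hedged sense of what TDMA scheduling means at the MAC layer (a generic sketch, not the protocol developed in the book), the C++ snippet below decides whether a node may transmit in the current slot of a fixed superframe; the node IDs and slot assignment are invented.

    #include <array>
    #include <cstdint>
    #include <iostream>

    // Generic TDMA sketch: a superframe of fixed slots, each owned by one node.
    constexpr int kSlotsPerSuperframe = 8;

    // Static slot assignment: slot index -> owning node ID (invented example).
    constexpr std::array<int, kSlotsPerSuperframe> kSlotOwner = {1, 2, 3, 4, 1, 2, 3, 4};

    // A node may transmit only in slots assigned to it, which bounds its latency.
    bool mayTransmit(int node_id, uint64_t current_slot) {
        int slot_in_frame = static_cast<int>(current_slot % kSlotsPerSuperframe);
        return kSlotOwner[slot_in_frame] == node_id;
    }

    int main() {
        // Walk through one superframe from node 3's perspective.
        for (uint64_t slot = 0; slot < kSlotsPerSuperframe; ++slot) {
            std::cout << "slot " << slot << ": node 3 "
                      << (mayTransmit(3, slot) ? "transmits" : "stays silent") << "\n";
        }
        return 0;
    }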
Constructing the Infrastructure for the Knowledge Economy: Methods and Tools, Theory and Practice is the proceedings of the 12th International Conference on Information Systems Development, held in Melbourne, Australia, August 29-31, 2003. The purpose of these proceedings is to provide a forum for research and practice addressing current issues associated with Information Systems Development (ISD). ISD is undergoing dramatic transformation; every day, new technologies, applications, and methods raise the standards for the quality of systems expected by organizations as well as end users. All are becoming more dependent on these systems' reliability, scalability, and performance. Thus, it is crucial to exchange ideas and experiences, and to stimulate exploration of new solutions. This volume provides a forum for just that, addressing both technical and organizational issues.
Upon its initial publication, the Handbook of Circuits and Filters broke new ground. It quickly became the resource for comprehensive coverage of issues and practical information that can be put to immediate use. Not content to rest on his laurels, editor Wai-kai Chen divided the second edition into volumes, making the information easily accessible and digestible. In the third edition, these volumes have been revised, updated, and expanded so that they continue to provide solid coverage of standard practices and enlightened perspectives on new and emerging techniques. Feedback, Nonlinear, and Distributed Circuits draws together international contributors who discuss feedback amplifier theory and then move on to explore feedback amplifier configurations. They develop Bode's feedback theory as an example of general feedback theory. The coverage then moves on to the importance of complementing numerical analysis with qualitative analysis to get a global picture of a circuit's performance. After reviewing a wide range of approximation techniques and circuit design styles for discrete and monolithic circuits, the book presents a comprehensive description of the use of piecewise-linear methods in the modeling, analysis, and structural properties of nonlinear circuits, highlighting their advantages. It describes circuit modeling in the frequency domain of uniform multiconductor transmission lines (MTLs) based on the Telegrapher's equations and covers frequency- and time-domain experimental characterization techniques for uniform and nonuniform multiconductor structures. This volume will undoubtedly take its place as the engineer's first choice in looking for solutions to problems encountered in the analysis and behavior prediction of circuits and filters.
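For reference, the Telegrapher's equations mentioned above (quoted here in their standard textbook form, not from this volume) relate voltage and current along a uniform line with per-unit-length resistance R, inductance L, conductance G, and capacitance C:

    \[ \frac{\partial v(x,t)}{\partial x} = -R\, i(x,t) - L\, \frac{\partial i(x,t)}{\partial t}, \qquad \frac{\partial i(x,t)}{\partial x} = -G\, v(x,t) - C\, \frac{\partial v(x,t)}{\partial t} \]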
Coarse-grained reconfigurable architecture (CGRA) has emerged as a solution for flexible, application-specific optimization of embedded systems. Helping you understand the issues involved in designing and constructing embedded systems, Design of Low-Power Coarse-Grained Reconfigurable Architectures offers new frameworks for optimizing the architecture of components in embedded systems in order to decrease area and save power. Real application benchmarks and gate-level simulations substantiate these frameworks. The first half of the book explains how to reduce power in the configuration cache. The authors present a low-power reconfiguration technique based on reusable context pipelining that merges the concept of context reuse into context pipelining. They also propose dynamic context compression, which keeps only the required bits of the context words enabled while disabling the redundant bits. In addition, they discuss dynamic context management for reducing power consumption in the configuration cache by controlling read/write operations on the redundant context words. Focusing on the design of a cost-effective processing element array to reduce area and power consumption, the second half of the text presents a cost-effective array fabric that uniquely rearranges processing elements and their interconnection designs. The book also describes hierarchical reconfigurable computing arrays consisting of two reconfigurable computing blocks with two types of communication structure. The two computing blocks share critical resources, offering an efficient communication interface between them and reducing the overall area. The final chapter takes an integrated approach to optimization that draws on the design schemes presented in earlier chapters. Using a case study, the authors demonstrate the synergistic effect of combining multiple design schemes.
In spite of their importance and potential societal impact, there is currently no comprehensive source of information about vehicular ad hoc networks (VANETs). Cohesively integrating the state of the art in this emerging field, Vehicular Networks: From Theory to Practice elucidates many issues involved in vehicular networking, including traffic engineering, human factors studies, and novel computer science research. Divided into six broad sections, the book begins with an overview of traffic engineering issues, such as traffic monitoring and traffic flow modeling. It then introduces governmental and industrial efforts in the United States and Europe to set standards and perform field tests on the feasibility of vehicular networks. After highlighting innovative applications enabled by vehicular networks, the book discusses several networking-related issues, including routing and localization. The following section focuses on simulation, which is currently the primary method for evaluating vehicular networking systems. The final part explores the extent and impact of driver distraction with in-vehicle displays. Encompassing both introductory and advanced concepts, this guide covers the various areas that impact the design of applications for vehicular networks. It details key research challenges, offers guidance on developing future standards, and supplies valuable information on existing experimental studies.
This book presents state-of-the-art work on searchable storage in cloud computing. It introduces new schemes for exploring and exploiting searchable storage via cost-efficient semantic hashing computation. Specifically, the contents of this book include basic hashing structures (Bloom filters, locality-sensitive hashing, cuckoo hashing), semantic storage systems, and searchable namespaces, which support multiple applications such as cloud backups, exact and approximate queries, and image analytics. These searchable techniques are attractive for their ease of use and simplicity. More importantly, all of the structures and techniques mentioned have been implemented to support real-world applications, and some offer open-source code for public use. Readers with a basic knowledge of data structures and computer systems will gain solid background, new insights, and implementation experience.
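As a small, hedged illustration of one of the basic hashing structures listed above (a generic Bloom filter sketch in C++, not the book's implementation), the snippet below inserts keys and answers membership queries with no false negatives and a false-positive rate governed by the bit-array size and the number of probes; the sizes and the double-hashing scheme are arbitrary choices.

    #include <bitset>
    #include <functional>
    #include <iostream>
    #include <string>

    // Minimal Bloom filter sketch: k hash probes into a fixed-size bit array.
    // No false negatives; false positives occur with a rate set by m and k.
    class BloomFilter {
    public:
        void insert(const std::string& key) {
            for (size_t i = 0; i < kNumHashes; ++i) bits_.set(probe(key, i));
        }
        bool possiblyContains(const std::string& key) const {
            for (size_t i = 0; i < kNumHashes; ++i)
                if (!bits_.test(probe(key, i))) return false;  // definitely absent
            return true;                                       // probably present
        }
    private:
        static constexpr size_t kNumBits = 1 << 16;   // m: bit-array size
        static constexpr size_t kNumHashes = 4;       // k: probes per key
        std::bitset<kNumBits> bits_;

        // Derive k probe positions from two std::hash values (double hashing).
        static size_t probe(const std::string& key, size_t i) {
            size_t h1 = std::hash<std::string>{}(key);
            size_t h2 = std::hash<std::string>{}(key + "#salt");
            return (h1 + i * h2) % kNumBits;
        }
    };

    int main() {
        BloomFilter filter;
        filter.insert("backup-2021-01.tar");
        std::cout << std::boolalpha
                  << filter.possiblyContains("backup-2021-01.tar") << "\n"   // true
                  << filter.possiblyContains("backup-2021-02.tar") << "\n";  // probably false
        return 0;
    }

Answering "possibly present" cheaply before touching remote storage is what makes such filters attractive in workloads like cloud backup and deduplication.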