With the fast development of networking and software technologies, information processing infrastructure and applications have been growing at an impressive rate in both size and complexity, to such a degree that the design and development of high performance and scalable data processing systems and networks have become an ever-challenging issue. As a result, the use of performance modeling and measurement techniques as a critical step in design and development has become a common practice. Research and development on the methodology and tools of performance modeling and performance engineering have gained further importance in order to improve the performance and scalability of these systems. Since the seminal work of A. K. Erlang almost a century ago on the modeling of telephone traffic, performance modeling and measurement have grown into a discipline and have been evolving both in their methodologies and in the areas in which they are applied. It is noteworthy that various mathematical techniques were brought into this field, including in particular probability theory, stochastic processes, statistics, complex analysis, stochastic calculus, stochastic comparison, optimization, control theory, machine learning and information theory. The application areas extended from telephone networks to Internet and Web applications, from computer systems to computer software, from manufacturing systems to supply chains, from call centers to workforce management.
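As a concrete reminder of the field's origins mentioned above (an illustrative sketch of Erlang's classical loss model, not an excerpt from the volume), the Erlang B formula gives the probability that a call offered to a group of m circuits carrying E Erlangs of traffic finds all circuits busy:

```latex
% Erlang B blocking probability for m circuits offered a load of E Erlangs
% (illustrative sketch; not taken from the book).
B(E, m) = \frac{E^{m}/m!}{\sum_{k=0}^{m} E^{k}/k!},
\qquad
B(E, 0) = 1, \qquad
B(E, m) = \frac{E \, B(E, m-1)}{m + E \, B(E, m-1)} .
```

The recursive form on the right is the one usually evaluated in practice, since the factorials in the direct expression overflow quickly.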
Co-Design is the set of emerging techniques which allows for the simultaneous design of Hardware and Software. In many cases where the application is very demanding in terms of various performances (time, surface, power consumption), trade-offs between dedicated hardware and dedicated software are becoming increasingly difficult to decide upon in the early stages of a design. Verification techniques, such as simulation or proof techniques, that have proven necessary in hardware design must be dramatically adapted to the simultaneous verification of Software and Hardware. Describing the latest tools available for both Co-Design and Co-Verification of systems, Hardware/Software Co-Design and Co-Verification offers a complete look at this evolving set of procedures for CAD environments. The book considers all trade-offs that have to be made when co-designing a system. Several models are presented for determining the optimum solution to any co-design problem, including partitioning, architecture synthesis and code generation. When deciding on trade-offs, one of the main factors to be considered is the flow of communication, especially to and from the outside world. This involves the modeling of communication protocols. An approach to the synthesis of interface circuits in the context of co-design is presented. Other chapters present a co-design-oriented flexible component database and retrieval methods; a case study of an Ethernet bridge designed using LOTOS and co-design methodologies; and finally a programmable user interface based on monitors. Hardware/Software Co-Design and Co-Verification will help designers and researchers to understand these latest techniques in system design and as such will be of interest to all involved in embedded system design.
For more and more systems, software has moved from a peripheral to a central role, replacing mechanical parts and hardware and giving the product a competitive edge. Consequences of this trend are an increase in: the size of software systems, the variability in software artifacts, and the importance of software in achieving the system-level properties. Software architecture provides the necessary abstractions for managing the resulting complexity. We here introduce the Third Working IEEE/IFIP Conference on Software Architecture, WICSA3. That it is already the third such conference is in itself a clear indication that software architecture continues to be an important topic in industrial software development and in software engineering research. However, becoming an established field does not mean that software architecture provides less opportunity for innovation and new directions. On the contrary, one can identify a number of interesting trends within software architecture research. The first trend is that the role of the software architecture in all phases of software development is more explicitly recognized. Whereas initially software architecture was primarily associated with the architecture design phase, we now see that the software architecture is treated explicitly during development, product derivation in software product lines, at run-time, and during system evolution. Software architecture as an artifact has been decoupled from a particular lifecycle phase.
Aimed at improving a programmer's ability to alter code to fit changing requirements and to detect and correct errors, this book argues for a new way of thinking about maintaining software. It proposes the use of a set of human factors principles that govern the programmer-software-event world interactions and form the core of the maintenance process. The book is thus highly valuable for systems analysts and programmers, managers seeking to reduce costs, researchers looking at solutions to the maintenance problem, and students learning to write clear, unambiguous programs.
Hardware correctness is becoming ever more important in the design of computer systems. The authors introduce a powerful new approach to the design and analysis of modern computer architectures, based on mathematically well-founded formal methods which allow for rigorous correctness proofs, accurate determination of hardware costs, and performance evaluation. This book develops, at the gate level, the complete design of a pipelined RISC processor with a fully IEEE-compliant floating-point unit. In contrast to other design approaches, the design presented here is modular, clean and complete.
As integrated circuit (IC) feature sizes scaled below a quarter of a micron, thereby defining the deep submicron (DSM) era, there began a gradual shift in the impact on performance due to the metal interconnections among the active circuit components. Once viewed as merely parasitics in terms of their relevance to the overall circuit behavior, the interconnect can now have a dominant impact on IC area and performance. Beginning in the late 1980s there was significant research toward better modeling and characterization of the resistance, capacitance and ultimately the inductance of on-chip interconnect. IC Interconnect Analysis covers the state-of-the-art methods for modeling and analyzing IC interconnect based on the past fifteen years of research. This is done at a level suitable for most practitioners who work in the semiconductor and electronic design automation fields, but also includes significant depth for the research professionals who will ultimately extend this work into other areas and applications. IC Interconnect Analysis begins with in-depth coverage of delay metrics, including the ubiquitous Elmore delay and its many variations. This is followed by an outline of moment matching methods, calculating moments efficiently, and Krylov subspace methods for model order reduction. The final two chapters describe how to interface these reduced-order models to circuit simulators and gate-level timing analyzers, respectively. IC Interconnect Analysis is written for CAD tool developers, IC designers and graduate students.
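Since the blurb singles out the Elmore delay, a brief illustration may help: the Elmore delay to a node of an RC tree is the sum, over every capacitor in the tree, of that capacitance multiplied by the resistance shared between the path from the source to the capacitor and the path from the source to the node of interest. A minimal sketch of that computation (illustrative only; the function and the example values are assumptions, not material from the book):

```python
# Illustrative Elmore-delay computation for an RC tree (not code from the book).
# parent[i]      -- parent of node i (the root/source has parent None)
# resistance[i]  -- resistance of the branch connecting node i to its parent
# capacitance[i] -- capacitance to ground at node i

def elmore_delay(parent, resistance, capacitance, target):
    """Return the Elmore delay from the root to `target`."""
    def path_to_root(node):
        nodes = set()
        while node is not None:
            nodes.add(node)
            node = parent[node]
        return nodes

    target_path = path_to_root(target)
    delay = 0.0
    for k, c_k in capacitance.items():
        # Resistance common to the root->target and root->k paths.
        shared_r = sum(resistance[n]
                       for n in path_to_root(k) & target_path
                       if parent[n] is not None)
        delay += shared_r * c_k
    return delay

# Three-node RC ladder: source 0 -(R1)- 1 -(R2)- 2, with assumed example values.
parent      = {0: None, 1: 0, 2: 1}
resistance  = {0: 0.0, 1: 100.0, 2: 100.0}   # ohms (assumed)
capacitance = {0: 0.0, 1: 1e-12, 2: 1e-12}   # farads (assumed)
print(elmore_delay(parent, resistance, capacitance, 2))  # R1*(C1+C2) + R2*C2 = 3e-10 s
```

For this simple ladder the result reduces to the familiar hand calculation R1·(C1 + C2) + R2·C2; the moment matching and Krylov subspace methods covered later in the book generalize exactly this kind of first-moment estimate.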
The general trend of modern network devices towards greater intelligence and programmability is accelerating the development of systems that are increasingly autonomous and to a certain degree self-managing. Examples range from router scripting environments to fully programmable server blades. This has opened up a new field of computer science research, reflected in this new volume. This selection of contributions to the first ever international workshop on network-embedded management applications (NEMA) features six papers selected from submissions to the workshop, held in October 2010 at Niagara Falls, Canada. They represent a wide cross-section of the current work in this vital field of inquiry. Covering a diversity of perspectives, the volume's dual structure first examines the 'enablers' for NEMAs: the platforms, frameworks, and development environments which facilitate the evolution of network-embedded management and applications. The second section of the book covers network-embedded applications that might both empower and benefit from such enabling platforms. These papers cover topics ranging from deciding where best to place management control functions inside a network to a discussion of how multi-core hardware processors can be leveraged for traffic filtering applications. The section concludes with an analysis of a delay-tolerant network application in the context of the 'One Laptop per Child' program. There is a growing recognition that it is vital to make network operation and administration as easy as possible to contain operational expenses and cope with ever shorter control cycles. This volume provides researchers in the field with the very latest in current thinking.
Object-oriented techniques and languages have been proven to significantly increase engineering efficiency in software development. Many benefits are expected from their introduction into electronic modeling. Among them are better support for model reusability and flexibility, more efficient system modeling, and more possibilities in design space exploration and prototyping. Object-Oriented Modeling explores the latest techniques in object-oriented methods, formalisms and hardware description language extensions. The seven chapters comprising this book provide an overview of the latest object-oriented techniques for designing systems and hardware. Many examples are given in C++, VHDL and real-time programming languages. Object-Oriented Modeling further describes the use of object-oriented techniques in applications such as embedded systems, telecommunications and real-time systems, using the very latest techniques in object-oriented modeling. It is an essential guide for researchers, practitioners and students involved in software, hardware and system design.
This monograph details the proceedings of the 15th International Conference on Information Systems Development. ISD is progressing rapidly, continually creating new challenges for the professionals involved. New concepts, approaches and techniques of systems development emerge constantly in this field. Progress in ISD comes from research as well as from practice. The aim of the Conference was to provide an international forum for the exchange of ideas and experiences between academia and industry, and to stimulate the exploration of new solutions.
Hierarchical design methods were originally introduced for the design of digital ICs, and they appeared to provide for significant advances in design productivity, Time-to-Market, and first-time-right design. These concepts have gained increasing importance in the semiconductor industry in recent years. In the course of time, the supportive quality of hierarchical methods and their advantages were confirmed. System Level Hardware/Software Co-design: An Industrial Approach demonstrates the applicability of hierarchical methods to hardware/software co-design, and to mixed analogue/digital design following a similar approach. Hierarchical design methods provide for high levels of design support, both in a qualitative and a quantitative sense. In the qualitative sense, the presented methods support all phases in the product life cycle of electronic products, ranging from requirements analysis to application support. Hierarchical methods furthermore allow for efficient digital hardware design, hardware/software co-design, and mixed analogue/digital design, on the basis of commercially available formalisms and design tools. In the quantitative sense, hierarchical methods have prompted a substantial increase in design productivity. System Level Hardware/Software Co-design: An Industrial Approach reports on a six-year study during which the number of square millimeters of normalized complexity an individual designer contributed every week rose by more than a factor of five. Hierarchical methods therefore enabled designers to keep track of the ever increasing design complexity, while effectively reducing the number of design iterations in the form of redesigns. System Level Hardware/Software Co-design: An Industrial Approach is the first book to provide a comprehensive, coherent system design methodology that has been proven to increase productivity in industrial practice. The book will be of interest to all managers, designers and researchers working in the semiconductor industry.
From the reviews: "This book crystallizes what may become a defining moment in the electronics industry - the shift to platform-based design. It provides the first comprehensive guidebook for those who will build, and use, the integration platforms that may soon drive the system-on-chip revolution." Electronic Engineering Times
This book brings together papers presented at the 2016 International Conference on Communications, Signal Processing, and Systems, which provides a venue to disseminate the latest developments and to discuss the interactions and links between these multidisciplinary fields. Spanning topics ranging from communications to signal processing and systems, this book is aimed at undergraduate and graduate students in electrical engineering, computer science and mathematics, researchers and engineers from academia and industry, as well as government employees (for example at NSF, DOD and DOE).
Software development is a complex problem-solving activity with a high level of uncertainty. There are many technical challenges concerning scheduling, cost estimation, reliability, performance, etc., which are further aggravated by weaknesses such as changing requirements, team dynamics, and high staff turnover. Thus the management of knowledge and experience is a key means of systematic software development and process improvement. "Managing Software Engineering Knowledge" illustrates several theoretical examples of this vision and solutions applied to industrial practice. It is structured in four parts addressing the motives for knowledge management, the concepts and models used in knowledge management for software engineering, their application to software engineering, and practical guidelines for managing software engineering knowledge. This book provides a comprehensive overview of the state of the art and best practice in knowledge management applied to software engineering. While researchers and graduate students will benefit from the interdisciplinary approach leading to basic frameworks and methodologies, professional software developers and project managers will also profit from industrial experience reports and practical guidelines.
The advent of the world-wide web and web-based applications has dramatically changed the nature of computer applications. Computer system design, in the light of these changes, involves understanding these modern workloads, identifying bottlenecks during their execution, and appropriately tailoring microprocessors, memory systems, and the overall system to minimize bottlenecks. This book contains ten chapters dealing with several contemporary programming paradigms, including Java, web server and database workloads. The first two chapters concentrate on Java. While Barisone et al.'s characterization in Chapter 1 deals with instruction set usage of Java applications, Kim et al.'s analysis in Chapter 2 focuses on the memory referencing behavior of Java workloads. Several applications, including the SPECjvm98 suite, are studied using interpreters and Just-In-Time (JIT) compilers. Barisone et al.'s work includes an analytical model to compute the utilization of various functional units. Kim et al. present information on locality, live ranges of objects, object lifetime distribution, etc. Studying database workloads has been a challenge to research groups, due to the difficulty in accessing standard benchmarks. Configuring hardware and software for database benchmarks such as those from the Transaction Processing Performance Council (TPC) requires extensive effort. In Chapter 3, Keeton and Patterson present a simplified workload (microbenchmark) that approximates the characteristics of complex standardized benchmarks.
Hugo de Man, Professor, Katholieke Universiteit Leuven, Senior Research Fellow, IMEC. The steady evolution of hardware, software and communications technology is rapidly transforming the PC- and dot.com world into the world of Ambient Intelligence (AmI). This next wave of information technology is fundamentally different in that it makes distributed wired and wireless computing and communication disappear to the background and puts users in the foreground. AmI adapts to people instead of the other way around. It will augment our consciousness, monitor our health and security, guide us through traffic, etc. In short, its ultimate goal is to improve the quality of our life by a quiet, reliable and secure interaction with our social and material environment. What makes AmI engineering so fascinating is that its design starts from studying person-to-world interactions that need to be implemented as an intelligent and autonomous interplay of virtually all necessary networked electronic intelligence on the globe. This is a new and exciting dimension for most electrical and software engineers and may attract more creative talent to engineering than pure technology does. Development of the leading technology for AmI will only succeed if the engineering research community is prepared to join forces in order to make Mark Weiser's dream of 1991 come true. This will not be business as usual by just doubling transistor count or clock speed in a microprocessor or increasing the bandwidth of communication.
Today's distributed systems are characterized by interactions, often complex, between many different hardware and software components cooperating and exchanging information. To simplify development of interactive systems and facilitate communication and documentation, experts of varying disciplines employ descriptions, or specifications, of a given system's behavior and/or structure. Specification and Development of Interactive Systems offers a unique approach to program and software development suitable for large distributed systems, with an emphasis on modular system development and systems engineering. The authors build a basic method, called FOCUS, that enables interactive systems to be described by characterizing their histories of message interaction. The method covers functional requirements, timing, structure, and implementation issues of systems. In addition, the book describes how to connect the models and techniques to tables and diagram-based methods popular in practical systems engineering. Topics and features:
* Specification of interface behavior and modular top-down system development
* Specification of time and the modeling of hardware/software systems
* Interface refinement and the modeling of development steps leading from one level of abstraction to the next
* State transition diagrams and tables, and the usage of common description techniques such as those found in UML
This book provides a mathematical and logical foundation for the specification and development of interactive systems based on a model that describes systems in terms of their input/output behavior. The reader gains a comprehensive understanding of all fundamental models, techniques, and methods for interactive system design. The book is an essential resource for all researchers and professionals in computer science, software systems engineering and computer engineering.
Principles of Verilog PLI is a 'how-to' text on the Verilog Programming Language Interface. The primary focus of the book is on how to use the PLI for problem solving. Both PLI 1.0 and PLI 2.0 are covered. Particular emphasis has been put on adopting a generic step-by-step approach to creating fully functional PLI code. Numerous examples were carefully selected so that a variety of problems can be solved through their use. A separate chapter on the Bus Functional Model (BFM), one of the most widely used commercial applications of the PLI, is included. Principles of Verilog PLI is written for the professional engineer who uses Verilog for ASIC design and verification. Principles of Verilog PLI will also be of interest to students who are learning Verilog.
The mathematical theory of networks and systems has a long and rich history, with antecedents in circuit synthesis and the analysis, design and synthesis of actuators, sensors and active elements in both electrical and mechanical systems. Fundamental paradigms such as the state-space realization of an input/output system, or the use of feedback to prescribe the behavior of a closed-loop system, have proved to be as resilient to change as were the practitioners who used them. This volume celebrates the resiliency to change of the fundamental concepts underlying the mathematical theory of networks and systems. The articles presented here are among those presented as plenary addresses, invited addresses and minisymposia at the 12th International Symposium on the Mathematical Theory of Networks and Systems, held in St. Louis, Missouri from June 24-28, 1996. Incorporating models and methods drawn from biology, computing, materials science and mathematics, these articles have been written by leading researchers who are on the vanguard of the development of systems, control and estimation for the next century, as evidenced by the application of new methodologies in distributed parameter systems, linear and nonlinear systems and stochastic systems for solving problems in areas such as aircraft design, circuit simulation, imaging, speech synthesis and visionics.
Embedded System Design: Topics, Techniques and Trends presents the technical program of the International Embedded Systems Symposium (IESS) 2007, held in Irvine, California. IESS is a unique forum to present novel ideas, exchange timely research results, and discuss the state of the art and future trends in the field of embedded systems. Contributors and participants from both industry and academia take active part in this symposium. Topics covered by the chapters in this book include design methodology, specification and modelling, embedded software and hardware synthesis, networks-on-chip, distributed and networked systems, and system verification and validation. Particular emphasis is placed on automotive and medical applications. A set of actual case studies and special aspects in embedded system design are included as well.
This book offers novel research that uses analytical approaches to explore nonlinear features exhibited by various dynamic processes. Relevant to disciplines across engineering and physics, the asymptotic method combined with the multiple scale method is shown to be an efficient and intuitive way to approach mechanics. Beginning with new material on the development of cutting-edge asymptotic methods and multiple scale methods, the book introduces this method in the time domain and provides examples of vibrations of systems. Clearly written throughout, it uses innovative graphics to exemplify complex concepts such as nonlinear stationary and nonstationary processes, various resonances, and jump and pull-in phenomena. It also demonstrates how problems can be simplified through mathematical modelling, employing limiting phase trajectories to quantify nonlinear phenomena. Particularly relevant to structural mechanics, in rods, cables, beams, plates and shells, as well as mechanical objects commonly found in everyday devices such as mobile phones and cameras, the book shows how each system is modelled, and how it behaves under various conditions. It will be of interest to engineers and professionals in mechanical engineering and structural engineering, alongside those interested in vibrations and dynamics. It will also be useful to those studying engineering maths and physics.
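To make the multiple scale idea concrete, here is a standard textbook illustration (not an excerpt from this book): for the weakly nonlinear Duffing oscillator, introducing a fast time T0 = t and a slow time T1 = εt and eliminating secular terms yields an amplitude-dependent frequency.

```latex
% Multiple-scales sketch for the undamped Duffing oscillator (standard textbook result,
% shown for illustration only).
\ddot{x} + \omega_0^{2} x + \varepsilon x^{3} = 0, \qquad 0 < \varepsilon \ll 1,
\qquad
x(t) \approx a \cos(\omega t + \varphi),
\qquad
\omega \approx \omega_0 + \frac{3 \varepsilon a^{2}}{8 \omega_0}.
```

The hardening of the frequency with amplitude is precisely the kind of nonlinear stationary behavior, and the origin of jump phenomena under forcing, that the book's graphical treatment of resonances addresses.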
The Mobile Ad Hoc Network (MANET) has emerged as the next frontier for wireless communications networking in both the military and commercial arenas. "Handbook of Mobile Ad Hoc Networks for Mobility Models" introduces 40 different major mobility models, along with numerous associated mobility models, to be used in a variety of MANET networking environments involving ground, air, space, and/or underwater mobile vehicles and/or handheld devices. These vehicles include cars, armored vehicles, ships, undersea vehicles, manned and unmanned airborne vehicles, spacecraft and more. This handbook also describes how each mobility pattern affects MANET performance from the physical to the application layer, including throughput capacity, delay, jitter, packet loss and packet delivery ratio, longevity of route, route overhead, reliability, and survivability. Case studies, examples, and exercises are provided throughout the book. "Handbook of Mobile Ad Hoc Networks for Mobility Models" is for advanced-level students and researchers concentrating on electrical engineering and computer science within wireless technology. Industry professionals working in the areas of mobile ad hoc networks, communications engineering, military establishments engaged in communications engineering, equipment manufacturers who are designing radios, mobile wireless routers, wireless local area networks, and mobile ad hoc network equipment will find this book useful as well.
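As a small illustration of what one such mobility model looks like in practice, the sketch below implements the widely used random waypoint model; the code and its parameter values are our own assumptions for illustration and are not taken from the handbook.

```python
import random

# Minimal random-waypoint mobility sketch (illustrative; not code from the handbook).
# A node repeatedly picks a random destination inside a square area, travels toward it
# at a randomly chosen speed, then pauses before selecting the next waypoint.

def random_waypoint(n_steps, area=1000.0, v_min=1.0, v_max=20.0, pause_steps=5, dt=1.0):
    x, y = random.uniform(0, area), random.uniform(0, area)
    trace = [(x, y)]
    while len(trace) < n_steps:
        dest_x, dest_y = random.uniform(0, area), random.uniform(0, area)
        speed = random.uniform(v_min, v_max)
        # Travel leg: move toward the waypoint one time step at a time.
        while len(trace) < n_steps:
            dx, dy = dest_x - x, dest_y - y
            dist = (dx * dx + dy * dy) ** 0.5
            if dist <= speed * dt:           # waypoint reached this step
                x, y = dest_x, dest_y
                trace.append((x, y))
                break
            x += dx / dist * speed * dt
            y += dy / dist * speed * dt
            trace.append((x, y))
        # Pause leg: remain stationary for a few steps.
        for _ in range(pause_steps):
            if len(trace) >= n_steps:
                break
            trace.append((x, y))
    return trace

print(random_waypoint(10)[:3])   # first three positions of a 10-step trace
```

Feeding traces like this into a network simulator is the usual way the performance effects the handbook catalogues, such as delay, packet delivery ratio and route longevity, are measured.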
A Guide to VHDL, Second Edition is intended for the working engineer who needs to develop, document, simulate, and synthesize a design using the VHDL language. It is for system and chip designers who are working with VHDL CAD tools, and who have some experience programming in Fortran, Pascal, or C and have used a logic simulator. A Guide to VHDL, Second Edition includes a number of paper exercises and computer lab experiments. If a compiler/simulator is available to the reader, then the lab exercises included in the chapters can be run to reinforce the learning experience. For practical purposes, this book keeps simulator-specific text to a minimum, but does use the Synopsys VHDL Simulator command language in a few cases. A Guide to VHDL, Second Edition is designed as a primer and its contents are appropriate for an introductory course in VHDL. The VHDL language was updated in 1992 with some minor improvements. In most cases, the language is upward compatible. Although this book is based primarily on the VHDL 1987 standard, this new second edition indicates the significant changes in the 1992 language to assist the designer in writing upwardly compatible code.
This book introduces condition-based maintenance (CBM)/data-driven prognostics and health management (PHM) in detail, first explaining the PHM design approach from a systems engineering perspective, then summarizing and elaborating on the data-driven methodology for feature construction, as well as feature-based fault diagnosis and prognosis. The book includes a wealth of illustrations and tables to help explain the algorithms, as well as practical examples showing how to use this tool to solve situations for which analytic solutions are poorly suited. It equips readers to apply the concepts discussed in order to analyze and solve a variety of problems in PHM system design, feature construction, fault diagnosis and prognosis.
In any software design project, the analysis stage, documenting and designing technical requirements for the needs of users, is vital to the success of the project. This book provides a thorough introduction and survey of all aspects of analysis. This new edition provides new features including: additional chapters on the System Development Life Cycle and Data Element Naming Conventions and Standards; more coverage of converting logical models to physical models, generating DDL and testing database functionality; an expanded database section with concepts such as denormalization, security and change control; developments in new design and technologies, particularly in the area of web analysis and design; a revised Web/Commerce chapter, which addresses component middleware for complex systems design; and new case studies. This book is a valuable resource and guide for all information systems students, practitioners and professionals who need an in-depth understanding of the principles of the analysis and design process.
This book describes how domain knowledge can be used in the design of interactive systems. It includes discussion of the theories and models of domain, generic domain architectures and construction of system components for specific domains. It draws on research experience from the Information Systems, Software Engineering and Human Computer Interaction communities.