As systems being developed by industry and government grow larger
and more complex, the need for superior specification and
verification approaches and tools becomes increasingly vital. The
developer and customer must have complete confidence that the
design produced is correct, and that it meets formal development and
verification standards. In this text, UML expert author Dr. Doron
Drusinsky compiles all the latest information on the application of
UML (Unified Modeling Language) statecharts, temporal logic,
automata, and other advanced tools for run-time monitoring and
verification. This is the first book that deals specifically with
UML verification techniques. This important information is
introduced within the context of real-life examples and solutions,
particularly focusing on national defense applications. A practical
text, as opposed to a high-level theoretical one, it emphasizes
getting the system developer up-to-speed on using the tools
necessary for daily practice.
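As an illustration of the kind of run-time monitoring discussed here (a minimal sketch in Java, not taken from the book; the property, class, and event names are hypothetical), a temporal assertion such as "every request must be answered by a grant before the next request" can be compiled into a small monitor automaton and fed the system's event stream as the system executes:

```java
// Minimal runtime-monitor sketch (illustrative only, not from the book):
// checks the property "after REQUEST, a GRANT must occur before the next REQUEST".
public class RequestGrantMonitor {
    private enum State { IDLE, WAITING_FOR_GRANT, VIOLATED }

    private State state = State.IDLE;

    /** Feed one observed event into the monitor. */
    public void onEvent(String event) {
        if (state == State.VIOLATED) {
            return; // property already failed; stay in the sink state
        }
        switch (event) {
            case "REQUEST":
                // a second REQUEST while still waiting violates the property
                state = (state == State.WAITING_FOR_GRANT)
                        ? State.VIOLATED : State.WAITING_FOR_GRANT;
                break;
            case "GRANT":
                state = State.IDLE;
                break;
            default:
                break; // other events are irrelevant to this property
        }
    }

    public boolean isViolated() {
        return state == State.VIOLATED;
    }

    public static void main(String[] args) {
        RequestGrantMonitor monitor = new RequestGrantMonitor();
        for (String e : new String[] { "REQUEST", "LOG", "REQUEST" }) {
            monitor.onEvent(e);
        }
        System.out.println("violated = " + monitor.isViolated()); // prints true
    }
}
```

In practice, tools of the kind covered in the book generate such monitors automatically from statechart or temporal-logic assertions rather than by hand.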
Increasing system complexity has created a pressing need for better design tools and associated methodologies and languages for meeting stringent time-to-market and cost constraints. Platform-centric and platform-based system-on-chip (SoC) design methodologies, based on reuse of software and hardware functionality, have also gained increasing exposure and usage within the Electronic System-Level (ESL) design communities. The book proposes a new methodology for realizing platform-centric design of complex systems, and presents a detailed plan for its implementation. The proposed plan allows component vendors, system integrators and product developers to collaborate effectively and efficiently to create complex products within budget and schedule constraints. This book focuses on the use of platforms in the design of products rather than on the design of platforms themselves. Platform-centric design is not for everyone, as some may feel that it does not allow them to differentiate their offering from competitors to a significant degree. However, its proponents may claim that the time-to-market and cost advantages of platform-centric design more than compensate for any drawbacks.
This book is open access under a CC BY NC ND license. It addresses the most recent developments in cloud computing such as HPC in the Cloud, heterogeneous clouds, self-organisation and self-management, and discusses the business implications of cloud computing adoption. Establishing the need for a new architecture for cloud computing, it discusses a novel cloud management and delivery architecture based on the principles of self-organisation and self-management. This focus shifts the deployment and optimisation effort from the consumer to the software stack running on the cloud infrastructure. It also outlines validation challenges and introduces a novel generalised extensible simulation framework to illustrate the effectiveness, performance and scalability of self-organising and self-managing delivery models on hyperscale cloud infrastructures. It concludes with a number of potential use cases for self-organising, self-managing clouds and their impact on businesses.
Getting organizations going is one thing. Stopping them is another. This book examines how and why organizations become trapped in disastrous decisions. The focal point is Project Taurus, an IT venture commissioned by the London Stock Exchange and supported by numerous City institutions. Taurus was intended to transform London's antiquated manual share settlement procedures into a state-of-the-art electronic system that would be the envy of the world. The project collapsed after three years' intensive work and investments totalling almost GBP 500 million. This book is an in-depth study of escalation in decision making. The author has interviewed a number of people who played a key role and presents a most readable account of what actually happened. At the same time she sets the case in the broader literature of decision making.
This book focuses on the basic control and filtering synthesis problems for discrete-time switched linear systems under time-dependent switching signals. Chapter 1, as an introduction to the book, gives the background and motivation for switched systems, the definitions of the typical time-dependent switching signals, the differences from and links to other types of systems with hybrid characteristics, and a literature review mainly on control and filtering for the underlying systems. Building on the multiple Lyapunov-like functions (MLFs) approach, in which different requirements are imposed on the comparison of Lyapunov function values at switching instants, a series of methodologies is developed in Chapters 2 and 3 for stability and stabilization, and for l2-gain performance or tube-based robustness to l disturbances, respectively. Chapters 4 and 5 are devoted to the control and filtering problems for time-dependent switched linear systems with either polytopic uncertainties or measurable time-varying parameters, under different senses of disturbance. The asynchronous switching problem, where there is a time lag between the switching of the currently activated system mode and that of the controller/filter to be designed, is investigated in Chapter 6. Systems with various time delays under typical time-dependent switching signals are addressed in Chapter 7.
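For readers unfamiliar with the MLF machinery, the following is a representative (standard, not book-specific) set of conditions for a discrete-time switched linear system x_{k+1} = A_{\sigma(k)} x_k: each mode i has its own Lyapunov-like function V_i that must decay while the mode is active and may grow by at most a bounded factor \mu at switching instants, which yields stability when switching is slow enough on average.

```latex
% A representative multiple-Lyapunov-function (MLF) condition set
% (standard average dwell-time result; not quoted from the book).
\begin{align*}
  V_{i}(x_{k+1}) &\le (1-\alpha)\,V_{i}(x_{k}),
      && 0<\alpha<1, \ \text{while mode } i \text{ is active},\\
  V_{i}(x_{k})   &\le \mu\,V_{j}(x_{k}),
      && \mu\ge 1, \ \text{at a switch from mode } j \text{ to mode } i,\\
  \tau_{a} &> \tau_{a}^{*} = -\frac{\ln\mu}{\ln(1-\alpha)}
      && \text{(average dwell-time bound for asymptotic stability).}
\end{align*}
```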
This book is open access under a CC BY-NC 2.5 license. This book presents the VISCERAL project benchmarks for analysis and retrieval of 3D medical images (CT and MRI) on a large scale, which used an innovative cloud-based evaluation approach where the image data were stored centrally on a cloud infrastructure and participants placed their programs in virtual machines on the cloud. The book presents the points of view of both the organizers of the VISCERAL benchmarks and the participants. The book is divided into five parts. Part I presents the cloud-based benchmarking and Evaluation-as-a-Service paradigm that the VISCERAL benchmarks used. Part II focuses on the datasets of medical images annotated with ground truth created in VISCERAL that continue to be available for research. It also covers the practical aspects of obtaining permission to use medical data and manually annotating 3D medical images efficiently and effectively. The VISCERAL benchmarks are described in Part III, including a presentation and analysis of metrics used in evaluation of medical image analysis and search. Lastly, Parts IV and V present reports by some of the participants in the VISCERAL benchmarks, with Part IV devoted to the anatomy benchmarks and Part V to the retrieval benchmark. This book has two main audiences: the datasets as well as the segmentation and retrieval results are of most interest to medical imaging researchers, while eScience and computational science experts benefit from the insights into using the Evaluation-as-a-Service paradigm for evaluation and benchmarking on huge amounts of data.
This book presents a comprehensive introduction to Internetware, covering aspects ranging from the fundamental principles and engineering methodologies to operational platforms, quality measurements and assurance and future directions. It also includes guidelines and numerous representative real-world case studies that serve as an invaluable reference resource for software engineers involved in the development of Internetware applications. Providing a detailed analysis of current trends in modern software engineering in the Internet era, it offers an essential blueprint and an important contribution to the research on software engineering and systems for future Internet computing.
For undergraduate systems analysis and design courses. A practical and modern approach to systems analysis and design Kendall and Kendall's Systems Analysis and Design, Global Edition, 10th Edition concisely presents the latest systems development methods, tools, and techniques to students in an engaging and easy-to-understand manner. The 10th Edition reflects the rapidly changing face of the IS field, with new and advanced features integrated throughout - including additional coverage of security and privacy issues, and innovative materials on new developments such as designing virtual reality and intelligent personal assistants.
In 1998-99, at the dawn of the SoC Revolution, we wrote Surviving the SOC Revolution: A Guide to Platform Based Design. In that book, we focused on presenting guidelines and best practices to aid engineers beginning to design complex System-on-Chip devices (SoCs). Now, in 2003, facing the mid-point of that revolution, we believe that it is time to focus on winning. In this book, Winning the SoC Revolution: Experiences in Real Design, we gather the best practical experiences in how to design SoCs from the most advanced design groups, while setting the issues and techniques in the context of SoC design methodologies. As an edited volume, this book has contributions from the leading design houses who are winning in SoCs - Altera, ARM, IBM, Philips, TI, UC Berkeley, and Xilinx. These chapters present the many facets of SoC design - the platform-based approach, how best to utilize IP, verification, FPGA fabrics as an alternative to ASICs, and next-generation process technology issues. We also include observations from Ron Wilson of CMP Media on best practices for SoC design team collaboration. We hope that by utilizing this book, you too, will win the SoC Revolution.
"Satellite Network Robust QoS-aware Routing" presents a novel routing strategy for satellite networks. This strategy is useful for the design of multi-layered satellite networks as it can greatly reduce the number of time slots in one system cycle. The traffic prediction and engineering approaches make the system robust so that the traffic spikes can be handled effectively. The multi-QoS optimization routing algorithm can satisfy various potential user requirements. Clear and sufficient illustrations are also presented in the book. As the chapters cover the above topics independently, readers from different research backgrounds in constellation design, multi-QoS routing, and traffic engineering can benefit from the book. Fei Long is a senior engineer at Beijing R&D Center of 54th Research Institute of China Electronics Technology Group Corporation.
This book offers the foundations of system analysis as an applied scientific methodology intended for the investigation of complex and highly interdisciplinary problems. It presents the basic definitions and the methodological and theoretical basis of formalization and solution processes in various subject domains. It describes in detail the methods of formalizing system tasks and reducing them to a solvable form under real-world conditions.
Adding internet access to embedded systems opens up a whole new world of capabilities. For example, a remote data logging system could automatically send data via the internet and be reconfigured - such as to log new types of data or to measure at different intervals - by commands sent over the internet from any computer or device with internet access. Embedded internet and internet appliances are the focus of great attention in the computing industry, as they are seen as the future of computing, but the design of such devices presents many technical challenges. This book describes how to design, build and program embedded systems with internet access, giving special attention to sensors and actuators which gather data for transmission over the internet or execute commands sent via the internet. It shows how to build sensors and control devices that connect to the "tiny internet interface" (TINI) and explains how to write programs that control them in Java. Several design case histories are given, including weather monitoring stations, communications centres, automation systems, and data acquisition systems. The authors discuss how these technologies work and where to get detailed specifications, and they provide ideas for the reader to pursue beyond the book. The accompanying CD-ROM includes Java source code for all the applications described in the book, and an electronic version of the text.
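As a rough sketch of the data-logging pattern described above (generic java.net code, not the TINI-specific API; the host name, port and readSensor() stub are made up for illustration), a device can periodically push readings to a collection server over a plain TCP socket:

```java
import java.io.PrintWriter;
import java.net.Socket;

// Illustrative sketch only: a logger that pushes a sensor reading to a
// collection server over TCP. The host, port and readSensor() stub are
// hypothetical; a real TINI application would use the platform's I/O
// libraries to read actual hardware.
public class SensorUploader {

    // Placeholder for a real sensor driver.
    private static double readSensor() {
        return 21.5; // e.g. temperature in degrees Celsius
    }

    public static void main(String[] args) throws Exception {
        String host = "logger.example.com"; // hypothetical collection server
        int port = 5000;                    // hypothetical port

        while (true) {
            double value = readSensor();
            // open a short-lived connection, send one reading, close
            try (Socket socket = new Socket(host, port);
                 PrintWriter out = new PrintWriter(socket.getOutputStream(), true)) {
                out.println("temperature=" + value);
            }
            Thread.sleep(60_000); // log once per minute
        }
    }
}
```

On an actual TINI board the sensor reading would come from the platform's hardware I/O libraries, and the reconfiguration commands mentioned above would arrive over a similar socket in the opposite direction.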
SystemC Kernel Extensions for Heterogeneous System Modeling is the result of an almost two-year endeavour on our part to understand how SystemC can be made useful for system-level modeling at higher levels of abstraction. Making it a truly heterogeneous modeling language and platform for hardware/software co-design, as well as for complex embedded hardware designs, has been our focus in the work reported in this book.
This volume provides an introduction to and overview of the emerging field of interconnected networks, which include multilayer or multiplex networks as well as networks of networks. Such networks present structural and dynamical features quite different from those observed in isolated networks. The presence of links between different networks or layers of a network typically alters the way such interconnected networks behave - understanding the role of interconnecting links is therefore a crucial step towards a more accurate description of real-world systems. While examples of such dissimilar properties are becoming more abundant - for example regarding diffusion, robustness and competition - the root of such differences remains to be elucidated. Each chapter in this topical collection is self-contained and can be read on its own, thus making it also suitable as a reference for experienced researchers wishing to focus on a particular topic.
Embedded Processor-Based Self-Test is a guide to self-testing strategies for embedded processors. Embedded processors are regularly used today in most Systems-on-Chip (SoCs). Testing of microprocessors and embedded processors has always been a challenge because most traditional testing techniques fail when applied to them. This is due to the complex sequential structure of processor architectures, which consist of high-performance datapath units and sophisticated control logic for performance optimization. Structured Design-for-Testability (DfT) and hardware-based self-testing techniques, which usually have a non-trivial impact on a circuit's performance, size and power, cannot be applied without serious consideration and careful incorporation into the processor design. Embedded Processor-Based Self-Test shows how the powerful embedded functionality that processors offer can be utilized as a self-testing resource. Through a discussion of different strategies, the book emphasizes the emerging area of Software-Based Self-Testing (SBST). SBST is based on the idea of executing embedded software programs to perform self-testing of the processor itself and its surrounding blocks in the SoC. SBST is a low-cost strategy in terms of overhead (area, speed, power), development effort and test application cost, as it is applied using low-cost, low-speed test equipment. Embedded Processor-Based Self-Test can be used by designers, DfT engineers, test practitioners, researchers and students working on digital testing, and in particular on processor and SoC test. This book sets the framework for comparisons among different SBST methodologies by discussing key requirements. It presents successful applications of SBST to a number of embedded processors of different complexities and instruction set architectures.
To optimally design and manage a directory service, IS architects
and managers must understand current state-of-the-art products.
Directory Services covers Novell's NDS eDirectory, Microsoft's
Active Directory, UNIX directories and products by NEXOR, MaxWare,
Siemens, Critical Path and others. Directory design fundamentals
and products are woven into case studies of large enterprise
deployments. Cox thoroughly explores replication, security,
migration and legacy system integration and interoperability.
Business issues such as how to cost justify, plan, budget and
manage a directory project are also included. The book culminates
in a visionary discussion of future trends and emerging directory
technologies including the strategic direction of the top directory
products, the impact of wireless technology on directory-enabled
applications and using directories to customize content delivery
from the Enterprise Portal.
The primary objective of this book is to teach the architectures, design principles, and troubleshooting techniques of a LAN. This will be imparted through the presentation of a broad scope of data and computer communication standards, real-world inter-networking techniques, architectures, hardware, software, protocols, technologies and services as they relate to the design, implementation and troubleshooting of a LAN. The logical and physical design of hardware and software is not the only process involved in the design and implementation of a LAN. The latter also encompasses many other aspects including making the business case, compiling the requirements, choosing the technology, planning for capacity, selecting the vendor, and weighing all the issues before the actual design begins.
ESL or "Electronic System Level" is a buzz word these days, in the electronic design automation (EDA) industry, in design houses, and in the academia. Even though numerous trade magazine articles have been written, quite a few books have been published that have attempted to de?ne ESL, it is still not clear what exactly it entails. However, what seems clear to every one is that the "Register Transfer Level" (RTL) languages are not adequate any more to be the design entry point for today's and tomorrow's complex electronic system design. There are multiple reasons for such thoughts. First, the c- tinued progression of the miniaturization of the silicon technology has led to the ability of putting almost a billion transistors on a single chip. Second, applications are becoming more and more complex, and integrated with c- munication, control, ubiquitous and pervasive computing, and hence the need for ever faster, ever more reliable, and more robust electronic systems is pu- ing designers towards a productivity demand that is not sustainable without a fundamental change in the design methodologies. Also, the hardware and software functionalities are getting interchangeable and ability to model and design both in the same manner is gaining importance. Given this context, we assume that any methodology that allows us to model an entire electronic system from a system perspective, rather than just hardware with discrete-event or cycle based semantics is an ESL method- ogy of some kind.
System-on-Chip Methodologies & Design Languages brings together a selection of the best papers from three international electronic design language conferences in 2000. The conferences are the Hardware Description Language Conference and Exhibition (HDLCon), held in the Silicon Valley area of USA; the Forum on Design Languages (FDL), held in Europe; and the Asia Pacific Chip Design Language (APChDL) Conference. The papers cover a range of topics, including design methods, specification and modeling languages, tool issues, formal verification, simulation and synthesis. The results presented in these papers will help researchers and practicing engineers keep abreast of developments in this rapidly evolving field.
Contents (excerpt): From the Old to the New xvii; Acknowledgments xxi; 1 Verilog - A Tutorial Introduction 1; Getting Started 2; A Structural Description 2; Simulating the binaryToESeg Driver 4; Creating Ports For the Module 7; Creating a Testbench For a Module 8; Behavioral Modeling of Combinational Circuits 11; Procedural Models 12; Rules for Synthesizing Combinational Circuits 13; Procedural Modeling of Clocked Sequential Circuits 14; Modeling Finite State Machines 15; Rules for Synthesizing Sequential Systems 18; Non-Blocking Assignment ("
Memory Design Techniques for Low Energy Embedded Systems centers on one of the most outstanding problems in chip design for embedded applications. It guides the reader through different memory organizations and technologies and reviews the most successful strategies for optimizing them in the power and performance plane.
Data Access and Storage Management for Embedded Programmable
Processors gives an overview of the state-of-the-art in
system-level data access and storage management for embedded
programmable processors. The targeted application domain covers
complex embedded real-time multi-media and communication
applications. Many of these applications are data-dominated in the
sense that their cost-related aspects, namely power consumption and
footprint, are heavily influenced (if not dominated) by the data
access and storage aspects. The material is mainly based on
research at IMEC in this area in the period 1996-2001. In order to
deal with the stringent timing requirements and the data dominated
characteristics of this domain, we have adopted a target
architecture style that is compatible with modern embedded
processors, and we have developed a systematic step-wise
methodology to make the exploration and optimization of such
applications feasible in a source-to-source precompilation
approach.
This text helps the reader generate clear, effective documentation that is tailored to the information requirements of the end-user. Written for technical writers and their managers, quality assurance experts, and software engineers, the book describes a user-centered information design method (UCID) that should help ensure documentation conveys significant information for the user. The UCID shows how to: integrate the four major information components of a software system - user interface labels, messages, online and printed documentation; make sure these elements work together to improve usability; deploy iterative design and prototyping procedures that minimize flaws and save time and money; and guide technical writers effectively.
This book brings together research on numerical methods adapted for Graphics Processing Units (GPUs). It explains recent efforts to adapt classic numerical methods, including the solution of linear equations and the FFT, for massively parallel GPU architectures. This volume consolidates recent research and adaptations, covering widely used methods that are at the core of many scientific and engineering computations. Each chapter is written by authors working on a specific group of methods; these leading experts provide mathematical background, parallel algorithms and implementation details leading to reusable, adaptable and scalable code fragments. The book also serves as a GPU implementation manual for many numerical algorithms, sharing tips on GPUs that can increase application efficiency. The valuable insights into parallelization strategies for GPUs are supplemented by ready-to-use code fragments. Numerical Computations with GPUs targets professionals and researchers working in high performance computing and GPU programming. Advanced-level students focused on computer science and mathematics will also find this book useful as a secondary textbook or reference.
Holographic Data Storage is an outstanding reference book on an exciting topic reaching out to the 21st century's key technologies. The editors, Hans J. Coufal (IBM), Demetri Psaltis (CalTech), and Glenn Sincerbox (University of Arizona), together with leading experts in this area of research from both academic research and industry, bring together the latest knowledge on this technique. The book starts with an introduction on the history and fundamentals, multiplexing methods, and noise sources. The following chapters describe in detail recording media, components, channels, platforms for demonstration, and competing technologies such as classical hard disks or optical disks. More than 700 references make this book the ultimate source of information for the years to come. The book is intended for physicists, optical engineers, and executives alike.