This book brings together research on numerical methods adapted for Graphics Processing Units (GPUs). It explains recent efforts to adapt classic numerical methods, including the solution of linear equations and the FFT, for massively parallel GPU architectures. This volume consolidates recent research and adaptations, covering widely used methods that are at the core of many scientific and engineering computations. Each chapter is written by authors working on a specific group of methods; these leading experts provide mathematical background, parallel algorithms and implementation details leading to reusable, adaptable and scalable code fragments. The book also serves as a GPU implementation manual for many numerical algorithms, sharing tips that can increase application efficiency on GPUs. The valuable insights into parallelization strategies are supplemented by ready-to-use code fragments. Numerical Computations with GPUs targets professionals and researchers working in high performance computing and GPU programming. Advanced-level students focused on computer science and mathematics will also find this book useful as a secondary textbook or reference.
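As a flavor of the GPU adaptations the book consolidates, the sketch below offloads an FFT to the GPU. This is an illustration only, not code from the book: it assumes a CUDA-capable device and the CuPy library, whose cupy.fft module mirrors NumPy's interface while executing on the GPU via cuFFT.

```python
# Minimal sketch (not from the book): running an FFT on the GPU with CuPy.
# Assumes a CUDA-capable device and the cupy package.
import cupy as cp

# Build a noisy 50-cycle sine wave directly in GPU memory.
n = 1 << 20
t = cp.linspace(0.0, 1.0, n)
signal = cp.sin(2 * cp.pi * 50 * t) + 0.1 * cp.random.standard_normal(n)

# cupy.fft mirrors numpy.fft but executes on the GPU (cuFFT underneath).
spectrum = cp.fft.rfft(signal)
dominant_bin = int(cp.argmax(cp.abs(spectrum[1:]))) + 1
print("dominant frequency bin:", dominant_bin)   # expected: 50
```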
[Figure 1-1 in the original shows a nominal, multi-stage development process: Concept, Specification, Design Stages and Implementation, each with supporting tools.] From that beginning, we have progressed to the point where the EDA community at large, including both users and developers of the tools, is interested in more unified environments. Here, the notion is that the tools used at the various stages in the development process need to be able to complement each other, and to communicate with one another efficiently using effective file exchange capabilities. Furthermore, the idea of capturing all the tool support needed for an EDA development into a unified support environment is now becoming a reality. This reality is evidenced by some of the EDA suites we now see emerging, wherein several tool functions are integrated under a common graphical user interface (GUI), with supporting file exchange and libraries to enable all tool functions to operate effectively and synergistically. This concept, which we illustrate in Figure 1-2, is the true future of EDA.
Rapidly growing demand for telecommunication services and information interchange has made communication one of the most dynamic branches of modern society's infrastructure. The book introduces the foundations of classical Markov Decision Process (MDP) theory; the problem of finding optimal call admission control (CAC) policies is investigated, and various problems of improving the characteristics of traditional and multimedia wireless communication networks are considered, together with both classical and new MDP methods that allow optimal access strategies in teletraffic systems to be defined. The book will be useful to specialists in the field of telecommunication systems, and also to undergraduate and postgraduate students in related specialties.
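The MDP machinery behind such CAC models can be illustrated with a standard value-iteration loop. The admission-control chain below is a hypothetical toy, not one of the book's models; it simply shows how an optimal admit/reject policy falls out of the Bellman recursion.

```python
# Generic value iteration (illustration only; all rates below are hypothetical).
import numpy as np

def value_iteration(P, R, gamma=0.95, tol=1e-9):
    # P[a] is the |S| x |S| transition matrix under action a,
    # R[a] the expected one-step reward vector under action a.
    V = np.zeros(P[0].shape[0])
    while True:
        Q = np.array([R[a] + gamma * P[a] @ V for a in range(len(P))])
        V_new = Q.max(axis=0)
        if np.abs(V_new - V).max() < tol:
            return V_new, Q.argmax(axis=0)
        V = V_new

# Toy CAC chain: states = calls in progress on a 2-channel link, with
# uniformized arrival/departure dynamics; reward is unit revenue per
# admitted call minus a per-call holding cost.
lam, mu, N = 0.6, 0.5, 2
Lam = lam + N * mu                       # uniformization constant
P, R = [], []
for admit in (0, 1):                     # 0 = reject arrivals, 1 = admit
    T, r = np.zeros((N + 1, N + 1)), np.zeros(N + 1)
    for s in range(N + 1):
        up = lam / Lam if (admit and s < N) else 0.0
        down = s * mu / Lam
        T[s, min(s + 1, N)] += up
        T[s, max(s - 1, 0)] += down
        T[s, s] += 1.0 - up - down
        r[s] = up * 1.0 - 0.2 * s
    P.append(T); R.append(r)

V, policy = value_iteration(P, R)
print("optimal action per state (0=reject, 1=admit):", policy)
```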
This timely text presents a comprehensive overview of fault tolerance techniques for high-performance computing (HPC). The text opens with a detailed introduction to the concepts of checkpoint protocols and scheduling algorithms, prediction, replication, silent error detection and correction, together with some application-specific techniques such as ABFT. Emphasis is placed on analytical performance models. This is then followed by a review of general-purpose techniques, including several checkpoint and rollback recovery protocols. Relevant execution scenarios are also evaluated and compared through quantitative models. Features: provides a survey of resilience methods and performance models; examines the various sources for errors and faults in large-scale systems; reviews the spectrum of techniques that can be applied to design a fault-tolerant MPI; investigates different approaches to replication; discusses the challenge of energy consumption of fault-tolerance methods in extreme-scale systems.
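As a taste of the analytical performance models the text emphasizes, the snippet below evaluates Young's classic first-order approximation of the optimal checkpoint interval, T_opt ≈ sqrt(2CM) for checkpoint cost C and platform MTBF M; the figures plugged in are hypothetical.

```python
import math

def young_interval(checkpoint_cost_s: float, mtbf_s: float) -> float:
    """Young's first-order optimal checkpoint interval, sqrt(2*C*M)."""
    return math.sqrt(2.0 * checkpoint_cost_s * mtbf_s)

C = 600.0            # 10-minute checkpoint cost (hypothetical)
M = 24 * 3600.0      # one-day platform MTBF (hypothetical)
print(f"checkpoint every {young_interval(C, M) / 3600:.2f} hours")  # ~2.83
```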
As multimedia data advances in technology and becomes more complex, the hybridization of soft computing tools allows for more robust and safe solutions in data processing and analysis. Quantum-Inspired Intelligent Systems for Multimedia Data Analysis provides emerging research on techniques used in multimedia information processing using intelligent paradigms including swarm intelligence, neural networks, and deep learning. While highlighting topics such as clustering techniques, neural network architecture, and text data processing, this publication explores the methods and applications of computational intelligent tools. This book is an important resource for academics, computer engineers, IT professionals, students, and researchers seeking current research in the field of multimedia data processing and quantum intelligent systems.
Function Architecture Co-Design is a new paradigm for the design and implementation of embedded systems. Function/Architecture Optimization and Co-Design of Embedded Systems presents the authors' work in developing a function/architecture optimization and co-design formal methodology and framework for control-dominated embedded systems. The approach incorporates both data flow and control optimizations performed on a suitable novel intermediate design task representation. The aim is not only to enhance productivity of the designer and system developer, but also to improve quality of the final synthesis outcome. Function/Architecture Optimization and Co-Design of Embedded Systems discusses the proposed function/architecture co-design methodology, focusing on design representation, optimization, validation, and synthesis. Throughout the text, the difference between behavior specification and implementation is emphasized. The current need in co-design to move from synthesis-based technology to compiler-based technology is pointed out. The authors describe and show how performing data flow and control optimizations at the high abstraction level can lead to significant size and performance improvements in both the synthesized hardware and software. The work builds on bodies of research in the silicon and software compilation domains. The aforementioned techniques are specialized to the embedded systems domain. It is recognized that guided optimization can be applied on the internal design representation, no matter what the abstraction level, and need not be restricted to the final stages of software assembly code generation, or hardware synthesis. Function/Architecture Optimization and Co-Design of Embedded Systems will be of primary interest to researchers, developers, and professionals in the field of embedded systems design.
This guide provides a comprehensive overview of High Performance Computing (HPC) to equip students with a full skill set, including cluster setup, network selection, and background on supercomputing competitions. It covers systems, architecture, evaluation approaches, and other practical supercomputing techniques. As the world's largest supercomputing hackathon, the ASC Student Supercomputer Challenge has attracted a growing number of new talents to supercomputing and has greatly promoted communication in the global HPC community. In this book, readers will also find how to analyze and optimize supercomputing systems and applications in real science and engineering cases.
The purpose of this book is to introduce VHSIC Hardware Description Language (VHDL) and its use for synthesis. VHDL is a hardware description language which provides a means of specifying a digital system over different levels of abstraction. It supports behavior specification during the early stages of a design process and structural specification during the later implementation stages. VHDL was originally introduced as a hardware description language that permitted the simulation of digital designs. It is now increasingly used for design specifications that are given as the input to synthesis tools which translate the specifications into netlists from which the physical systems can be built. One problem with this use of VHDL is that not all of its constructs are useful in synthesis. The specification of delay in signal assignments does not have a clear meaning in synthesis, where delays have already been determined by the implementation technology. VHDL has data structures such as files and pointers, useful for simulation purposes but not for actual synthesis. As a result, synthesis tools accept only subsets of VHDL. This book tries to cover the synthesis aspect of VHDL, while keeping the simulation-specifics to a minimum. This book is suitable for working professionals as well as for graduate or undergraduate study. Readers can view this book as a way to get acquainted with VHDL and how it can be used in modeling of digital designs.
Evolutionary Algorithms for Embedded System Design describes how Evolutionary Algorithm (EA) concepts can be applied to circuit and system design - an area where time-to-market demands are critical. EAs create an interesting alternative to other approaches since they can be scaled with the problem size and can be easily run on parallel computer systems. This book presents several successful EA techniques and shows how they can be applied at different levels of the design process. Starting on a high-level abstraction, where software components are dominant, several optimization steps are demonstrated, including DSP code optimization and test generation. Throughout the book, EAs are tested on real-world applications and on large problem instances. For each application the main criteria for the successful application in the corresponding domain are discussed. In addition, contributions from leading international researchers provide the reader with a variety of perspectives, including a special focus on the combination of EAs with problem specific heuristics. Evolutionary Algorithms for Embedded System Design is an excellent reference for both practitioners working in the area of circuit and system design and for researchers in the field of evolutionary concepts.
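The select/mutate/evaluate cycle that these chapters specialize to design tasks can be shown in a few lines. The sketch below is a generic illustration on a toy bitstring problem (OneMax), not a technique taken from the book.

```python
# Minimal evolutionary loop on OneMax (maximize the number of 1-bits).
import random

def evolve(n_bits=32, pop_size=20, generations=100, p_mut=0.05, seed=1):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    fitness = sum                        # OneMax fitness: count of 1-bits
    for _ in range(generations):
        # Tournament selection, then independent bit-flip mutation.
        parents = [max(rng.sample(pop, 3), key=fitness) for _ in range(pop_size)]
        pop = [[b ^ (rng.random() < p_mut) for b in p] for p in parents]
    return max(pop, key=fitness)

print("best fitness:", sum(evolve()))
```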
The book presents the state of the art in high-performance computing and simulation on modern supercomputer architectures. It explores general trends in hardware and software development, and then focuses specifically on the future of high-performance systems and heterogeneous architectures. It also covers applications such as computational fluid dynamics, material science, medical applications and climate research and discusses innovative fields like coupled multi-physics or multi-scale simulations. The papers included were selected from the presentations given at the 20th Workshop on Sustained Simulation Performance at the HLRS, University of Stuttgart, Germany in December 2015, and the subsequent Workshop on Sustained Simulation Performance at Tohoku University in February 2016.
Contributions on UML address the application of UML in the specification of embedded HW/SW systems. C-Based System Design embraces the modeling of operating systems, modeling with different models of computation, generation of test patterns, and experiences from case studies with SystemC. Analog and Mixed-Signal Systems covers rules for solving general modeling problems in VHDL-AMS, modeling of multi-nature systems, synthesis, and modeling of mixed-signal systems with SystemC. Languages for formal methods are addressed by contributions on formal specification and refinement of hybrid, embedded and real-time systems.
The Verilog hardware description language provides the ability to describe digital and analog systems for design concepts and implementation. It was originally developed and implemented at Gateway Design Automation. Now it is an open standard of the IEEE and Open Verilog International and is supported by many tools and processes. The Complete Verilog Book introduces the language and describes it in a comprehensive manner. In The Complete Verilog Book, each feature of the language is described using a semantic introduction, syntax and examples. A chapter on semantics explains the basic concepts and algorithms that form the basis of every evaluation and every sequence of evaluations that ultimately provides the meaning, or full semantics, of the language. The Complete Verilog Book takes the approach that Verilog is not only a simulation language or a synthesis language or a formal method of describing design, but a totality of all of these, and covers many aspects not covered before which are essential parts of any design process using Verilog. The book starts with a tutorial introduction. It explains the data types in Verilog HDL since, as the object-oriented world knows, language constructs and data types are equally important parts of a language. It explains the three views (behavioral, RTL and structural) and then describes features in each of these views. The Complete Verilog Book keeps the reader abreast of current developments in the Verilog world, such as Verilog-A, cycle simulation, SD, and DCL, and uses IEEE 1364 syntax. It will be useful to all those who want to learn Verilog HDL and to explore its various facets.
It is well known that embedded systems have to be implemented efficiently. This requires that processors optimized for certain application domains are used in embedded systems. Such an optimization requires a careful exploration of the design space, including a detailed study of cost/performance tradeoffs. To avoid time-consuming assembly language programming during design space exploration, compilers are needed. To analyze the effect of various software or hardware configurations on performance, retargetable compilers are needed that can generate code for numerous different potential hardware configurations. This book provides a comprehensive and up-to-date overview of the fast-developing area of retargetable compilers for embedded systems. It describes a large set of important tools as well as applications of retargetable compilers at different levels in the design flow. Retargetable Compiler Technology for Embedded Systems is mostly self-contained and requires only fundamental knowledge in software and compiler design. It is intended to be a key reference for researchers and designers working on software, compilers, and processor optimization for embedded systems.
For more and more systems, software has moved from a peripheral to a central role, replacing mechanical parts and hardware and giving the product a competitive edge. Consequences of this trend are an increase in the size of software systems, in the variability of software artifacts, and in the importance of software in achieving system-level properties. Software architecture provides the necessary abstractions for managing the resulting complexity. Here we introduce the Third Working IEEE/IFIP Conference on Software Architecture, WICSA3. That it is already the third such conference is in itself a clear indication that software architecture continues to be an important topic in industrial software development and in software engineering research. However, becoming an established field does not mean that software architecture provides less opportunity for innovation and new directions. On the contrary, one can identify a number of interesting trends within software architecture research. The first trend is that the role of the software architecture in all phases of software development is more explicitly recognized. Whereas initially software architecture was primarily associated with the architecture design phase, we now see that the software architecture is treated explicitly during development, product derivation in software product lines, at run-time, and during system evolution. Software architecture as an artifact has been decoupled from a particular lifecycle phase.
Zuse's textbook on software measurement provides basic principles as well as theoretical and practical guidelines for the use of numerous kinds of software measures. It is written to enable scientists, teachers, practitioners, and students to define the basic terminology of software measurement and to contribute to theory building. The textbook considers, among other things, the qualitative and numerical models behind software measures. It explains step by step the importance of qualitative properties, the meaning of scale types, the foundations of the validation of measures and of prediction models, the models behind the Function-Point method and the COCOMO model, and the qualitative assumptions of object-oriented measures. For applications of software measures in practice, more than two hundred software measures of the software life-cycle are described in detail (object-oriented measures included). The enclosed CD contains a selection of more than 1,600 literature references and a small demo version of ZD-MIS (Zuse/Drabe Measurement Information System).
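Since the blurb singles out the COCOMO model among the prediction models explained, here is a minimal sketch of Basic COCOMO, which estimates effort as E = a * KLOC^b person-months; the coefficients are Boehm's published values for "organic" projects, and the project size in the example is hypothetical.

```python
def cocomo_basic(kloc: float, a: float = 2.4, b: float = 1.05) -> float:
    """Basic COCOMO effort in person-months ("organic" coefficients)."""
    return a * kloc ** b

print(f"{cocomo_basic(32):.1f} person-months for a 32 KLOC organic project")
```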
With the fast development of networking and software technologies, information processing infrastructure and applications have been growing at an impressive rate in both size and complexity, to such a degree that the design and development of high-performance and scalable data processing systems and networks have become an ever-challenging issue. As a result, the use of performance modeling and measurement techniques as a critical step in design and development has become a common practice. Research and development on methodology and tools of performance modeling and performance engineering have gained further importance in order to improve the performance and scalability of these systems. Since the seminal work of A. K. Erlang almost a century ago on the modeling of telephone traffic, performance modeling and measurement have grown into a discipline and have been evolving both in their methodologies and in the areas in which they are applied. It is noteworthy that various mathematical techniques were brought into this field, including in particular probability theory, stochastic processes, statistics, complex analysis, stochastic calculus, stochastic comparison, optimization, control theory, machine learning and information theory. The application areas extended from telephone networks to Internet and Web applications, from computer systems to computer software, from manufacturing systems to supply chain, from call centers to workforce management.
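Erlang's telephone-traffic work mentioned above still yields the discipline's textbook example: the Erlang B blocking probability, computed below with its standard numerically stable recurrence (an illustration, not code from the book).

```python
def erlang_b(erlangs: float, servers: int) -> float:
    """Blocking probability for offered load `erlangs` on `servers` lines."""
    b = 1.0                              # B(E, 0) = 1
    for k in range(1, servers + 1):
        b = erlangs * b / (k + erlangs * b)
    return b

# E.g., the chance a call finds all 10 lines busy at 8 erlangs of load.
print(f"blocking probability: {erlang_b(8.0, 10):.4f}")
```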
Hardware correctness is becoming ever more important in the design of computer systems. The authors introduce a powerful new approach to the design and analysis of modern computer architectures, based on mathematically well-founded formal methods which allow for rigorous correctness proofs, accurate determination of hardware costs, and performance evaluation. This book develops, at the gate level, the complete design of a pipelined RISC processor with a fully IEEE-compliant floating-point unit. In contrast to other design approaches, the design presented here is modular, clean and complete.
Co-Design is the set of emerging techniques which allows for the simultaneous design of Hardware and Software. In many cases where the application is very demanding in terms of various performances (time, surface, power consumption), trade-offs between dedicated hardware and dedicated software are becoming increasingly difficult to decide upon in the early stages of a design. Verification techniques - such as simulation or proof techniques - that have proven necessary in hardware design must be dramatically adapted to the simultaneous verification of Software and Hardware. Describing the latest tools available for both Co-Design and Co-Verification of systems, Hardware/Software Co-Design and Co-Verification offers a complete look at this evolving set of procedures for CAD environments. The book considers all trade-offs that have to be made when co-designing a system. Several models are presented for determining the optimum solution to any co-design problem, including partitioning, architecture synthesis and code generation. When deciding on trade-offs, one of the main factors to be considered is the flow of communication, especially to and from the outside world. This involves the modeling of communication protocols. An approach to the synthesis of interface circuits in the context of co-design is presented. Other chapters present a co-design oriented flexible component database and retrieval methods; a case study of an Ethernet bridge, designed using LOTOS and co-design methodologies; and finally a programmable user interface based on monitors. Hardware/Software Co-Design and Co-Verification will help designers and researchers to understand these latest techniques in system design and as such will be of interest to all involved in embedded system design.
Aimed at improving a programmer's ability to alter code to fit changing requirements and to detect and correct errors, this book argues for a new way of thinking about maintaining software. It proposes the use of a set of human factors principles that govern the programmer-software-event world interactions and form the core of the maintenance process. The book is thus highly valuable for systems analysts and programmers, managers seeking to reduce costs, researchers looking at solutions to the maintenance problem, and students learning to write clear, unambiguous programs.
This monograph details the proceedings of the 15th International Conference on Information Systems Development. ISD is progressing rapidly, continually creating new challenges for the professionals involved. New concepts, approaches and techniques of systems development emerge constantly in this field. Progress in ISD comes from research as well as from practice. The aim of the Conference was to provide an international forum for the exchange of ideas and experiences between academia and industry, and to stimulate the exploration of new solutions.
As integrated circuit (IC) feature sizes scaled below a quarter of a micron, thereby defining the deep submicron (DSM) era, there began a gradual shift in the impact on performance due to the metal interconnections among the active circuit components. Once viewed as mere parasitics in terms of their relevance to the overall circuit behavior, the interconnect can now have a dominant impact on IC area and performance. Beginning in the late 1980s, there was significant research toward better modeling and characterization of the resistance, capacitance and, ultimately, the inductance of on-chip interconnect. IC Interconnect Analysis covers the state-of-the-art methods for modeling and analyzing IC interconnect based on the past fifteen years of research. This is done at a level suitable for most practitioners who work in the semiconductor and electronic design automation fields, but also includes significant depth for the research professionals who will ultimately extend this work into other areas and applications. IC Interconnect Analysis begins with an in-depth coverage of delay metrics, including the ubiquitous Elmore delay and its many variations. This is followed by an outline of moment matching methods, calculating moments efficiently, and Krylov subspace methods for model order reduction. The final two chapters describe how to interface these reduced-order models to circuit simulators and gate-level timing analyzers respectively. IC Interconnect Analysis is written for CAD tool developers, IC designers and graduate students.
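A minimal sketch of the Elmore delay the opening chapters cover: for an RC tree, the delay to node i is sum_k C_k * R_ik, where R_ik is the resistance shared by the root-to-i and root-to-k paths. The topology and element values below are hypothetical.

```python
def elmore_delay(parent, R, C, node):
    """Elmore delay to `node` in an RC tree; edges are named by child node."""
    def path(n):                         # set of edges from root down to n
        edges = set()
        while parent[n] is not None:
            edges.add(n)
            n = parent[n]
        return edges
    target = path(node)
    # Sum over all capacitors of (shared upstream resistance) * C_k.
    return sum(C[k] * sum(R[e] for e in target & path(k)) for k in C)

# Driver + 2-segment line: root -> n1 -> n2 -> n3 (hypothetical values).
parent = {"root": None, "n1": "root", "n2": "n1", "n3": "n2"}
R = {"n1": 50.0, "n2": 100.0, "n3": 100.0}        # ohms on edge into node
C = {"n1": 1e-13, "n2": 1e-13, "n3": 1e-13}       # farads at each node
print(f"Elmore delay to n3: {elmore_delay(parent, R, C, 'n3') * 1e12:.1f} ps")
```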
Object-oriented techniques and languages have been proven to significantly increase engineering efficiency in software development. Many benefits are expected from their introduction into electronic modeling. Among them are better support for model reusability and flexibility, more efficient system modeling, and more possibilities in design space exploration and prototyping. Object-Oriented Modeling explores the latest techniques in object-oriented methods, formalisms and hardware description language extensions. The seven chapters comprising this book provide an overview of the latest object-oriented techniques for designing systems and hardware. Many examples are given in C++, VHDL and real-time programming languages. The book further describes the use of object-oriented techniques in applications such as embedded systems, telecommunications and real-time systems. It is an essential guide for researchers, practitioners and students involved in software, hardware and system design.
In addition to capital infrastructure and consumers, digital information created by individual and corporate consumers of information technology is quickly being recognized as a key economic resource and an extremely valuable asset to a company. Organizational, Legal, and Technological Dimensions of Information System Administration recognizes the importance of information technology by addressing the most crucial issues, challenges, opportunities, and solutions related to the role and responsibility of an information system. Highlighting various aspects of the organizational and legal implications of system administration, this reference work will be useful to managers, IT professionals, and graduate students who seek to gain an understanding of this discipline.
Software development is a complex problem-solving activity with a high level of uncertainty. There are many technical challenges concerning scheduling, cost estimation, reliability, performance, etc., which are further aggravated by weaknesses such as changing requirements, team dynamics, and high staff turnover. Thus the management of knowledge and experience is a key means of systematic software development and process improvement. "Managing Software Engineering Knowledge" illustrates several theoretical examples of this vision and solutions applied to industrial practice. It is structured in four parts addressing the motives for knowledge management, the concepts and models used in knowledge management for software engineering, their application to software engineering, and practical guidelines for managing software engineering knowledge. This book provides a comprehensive overview of the state of the art and best practice in knowledge management applied to software engineering. While researchers and graduate students will benefit from the interdisciplinary approach leading to basic frameworks and methodologies, professional software developers and project managers will also profit from industrial experience reports and practical guidelines.
This book brings together papers presented at the 2016 International Conference on Communications, Signal Processing, and Systems, which provides a venue to disseminate the latest developments and to discuss the interactions and links between these multidisciplinary fields. Spanning topics ranging from communications to signal processing and systems, this book is aimed at undergraduate and graduate students in electrical engineering, computer science and mathematics, researchers and engineers from academia and industry, as well as government employees (for example at NSF, DOD and DOE).
You may like...
- Advances in Non-volatile Memory and… by Yoshio Nishi, Blanka Magyari-Kope (Paperback, R4,593 / Discovery Miles 45 930)
- Research Anthology on Usage and… by Information R Management Association (Hardcover, R17,611 / Discovery Miles 176 110)
- Computer Systems and Software… by Information Reso Management Association (Hardcover, R8,929 / Discovery Miles 89 290)
- Cases on Lean Thinking Applications in… by Eduardo Guilherme Satolo, Robisom Damasceno Calado (Hardcover, R5,991 / Discovery Miles 59 910)