This book is a comprehensive introduction to Organic Computing (OC), presenting the current state of the art systematically. It starts with motivating examples of self-organising, self-adaptive and emergent systems, derives their common characteristics, and explains the fundamental ideas behind a formal characterisation of such systems. Special emphasis is given to a quantitative treatment of concepts like self-organisation, emergence, autonomy, robustness, and adaptivity. The book shows practical examples of architectures for OC systems and their applications in traffic control, grid computing, sensor networks, robotics, and smart camera systems. The extension of single OC systems into collective systems consisting of social agents, based on concepts like trust and reputation, is explained. OC makes heavy use of learning and optimisation technologies; a compact overview of these technologies and related approaches to self-organising systems is provided. So far, OC literature has been published with the researcher in mind. Although the existing books have tried to follow a didactical concept, they remain essentially collections of scientific papers. A comprehensive and systematic account of OC ideas, methods, and achievements in the form of a textbook suited to the newcomer has been missing until now. The targeted reader is the master's student in Computer Science, Computer Engineering or Electrical Engineering, or any other newcomer to the field of Organic Computing with some technical or Computer Science background. Readers can approach OC ideas from different perspectives: OC can be viewed (1) as a "philosophy" of adaptive and self-organising, life-like, technical systems, (2) as an approach to a more quantitative and formal understanding of such systems, and finally (3) as a construction method for the practitioner who wants to build such systems. In this book, we first try to convey to the reader a feeling for the special character of natural and technical self-organising and adaptive systems through a large number of illustrative examples. Then we discuss quantitative aspects of such forms of organisation, and finally we turn to methods for building such systems for practical applications.
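For readers wondering what a quantitative treatment of emergence can look like, one common formulation in the OC literature (offered here for illustration, not necessarily this book's exact definition) measures emergence as the reduction in Shannon entropy of an observed system attribute as order forms: $$ M = H_{\mathrm{start}} - H_{\mathrm{end}}, \qquad H = -\sum_{i=1}^{N} p_i \log_2 p_i, $$ where $p_i$ is the observed probability of the attribute taking its $i$-th value; a positive $M$ indicates that the system has become more ordered over the observation interval.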
This textbook serves as an introduction to embedded systems design, using microcontrollers as core components. It develops concepts from the ground up, covering the development of embedded systems technology, architectural and organizational aspects of controllers and systems, processor models, and peripheral devices. Since microprocessor-based embedded systems tightly blend hardware and software in a single application, the book also introduces data representation formats, data operations, and programming styles. The practical component is tailored around the architecture of a widely used Texas Instruments microcontroller, the MSP430; a companion website offers an experimenter's kit and lab manual for download, along with PowerPoint slides and solutions for instructors.
This book serves as a starting point for readers seeking a deeper, principled understanding of REST as an architectural style: its applications, its limitations, and current research in the area. The authors focus on applying REST beyond Web applications (i.e., in enterprise environments) and on reusing established and well-understood design patterns. The book examines how RESTful systems can be designed and deployed, and what the resulting benefits and challenges are. It is intended for information and service architects and designers who are interested in learning about REST, how it is applied, and how it is being advanced.
This thesis takes an empirical approach to understanding the behavior of, and interactions between, the two main components of reinforcement learning: the learning algorithm and the functional representation of learned knowledge. The author studies these entities using design-of-experiments techniques not commonly employed in machine learning research. The results outlined in this work provide insight into what enables and what affects successful reinforcement learning implementations, so that the method can be applied to more challenging problems.
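To make the two components concrete, here is a minimal tabular Q-learning sketch (an illustrative assumption, not code from the thesis; the toy corridor environment and all constants are invented for the example). The update rule plays the role of the learning algorithm, and the Q-table is the functional representation of learned knowledge.

```python
import random

# Tabular Q-learning on a toy 1-D corridor (illustrative, not from the thesis).
# The update rule below is the "learning algorithm"; the Q-table is the
# "functional representation of learned knowledge".

ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1   # learning rate, discount, exploration (assumed)
ACTIONS = (-1, +1)                      # step left or right
GOAL = 5                                # rightmost cell pays reward 1

q = {}                                  # representation: (state, action) -> value

def qval(s, a):
    return q.get((s, a), 0.0)

def policy(s):
    """Epsilon-greedy action selection with random tie-breaking."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    best = max(qval(s, a) for a in ACTIONS)
    return random.choice([a for a in ACTIONS if qval(s, a) == best])

for episode in range(200):
    state = 0
    for _ in range(100):                # cap episode length
        action = policy(state)
        nxt = max(0, min(GOAL, state + action))
        reward = 1.0 if nxt == GOAL else 0.0
        # Q-learning update: move the stored value toward the bootstrapped target
        target = reward + GAMMA * max(qval(nxt, a) for a in ACTIONS)
        q[(state, action)] = qval(state, action) + ALPHA * (target - qval(state, action))
        state = nxt
        if state == GOAL:
            break

print(round(qval(GOAL - 1, +1), 2))     # learned value of stepping into the goal
```

Swapping the table for a function approximator while keeping the same update rule is exactly the kind of algorithm/representation interaction the thesis studies empirically.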
This book constitutes the refereed proceedings of the 8th International Workshop on Performance Modeling, Benchmarking and Simulation of High Performance Computing Systems, PMBS 2017, held in Denver, Colorado, USA, in November 2017. The 10 full papers and 3 short papers included in this volume were carefully reviewed and selected from 36 submissions. They are organized in topical sections named: performance evaluation and analysis; performance modeling and simulation; and short papers.
With the development of very deep sub-micron technologies, process variability is becoming increasingly important and is a central issue in the design of complex circuits. Process variability is the statistical variation of process parameters: these parameters do not always have the same value, but become random variables with a given mean and standard deviation. This effect can lead to several issues in digital circuit design. The logical consequence is that circuit characteristics, such as delay and power, also become random variables. Because of delay variability, not all circuits will have the same performance: some will be faster and some slower. However, the slowest circuits may be so slow that they are not suitable for sale. On the other hand, the fastest circuits, which could be sold at a higher price, can be very leaky, and so also unsuitable for sale. A main consequence of power variability is that the power consumption of some circuits differs from what was expected, reducing the reliability, average life expectancy and warranty of products. Sometimes circuits will not work at all, for reasons associated with process variations. In the end, these effects result in lower yield and lower profitability. To understand them, it is necessary to study the consequences of variability in several aspects of circuit design, such as logic gates, storage elements, clock distribution, and anything else that can be affected by process variations. The main focus of this book is storage elements.
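The statistical picture described above lends itself to a simple Monte Carlo illustration (a minimal sketch under assumed Gaussian gate-delay variation; all constants are invented for the example, and this is not the book's methodology):

```python
import random
import statistics

# Monte Carlo sketch of delay variability (illustrative assumptions throughout,
# not the book's model): the critical path is N gates in series, each gate
# delay is a Gaussian random variable, and die slower than the timing target
# count as parametric yield loss.

N_GATES = 20          # gates on the critical path (assumed)
MEAN_PS = 25.0        # nominal gate delay in picoseconds (assumed)
SIGMA_PS = 2.5        # per-gate standard deviation from process variation (assumed)
TARGET_PS = 540.0     # timing target for the whole path (assumed)
N_DIE = 100_000       # number of simulated die

def path_delay():
    """Sample one die: sum of independent Gaussian gate delays."""
    return sum(random.gauss(MEAN_PS, SIGMA_PS) for _ in range(N_GATES))

delays = [path_delay() for _ in range(N_DIE)]
yield_frac = sum(d <= TARGET_PS for d in delays) / N_DIE

print(f"mean path delay : {statistics.mean(delays):7.1f} ps")
print(f"std deviation   : {statistics.stdev(delays):7.1f} ps")
print(f"parametric yield: {100 * yield_frac:6.1f} %")
```

Die whose sampled path delay exceeds the timing target correspond to the "too slow to sell" bin described above; an analogous experiment on leakage power would capture the fast-but-leaky bin.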
This book provides a unified treatment of Flip-Flop design and selection in nanometer CMOS VLSI systems. The design aspects related to the energy-delay tradeoff in Flip-Flops are discussed, including their energy-optimal selection according to the targeted application, and their detailed circuit design in nanometer CMOS VLSI systems. Design strategies are derived in a coherent framework that explicitly includes nanometer effects such as leakage, layout parasitics and process/voltage/temperature variations, as the main advances over the existing body of work in the field. The related design tradeoffs are explored across a wide range of applications and energy-performance targets, and a wide range of existing and recently proposed Flip-Flop topologies are discussed. Theoretical foundations set the stage for the derivation of design guidelines, with emphasis on the practical aspects and consequences of the presented results. Analytical models and derivations are introduced where needed to give insight into the interdependence of design parameters under practical constraints. The book serves as a valuable reference for practicing engineers working in VLSI design, and as a textbook for senior undergraduate, graduate and postgraduate students already familiar with digital circuits and timing.
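The energy-delay tradeoff mentioned here is commonly condensed into composite metrics (a standard formulation offered for illustration, not necessarily the notation used in this book): $$ \mathrm{EDP} = E \cdot D, \qquad \mathrm{ED}^{n}\mathrm{P} = E \cdot D^{n}, $$ where $E$ is the energy per clocked transition and $D$ the data-to-output delay of the Flip-Flop. Increasing $n$ weights speed more heavily, so a Flip-Flop that minimises $E \cdot D^{2}$ suits performance-critical pipelines, while minimising $E$ alone suits energy-constrained designs.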
This book presents a wide-band, technology-independent, SPICE-compatible RLC model for through-silicon vias (TSVs) in 3D integrated circuits. The model accounts for a variety of effects, including the skin effect, depletion capacitance and nearby-contact effects. Readers will benefit from in-depth coverage of concepts and technology such as 3D integration, macro modeling, dimensional analysis and compact modeling, as well as closed-form equations for the TSV parasitics. The concepts covered are demonstrated by using TSVs in applications such as a spiral inductor, an inductive-coupling-based communication system, and bandpass filtering.
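As a flavour of such closed-form expressions, the oxide capacitance of a cylindrical TSV is often approximated by the coaxial-capacitor formula (a textbook approximation offered for illustration, not necessarily the model derived in this book): $$ C_{\mathrm{ox}} \approx \frac{2\pi \varepsilon_{\mathrm{ox}}\, h}{\ln\left( r_{\mathrm{ox}} / r_{\mathrm{TSV}} \right)}, $$ where $h$ is the via height, $r_{\mathrm{TSV}}$ the radius of the conducting fill, $r_{\mathrm{ox}}$ the outer radius of the oxide liner, and $\varepsilon_{\mathrm{ox}}$ the permittivity of the liner.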
This book explores the design implications of emerging non-volatile memory (NVM) technologies for future computer memory hierarchies. Since NVM technologies combine the speed of SRAM, the density of DRAM, and the non-volatility of Flash memory, they are very attractive as the basis for future universal memories. The book provides a holistic perspective on the topic, covering modeling, design, architecture and applications. The practical information included here will enable designers to exploit emerging memory technologies to significantly improve the performance, power and reliability of future mainstream integrated circuits.
This book analyzes the challenges in verifying Dynamically Reconfigurable Systems (DRS) with respect to the user design and the physical implementation of such systems. The authors describe the use of a simulation-only layer to emulate the behavior of target FPGAs and accurately model the characteristic features of reconfiguration. This simulation-only layer lets readers maintain verification productivity by abstracting away the physical details of the FPGA fabric. Two implementations of the layer are included: an extended ReChannel, a SystemC library that can be used to check DRS designs at a high level, and ReSim, a library supporting RTL simulation of a DRS reconfiguring both its logic and state. Through a number of case studies, the authors demonstrate how their approach integrates seamlessly with existing, mainstream DRS design flows and with well-established verification methodologies such as top-down modeling and coverage-driven verification.
This new book on mathematical logic by Jeremy Avigad gives a thorough introduction to the fundamental results and methods of the subject from the syntactic point of view, emphasizing logic as the study of formal languages and systems and their proper use. Topics include proof theory, model theory, the theory of computability, and axiomatic foundations, with special emphasis given to aspects of mathematical logic that are fundamental to computer science, including deductive systems, constructive logic, the simply typed lambda calculus, and type-theoretic foundations. Clear and engaging, with plentiful examples and exercises, it is an excellent introduction to the subject for graduate students and advanced undergraduates who are interested in logic in mathematics, computer science, and philosophy, and an invaluable reference for any practicing logician's bookshelf.
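As a small taste of the type-theoretic material, the simply typed lambda calculus mentioned above is built around typing judgements such as the standard abstraction and application rules (the usual textbook presentation, not a quotation from this book): $$ \frac{\Gamma,\, x:A \vdash t : B}{\Gamma \vdash \lambda x.\, t : A \to B} \qquad \frac{\Gamma \vdash t : A \to B \quad \Gamma \vdash u : A}{\Gamma \vdash t\, u : B} $$ read as: if $t$ has type $B$ under the assumption $x:A$, then $\lambda x.\,t$ is a function of type $A \to B$; and applying such a function to an argument of type $A$ yields a $B$.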
This book constitutes the refereed proceedings of the 11th Annual Conference on Advanced Computer Architecture, ACA 2016, held in Weihai, China, in August 2016. The 17 revised full papers presented were carefully reviewed and selected from 89 submissions. The papers address issues such as processors and circuits; high performance computing; GPUs and accelerators; cloud and data centers; energy and reliability; intelligence computing and mobile computing.
This book constitutes the refereed proceedings of the 8th International Symposium on Parallel Architecture, Algorithm and Programming, PAAP 2017, held in Haikou, China, in June 2017. The 50 revised full papers and 7 revised short papers presented were carefully reviewed and selected from 192 submissions. The papers deal with research results and development activities in all aspects of parallel architectures, algorithms and programming techniques.
This book shows readers how to develop energy-efficient algorithms and hardware architectures that enable high-definition 3D video coding on resource-constrained embedded devices. Users of the Multiview Video Coding (MVC) standard face the challenge of exploiting its 3D-video-specific coding tools to increase compression efficiency, at the cost of increased computational complexity and, consequently, energy consumption. This book enables readers to reduce multiview video coding energy consumption by jointly considering the algorithmic and architectural levels. Coverage includes an introduction to 3D videos and an extensive discussion of the current state of the art in 3D video coding, as well as energy-efficient algorithms and energy-efficient hardware architectures for 3D video coding.
This book equips readers with tools for designing high-performance, low-power, and highly reliable memory hierarchies in computer systems based on emerging memory technologies such as STT-RAM, PCM and FBDRAM. The techniques described offer the advantages of high density, near-zero static power, and immunity to soft errors, and so have the potential to overcome the "memory wall." The authors discuss memory design from various perspectives: emerging memory technologies are employed in the memory hierarchy with novel architecture modifications; a hybrid memory structure is introduced to leverage the advantages of multiple memory technologies; an analytical model named "Moguls" is introduced to quantitatively explore the design optimization of a memory hierarchy; finally, the vulnerability of CMPs to radiation-induced soft errors is reduced by replacing different levels of on-chip memory with STT-RAMs.
This book provides comprehensive coverage of state-of-the-art validation solutions based on real-time signal tracing to guarantee the correctness of VLSI circuits. The authors discuss several key challenges in post-silicon validation and provide automated solutions that are systematic and cost-effective. A series of automatic tracing solutions and innovative design-for-debug (DfD) techniques are described, including trace signal selection for enhancing the visibility of functional errors, a multiplexed signal tracing strategy for improving functional error detection, a tracing solution for debugging electrical errors, an interconnection fabric for increasing data bandwidth and supporting multi-core debug, an interconnection fabric design and optimization technique to increase transfer flexibility, and a DfD design with an associated tracing solution for improving debug efficiency and expanding the tracing window. The solutions presented in this book improve the validation quality of VLSI circuits and ultimately enable the design and fabrication of reliable electronic devices.
This book covers key concepts in the design of 2D and 3D Network-on-Chip interconnect. It highlights design challenges and discusses fundamentals of NoC technology, including architectures, algorithms and tools. Coverage focuses on topology exploration for both 2D and 3D NoCs, routing algorithms, NoC router design, NoC-based system integration, verification and testing, and NoC reliability. Case studies are used to illuminate new design methodologies.
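As a concrete taste of the routing algorithms covered, the sketch below implements dimension-ordered (XY) routing, a classic deterministic baseline for 2D mesh NoCs (an illustrative example, not code from the book):

```python
# Dimension-ordered (XY) routing on a 2D mesh NoC: route along X first, then
# along Y. Deterministic, minimal, and deadlock-free on a mesh because packets
# never turn from the Y dimension back into X. Illustrative sketch only.

def xy_route(src, dst):
    """Return the list of (x, y) routers visited from src to dst."""
    x, y = src
    dx, dy = dst
    path = [(x, y)]
    while x != dx:                      # resolve the X offset first
        x += 1 if dx > x else -1
        path.append((x, y))
    while y != dy:                      # then resolve the Y offset
        y += 1 if dy > y else -1
        path.append((x, y))
    return path

print(xy_route((0, 0), (2, 3)))
# [(0, 0), (1, 0), (2, 0), (2, 1), (2, 2), (2, 3)]
```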
This book describes the most recent techniques for turbo decoder implementation, especially for 4G and beyond-4G applications. The authors present techniques for the design of high-throughput decoders for future telecommunication systems, enabling designers to reduce hardware cost and shorten processing time. Coverage includes VLSI implementation of the turbo decoder, from basic functional units to advanced parallel architectures. The authors discuss both hardware architecture techniques and experimental results, showing the variations in area, throughput and performance across several techniques. The book also illustrates turbo decoders for the 3GPP-LTE/LTE-A and IEEE 802.16e/m standards, which provide a low-complexity but highly flexible circuit structure that supports these standards in multiple parallel modes. Moreover, solutions that overcome the speedup limitations of parallel architectures by modifying the turbo codec are presented. Compared to traditional designs, these methods can yield up to a 33% gain in throughput at similar performance and similar cost.
This book describes the various tradeoffs system designers face when designing embedded memory. Readers designing multi-core systems and systems on chip will benefit from the discussion of topics ranging from memory architecture and array organization to circuit design techniques and design for test. The presentation enables a multi-disciplinary approach to chip design that bridges the gap between the architecture and circuit levels in order to address yield, reliability and power-related issues for embedded memory.
This book analyzes energy and reliability as major challenges faced by designers of computing frameworks in the nanometer technology regime. The authors describe existing solutions to these challenges and then present a new reconfigurable computing platform that leverages high-density nanoscale memory for both data storage and computation to maximize energy efficiency and reliability. The energy and reliability benefits of this new paradigm are illustrated and the design challenges discussed. Various hardware and software aspects of this computing paradigm are described, particularly hardware-software co-designed frameworks in which the hardware unit can be reconfigured to mimic diverse application behavior. Finally, the energy efficiency of the paradigm is compared with that of other well-known reconfigurable computing platforms.
This three-volume set presents advances in the development of concepts and techniques in the area of new technologies and contemporary information system architectures. It guides readers through solving specific research and analytical problems to obtain useful knowledge and business value from data. Each chapter provides an analysis of a specific technical problem, followed by numerical analysis, simulation and implementation of the solution. The books constitute the refereed proceedings of the 38th International Conference "Information Systems Architecture and Technology" (ISAT 2017), held on September 17-19, 2017 in Szklarska Poreba, Poland. The conference was organized by the Computer Science and Management Systems Departments, Faculty of Computer Science and Management, Wroclaw University of Technology, Poland. The papers are organized into topical parts. Part I includes discourses on topics including, but not limited to, Artificial Intelligence Methods, Knowledge Discovery and Data Mining, Big Data, Knowledge-Based Management, Internet of Things, Cloud Computing and High Performance Computing, Distributed Computer Systems, Content Delivery Networks, and Service Oriented Computing. Part II addresses topics including, but not limited to, System Modelling for Control, Recognition and Decision Support, Mathematical Modelling in Computer System Design, Service Oriented Systems and Cloud Computing, and Complex Process Modeling. Part III deals with topics including, but not limited to, Modeling of Manufacturing Processes, Modeling an Investment Decision Process, Management of Innovation, and Management of Organization.
This volume shows how information and communications technology (ICT) can drive business process reengineering (BPR). ICT can enable improvement across BPR activity cycles, as it provides many components that enhance performance and can lead to competitive advantage. ICT can interface with BPR to improve business processes in terms of communication, inventory management, data management, management information systems, customer relationship management, computer-aided design, computer-aided manufacturing (CAM), and computer-aided engineering. This volume explores these issues in depth.
Contemporary High Performance Computing: From Petascale toward Exascale, Volume 3 focuses on the ecosystems surrounding the world's leading centers for high performance computing (HPC). It covers many of the important factors involved in each ecosystem: computer architectures, software, applications, facilities, and sponsors. This third volume is a continuation of the two previous volumes and covers further HPC ecosystems using the same chapter outline: description of a flagship system, major application workloads, facilities, and sponsors. Features: • Describes many prominent international HPC systems from 2015 through 2017, including each system's hardware and software architecture • Covers facilities for each system, including power and cooling • Presents application workloads for each site • Discusses historic and projected trends in technology and applications • Includes contributions from leading experts. Designed for researchers and students in high performance computing, computational science, and related areas, this book provides a valuable guide to state-of-the-art research, trends, and resources in the world of HPC.
• Showcases today's most influential architectural voices who have been instrumental in shifting the direction of design in the last decade • Includes perspectives of influential architects, practitioners and academics, as well as critics including philosophers • Case studies and essays engage and deploy a range of topics and technologies from speculative realism and Object Oriented Ontology to high computation, Big Data, parametricism, digital fabrication, artificial intelligence, augmented reality and virtual reality • A rigorous account of architecture's theoretical and technological concerns over the last decade
High-Performance Computing Using FPGAs covers the area of high-performance reconfigurable computing (HPRC), providing an overview of architectures, tools and applications for it. FPGAs offer very high I/O bandwidth and fine-grained, custom and flexible parallelism; ever-increasing computational needs, the frequency/power wall, the growing maturity and capabilities of FPGAs, and the advent of multicore processors have together driven the acceptance of parallel computational models. The part on architectures introduces different FPGA-based HPC platforms: attached co-processor HPRC architectures such as CHREC's Novo-G and EPCC's Maxwell systems; tightly coupled HPRC architectures, e.g. the Convey hybrid-core computer; reconfigurably networked HPRC architectures, e.g. the QPACE system; and standalone HPRC architectures such as EPFL's CONFETTI system. The part on tools focuses on high-level programming approaches for HPRC, with chapters on C-to-gates tools (such as Impulse-C, AutoESL, Handel-C, MORA-C++), graphical tools (MATLAB-Simulink, NI LabVIEW), domain-specific languages, and languages for heterogeneous computing (for example OpenCL, Microsoft's Kiwi and Alchemy projects). The part on applications presents case studies from several domains where HPRC has been used successfully, such as bioinformatics and computational biology, financial computing, stencil computations, information retrieval, lattice QCD, astrophysics simulations, and weather and climate modeling.
You may like...
• The System Designer's Guide to VHDL-AMS… by Peter J Ashenden, Gregory D. Peterson, … (Paperback, R2,281)
• Advances in Delay-Tolerant Networks… by Joel J. P. C. Rodrigues (Paperback, R4,669)
• Agile Software Architecture - Aligning… by Muhammad Ali Babar, Alan W. Brown, … (Paperback)
• Architecting High Performing, Scalable… by Shailesh Kumar Shivakumar (Paperback, R1,137)
• Novel Approaches to Information Systems… by Naveen Prakash, Deepika Prakash (Hardcover, R5,924)