This book is an introductory text on structural analysis and structural design. While the emphasis is on fundamental concepts, the ideas are reinforced through a combination of a limited set of versatile classical techniques and numerical methods. Structural analysis and structural design, including optimal design, are strongly linked through design examples. Included computer software enhances the learning experience.
This book provides readers with an insightful guide to the design, testing and optimization of 2.5D integrated circuits. The authors describe a set of design-for-test methods to address various challenges posed by the new generation of 2.5D ICs, including pre-bond testing of the silicon interposer, at-speed interconnect testing, built-in self-test architecture, extest scheduling, and a programmable method for low-power scan shift in SoC dies. This book covers many testing techniques that have already been used in mainstream semiconductor companies. Readers will benefit from an in-depth look at test-technology solutions that are needed to make 2.5D ICs a reality and commercially viable.
This book constitutes the refereed proceedings of the 10th International Conference on Model Transformation, ICMT 2017, held as part of STAF 2017, in Marburg, Germany, in July 2017. The 9 full papers and 2 short papers were carefully reviewed and selected from 31 submissions. The papers are organized in the following topical sections: transformation paradigms, languages, algorithms and strategies; development of transformations; and applications and case studies.
The prefix operation on a set of data is one of the simplest and most useful building blocks in parallel algorithms. This introduction to those aspects of parallel programming and parallel algorithms that relate to the prefix problem emphasizes its use in a broad range of familiar and important problems. The book illustrates how the prefix operation approach to parallel computing leads to fast and efficient solutions to many different kinds of problems. Students, teachers, programmers, and computer scientists will want to read this clear exposition of an important approach.
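As a brief illustrative aside (a minimal sketch, not taken from the book), the classic "doubling" formulation of the prefix-sum operation can be written as follows; the sequential inner loop stands in for what would be one processor per element in a genuinely parallel setting.

```python
# Hypothetical sketch (not from the book): the "doubling" prefix-sum scheme,
# written sequentially but structured as it would run in parallel.
# After round d, position i holds the sum of the last 2**d inputs ending at i,
# so about log2(n) rounds suffice.

def prefix_sums(values):
    """Inclusive prefix sums via the parallel doubling pattern."""
    result = list(values)
    n = len(result)
    step = 1
    while step < n:
        # In a real parallel implementation, every i in this loop would be
        # updated simultaneously by its own processor.
        updated = result[:]
        for i in range(step, n):
            updated[i] = result[i] + result[i - step]
        result = updated
        step *= 2
    return result

print(prefix_sums([3, 1, 4, 1, 5, 9, 2, 6]))
# [3, 4, 8, 9, 14, 23, 25, 31]
```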
This book describes novel software concepts to increase reliability under user-defined constraints. The authors' approach bridges, for the first time, the reliability gap between hardware and software. Readers will learn how to achieve increased soft error resilience on unreliable hardware, while exploiting the inherent error-masking characteristics and the error-mitigation potential (for errors stemming from soft errors, aging, and process variations) at different software layers.
This book presents new methods and tools for the integration and simulation of smart devices. The design approach described in this book explicitly accounts for integration of Smart Systems components and subsystems as a specific constraint. It includes methodologies and EDA tools to enable multi-disciplinary and multi-scale modeling and design, simulation of multi-domain systems, subsystems and components at all levels of abstraction, system integration and exploration for optimization of functional and non-functional metrics. By covering theoretical and practical aspects of smart device design, this book targets people who work on or study hardware/software modelling, component integration and simulation in different roles (system integrators, designers, developers, researchers, teachers, students, etc.). In particular, it is a good introduction for readers interested in managing heterogeneous components in an efficient and effective way across different domains and abstraction levels. People active in smart device development can understand both the current state of practice and future research directions. * Provides a comprehensive overview of smart systems design, focusing on design challenges and cutting-edge solutions; * Enables development of a co-simulation and co-design environment that accounts for the peculiarities of the basic subsystems and components to be integrated; * Describes development of modeling and design techniques, methods and tools that enable multi-domain simulation and optimization at various levels of abstraction and across different technological domains.
This unique text/reference provides a comprehensive review of distributed simulation (DS) from the perspective of Model Driven Engineering (MDE), illustrating how MDE affects the overall lifecycle of the simulation development process. Numerous practical case studies are included to demonstrate the utility and applicability of the methodology, many of which are developed from tools available to download from the public domain. Topics and features: provides a thorough introduction to the fundamental concepts, principles and processes of modeling and simulation, MDE and high-level architecture; describes a road map for building a DS system in accordance with the MDE perspective, and a technical framework for the development of conceptual models; presents a focus on federate (simulation environment) architectures, detailing a practical approach to the design of federations (i.e., simulation member design); discusses the main activities related to scenario management in DS, and explores the process of MDE-based implementation, integration and testing; reviews approaches to simulation evolution and modernization, including architecture-driven modernization; examines the potential synergies between the agent, DS, and MDE methodologies, suggesting avenues for future research at the intersection of these three fields. Distributed Simulation - A Model Driven Engineering Approach is an important resource for all researchers and practitioners involved in modeling and simulation, and software engineering, who may be interested in adopting MDE principles when developing complex DS systems.
Thomas Ludwig reveals design characteristics for researching information infrastructures and their diverse information resources, types of users and systems, as well as divergent practices. By conducting empirically-based design case studies in the domain of crisis management, the author uncovers methodological and design challenges in understanding new kinds of interconnected information infrastructures from a praxeological perspective. Based on implemented novel ICT tools, he derives design characteristics that focus on integrating objectively and subjectively queried insights into the situated activities of people, as well as emphasizing the subjective nature of information quality.
This book constitutes the thoroughly refereed proceedings of the 11th International Conference on Evaluation of Novel Approaches to Software Engineering, ENASE 2016, held in Rome, Italy, in April 2016. The 11 full papers presented were carefully reviewed and selected from 79 submissions. The mission of ENASE is to be a prime international forum to discuss and publish research findings and IT industry experiences with relation to the evaluation of novel approaches to software engineering. The conference acknowledges necessary changes in systems and software thinking due to contemporary shifts of computing paradigm to e-services, cloud computing, mobile connectivity, business processes, and societal participation.
This textbook serves as an introduction to the subject of embedded systems design, using microcontrollers as core components. It develops concepts from the ground up, covering the development of embedded systems technology, architectural and organizational aspects of controllers and systems, processor models, and peripheral devices. Since microprocessor-based embedded systems tightly blend hardware and software components in a single application, the book also introduces the subjects of data representation formats, data operations, and programming styles. The practical component of the book is tailored around the architecture of a widely used Texas Instruments microcontroller, the MSP430. A companion website offers for download an experimenter's kit and lab manual, along with PowerPoint slides and solutions for instructors.
This book is a comprehensive introduction into Organic Computing (OC), presenting systematically the current state-of-the-art in OC. It starts with motivating examples of self-organising, self-adaptive and emergent systems, derives their common characteristics and explains the fundamental ideas for a formal characterisation of such systems. Special emphasis is given to a quantitative treatment of concepts like self-organisation, emergence, autonomy, robustness, and adaptivity. The book shows practical examples of architectures for OC systems and their applications in traffic control, grid computing, sensor networks, robotics, and smart camera systems. The extension of single OC systems into collective systems consisting of social agents based on concepts like trust and reputation is explained. OC makes heavy use of learning and optimisation technologies; a compact overview of these technologies and related approaches to self-organising systems is provided. So far, OC literature has been published with the researcher in mind. Although the existing books have tried to follow a didactical concept, they remain basically collections of scientific papers. A comprehensive and systematic account of the OC ideas, methods, and achievements in the form of a textbook which lends itself to the newcomer in this field has been missing so far. The targeted reader of this book is the master student in Computer Science, Computer Engineering or Electrical Engineering - or any other newcomer to the field of Organic Computing with some technical or Computer Science background. Readers can seek access to OC ideas from different perspectives: OC can be viewed (1) as a "philosophy" of adaptive and self-organising - life-like - technical systems, (2) as an approach to a more quantitative and formal understanding of such systems, and finally (3) a construction method for the practitioner who wants to build such systems. In this book, we first try to convey to the reader a feeling of the special character of natural and technical self-organising and adaptive systems through a large number of illustrative examples. Then we discuss quantitative aspects of such forms of organisation, and finally we turn to methods of how to build such systems for practical applications.
Contemporary High Performance Computing: From Petascale toward Exascale, Volume 3 focuses on the ecosystems surrounding the world's leading centers for high performance computing (HPC). It covers many of the important factors involved in each ecosystem: computer architectures, software, applications, facilities, and sponsors. This third volume is a continuation of the two previous volumes, covering additional HPC ecosystems using the same chapter outline: description of a flagship system, major application workloads, facilities, and sponsors. Features: describes many prominent, international systems in HPC from 2015 through 2017, including each system's hardware and software architecture; covers facilities for each system, including power and cooling; presents application workloads for each site; discusses historic and projected trends in technology and applications; includes contributions from leading experts. Designed for researchers and students in high performance computing, computational science, and related areas, this book provides a valuable guide to the state-of-the-art research, trends, and resources in the world of HPC.
Reconfigurable computing techniques and adaptive systems are some of the most promising architectures for microprocessors. Reconfigurable and Adaptive Computing: Theory and Applications explores the latest research activities on hardware architecture for reconfigurable and adaptive computing systems. The first section of the book covers reconfigurable systems. The book presents a software and hardware codesign flow for coarse-grained systems-on-chip, a video watermarking algorithm for the H.264 standard, a solution for regular expression matching systems, and a novel field programmable gate array (FPGA)-based acceleration solution with a MapReduce framework on multiple hardware accelerators. The second section discusses network-on-chip, including an implementation of a multiprocessor system-on-chip platform with shared memory access, end-to-end quality-of-service metrics modeling based on a multi-application environment in network-on-chip, and a 3D ant colony routing (3D-ACR) for network-on-chip with three different 3D topologies. The final section addresses the methodology of system codesign. The book introduces a new software-hardware codesign flow for embedded systems that models both processors and intellectual property cores as services. It also proposes an efficient algorithm for dependent-task software-hardware codesign with the greedy partitioning and insert scheduling method (GPISM), based on the task graph.
Parallel Computing Architectures and APIs: IoT Big Data Stream Processing commences from the point at which high-performance uniprocessors were becoming increasingly complex, expensive, and power-hungry. A basic trade-off exists between the use of one or a small number of such complex processors, at one extreme, and a moderate to very large number of simpler processors, at the other. When the latter are combined with a high-bandwidth interprocessor communication facility, the design process is significantly simplified. However, two major roadblocks prevent the widespread adoption of such moderately to massively parallel architectures: the interprocessor communication bottleneck, and the difficulty and high cost of algorithm/software development. One of the most important reasons for studying parallel computing architectures is to learn how to extract the best performance from parallel systems. Specifically, you must understand these architectures so that you can exploit them during programming via the standardized APIs. This book would be useful for analysts, designers and developers of high-throughput computing systems essential for big data stream processing emanating from IoT-driven cyber-physical systems (CPS). This pragmatic book: devolves uniprocessors in terms of a ladder of abstractions to ascertain (say) performance characteristics at a particular level of abstraction; explains the limitations of uniprocessor high performance because of Moore's Law; introduces the basics of processors, networks and distributed systems; explains the characteristics of parallel systems, parallel computing models and parallel algorithms; explains the three primary categorical representatives of parallel computing architectures, namely, shared memory, message passing and stream processing; introduces the three primary categorical representatives of parallel programming APIs, namely, OpenMP, MPI and CUDA; provides an overview of the Internet of Things (IoT), wireless sensor networks (WSN), sensor data processing, Big Data and stream processing; and provides an introduction to 5G communications, Edge and Fog computing. Parallel Computing Architectures and APIs: IoT Big Data Stream Processing discusses stream processing that enables the gathering, processing and analysis of high-volume, heterogeneous, continuous Internet of Things (IoT) big data streams, to extract insights and actionable results in real time. Application domains requiring data stream management include military, homeland security, sensor networks, financial applications, network management, website performance tracking, real-time credit card fraud detection, etc.
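As a purely illustrative aside (not taken from the book), the fork/join, data-parallel style that shared-memory APIs such as OpenMP embody can be sketched in a few lines with Python's standard multiprocessing module; the worker function and chunking scheme below are hypothetical choices made only for this example.

```python
# Hypothetical illustration of the fork/join shape of data-parallel code:
# split the data, let each worker reduce its own slice independently,
# then combine the partial results sequentially.
from multiprocessing import Pool

def partial_sum(chunk):
    # Each worker processes its own slice with no communication.
    return sum(x * x for x in chunk)

if __name__ == "__main__":
    data = list(range(1_000_000))
    n_workers = 4
    chunk_size = len(data) // n_workers
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
    with Pool(n_workers) as pool:
        # Fork: map the independent partial reductions across worker processes.
        # Join: sum the partial results in the parent process.
        total = sum(pool.map(partial_sum, chunks))
    print(total)
```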
Gain a strong foundation of core WSO2 ESB concepts and acquire a proven set of guidelines designed to get you started with WSO2 ESB quickly and efficiently. This book focuses on the various enterprise integration capabilities of WSO2 ESB, along with a broad range of examples that you can try out. From beginning to end, Beginning WSO2 ESB effectively guides you in gradually building expertise in enterprise integration with WSO2 ESB for your SOA infrastructure. Nowadays, successful enterprises rely heavily on how well the underlying software applications and services work together to produce a unified business functionality. This enterprise integration is facilitated by an Enterprise Service Bus (ESB). This book provides comprehensive coverage of the fundamentals of the WSO2 ESB and its capabilities, through real-world enterprise integration use cases. What You'll Learn: get started with WSO2 ESB; discover message processing techniques with WSO2 ESB; integrate REST and SOAP services; use enterprise messaging techniques: JMS, AMQP, MQTT; manage file-based integration and integrate with proprietary systems such as SAP; extend and administer WSO2 ESB. Who This Book Is For: all levels of IT professionals, from developers to integration architects, who are interested in using WSO2 ESB for their SOA infrastructure.
Computer Architectures is a collection of multidisciplinary historical works unearthing sites, concepts, and concerns that catalyzed the cross-contamination of computers and architecture in the mid-20th century. Weaving together intellectual, social, cultural, and material histories, this book paints the landscape that brought computing into the imagination, production, and management of the built environment, whilst foregrounding the impact of architecture in shaping technological development. The book is organized into sections corresponding to the classic von Neumann diagram for computer architecture: program (control unit), storage (memory), input/output and computation (arithmetic/logic unit), each acting as a quasi-material category for parsing debates among architects, engineers, mathematicians, and technologists. Collectively, authors bring forth the striking homologies between a computer program and an architectural program, a wall and an interface, computer memory and storage architectures, structures of mathematics and structures of things. The collection initiates new histories of knowledge and technology production that turn an eye toward disciplinary fusions and their institutional and intellectual drives. Constructing the common ground between design and computing, this collection addresses audiences working at the nexus of design, technology, and society, including historians and practitioners of design and architecture, science and technology scholars, and media studies scholars.
This book constitutes the refereed proceedings of the 20th CCF Conference on Computer Engineering and Technology, NCCET 2016, held in Xi'an, China, in August 2016. The 21 full papers presented were carefully reviewed and selected from 120 submissions. They are organized in topical sections on processor architecture; application specific processors; computer application and software optimization; technology on the horizon.
This volume shows how ICT (information and communications technology) can play the role of a driver of business process reengineering (BPR). ICT can aid in enabling improvement in BPR activity cycles as it provides many components that enhance performance that can lead to competitive advantages. IT can interface with BPR to improve business processes in terms of communication, inventory management, data management, management information systems, customer relationship management, computer-aided design, computer-aided manufacturing (CAM), and computer-aided engineering. This volume explores these issues in depth.
Exploring new trends in computer technology, Corporaal introduces an innovative and exciting concept: Transport Triggered Architectures (TTAs). Unlike most traditional architectures, where programmed operations trigger internal data transports, TTAs function through programming the data transports themselves; a toy sketch of this move-programming style follows the description below. As a result, the new architecture alleviates bottlenecks, allows for new code-generation optimizations and exploits hardware more efficiently. Founded on the author's recent research, this book evaluates the attributes of different classes of architectures. It demonstrates how TTAs can be used as a template for automatic generation of application-specific processors and highlights their suitability for embedded system design. Several commercial TTA implementations have proven the concept and its advantages.
Microprocessor Architectures is a cutting-edge text which will prove invaluable both to industrial hardware and software engineers involved in embedded system design and to postgraduate electrical engineering and computer science students. This clearly structured reference demonstrates the versatility of TTAs and explores their influential role in the next generation of computer architecture.
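To make the "program the transports, not the operations" idea concrete, here is a hypothetical toy sketch (not taken from Corporaal's book): a simulated add unit whose addition is launched by a move into its trigger port, so the whole program is nothing but a list of data transports.

```python
# Hypothetical toy model of a transport-triggered add unit.
# The program names no operations, only moves; writing the "trigger" port
# is what launches the addition.

class AddUnit:
    def __init__(self):
        self.operand = 0
        self.result = 0

    def write(self, port, value):
        if port == "operand":
            self.operand = value
        elif port == "trigger":
            # The move into the trigger port starts the operation.
            self.result = self.operand + value

regs = {"r1": 2, "r2": 3, "r3": 0}
add = AddUnit()

# A TTA-style program: a sequence of data transports between sockets.
moves = [("r1", "add.operand"), ("r2", "add.trigger"), ("add.result", "r3")]
for src, dst in moves:
    value = regs[src] if src in regs else add.result
    if dst.startswith("add."):
        add.write(dst.split(".")[1], value)
    else:
        regs[dst] = value

print(regs["r3"])  # 5
```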
This book covers the latest approaches and results from reconfigurable computing architectures employed in the finance domain. So-called field-programmable gate arrays (FPGAs) have already been shown to outperform standard CPU- and GPU-based computing architectures by far, saving up to 99% of energy depending on the compute task. Renowned authors from financial mathematics, computer architecture and the finance business introduce readers to today's challenges in finance IT, illustrate the most advanced approaches and use cases, and present currently known methodologies for integrating FPGAs into finance systems, together with the latest results. The complete algorithm-to-hardware flow is covered holistically, so this book serves as a hands-on guide for IT managers, researchers and quants/programmers who are thinking about integrating FPGAs into their current IT systems.
This thesis takes an empirical approach to understanding the behavior of, and interactions between, the two main components of reinforcement learning: the learning algorithm and the functional representation of learned knowledge. The author approaches these entities using design-of-experiments techniques not commonly employed to study machine learning methods. The results outlined in this work provide insight into what enables, and what has an effect on, successful reinforcement learning implementations, so that this learning method can be applied to more challenging problems.
The book presents the state-of-the-art in high performance computing and simulation on modern supercomputer architectures. It covers trends in high performance application software development in general, and specifically for parallel vector architectures. The contributions cover, among others, the fields of computational fluid dynamics, physics, chemistry, and meteorology. Innovative application fields like reactive flow simulations and nanotechnology are presented.
This book provides a unified treatment of Flip-Flop design and selection in nanometer CMOS VLSI systems. The design aspects related to the energy-delay tradeoff in Flip-Flops are discussed, including their energy-optimal selection according to the targeted application, and the detailed circuit design in nanometer CMOS VLSI systems. Design strategies are derived in a coherent framework that explicitly includes nanometer effects, including leakage, layout parasitics and process/voltage/temperature variations, as the main advances over the existing body of work in the field. The related design tradeoffs are explored in a wide range of applications and for the related energy-performance targets. A wide range of existing and recently proposed Flip-Flop topologies is discussed. Theoretical foundations are provided to set the stage for the derivation of design guidelines, and emphasis is given to practical aspects and consequences of the presented results. Analytical models and derivations are introduced when needed to gain insight into the inter-dependence of design parameters under practical constraints. This book serves as a valuable reference for practicing engineers working in the VLSI design area, and as a textbook for senior undergraduate, graduate and postgraduate students (already familiar with digital circuits and timing).
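As a generic point of reference (stated here only as background, not as this book's own formulation), the energy-delay tradeoff for a flip-flop is commonly summarised with an energy-delay-product style metric, where the exponent n weights delay against energy:

```latex
% Generic energy-delay metrics, given as background rather than as the
% book's formulation: E is the energy dissipated per clock cycle and
% t_{CQ} is the clock-to-output delay of the flip-flop.
\[
  \mathrm{EDP} = E \cdot t_{CQ},
  \qquad
  \mathrm{ED^{n}P} = E \cdot t_{CQ}^{\,n} \quad (n \ge 1).
\]
```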
This book describes the life cycle process of IP cores, from specification to production, including IP modeling, verification, optimization, and protection. Various trade-offs in the design process are discussed, including those associated with many of the most common memory cores, controller IPs and system-on-chip (SoC) buses. Readers will also benefit from the author's practical coverage of new verification methodologies, such as bug localization, UVM, and scan-chain. A SoC case study is presented to compare traditional verification with the new verification methodologies. Discusses the entire life cycle process of IP cores, from specification to production, including IP modeling, verification, optimization, and protection; introduces Verilog from both the implementation and verification points of view; demonstrates how to use IP in applications such as memory controllers and SoC buses; describes a new verification methodology called bug localization; presents a novel scan-chain methodology for RTL debugging; enables readers to employ the UVM methodology in straightforward, practical terms.
This book presents a detailed review of high-performance computing infrastructures for next-generation big data and fast data analytics. Features: includes case studies and learning activities throughout the book and self-study exercises in every chapter; presents detailed case studies on social media analytics for intelligent businesses and on big data analytics (BDA) in the healthcare sector; describes the network infrastructure requirements for effective transfer of big data, and the storage infrastructure requirements of applications which generate big data; examines real-time analytics solutions; introduces in-database processing and in-memory analytics techniques for data mining; discusses the use of mainframes for handling real-time big data and the latest types of data management systems for BDA; provides information on the use of cluster, grid and cloud computing systems for BDA; reviews peer-to-peer techniques and tools, and the common information visualization techniques used in BDA.