This must-read text presents the pioneering work of the late Professor Jacob (Jack) T. Schwartz on computational logic and set theory and its application to proof verification techniques, culminating in the AEtnaNova system, a prototype computer program designed to verify the correctness of mathematical proofs presented in the language of set theory. Topics and features: describes in depth how a specific first-order theory can be exploited to model and carry out reasoning in branches of computer science and mathematics; presents a unique system for automated proof verification in large-scale software systems; integrates important proof-engineering issues, reflecting the goals of large-scale verifiers; includes an appendix showing formalized proofs of ordinals, of various properties of the transitive closure operation, of finite and transfinite induction principles, and of Zorn's lemma.
Hardware acceleration in the form of customized datapath and control circuitry tuned to specific applications has gained popularity for its promise to utilize transistors more efficiently. Historically, the computer architecture community has focused on general-purpose processors, and extensive research infrastructure has been developed to support research efforts in this domain. Envisioning future computing systems with a diverse set of general-purpose cores and accelerators, computer architects must add accelerator-related research infrastructures to their toolboxes to explore future heterogeneous systems. This book serves as a primer for the field, as an overview of the vast literature on accelerator architectures and their design flows, and as a resource guidebook for researchers working in related areas.
This book aims to achieve the following goals: (1) to provide a high-level survey of key analytics models and algorithms without going into mathematical details; (2) to analyze the usage patterns of these models; and (3) to discuss opportunities for accelerating analytics workloads using software, hardware, and system approaches. The book first describes 14 key analytics models (exemplars) that span data mining, machine learning, and data management domains. For each analytics exemplar, we summarize its computational and runtime patterns and apply the information to evaluate parallelization and acceleration alternatives for that exemplar. Using case studies from important application domains such as deep learning, text analytics, and business intelligence (BI), we demonstrate how various software and hardware acceleration strategies are implemented in practice. This book is intended for both experienced professionals and students who are interested in understanding core algorithms behind analytics workloads. It is designed to serve as a guide for addressing various open problems in accelerating analytics workloads, e.g., new architectural features for supporting analytics workloads, their impact on programming models and runtime systems, and the design of analytics systems.
How a computational framework can account for the successes and failures of human cognition. At the heart of human intelligence rests a fundamental puzzle: How are we incredibly smart and stupid at the same time? No existing machine can match the power and flexibility of human perception, language, and reasoning. Yet, we routinely commit errors that reveal the failures of our thought processes. What Makes Us Smart makes sense of this paradox by arguing that our cognitive errors are not haphazard. Rather, they are the inevitable consequences of a brain optimized for efficient inference and decision making within the constraints of time, energy, and memory; in other words, data and resource limitations. Framing human intelligence in terms of these constraints, Samuel Gershman shows how a deeper computational logic underpins the "stupid" errors of human cognition. Embarking on a journey across psychology, neuroscience, computer science, linguistics, and economics, Gershman presents unifying principles that govern human intelligence. First, inductive bias: any system that makes inferences based on limited data must constrain its hypotheses in some way before observing data. Second, approximation bias: any system that makes inferences and decisions with limited resources must make approximations. Applying these principles to a range of computational errors made by humans, Gershman demonstrates that intelligent systems designed to meet these constraints yield characteristically human errors. Examining how humans make intelligent and maladaptive decisions, What Makes Us Smart delves into the successes and failures of cognition.
This book constitutes the refereed proceedings of the 22nd International Static Analysis Symposium, SAS 2015, held in Saint-Malo, France, in September 2015. The 18 papers presented in this volume were carefully reviewed and selected from 44 submissions. The papers address all fields of static analysis as a fundamental tool for program verification, bug detection, compiler optimization, program understanding, and software maintenance, featuring theoretical, practical, and application advances in the area.
System-on-chip designs have evolved from fairly simple single-core, single-memory designs to complex heterogeneous multicore SoC architectures consisting of a large number of IP blocks on the same silicon. To meet the high computational demands posed by the latest consumer electronic devices, most current systems are based on this paradigm, which represents a real revolution in many aspects of computing. The attraction of multicore processing for power reduction is compelling. By splitting a set of tasks among multiple processor cores, the operating frequency necessary for each core can be reduced, which in turn allows the voltage on each core to be lowered. Because dynamic power is proportional to the frequency and to the square of the voltage, this yields a substantial gain, even though more cores may be running. As more and more cores are integrated into these designs to share the ever-increasing processing load, the main challenges lie in an efficient memory hierarchy, a scalable system interconnect, new programming paradigms, and an efficient integration methodology for connecting such heterogeneous cores into a single system capable of leveraging their individual flexibility. Current design methods tend toward mixed HW/SW co-designs targeting multicore systems-on-chip for specific applications. To decide on the lowest-cost mix of cores, designers must iteratively map the device's functionality to a particular HW/SW partition and target architectures. In addition, to connect the heterogeneous cores, the architecture requires high-performance communication architectures and efficient communication protocols, such as a hierarchical bus, point-to-point connections, or a Network-on-Chip. Software development also becomes far more complex due to the difficulties in breaking a single processing task into multiple parts that can be processed separately and then reassembled later. This reflects the fact that certain processor jobs cannot be easily parallelized to run concurrently on multiple processing cores and that load balancing between processing cores, especially heterogeneous cores, is very difficult.
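The power argument in this blurb rests on the standard dynamic-power relation P_dyn ≈ C·V²·f. As a minimal illustration (not material from the book; the capacitance, voltage, and frequency values below are purely hypothetical), the following C sketch compares one core running at full frequency and voltage against two cores running at half the frequency and a lower voltage:

```c
/* Illustrative sketch, not from the book: dynamic power P ~ C * V^2 * f.
 * The effective capacitance and the voltage/frequency operating points
 * are hypothetical values chosen only to show the trend. */
#include <stdio.h>

static double dynamic_power(double c_eff, double voltage, double freq_hz)
{
    return c_eff * voltage * voltage * freq_hz;
}

int main(void)
{
    const double c_eff = 1.0e-9;  /* effective switched capacitance in farads (hypothetical) */

    /* One core at 2 GHz / 1.2 V versus two cores at 1 GHz / 0.9 V each. */
    double single = dynamic_power(c_eff, 1.2, 2.0e9);
    double dual   = 2.0 * dynamic_power(c_eff, 0.9, 1.0e9);

    printf("single fast core : %.2f W\n", single);
    printf("two slower cores : %.2f W\n", dual);
    printf("ratio            : %.2f\n", dual / single);
    return 0;
}
```

With these illustrative numbers, the two slower cores together draw roughly 56% of the single fast core's dynamic power while providing comparable aggregate throughput, which is the gain the blurb alludes to.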
This book offers readers broad coverage of techniques to model, verify and validate the behavior and performance of complex distributed embedded systems. The authors attempt to bridge the gap between the three disciplines of model-based design, real-time analysis and model-driven development, for a better understanding of the ways in which new development flows can be constructed, going from system-level modeling to the correct and predictable generation of a distributed implementation, leveraging current and future research results.
Innovations in hardware architecture, like hyper-threading or multicore processors, mean that parallel computing resources are available for inexpensive desktop computers. In only a few years, many standard software products will be based on concepts of parallel programming implemented on such hardware, and the range of applications will be much broader than that of scientific computing, up to now the main application area for parallel computing. Rauber and Runger take up these recent developments in processor architecture by giving detailed descriptions of parallel programming techniques that are necessary for developing efficient programs for multicore processors as well as for parallel cluster systems and supercomputers. Their book is structured in three main parts, covering all areas of parallel computing: the architecture of parallel systems, parallel programming models and environments, and the implementation of efficient application algorithms. The emphasis lies on parallel programming techniques needed for different architectures. For this second edition, all chapters have been carefully revised. The chapter on architecture of parallel systems has been updated considerably, with a greater emphasis on the architecture of multicore systems and adding new material on the latest developments in computer architecture. Lastly, a completely new chapter on general-purpose GPUs and the corresponding programming techniques has been added. The main goal of the book is to present parallel programming techniques that can be used in many situations for a broad range of application areas and which enable the reader to develop correct and efficient parallel programs. Many examples and exercises are provided to show how to apply the techniques. The book can be used as both a textbook for students and a reference book for professionals. The material presented has been used for courses in parallel programming at different universities for many years.
Since the end of Dennard scaling in the early 2000s, improving the energy efficiency of computation has been the main concern of the research community and industry. The large energy efficiency gap between general-purpose processors and application-specific integrated circuits (ASICs) motivates the exploration of customizable architectures, where one can adapt the architecture to the workload. In this Synthesis lecture, we present an overview and introduction of the recent developments on energy-efficient customizable architectures, including customizable cores and accelerators, on-chip memory customization, and interconnect optimization. In addition to a discussion of the general techniques and classification of different approaches used in each area, we also highlight and illustrate some of the most successful design examples in each category and discuss their impact on performance and energy efficiency. We hope that this work captures the state-of-the-art research and development on customizable architectures and serves as a useful reference basis for further research, design, and implementation for large-scale deployment in future computing systems.
This book addresses challenges faced by both the algorithm designer and the chip designer, who need to deal with the ongoing increase of algorithmic complexity and required data throughput for today's mobile applications. The focus is on implementation aspects and implementation constraints of individual components that are needed in transceivers for current standards, such as UMTS, LTE, WiMAX and DVB-S2. The application domain is the so-called outer receiver, which comprises the channel coding, interleaving stages, modulator, and multiple antenna transmission. Throughout the book, the focus is on advanced algorithms that are actually in use in modern communications systems. Their basic principles are always derived with a focus on the resulting communications and implementation performance. As a result, this book serves as a valuable reference for two typically disparate audiences in communication systems and hardware design.
This book describes an approach for designing Systems-on-Chip such that the system meets precise mathematical requirements. The methodologies presented enable embedded systems designers to reuse intellectual property (IP) blocks from existing designs in an efficient, reliable manner, automatically generating correct SoCs from multiple, possibly mismatching, components.
This book provides a comprehensive guide to the design of sustainable and green computing systems (GSC). Coverage includes important breakthroughs in various aspects of GSC, including multi-core architectures, interconnection technology, data centers, high performance computing (HPC), and sensor networks. The authors address the challenges of power efficiency and sustainability in various contexts, including system design, computer architecture, programming languages, compilers and networking.
The emerging three-dimensional (3D) chip architectures, with their intrinsic capability of reducing the wire length, promise attractive solutions to reduce the delay of interconnects in future microprocessors. 3D memory stacking enables much higher memory bandwidth for future chip-multiprocessor design, mitigating the "memory wall" problem. In addition, heterogeneous integration enabled by 3D technology can also result in innovative designs for future microprocessors. This book first provides a brief introduction to this emerging technology, and then presents a variety of approaches to designing future 3D microprocessor systems, by leveraging the benefits of low latency, high bandwidth, and heterogeneous integration capability which are offered by 3D technology.
Building Your Next Big Thing with Google Cloud Platform shows you how to take advantage of the Google Cloud Platform technologies to build all kinds of cloud-hosted software and services for both public and private consumption. Whether you need a simple virtual server to run your legacy application or you need to architect a sophisticated high-traffic web application, Cloud Platform provides all the tools and products required to create innovative applications and a robust infrastructure to manage them. Google is known for the scalability, reliability, and efficiency of its various online products, from Google Search to Gmail. And the results are impressive. Google Search, for example, returns results literally within fractions of a second. How is this possible? Google custom-builds both hardware and software, including servers, switches, networks, data centers, the operating system stack, application frameworks, applications, and APIs. Have you ever imagined what you could build if you were able to tap the same infrastructure that Google uses to create and manage its products? Now you can! Using this book as your compass, you can navigate your way through the Google Cloud Platform and turn your ideas into reality. The authors, both Google Developer Experts in Google Cloud Platform, systematically introduce various Cloud Platform products one at a time and discuss their strengths and scenarios where they are a suitable fit. But rather than a manual-like "tell all" approach, the emphasis is on how to Get Things Done so that you get up to speed with Google Cloud Platform as quickly as possible. You will learn how to use the following technologies, among others: Google Compute Engine, Google App Engine, Google Container Engine, Google App Engine Managed VMs, Google Cloud SQL, Google Cloud Storage, Google Cloud Datastore, Google BigQuery, Google Cloud Dataflow, Google Cloud DNS, Google Cloud Pub/Sub, Google Cloud Endpoints, Google Cloud Deployment Manager, and Google APIs such as the Translate API. Using real-world examples, the authors first walk you through the basics of cloud computing, cloud terminologies and public cloud services. Then they dive right into Google Cloud Platform and how you can use it to tackle your challenges, build new products, analyze big data, and much more. Whether you're an independent developer, startup, or Fortune 500 company, you have never had easier access to world-class production, product development, and infrastructure tools. Google Cloud Platform is your ticket to leveraging your skills and knowledge into making reliable, scalable, and efficient products, just like how Google builds its own products.
Having hit power limitations to even more aggressive out-of-order execution in processor cores, many architects in the past decade have turned to single-instruction-multiple-data (SIMD) execution to increase single-threaded performance. SIMD execution, or having a single instruction drive execution of an identical operation on multiple data items, was already well established as a technique to efficiently exploit data parallelism. Furthermore, support for it was already included in many commodity processors. However, in the past decade, SIMD execution has seen a dramatic increase in the set of applications using it, which has motivated big improvements in hardware support in mainstream microprocessors. The easiest way to provide a big performance boost to SIMD hardware is to make it wider, i.e., to increase the number of data items the hardware operates on simultaneously. Indeed, microprocessor vendors have done this. However, as we exploit more data parallelism in applications, certain challenges can negatively impact performance. In particular, conditional execution, non-contiguous memory accesses, and the presence of some dependences across data items are key roadblocks to achieving peak performance with SIMD execution. This book first describes data parallelism, and why it is so common in popular applications. We then describe SIMD execution, and explain where its performance and energy benefits come from compared to other techniques to exploit parallelism. Finally, we describe SIMD hardware support in current commodity microprocessors. This includes both expected and unexpected design tradeoffs, as we work to overcome the challenges encountered when trying to map real software to SIMD execution.
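To make the SIMD idea concrete, the sketch below (an illustrative example, not code from the book) shows the same element-wise addition written as a plain scalar loop and with 128-bit SSE intrinsics, where a single instruction adds four floats at once. The array contents are arbitrary, the length is assumed to be a multiple of four to keep the example short, and it assumes an x86 processor with SSE support.

```c
/* Illustrative sketch (not from the book): scalar vs. SIMD element-wise add.
 * Requires an x86 target with SSE; n is assumed to be a multiple of 4. */
#include <immintrin.h>
#include <stdio.h>

static void add_scalar(const float *a, const float *b, float *out, int n)
{
    for (int i = 0; i < n; i++)
        out[i] = a[i] + b[i];                 /* one element per operation */
}

static void add_simd(const float *a, const float *b, float *out, int n)
{
    for (int i = 0; i < n; i += 4) {
        __m128 va = _mm_loadu_ps(&a[i]);      /* load 4 floats */
        __m128 vb = _mm_loadu_ps(&b[i]);
        _mm_storeu_ps(&out[i], _mm_add_ps(va, vb));  /* 4 additions in one instruction */
    }
}

int main(void)
{
    float a[8] = {1, 2, 3, 4, 5, 6, 7, 8};
    float b[8] = {8, 7, 6, 5, 4, 3, 2, 1};
    float s[8], v[8];

    add_scalar(a, b, s, 8);
    add_simd(a, b, v, 8);

    for (int i = 0; i < 8; i++)
        printf("%g %g\n", s[i], v[i]);        /* both columns print 9 */
    return 0;
}
```

The two roadblocks the blurb names show up directly in such loops: a branch inside the loop body or strided/gathered loads would prevent the simple 4-wide form above from reaching peak throughput.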
This book serves as a hands-on guide to timing constraints in integrated circuit design. Readers will learn to maximize performance of their IC designs, by specifying timing requirements correctly. Coverage includes key aspects of the design flow impacted by timing constraints, including synthesis, static timing analysis and placement and routing. Concepts needed for specifying timing requirements are explained in detail and then applied to specific stages in the design flow, all within the context of Synopsys Design Constraints (SDC), the industry-leading format for specifying constraints.
This book provides techniques to tackle the design challenges raised by the increasing diversity and complexity of emerging, heterogeneous architectures for embedded systems. It describes an approach based on techniques from software engineering called aspect-oriented programming, which allow designers to control today's sophisticated design tool chains while maintaining a single application source code. Readers are introduced to the basic concepts of an aspect-oriented, domain-specific language that enables control of a wide range of compilation and synthesis tools in the partitioning and mapping of an application to a heterogeneous (and possibly multi-core) target architecture. Several examples are presented that illustrate the benefits of the approach developed for applications from avionics and digital signal processing. Using the aspect-oriented programming techniques presented in this book, developers can reuse extensive sections of their designs, while preserving the original application source code, thus promoting developer productivity as well as architecture and performance portability. Describes an aspect-oriented approach for the compilation and synthesis of applications targeting heterogeneous embedded computing architectures. Includes examples using an integrated tool chain for compilation and synthesis. Provides validation and evaluation for targeted reconfigurable heterogeneous architectures. Enables design portability, given changing target devices. Allows developers to maintain a single application source code when targeting multiple architectures.
This new book on mathematical logic by Jeremy Avigad gives a thorough introduction to the fundamental results and methods of the subject from the syntactic point of view, emphasizing logic as the study of formal languages and systems and their proper use. Topics include proof theory, model theory, the theory of computability, and axiomatic foundations, with special emphasis given to aspects of mathematical logic that are fundamental to computer science, including deductive systems, constructive logic, the simply typed lambda calculus, and type-theoretic foundations. Clear and engaging, with plentiful examples and exercises, it is an excellent introduction to the subject for graduate students and advanced undergraduates who are interested in logic in mathematics, computer science, and philosophy, and an invaluable reference for any practicing logician's bookshelf.
This book constitutes the refereed proceedings of the 18th National Conference on Computer Engineering and Technology, NCCET 2014, held in Guiyang, China, during July/August 2014. The 18 papers presented were carefully reviewed and selected from 85 submissions. They are organized in topical sections on processor architecture; computer application and software optimization; technology on the horizon.
This book describes a model-based development approach for globally-asynchronous locally-synchronous distributed embedded controllers. This approach uses Petri nets as the modeling formalism to create platform- and network-independent models supporting the use of design automation tools. To support this development approach, the Petri nets class in use is extended with time-domains and asynchronous-channels. The authors' approach uses models not only to provide a better understanding of the distributed controller and to improve communication among the stakeholders, but also to support the entire lifecycle, including simulation, verification (using model-checking tools), implementation (relying on automatic code generators), and deployment of the distributed controller onto specific platforms. Uses a graphical and intuitive modeling formalism supported by design automation tools; Enables verification, ensuring that the distributed controller was correctly specified; Provides flexibility in the implementation and maintenance phases to achieve desired constraints (high performance, low power consumption, reduced costs), enabling porting to different platforms using different communication nodes, without changing the underlying behavioral model.
Learn the big skills of C programming by creating bite-size projects! Work your way through these 21 fun and interesting tiny challenges to master essential C techniques you'll use in full-size applications. In Tiny C Projects you will learn how to: create libraries of functions for handy use and re-use; process input through an I/O filter to generate customized output; use recursion to explore a directory tree and find duplicate files; develop AI for playing simple games; explore programming capabilities beyond the standard C library functions; evaluate and grow the potential of your programs; and improve code to better serve users. Tiny C Projects is an engaging collection of 21 small programming challenges! Hone and develop your C abilities with lighthearted games like Hunt the Wumpus and tic-tac-toe, utilities like a useful calendar and a mini-editor app, and thought-provoking exercises like encoding and cyphers. Every project encourages you to evolve your code, add new functions, and explore the full capabilities of C. About the technology: C is a mature and secure language that's perfect for everything from low-level systems programming to high-performance embedded applications. The 21 fun projects in this guide demonstrate the range of C's capabilities and give you hands-on experience with this powerful and flexible language. About the book: Tiny C Projects builds and hones your C programming skills with interesting and exciting challenges. You'll expand your C programming portfolio by creating useful utility programs, fun games, password generators, directory utilities, and more. Each program you create starts out simple and then deepens as you explore approaches and alternatives you can use to achieve your goals. Once you're done, you'll find it easy to scale up the skills you've learned from tiny projects into real applications. Audience: for C programmers of all skill levels who want to hone their skills with the language.
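As a taste of the kind of project the blurb lists, here is a minimal, hypothetical stdin-to-stdout I/O filter in C (not code from the book) that simply upper-cases whatever is piped through it:

```c
/* Illustrative sketch, not from the book: a classic I/O filter that reads
 * stdin, transforms each character, and writes the result to stdout.
 * Example use after compiling to ./upper:  cat notes.txt | ./upper */
#include <ctype.h>
#include <stdio.h>

int main(void)
{
    int ch;

    /* Read one character at a time until end of input. */
    while ((ch = getchar()) != EOF)
        putchar(toupper(ch));

    return 0;
}
```

The filter pattern scales from this trivial transform to the customized output generators the book describes, since only the per-character (or per-line) processing step changes.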
Traditionally, design space exploration for Systems-on-Chip (SoCs) has focused on the computational aspects of the problem at hand. However, as the number of components on a single chip and their performance continue to increase, the communication architecture plays a major role in the area, performance and energy consumption of the overall system. As a result, a shift from computation-based to communication-based design becomes mandatory. Towards this end, network-on-chip (NoC) communication architectures have emerged recently as a promising alternative to classical bus and point-to-point communication architectures. In this dissertation, we study outstanding research problems related to modeling, analysis and optimization of NoC communication architectures. More precisely, we present novel design methodologies, software tools and FPGA prototypes to aid the design of application-specific NoCs.
This book describes model-based development of adaptive embedded systems, which enable improved functionality using the same resources. The techniques presented facilitate design from a higher level of abstraction, focusing on the problem domain rather than on the solution domain, thereby increasing development efficiency. Models are used to capture system specifications and to implement (manually or automatically) system functionality. The authors demonstrate the real impact of adaptivity on engineering of embedded systems by providing several industrial examples of the models used in the development of adaptive embedded systems.
As Moore's Law and Dennard scaling trends have slowed, the challenges of building high-performance computer architectures while maintaining acceptable power efficiency levels have heightened. Over the past ten years, architecture techniques for power efficiency have shifted from primarily focusing on module-level efficiencies, toward more holistic design styles based on parallelism and heterogeneity. This work highlights and synthesizes recent techniques and trends in power-efficient computer architecture. Table of Contents: Introduction / Voltage and Frequency Management / Heterogeneity and Specialization / Communication and Memory Systems / Conclusions / Bibliography / Authors' Biographies
This book presents the methodologies for embedded systems design using field programmable gate array (FPGA) devices for the most modern applications. Coverage includes state-of-the-art research from academia and industry on a wide range of topics, including applications, advanced electronic design automation (EDA), novel system architectures, embedded processors, arithmetic, and dynamic reconfiguration.