If you look around, you will find that all computer systems, from portable devices to the most powerful supercomputers, are heterogeneous in nature. The most obvious heterogeneity is the presence of computing nodes with different capabilities (e.g., multicore CPUs, GPUs, and FPGAs), but other forms of heterogeneity exist as well, such as in the memory system and the interconnect. The main reason for these different types of heterogeneity is to achieve good performance with power efficiency. Heterogeneous computing brings both challenges and opportunities, and this book discusses both. It shows that these challenges must be addressed at all levels of the computing stack: from algorithms all the way down to process technology. The topic is examined from different angles: hardware challenges, the current hardware state of the art, software issues, how to make the best use of current heterogeneous systems, and what lies ahead. The aim of this book is to present the big picture of heterogeneous computing: whether you are a hardware designer or a software developer, you need to know how the pieces of the puzzle fit together. The main goal is to bring researchers and engineers to the forefront of this research frontier, in a new era that started a few years ago and is expected to continue for decades. Academics, researchers, practitioners, and students will benefit from this book and will be prepared to tackle the big wave of heterogeneous computing that is here to stay.
The increasing adoption of Business Process Management (BPM) has inspired pioneering software architects and developers to effectively leverage BPM-based software and process-centric architecture (PCA) to create software systems that enable essential business processes. Reflecting this emerging trend and evolving field, Process-Centric Architecture for Enterprise Software Systems provides a complete and accessible introduction explaining this architecture. The text presents, in detail, the analysis and design principles used in process-centric architecture. Illustrative examples demonstrate how to architect and design enterprise systems based on the business processes central to your organization. It covers the architectural aspects of business process management, the evolution of IT systems in enterprises, the importance of a business process focus, the role of workflows, business rules, enterprise application integration, and business process modeling languages such as WS-BPEL and BPML. It also investigates:
* Fundamental concepts of the process-centric architecture style
* The PCA approach to architecting enterprise IT systems
* Business process driven applications and integration
* Two case studies that illustrate how to architect and design enterprise applications based on PCA
* SOA in the context of process-centric architecture
* Standards, technologies, and infrastructure behind PCA
Explaining how to architect enterprise systems using a BPMS technology platform, J2EE components, and Web services, this forward-looking book will empower you to create systems centered on business processes and make today's enterprise processes successful and agile.
This book describes a unique approach to bringing robotic technology into elders' daily lives. Low-cost components and low-cost robotic assistants are effectively combined to offer high-quality services to elders and people in need. The book presents, in a comprehensive way, how technology can be used to develop a new healthcare paradigm in which high-quality services are offered at home, thus reducing the ever-increasing hospitalization costs of the elderly and of people with chronic diseases.
Originally published in 1995, Large Deviations for Performance Analysis consists of two synergistic parts. The first half develops the theory of large deviations from the beginning, through recent results on the theory for processes with boundaries, keeping to a very narrow path: continuous-time, discrete-state processes. By developing only what is needed for the applications, the theory is kept to a manageable level, both in terms of length and in terms of difficulty. Within its scope, the treatment is detailed, comprehensive and self-contained. As the book shows, there are sufficiently many interesting applications of jump Markov processes to warrant a special treatment. The second half is a collection of applications developed at Bell Laboratories. The applications cover large areas of the theory of communication networks: circuit-switched transmission, packet transmission, multiple-access channels, and the M/M/1 queue. Aspects of parallel computation are covered as well, including the basics of job allocation, rollback-based parallel simulation, assorted priority queueing models that might be used in performance models of various computer architectures, and asymptotic coupling of processors. These applications are thoroughly analysed using the tools developed in the first half of the book.
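To give a flavour of the kind of result the theory delivers (a standard textbook example, not taken from the book itself): for the stationary M/M/1 queue with utilisation \(\rho = \lambda/\mu < 1\), the overflow probability decays exponentially,
\[
P(Q \ge n) \;=\; \rho^{\,n} \;=\; e^{-n \log(1/\rho)},
\]
so the decay rate is \(\log(1/\rho)\); the large-deviations machinery in the first half of the book computes such rate functions for far more general jump Markov processes.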
This book introduces the FPGA technology used in the laboratory sessions and provides a step-by-step guide to the design and simulation of digital circuits. It utilizes the VHDL language, one of the most common languages used to describe the design of digital systems. The Quartus II, Xilinx ISE 14.7 and ModelSim software are used to process the VHDL code and run simulations, and the Altera and Xilinx FPGA platforms are then employed to implement the simulated digital designs. The book is composed of four parts. The first part has two chapters and covers various aspects: FPGA architectures, the ASIC-versus-FPGA comparison, the FPGA design flow, and the basic VHDL concepts needed to describe the design of digital systems. The second part includes three chapters that deal with the design of digital circuits such as combinational logic circuits, sequential logic circuits and finite state machines. The third part is reserved for laboratory projects carried out on the FPGA platform; it is a largely hands-on lab class for designing digital circuits and implementing those designs on the Altera FPGA platform. Finally, the fourth part is devoted to recent applications carried out on FPGAs, in particular advanced techniques in renewable energy systems. The book is primarily intended for students, scholars, and industrial practitioners interested in the design of modern digital systems.
Structured Computer Organization, specifically written for undergraduate students, is a best-selling guide that provides an accessible introduction to computer hardware and architecture. This text will also serve as a useful resource for all computer professionals and engineers who need an overview or introduction to computer architecture. This book takes a modern structured, layered approach to understanding computer systems. It's highly accessible - and it's been thoroughly updated to reflect today's most critical new technologies and the latest developments in computer organization and architecture. Tanenbaum's renowned writing style and painstaking research make this one of the most accessible and accurate books available, maintaining the author's popular method of presenting a computer as a series of layers, each one built upon the ones below it, and understandable as a separate entity.
Provides a truly accessible introduction and a fully integrated approach to fuzzy systems and neural networks—the definitive text for students and practicing engineers Researchers are already applying neural networks and fuzzy systems in series, from the use of fuzzy inputs and outputs for neural networks to the employment of individual neural networks to quantify the shape of a fuzzy membership function. But the integration of these two fields into a "neurofuzzy" technology holds even greater potential benefits in reducing computing time and optimizing results. Fuzzy and Neural Approaches in Engineering presents a detailed examination of the fundamentals of fuzzy systems and neural networks and then joins them synergistically—combining the feature extraction and modeling capabilities of the neural network with the representation capabilities of fuzzy systems. Exploring the value of relating genetic algorithms and expert systems to fuzzy and neural technologies, this forward-thinking text highlights an entire range of dynamic possibilities within soft computing. With examples specifically designed to illuminate key concepts and overcome the obstacles of notation and overly mathematical presentations often encountered in other sources, plus tables, figures, and an up-to-date bibliography, this unique work is both an important reference and a practical guide to neural networks and fuzzy systems.
This book presents as formal papers nearly all of the lectures given at the NATO advanced summer institute on Computer Architecture held at St. Raphael, France from September 12th - 24th 1976. It was not possible to include an important paper by G. Amdahl on the 470V6 System, nor papers by Mde. A. Recoque on distributed processing, Messrs. A. Maison and G. Debruyne on LSI technology, and K. Bowden. Computer architecture is a very diverse and expanding subject, consequently it was decided to limit the scope of the School to five main subject areas. These were: specific computer architectures, language orientated machines, associative processing, computer networks and specification and design methods. In addition an overall emphasis was placed on distributed and parallel processing and the need for an integrated hardware-software approach to design. Though some introductory material is included, this book is primarily intended for workers in the field of computer science and engineering who wish to update themselves on current topics in computer architecture. The main work of the School is well reflected in the collected papers, but it is impossible to convey the benefits obtained from the discussion groups and the continuous dialogue that was maintained throughout the School. The Editors would like to acknowledge with thanks the support of the NATO Scientific Affairs Division, who financed the School, and the European Research Office of the U.S. Army and the National Science Foundation for providing travel grants.
In distributed, open systems like cyberspace, where the behavior of autonomous agents is uncertain and can affect other agents' welfare, trust management is used to allow agents to determine what to expect about the behavior of other agents. The role of trust management is to maximize trust between the parties and thereby provide a basis for cooperation to develop. Bringing together expertise from technology-oriented sciences, law, philosophy, and social sciences, Managing Trust in Cyberspace addresses fundamental issues underpinning computational trust models and covers trust management processes for dynamic open systems and applications in a tutorial style that aids in understanding. Topics include trust in autonomic and self-organized networks, cloud computing, embedded computing, multi-agent systems, digital rights management, security and quality issues in trusting e-government service delivery, and context-aware e-commerce applications. The book also presents a walk-through of online identity management and examines using trust and argumentation in recommender systems. It concludes with a comprehensive survey of anti-forensics for network security and a review of password security and protection. Researchers and practitioners in fields such as distributed computing, Internet technologies, networked systems, information systems, human computer interaction, human behavior modeling, and intelligent informatics especially benefit from a discussion of future trust management research directions including pervasive and ubiquitous computing, wireless ad-hoc and sensor networks, cloud computing, social networks, e-services, P2P networks, near-field communications (NFC), electronic knowledge management, and nano-communication networks.
This book covers several futuristic computing technologies like quantum computing, quantum-dot cellular automata, DNA computing, and optical computing. In turn, it explains them using examples and tutorials on a CAD tool that can help beginners get a head start in QCA layout design. It discusses research on the design of circuits in quantum-dot cellular automata (QCA) with the objectives of obtaining low-complexity, robust designs for various arithmetic operations. The book also investigates the systematic reduction of majority logic in the realization of multi-bit adders, dividers, ALUs, and memory.
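For readers new to QCA, the central logic primitive is the three-input majority gate; a standard illustration of how adders reduce to majority logic (a textbook identity, not a design taken from this book) is
\[
M(A,B,C) = AB + BC + CA, \qquad
C_{\mathrm{out}} = M(A,B,C_{\mathrm{in}}), \qquad
S = M\bigl(\overline{C_{\mathrm{out}}},\, C_{\mathrm{in}},\, M(A,B,\overline{C_{\mathrm{in}}})\bigr),
\]
so a full adder can be realized with three majority gates and two inverters; minimizing the number of such gates is what the systematic reduction of majority logic refers to.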
While there are sporadic journal articles on socio-technical networks, there's long been a need for an integrated resource that addresses concrete socio-technical network (STN) design issues from algorithmic and engineering perspectives. Filling this need, Socio-Technical Networks: Science and Engineering Design provides a complete introduction to the fundamentals of one of the hottest research areas across the social sciences, networking, and computer science, including its definition, historical background, and models. Covering basic STN architecture from a physical/technological perspective, the book considers the system design process in a typical STN, including inputs, processes/actions, and outputs/products. It covers current applications, including transportation networks, energy systems, tele-healthcare, financial networks, and the World Wide Web. A group of STN expert contributors addresses privacy and security topics in the interdependent context of critical infrastructure, which include risk models, trust models, and privacy preserving schemes. The book:
* Covers the physical and technological designs in a typical STN
* Considers STN applications in popular fields, such as healthcare and the virtual community
* Details a method for mapping and measuring complexity, uncertainty, and interactions among STN components
The book examines the most important STN models, including graph theory, inferring agent dynamics, decision theory, and information mining. It also explains structural studies, behavioral studies, agent/actor system studies and policy studies in different STN contexts. Complete with in-depth case studies, this book supplies the practical insight needed to address contemporary STN design issues.
This book offers a comprehensive introduction to seven commonly used image understanding techniques in modern information technology. Readers at various levels can find suitable techniques to solve their practical problems and discover the latest developments in these specific domains. The techniques covered include camera models and calibration, stereo vision, generalized matching, scene analysis and semantic interpretation, multi-sensor image information fusion, content-based visual information retrieval, and understanding spatial-temporal behavior. The book moves from an overview of the essential concepts and basic principles to a detailed introduction and explanation of current methods and their practical techniques. It also discusses research trends and the latest results in conjunction with new developments in technical methods. This is an excellent read for those who do not have a background in image technology but need to use these techniques to complete specific tasks, and this essential information will also be useful for further study in the relevant fields.
For any digital TV developer or manager, the maze of standards and specifications related to MHP and OCAP is daunting: you have to patch together pieces from several standards to gather all the necessary knowledge you need to compete worldwide. The standards themselves can be confusing, and contain many inconsistencies and missing pieces. Interactive TV Standards provides a guide for actually deploying these technologies for a broadcaster or product and application developer. Understanding what the APIs do is essential for your job, but understanding how the APIs work and how they relate to each other at a deeper level helps you do it better, faster and easier. Learn how to spot when something that looks like a good solution to a problem really isn't. Understand how the many standards that make up MHP fit together, and implement them effectively and quickly. Two DVB insiders teach you which elements of the standards are needed for digital TV, highlight those elements that are not needed, and explain the special requirements that MHP places on implementations of these standards. Once you've mastered the basics, you will learn how to develop products for US, European, and Asian markets, saving time and money. By detailing how a team can develop products for both the OCAP and MHP markets, Interactive TV Standards teaches you how to leverage your experience with one of these standards into the skills and knowledge needed to work with the critical, related standards. Does the team developing a receiver have all the knowledge they need to succeed, or have they missed important information in an apparently unrelated standard? Does an application developer really know how to write a reliable piece of software that runs on any MHP or OCAP receiver? Does the broadcaster understand the business and technical issues well enough to deploy MHP successfully, or will their project fail? Increase your chances of success the first time with Interactive TV Standards.
This book explains in layman's terms how CMOS transistors work. The author explains step-by-step how CMOS transistors are built, along with an explanation of the purpose of each process step. He describes for readers the key inventions and developments in science and engineering that overcame huge obstacles, enabling engineers to shrink transistor area by over 1 million fold and build billions of transistor switches that switch over a billion times a second, all on a piece of silicon smaller than a thumbnail.
* Provides a detailed introduction to the Internet of Healthcare Things (IoHT) and its applications
* Reviews underlying sensor and hardware technologies
* Includes recent advances in the IoHT such as remote healthcare monitoring and wearable devices
* Explores applications of Data Analytics/Data Mining in IoHT, including data management and data governance
* Focusses on regulatory and compliance issues in IoHT
An introductory text to computer architecture, this comprehensive volume covers concepts from logic gates to advanced computer architecture. It comes with a full spectrum of exercises and web-downloadable support materials, including an assembler and simulator, which can be used in the context of different courses. The authors also make available a hardware description, which can be used in labs and assignments, for hands-on experimentation with an actual, simple processor. This unique compendium is a useful reference for undergraduates, graduates and professionals majoring in computer engineering, circuits and systems, software engineering, biomedical engineering and aerospace engineering.
Following in the tradition of its popular predecessor, A Practical Guide to Content Delivery Networks, Second Edition offers an accessible and organized approach to implementing networks capable of handling the increasing data requirements of today's always-on mobile society. Describing how content delivery networks (CDNs) function, it provides an understanding of Web architecture, as well as an overview of the TCP/IP protocol suite. The book reports on the development of the technologies that have evolved over the past decade as distribution mechanisms for various types of Web content. Using a structural and visual approach, it provides step-by-step guidance through the process of setting up a scalable CDN. The book:
* Supplies a clear understanding of the framework and individual layers of design, including caching and load balancing
* Describes the terminology, tactics, and potential problems when implementing a CDN
* Examines cost-effective ways to load balance web service layers
* Explains how application servers connect to databases and how systems will scale as volume increases
* Illustrates the impact of video on data storage and delivery, as well as the need for data compression
* Covers Flash and the emerging HTML5 standard for video
Highlighting the advantages and disadvantages associated with these types of networks, the book explains how to use the networks within the Internet operated by various ISPs as mechanisms for effectively delivering Web server based information. It emphasizes a best-of-breed approach to building your network to allow for an effective CDN to be built on practically any budget. To help you get started, this vendor-neutral reference explains how to code Web pages to optimize the delivery of various types of media. It also includes examples of successful approaches, from outsourcing to do-it-yourself.
• Showcases today's most influential architectural voices, who have been instrumental in shifting the direction of design in the last decade
• Includes perspectives of influential architects, practitioners and academics, as well as critics including philosophers
• Case studies and essays engage and deploy a range of topics and technologies, from speculative realism and Object Oriented Ontology to high computation, Big Data, parametricism, digital fabrication, artificial intelligence, augmented reality and virtual reality
• A rigorous account of architecture's theoretical and technological concerns over the last decade
Some of our earliest experiences of the conclusive force of an argument come from school mathematics: faced with a mathematical proof, we cannot deny the conclusion once the premises have been accepted. Behind such arguments lies a more general pattern of 'demonstrative arguments' that is studied in the science of logic. Logical reasoning is applied at all levels, from everyday life to advanced sciences, and a remarkable level of complexity is achieved in everyday logical reasoning, even if the principles behind it remain intuitive. Jan von Plato provides an accessible but rigorous introduction to an important aspect of contemporary logic: its deductive machinery. He shows that when the forms of logical reasoning are analysed, it turns out that a limited set of first principles can represent any logical argument. His book will be valuable for students of logic, mathematics and computer science.
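A small example of the deductive machinery in action (the derivation is ours, for illustration; it is not taken from the book): from the premises \(A \to B\) and \(B \to C\) one derives \(A \to C\) using only the introduction and elimination rules for implication.
\[
\begin{array}{ll}
1.\; A \to B & \text{premise}\\
2.\; B \to C & \text{premise}\\
3.\; A & \text{assumption}\\
4.\; B & \text{from 1 and 3 by } {\to}\text{-elimination}\\
5.\; C & \text{from 2 and 4 by } {\to}\text{-elimination}\\
6.\; A \to C & \text{from 3--5 by } {\to}\text{-introduction, discharging the assumption } A
\end{array}
\]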
This book analyzes the challenges in verifying Dynamically Reconfigurable Systems (DRS) with respect to both the user design and the physical implementation of such systems. The authors describe the use of a simulation-only layer to emulate the behavior of target FPGAs and to accurately model the characteristic features of reconfiguration. This simulation-only layer enables readers to maintain verification productivity by abstracting away the physical details of the FPGA fabric. Two implementations of the simulation-only layer are included: Extended ReChannel is a SystemC library that can be used to check DRS designs at a high level; ReSim is a library that supports RTL simulation of a DRS reconfiguring both its logic and its state. Through a number of case studies, the authors demonstrate how their approach integrates seamlessly with existing, mainstream DRS design flows and with well-established verification methodologies such as top-down modeling and coverage-driven verification.
The main aim of Healthcare 4.0: Health Informatics and Precision Data Management is to improve the services provided by the healthcare industry and to bring about meaningful patient outcomes by applying data, information and knowledge in the healthcare domain. Features:
* Improves the quality of a patient's health data
* Presents a wide range of opportunities and renewed possibilities for healthcare systems
* Gives a way of carefully and meticulously tracking the provenance of medical records
* Accelerates the processing of disease-oriented data and medical data arbitration
* Brings meaningful patient health outcomes
* Eradicates delayed clinical communications
* Helps researchers delve further into disease and clinical data storage
* Creates more patient-centered services
The precise focus of this handbook is on the potential applications and use of data informatics in healthcare, including clinical trials, tailored ailment data, patient and ailment record characterization and health records management.
In this book, fundamental theories and engineering designs of NOMA are organically blended, with comprehensive performance evaluations from both link-level and system-level simulations.
Features:
* Teaches software design by showing programmers how to build the tools they use every day.
* Each chapter includes exercises to help readers check and deepen their understanding.
* All the example code can be downloaded, re-used, and modified under an open license.
Parallelism is the key to achieving high performance in computing. However, writing efficient and scalable parallel programs is notoriously difficult, and often requires significant expertise. To address this challenge, it is crucial to provide programmers with high-level tools to enable them to develop solutions easily, and at the same time emphasize the theoretical and practical aspects of algorithm design to allow the solutions developed to run efficiently under many different settings. This thesis addresses this challenge using a three-pronged approach consisting of the design of shared-memory programming techniques, frameworks, and algorithms for important problems in computing. The thesis provides evidence that with appropriate programming techniques, frameworks, and algorithms, shared-memory programs can be simple, fast, and scalable, both in theory and in practice. The results developed in this thesis serve to ease the transition into the multicore era. The first part of this thesis introduces tools and techniques for deterministic parallel programming, including means for encapsulating nondeterminism via powerful commutative building blocks, as well as a novel framework for executing sequential iterative loops in parallel, which lead to deterministic parallel algorithms that are efficient both in theory and in practice. The second part of this thesis introduces Ligra, the first high-level shared-memory framework for parallel graph traversal algorithms. The framework allows programmers to express graph traversal algorithms using very short and concise code, delivers performance competitive with that of highly-optimized code, and is up to orders of magnitude faster than existing systems designed for distributed memory. This part of the thesis also introduces Ligra+, which extends Ligra with graph compression techniques to reduce space usage and improve parallel performance at the same time, and is also the first graph processing system to support in-memory graph compression. The third and fourth parts of this thesis bridge the gap between theory and practice in parallel algorithm design by introducing the first algorithms for a variety of important problems on graphs and strings that are efficient both in theory and in practice. For example, the thesis develops the first linear-work and polylogarithmic-depth algorithms for suffix tree construction and graph connectivity that are also practical, as well as a work-efficient, polylogarithmic-depth, and cache-efficient shared-memory algorithm for triangle computations that achieves a 2-5x speedup over the best existing algorithms on 40 cores. This is a revised version of the thesis that won the 2015 ACM Doctoral Dissertation Award.
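To make the graph-traversal abstraction concrete, here is a minimal, sequential sketch of the frontier-based "edgeMap" style of programming that Ligra popularized, with breadth-first search as the client. The names (Graph, VertexSubset, edge_map, bfs) are illustrative only and are not Ligra's actual API; the real framework runs these loops in parallel, uses atomic compare-and-swap in the update function, and switches automatically between sparse and dense frontier representations.

    // A minimal, hypothetical sketch of frontier-based graph traversal in the
    // style the Ligra framework popularized. Names are illustrative, not
    // Ligra's actual API; the real framework runs these loops in parallel.
    #include <cstdint>
    #include <cstdio>
    #include <vector>

    using Vertex = std::int64_t;
    using Graph = std::vector<std::vector<Vertex>>;  // adjacency lists
    using VertexSubset = std::vector<Vertex>;        // the current frontier

    // Apply update(src, dst) to every edge leaving the frontier; vertices for
    // which update returns true form the next frontier.
    template <class Update>
    VertexSubset edge_map(const Graph& g, const VertexSubset& frontier,
                          Update update) {
      VertexSubset next;
      for (Vertex src : frontier)
        for (Vertex dst : g[src])
          if (update(src, dst)) next.push_back(dst);
      return next;
    }

    // Breadth-first search expressed with edge_map: the per-edge update simply
    // claims unvisited vertices by recording their parent.
    std::vector<Vertex> bfs(const Graph& g, Vertex root) {
      std::vector<Vertex> parent(g.size(), -1);
      parent[root] = root;
      VertexSubset frontier{root};
      while (!frontier.empty()) {
        frontier = edge_map(g, frontier, [&](Vertex src, Vertex dst) {
          if (parent[dst] != -1) return false;  // already visited
          parent[dst] = src;                    // claim dst for this level
          return true;
        });
      }
      return parent;
    }

    int main() {
      // A small example: a 4-cycle 0-1-2-3-0.
      Graph g = {{1, 3}, {0, 2}, {1, 3}, {2, 0}};
      std::vector<Vertex> parent = bfs(g, 0);
      for (Vertex v = 0; v < static_cast<Vertex>(g.size()); ++v)
        std::printf("parent[%lld] = %lld\n", (long long)v, (long long)parent[v]);
      return 0;
    }

The point of the sketch is the division of labour the blurb describes: the framework owns the frontier and the edge iteration, while the algorithm is reduced to a few lines of per-edge logic.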