- Curating Social Data
- Summarizing Social Data
- Analyzing Social Data
- Social Data Analytics Applications: Trust, Recommender Systems, Cognitive Analytics
Based on a symposium honoring the extensive work of Allen Newell -- one of the founders of artificial intelligence, cognitive science, human-computer interaction, and the systematic study of computational architectures -- this volume demonstrates how unifying themes may be found in the diversity that characterizes current research on computers and cognition.
The third edition of Digital Logic Techniques provides a clear and comprehensive treatment of the representation of data, operations on data, combinational logic design, sequential logic, computer architecture, and practical digital circuits. A wealth of exercises and worked examples in each chapter give students valuable experience in applying the concepts and techniques discussed. Beginning with an objective comparison between analogue and digital representation of data, the author presents the Boolean algebra framework for digital electronics, develops combinational logic design from first principles, and presents cellular logic as an alternative structure more relevant than canonical forms to VLSI implementation. He then addresses sequential logic design and develops a strategy for designing finite state machines, giving students a solid foundation for more advanced studies in automata theory. The second half of the book focuses on the digital system as an entity. Here the author examines the implementation of logic systems in programmable hardware, outlines the specification of a system, explores arithmetic processors, and elucidates fault diagnosis. The final chapter examines the electrical properties of logic components, compares the different logic families, and highlights the problems that can arise in constructing practical hardware systems.
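As a small illustration of the finite-state-machine design strategy the book builds toward, here is a minimal sketch in Python; the "101" sequence-detector task and the state encoding are hypothetical examples, not drawn from the book.

```python
# Minimal sketch of a finite state machine (FSM), the kind of sequential
# logic design the book covers. The example task -- detecting the bit
# pattern "101" in a serial input -- is a hypothetical illustration.

# Moore machine: the output depends only on the current state.
# States: S0 = nothing matched, S1 = saw '1', S2 = saw '10', S3 = saw '101'.
TRANSITIONS = {
    ("S0", "0"): "S0", ("S0", "1"): "S1",
    ("S1", "0"): "S2", ("S1", "1"): "S1",
    ("S2", "0"): "S0", ("S2", "1"): "S3",
    ("S3", "0"): "S2", ("S3", "1"): "S1",
}
OUTPUT = {"S0": 0, "S1": 0, "S2": 0, "S3": 1}  # 1 when "101" just seen

def run_fsm(bits: str) -> list[int]:
    """Feed a bit string through the FSM and collect the output per step."""
    state, outputs = "S0", []
    for b in bits:
        state = TRANSITIONS[(state, b)]
        outputs.append(OUTPUT[state])
    return outputs

if __name__ == "__main__":
    print(run_fsm("1101011"))  # -> [0, 0, 0, 1, 0, 1, 0]
```

In hardware, the same design would be realized by encoding the states in flip-flops and deriving the next-state logic from a state table like the dictionary above.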
In the early days of computing, technicians in white coats controlled refrigerator-sized computers housed in sealed rooms, far from ordinary users. Today, computers are inexpensive commodities, like television sets. Developing User Interfaces is targeted at the programmer who will actually implement, rather than design, the user interface. Most user interface books focus on psychology and usability, not programming techniques. This book recognizes the need for programmers to collaborate with usability experts and psychologists, so topics such as the principles of visualization, human perception, and usability evaluation are touched upon. Yet the primary focus remains on those tools and techniques required for programming the complex user interface.
Memory Architecture Exploration for Programmable Embedded Systems addresses efficient exploration of alternative memory architectures, assisted by a "compiler-in-the-loop" that allows effective matching of the target application to the processor-memory architecture. This new approach for memory architecture exploration replaces the traditional black-box view of the memory system and allows for aggressive co-optimization of the programmable processor together with a customized memory system.
This book offers:
- Expert advice from several industry professionals who have worked for some of the world's biggest tech and interactive companies.
- Best practices that not only prepare writers to apply their craft to new fields, but also prepare them for the ambiguity they will commonly find in corporate and start-up environments.
- A breakdown of platforms that shows how tech capabilities can fulfill content expectations and how content can fulfill tech expectations.
- Basic storytelling mechanics customized to today's popular technologies and traditional gaming platforms.
Concurrent data structures simplify the development of concurrent programs by encapsulating commonly used mechanisms for synchronization and communication into data structures. This thesis develops a notation for describing concurrent data structures, presents examples of concurrent data structures, and describes an architecture to support concurrent data structures. Concurrent Smalltalk (CST), a derivative of Smalltalk-80 with extensions for concurrency, is developed to describe concurrent data structures. CST allows the programmer to specify objects that are distributed over the nodes of a concurrent computer. These distributed objects have many constituent objects and thus can process many messages simultaneously. They are the foundation upon which concurrent data structures are built. The balanced cube is a concurrent data structure for ordered sets. The set is distributed by a balanced recursive partition that maps to the subcubes of a binary n-cube using a Gray code. A search algorithm, VW search, based on the distance properties of the Gray code, searches a balanced cube in O(log N) time. Because it does not have the root bottleneck that limits all tree-based data structures to O(1) concurrency, the balanced cube achieves O(N/log N) concurrency. Considering graphs as concurrent data structures, graph algorithms are presented for the shortest path problem, the max-flow problem, and graph partitioning. These algorithms introduce new synchronization techniques to achieve better performance than existing algorithms.
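The balanced cube's key mapping tool, the binary-reflected Gray code, is standard; below is a minimal Python sketch of the code and of the one-bit-distance property that VW search exploits. The functions are illustrative, not the thesis's implementation.

```python
# Sketch of the binary-reflected Gray code that underlies the balanced
# cube's mapping of an ordered set onto the subcubes of a binary n-cube.
# This illustrates only the code itself, not the VW search algorithm.

def gray(i: int) -> int:
    """Integer -> binary-reflected Gray code."""
    return i ^ (i >> 1)

def gray_inverse(g: int) -> int:
    """Gray code -> integer (undo the XOR prefix)."""
    i = g
    while g := g >> 1:
        i ^= g
    return i

if __name__ == "__main__":
    for i in range(8):
        print(i, format(gray(i), "03b"))
    # Round trip is exact, and consecutive keys map to node addresses
    # that differ in exactly one bit, i.e. to neighboring hypercube nodes.
    assert all(gray_inverse(gray(i)) == i for i in range(1024))
    assert all(bin(gray(i) ^ gray(i + 1)).count("1") == 1 for i in range(1023))
```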
This book comprehensively covers the state-of-the-art security applications of machine learning techniques. The first part explains the emerging solutions for anti-tamper design, IC counterfeit detection and hardware Trojan identification. It also explains the latest development of deep-learning-based modeling attacks on physically unclonable functions (PUFs) and outlines the design principles of more resilient PUF architectures. The second part discusses the use of machine learning to mitigate the risks of security attacks on cyber-physical systems, with a particular focus on power plants. The third part provides an in-depth insight into the principles of malware analysis in embedded systems and describes how the use of supervised learning techniques provides an effective approach to tackling software vulnerabilities.
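As a rough, hedged illustration of the modeling attacks mentioned above, the sketch below attacks an idealized arbiter PUF with logistic regression. The additive linear-delay model and the parity-feature transform are the standard textbook abstraction, not the book's own material; parameter choices are invented, and attacks on hardened PUFs are considerably more involved.

```python
# Hedged sketch of a machine-learning modeling attack on an *idealized*
# arbiter PUF, using the standard additive linear-delay abstraction.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_stages, n_crps = 64, 20_000

def parity_features(challenges: np.ndarray) -> np.ndarray:
    # phi_i = prod_{j >= i} (1 - 2 c_j); the appended 1 handles the bias.
    signs = 1 - 2 * challenges                       # {0,1} -> {+1,-1}
    phi = np.cumprod(signs[:, ::-1], axis=1)[:, ::-1]
    return np.hstack([phi, np.ones((len(challenges), 1))])

w = rng.normal(size=n_stages + 1)                    # secret delay vector
C = rng.integers(0, 2, size=(n_crps, n_stages))     # random challenges
X = parity_features(C)
y = (X @ w > 0).astype(int)                          # simulated responses

# Train on 15k challenge-response pairs, test on the remaining 5k.
clf = LogisticRegression(max_iter=1000).fit(X[:15_000], y[:15_000])
print("prediction accuracy:", clf.score(X[15_000:], y[15_000:]))
```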
Coding Approaches to Fault Tolerance in Combinational and Dynamic Systems describes coding approaches for designing fault-tolerant systems, i.e., systems that exhibit structured redundancy enabling them to distinguish between correct and incorrect results or between valid and invalid states. Since redundancy is expensive and counter-intuitive to the traditional notion of system design, the book focuses on resource-efficient methodologies that avoid excessive use of redundancy by exploiting the algorithmic/dynamic structure of a particular combinational or dynamic system. The first part of the book focuses on fault-tolerant combinational systems, providing a review of von Neumann's classical work on Probabilistic Logics (including some more recent work on noisy gates) and describing the use of arithmetic coding and algorithm-based fault-tolerance schemes in algebraic settings. The second part focuses on fault tolerance in dynamic systems. It also discusses how, in a dynamic system setting, one can relax the traditional assumption that the error-correcting mechanism is fault-free by using distributed error-correcting mechanisms. The final chapter presents a methodology for fault diagnosis in discrete event systems that are described by Petri net models; coding techniques are used to quickly detect and identify failures. From the Foreword: "Hadjicostis has significantly expanded the setting to processes occurring in more general algebraic and dynamic systems... The book responds to the growing need to handle faults in complex digital chips and complex networked systems, and to consider the effects of faults at the design stage rather than afterwards." (George Verghese, Massachusetts Institute of Technology) The book will be of interest to both researchers and practitioners in the area of fault tolerance, systems design and control.
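One classic instance of the algorithm-based fault-tolerance ideas the book surveys is checksum-protected matrix multiplication, in the style of Huang and Abraham. The sketch below is a minimal illustration of the idea, not the book's own construction.

```python
# Sketch of algorithm-based fault tolerance (ABFT) for matrix
# multiplication: row/column checksums let the result be verified (and a
# single corrupted entry located) with little extra arithmetic.
import numpy as np

def abft_matmul(A: np.ndarray, B: np.ndarray) -> np.ndarray:
    Ac = np.vstack([A, A.sum(axis=0)])                  # column-checksum row
    Br = np.hstack([B, B.sum(axis=1, keepdims=True)])   # row-checksum column
    return Ac @ Br                                      # full checksum product

def check(C: np.ndarray, tol: float = 1e-9) -> bool:
    # The last row/column must equal the sums of the data rows/columns.
    ok_rows = np.allclose(C[:-1, :].sum(axis=0), C[-1, :], atol=tol)
    ok_cols = np.allclose(C[:, :-1].sum(axis=1), C[:, -1], atol=tol)
    return ok_rows and ok_cols

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    A, B = rng.normal(size=(4, 4)), rng.normal(size=(4, 4))
    C = abft_matmul(A, B)
    print("clean result passes:", check(C))    # True
    C[2, 1] += 1.0                              # inject a fault
    print("faulty result passes:", check(C))   # False
```

Because the faulty entry violates exactly one row checksum and one column checksum, their intersection also locates the error, which is what makes the scheme correcting rather than merely detecting.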
This book is written as an introduction to annotated logics. It provides logical foundations for annotated logics, discusses some interesting applications of these logics and also includes the authors' contributions to annotated logics. The central idea of the book is to show how annotated logic can be applied as a tool to solve problems of technology and of applied science. The book will be of interest to pure and applied logicians, philosophers and computer scientists as a monograph on a kind of paraconsistent logic, but the layman will also profit from reading it.
This book presents several significant advances in algorithms designed to solve the Do-All problem in distributed message-passing settings under various models of adversity, including processor crashes, asynchrony, message delays, network partitions, and malicious processor behaviors. Upper and lower bounds are presented, demonstrating the extent to which efficiency can be combined with fault-tolerance. This book contains the recent advances in the principles of efficient and fault-tolerant cooperative computing, narrowing the gap between abstract models of dependable network computing and realistic distributed systems.
Parallel and distributed computation has been gaining a great deal of attention in the last decades. During this period, the advances attained in computing and communication technologies, and the reduction in the costs of those technologies, played a central role in the rapid growth of interest in the use of parallel and distributed computation in a number of areas of engineering and the sciences. Many actual applications have been successfully implemented on various platforms, varying from pure shared-memory to totally distributed models, passing through hybrid approaches such as distributed-shared-memory architectures. Parallel and distributed computation differs from classical sequential computation in some of the following major aspects: the number of processing units, an independent local clock for each unit, the number of memory units, and the programming model. To represent this diversity, and depending on the level at which we look at the problem, researchers have proposed models to abstract the main characteristics or parameters (physical components or logical mechanisms) of parallel computers. The problem of establishing a suitable model is to find a reasonable trade-off among simplicity, power of expression and universality, and then to be able to study and analyze more precisely the behavior of parallel applications.
This book discusses the implementation of digital circuits using MCML gates. Although digital circuit implementation is possible with other elements, such as CMOS gates, MCML implementations can provide superior performance in certain applications. This book provides a complete automation methodology for the implementation of digital circuits in MCML and an extensive explanation of the technical details of MCML design. A systematic methodology is presented for building efficient MCML standard-cell libraries, and a complete top-down design flow is shown to implement complex systems using such building blocks.
With the end of Dennard scaling and Moore's law, IC chips, especially large-scale ones, now face more reliability challenges, and reliability has become one of the principal merits of VLSI designs. In this context, this book presents a built-in on-chip fault-tolerant computing paradigm that seeks to combine fault detection, fault diagnosis, and error recovery in large-scale VLSI design in a unified manner so as to minimize resource overhead and performance penalties. Following this computing paradigm, we propose a holistic solution based on three key components: self-test, self-diagnosis and self-repair, or "3S" for short. We then explore the use of 3S for general IC designs, general-purpose processors, network-on-chip (NoC) and deep learning accelerators, and present prototypes to demonstrate how 3S responds to in-field silicon degradation and recovery under various runtime faults caused by aging, process variations, or radiation particles. Moreover, we demonstrate that 3S not only offers a powerful backbone for various on-chip fault-tolerant designs and implementations, but also has farther-reaching implications such as maintaining graceful performance degradation, mitigating the impact of verification blind spots, and improving chip yield. This book is the outcome of extensive fault-tolerant computing research pursued at the State Key Lab of Processors, Institute of Computing Technology, Chinese Academy of Sciences over the past decade. The proposed built-in on-chip fault-tolerant computing paradigm has been verified in a broad range of scenarios, from small processors in satellite computers to large processors in HPCs. Hopefully, it will provide an alternative yet effective solution to the growing reliability challenges of large-scale VLSI designs.
An Interdisciplinary Approach to Modern Network Security presents the latest methodologies and trends in detecting and preventing network threats. Investigating the potential of current and emerging security technologies, this publication is an all-inclusive reference source for academicians, researchers, students, professionals, practitioners, network analysts and technology specialists interested in the simulation and application of computer network protection. It presents theoretical frameworks and the latest research findings in network security technologies, while analyzing malicious threats which can compromise network integrity. It discusses the security and optimization of computer networks for use in a variety of disciplines and fields. Touching on such matters as mobile and VPN security, IP spoofing and intrusion detection, this edited collection emboldens the efforts of researchers, academics and network administrators working in both the public and private sectors. This edited compilation includes chapters covering topics such as attacks and countermeasures, mobile wireless networking, intrusion detection systems, next-generation firewalls, web security and much more. Information and communication systems are an essential component of our society, forcing us to become dependent on these infrastructures. At the same time, these systems are undergoing a convergence and interconnection process that has its benefits, but also raises specific threats to user interests. Citizens and organizations must feel safe when using cyberspace facilities in order to benefit from its advantages. This book is interdisciplinary in the sense that it covers a wide range of topics like network security threats, attacks, tools and procedures to mitigate the effects of malware and common network attacks, network security architecture and deep learning methods of intrusion detection.
Within the Smart Grid, the combination of automation equipment, communication technology and IT is crucial. Interoperability of devices and systems can be seen as the key enabler of smart grids. Therefore, international initiatives have been started in order to identify interoperability core standards for Smart Grids. IEC 62357, the so-called Seamless Integration Architecture, is one of these very core standards, which has been identified by recent Smart Grid initiatives and roadmaps as essential for building and managing intelligent power systems. The Seamless Integration Architecture provides an overview of the interoperability and relations between further standards from IEC TC 57, like IEC 61970/61968: the Common Information Model (CIM). CIM has proven to be a mature standard for interoperability and engineering; consequently, it is a cornerstone of the IEC Smart Grid Standardization Roadmap. This book provides an overview of how CIM developed, the international projects and roadmaps in which it has already been covered, and the basic use cases for CIM. The book has been written both for power engineers trying to get to know the EMS and business IT side of the Smart Grid and for computer scientists finding out where ICT technology is applied in EMS and DMS systems. The book is divided into two parts, dealing with the theoretical foundations and a practical part describing tools and use cases for CIM.
This book describes analytical models and estimation methods to enhance performance estimation of pipelined multiprocessor systems-on-chip (MPSoCs). A framework is introduced for both design-time and run-time optimizations. For design space exploration, several algorithms are presented to minimize the area footprint of a pipelined MPSoC under a latency or a throughput constraint. A novel adaptive pipelined MPSoC architecture is described, where idle processors are transitioned into low-power states at run-time to reduce energy consumption. Multi-mode pipelined MPSoCs are introduced, where multiple pipelined MPSoCs optimized separately are merged into a single pipelined MPSoC, enabling further reduction of the area footprint by sharing the processors and communication buffers. Readers will benefit from the authors' combined use of analytical models, estimation methods and exploration algorithms and will be enabled to explore billions of design points in a few minutes.
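As a toy illustration of the design-space exploration described above, the sketch below exhaustively picks one processor option per pipeline stage to minimize area under a throughput constraint (the pipeline runs at the pace of its slowest stage). All processor options and numbers are invented; real explorations of billions of design points rely on analytical models and pruning rather than brute force.

```python
# Toy design-space exploration for a pipelined MPSoC: choose a processor
# configuration per stage, minimizing total area subject to a bound on
# the cycle count of the slowest stage. All numbers are hypothetical.
from itertools import product

# (area units, cycles per iteration) for three made-up processor types
OPTIONS = [(1.0, 12), (1.8, 8), (3.0, 5)]
N_STAGES = 4
MAX_STAGE_CYCLES = 8          # throughput constraint on the slowest stage

best = None
for choice in product(OPTIONS, repeat=N_STAGES):      # exhaustive search
    if max(cycles for _, cycles in choice) > MAX_STAGE_CYCLES:
        continue                                      # violates throughput
    area = sum(a for a, _ in choice)
    if best is None or area < best[0]:
        best = (area, choice)

print("min area:", best[0])
print("per-stage (area, cycles):", best[1])
```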
With the continual development of professional industries in today's modernized world, certain technologies have become increasingly applicable. Cyber-physical systems, specifically, are a mechanism that has seen rapid implementation across numerous fields. This is a technology that is constantly evolving, so specialists need a handbook of research that keeps pace with the advancements and methodologies of these devices. Tools and Technologies for the Development of Cyber-Physical Systems is an essential reference source that discusses recent advancements of cyber-physical systems and its application within the health, information, and computer science industries. Featuring research on topics such as autonomous agents, power supply methods, and software assessment, this book is ideally designed for data scientists, technology developers, medical practitioners, computer engineers, researchers, academicians, and students seeking coverage on the development and various applications of cyber-physical systems.
This book intends to unite studies in different fields related to the development of the relations between logic, law and legal reasoning. Combining historical and philosophical studies on legal reasoning in Civil and Common Law, and on the often neglected Arabic and Talmudic traditions of jurisprudence, this project unites these areas with recent technical developments in computer science. This combination has resulted in renewed interest in deontic logic and the logic of norms that stems from the interaction between artificial intelligence and law and their applications to these areas of logic. The book also aims to motivate and launch a more intense interaction between the historical and philosophical work of Arabic, Talmudic and European jurisprudence. The publication discusses new insights in the interaction between logic and law, and more precisely the study of different answers to the question: what role does logic play in legal reasoning? The perspectives vary from foundational studies (such as logical principles and frameworks) to applications, and include historical perspectives.
This monograph evolved from my Ph.D. dissertation completed at the Laboratory for Computer Science, MIT, during the Summer of 1986. In my dissertation I proposed a pipelined code mapping scheme for array operations on static dataflow architectures. The main addition to this work is found in Chapter 12, reflecting new research results developed during the last three years since I joined McGill University -- results based upon the principles in my dissertation. The terminology dataflow software pipelining has been used consistently since publication of our 1988 paper on the argument-fetching dataflow architecture model at McGill University [43]. In the first part of this book we describe the static dataflow graph model as an operational model for concurrent computation. We look at timing considerations for program graph execution on an ideal static dataflow computer, examine the notion of pipelining, and characterize its performance. We discuss balancing techniques used to transform certain graphs into fully pipelined dataflow graphs. In particular, we show how optimal balancing of an acyclic dataflow graph can be formulated as a linear programming problem for which an optimal solution exists. As a major result, we show that the optimal balancing problem for acyclic dataflow graphs is reducible to a class of linear programming problems, the network flow problem, for which well-known efficient algorithms exist. This result disproves the conjecture that such problems are computationally hard.
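The balancing result mentioned above can be made concrete with a toy linear program: add buffer delays to the edges of a small acyclic dataflow graph so that every path between two nodes has equal latency, minimizing total buffering. The three-node diamond graph below is a made-up example, and SciPy's general-purpose LP solver stands in for the specialized network-flow algorithms the book points to.

```python
# Toy "balancing as linear programming" example on a diamond dataflow
# graph a -> b -> d with a direct edge a -> d. Each actor takes unit
# time; b_uv is extra buffering on edge (u, v), so t_v = t_u + 1 + b_uv.
from scipy.optimize import linprog

# Variables: x = [t_a, t_b, t_d, b_ab, b_bd, b_ad]
A_eq = [
    [-1,  1,  0, -1,  0,  0],   # edge a -> b: t_b - t_a - b_ab = 1
    [ 0, -1,  1,  0, -1,  0],   # edge b -> d: t_d - t_b - b_bd = 1
    [-1,  0,  1,  0,  0, -1],   # edge a -> d: t_d - t_a - b_ad = 1
]
b_eq = [1, 1, 1]
c = [0, 0, 0, 1, 1, 1]          # minimize total buffer delay
bounds = [(0, 0), (0, None), (0, None)] + [(0, None)] * 3  # pin t_a = 0

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
print("buffers on (a->b, b->d, a->d):", res.x[3:].round(3))  # [0. 0. 1.]
```

The short path a -> d gets one unit of buffering so that both paths from a to d take two time steps, which is exactly the fully pipelined condition the balancing techniques aim for.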
This book presents a wide-band, technology-independent, SPICE-compatible RLC model for through-silicon vias (TSVs) in 3D integrated circuits. The model accounts for a variety of effects, including skin effect, depletion capacitance and nearby contact effects. Readers will benefit from in-depth coverage of concepts and technology such as 3D integration, macro modeling, dimensional analysis and compact modeling, as well as closed-form equations for the through-silicon via parasitics. The concepts covered are demonstrated by using TSVs in applications such as a spiral inductor, an inductive-based communication system and bandpass filtering.
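For orientation, the sketch below computes first-order, closed-form estimates of TSV resistance and oxide capacitance from textbook physics. It is a deliberately simplified stand-in for the book's wide-band RLC model, which also captures skin effect, depletion capacitance and nearby contacts; all dimensions are hypothetical.

```python
# First-order, closed-form estimates of TSV parasitics from textbook
# physics. All dimensions below are hypothetical.
import math

RHO_CU = 1.68e-8          # copper resistivity, ohm*m
EPS_OX = 3.9 * 8.854e-12  # SiO2 permittivity, F/m

def tsv_resistance(height_m: float, radius_m: float) -> float:
    """DC resistance of a cylindrical copper via: R = rho * h / (pi r^2)."""
    return RHO_CU * height_m / (math.pi * radius_m ** 2)

def tsv_oxide_capacitance(height_m: float, radius_m: float,
                          t_ox_m: float) -> float:
    """Coaxial-cylinder oxide capacitance: C = 2 pi eps h / ln(r_out / r)."""
    return 2 * math.pi * EPS_OX * height_m / math.log(
        (radius_m + t_ox_m) / radius_m)

if __name__ == "__main__":
    h, r, t_ox = 50e-6, 2.5e-6, 0.1e-6   # 50 um tall, 5 um diameter TSV
    print(f"R   = {tsv_resistance(h, r) * 1e3:.2f} mOhm")
    print(f"Cox = {tsv_oxide_capacitance(h, r, t_ox) * 1e15:.1f} fF")
```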
This book describes the benefits and drawbacks inherent in the use of virtual platforms (VPs) to perform fast and early soft error assessment of multicore systems. The authors show that VPs provide engineers with appropriate means to investigate new and more efficient fault injection and mitigation techniques. Coverage also includes the use of machine learning techniques (e.g., linear regression) to speed up the soft error evaluation process by pinpointing the parameters (e.g., architectural) with the most substantial impact on the dependability of the software stack. The book provides valuable information and insight drawn from more than 3 million individual scenarios and 2 million simulation-hours, and further explores the use of machine learning techniques to navigate large fault injection datasets.
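A hedged sketch of the regression idea mentioned above: fit a linear model to fault-injection campaign results and rank parameters by coefficient magnitude. The parameter names and the synthetic "campaign results" below are invented for illustration; a real study would regress on actual fault-injection data.

```python
# Hedged sketch: use linear regression to pinpoint which parameters most
# influence soft-error outcomes. The data here are entirely synthetic.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
params = ["cache_size", "core_count", "freq", "protection_level"]

X = rng.uniform(0, 1, size=(500, len(params)))   # normalized configurations
# Synthetic ground truth: protection level dominates the failure rate.
failure_rate = (0.05 + 0.02 * X[:, 0] - 0.30 * X[:, 3]
                + rng.normal(0, 0.01, 500))

model = LinearRegression().fit(X, failure_rate)
# Rank parameters by the magnitude of their fitted coefficients.
for name, coef in sorted(zip(params, model.coef_), key=lambda p: -abs(p[1])):
    print(f"{name:>18}: {coef:+.3f}")
```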
This book provides an introduction to digital storage for consumer electronics. It discusses the various types of digital storage, including emerging non-volatile solid-state storage technologies and their advantages and disadvantages. It discusses the best practices for selecting, integrating, and using storage devices for various applications. It explores the networking of devices into an overall organization that results in always-available home storage combined with digital storage in the cloud to create an infrastructure to support emerging consumer applications and the Internet of Things. It also looks at the role of digital storage devices in creating security and privacy in consumer products.