Concurrent data structures simplify the development of concurrent programs by encapsulating commonly used mechanisms for synchronization and communication into data structures. This thesis develops a notation for describing concurrent data structures, presents examples of concurrent data structures, and describes an architecture to support concurrent data structures. Concurrent Smalltalk (CST), a derivative of Smalltalk-80 with extensions for concurrency, is developed to describe concurrent data structures. CST allows the programmer to specify objects that are distributed over the nodes of a concurrent computer. These distributed objects have many constituent objects and thus can process many messages simultaneously. They are the foundation upon which concurrent data structures are built. The balanced cube is a concurrent data structure for ordered sets. The set is distributed by a balanced recursive partition that maps to the subcubes of a binary n-cube using a Gray code. A search algorithm, VW search, based on the distance properties of the Gray code, searches a balanced cube in O(log N) time. Because it does not have the root bottleneck that limits all tree-based data structures to O(1) concurrency, the balanced cube achieves O(N/log N) concurrency. Considering graphs as concurrent data structures, graph algorithms are presented for the shortest path problem, the max-flow problem, and graph partitioning. These algorithms introduce new synchronization techniques to achieve better performance than existing algorithms.
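As a concrete illustration of the Gray-code mapping the blurb refers to, the minimal sketch below (Python, names invented) shows the binary-reflected Gray code and its distance property: consecutive keys land on n-cube addresses that differ in exactly one bit, which is the property VW-style searches exploit.

```python
# Minimal sketch of the binary-reflected Gray code used to map an ordered
# set onto the nodes of a binary n-cube. Adjacent integers map to node
# addresses that differ in exactly one bit, i.e. neighbouring cube nodes.
def gray_encode(i: int) -> int:
    """Map integer i to its binary-reflected Gray code."""
    return i ^ (i >> 1)

def gray_decode(g: int) -> int:
    """Invert the Gray code by folding the high bits back down."""
    i = g
    while g := g >> 1:
        i ^= g
    return i

if __name__ == "__main__":
    for i in range(8):
        g = gray_encode(i)
        assert gray_decode(g) == i
        print(f"{i} -> {g:03b}")
    # Distance property: consecutive keys map to adjacent cube nodes.
    assert all(bin(gray_encode(i) ^ gray_encode(i + 1)).count("1") == 1
               for i in range(7))
```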
Coding Approaches to Fault Tolerance in Combinational and Dynamic Systems describes coding approaches for designing fault-tolerant systems, i.e., systems that exhibit structured redundancy enabling them to distinguish between correct and incorrect results, or between valid and invalid states. Since redundancy is expensive and runs counter to the traditional notion of system design, the book focuses on resource-efficient methodologies that avoid excessive redundancy by exploiting the algorithmic/dynamic structure of a particular combinational or dynamic system. The first part of the book focuses on fault-tolerant combinational systems, providing a review of von Neumann's classical work on Probabilistic Logics (including some more recent work on noisy gates) and describing the use of arithmetic coding and algorithm-based fault-tolerance schemes in algebraic settings. The second part focuses on fault tolerance in dynamic systems. It also discusses how, in a dynamic system setting, one can relax the traditional assumption that the error-correcting mechanism is fault-free by using distributed error-correcting mechanisms. The final chapter presents a methodology for fault diagnosis in discrete event systems that are described by Petri net models; coding techniques are used to quickly detect and identify failures. From the Foreword: "Hadjicostis has significantly expanded the setting to processes occurring in more general algebraic and dynamic systems... The book responds to the growing need to handle faults in complex digital chips and complex networked systems, and to consider the effects of faults at the design stage rather than afterwards." (George Verghese, Massachusetts Institute of Technology) The book will be of interest to both researchers and practitioners in the areas of fault tolerance, systems design, and control.
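To make the arithmetic-coding idea concrete, here is a minimal sketch of a classic AN code, one standard arithmetic-coding approach to fault tolerance (the check constant A and the fault model below are illustrative, not the book's exact scheme):

```python
# Illustrative AN arithmetic code: encoding a value x as A*x keeps every
# valid sum a multiple of A, so a result with a nonzero residue mod A
# reveals a computation fault.
A = 3  # check constant; catches any error that is not a multiple of A

def encode(x: int) -> int:
    return A * x

def decode(c: int) -> int:
    if c % A != 0:
        raise ValueError(f"fault detected: {c} is not a multiple of {A}")
    return c // A

if __name__ == "__main__":
    s = encode(5) + encode(7)      # addition is closed under the code
    print(decode(s))               # -> 12
    faulty = s ^ 1                 # flip one bit to model a hardware fault
    try:
        decode(faulty)
    except ValueError as e:
        print(e)
```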
- Curating Social Data
- Summarizing Social Data
- Analyzing Social Data
- Social Data Analytics Applications: Trust, Recommender Systems, Cognitive Analytics
This book presents several significant advances in algorithms for solving the Do-All problem in distributed message-passing settings under various models of adversity, including processor crashes, asynchrony, message delays, network partitions, and malicious processor behaviors. Upper and lower bounds are presented, demonstrating the extent to which efficiency can be combined with fault tolerance. Together, these results capture recent advances in the principles of efficient and fault-tolerant cooperative computing, narrowing the gap between abstract models of dependable network computing and realistic distributed systems.
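A toy simulation can make the Do-All setting concrete. The sketch below (all parameters invented) has p processors performing n idempotent tasks while an adversary crashes processors; it illustrates the problem statement and the usual work measure, not the book's efficient algorithms:

```python
# Toy round-based Do-All: p processors must jointly complete n idempotent
# tasks although some processors crash. The naive strategy here (every
# live processor performs the lowest unfinished task) only illustrates
# the setting; real algorithms balance load to bound total work.
import random

def do_all(n: int, p: int, crash_prob: float = 0.2, seed: int = 1):
    rng = random.Random(seed)
    live = set(range(p))
    done = set()
    work = 0  # total task executions, the usual efficiency measure
    while len(done) < n and live:
        for proc in list(live):
            if rng.random() < crash_prob:
                live.discard(proc)        # adversary crashes this processor
                continue
            pending = [t for t in range(n) if t not in done]
            if pending:
                done.add(pending[0])      # perform one unfinished task
                work += 1
    return len(done) == n, work

if __name__ == "__main__":
    ok, work = do_all(n=10, p=4)
    print("all tasks done:", ok, "work:", work)
```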
This book is written as an introduction to annotated logics. It provides logical foundations for annotated logics, discusses some interesting applications of these logics, and includes the authors' own contributions to the field. The central idea of the book is to show how annotated logic can be applied as a tool to solve problems of technology and of applied science. The book will be of interest to pure and applied logicians, philosophers, and computer scientists as a monograph on a kind of paraconsistent logic, but the lay reader will also profit from reading it.
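As a small taste of how an annotated, paraconsistent system keeps computing under contradictory evidence, here is a sketch over the classic four-valued Belnap lattice, used purely to illustrate the general idea rather than the authors' particular annotated logics:

```python
# Atoms carry annotations from the four-valued lattice
# {unknown, true, false, inconsistent}, encoded as
# (evidence-for, evidence-against) bits -- the classic Belnap/FDE lattice.
UNKNOWN, TRUE, FALSE, BOTH = (0, 0), (1, 0), (0, 1), (1, 1)
NAMES = {UNKNOWN: "unknown", TRUE: "true", FALSE: "false", BOTH: "inconsistent"}

def neg(v):       # negation swaps evidence for and against
    return (v[1], v[0])

def conj(v, w):   # both must support truth; either may refute it
    return (v[0] & w[0], v[1] | w[1])

def disj(v, w):
    return (v[0] | w[0], v[1] & w[1])

# Contradictory evidence about p does not poison an unrelated query about q.
p, q = BOTH, TRUE
print(NAMES[conj(p, neg(p))])   # inconsistent, yet no logical explosion
print(NAMES[disj(q, p)])        # true
```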
Parallel and distributed computation has been gaining a great deal of attention in recent decades. During this period, the advances attained in computing and communication technologies, and the reduction in the costs of those technologies, played a central role in the rapid growth of interest in the use of parallel and distributed computation in a number of areas of engineering and science. Many actual applications have been successfully implemented on various platforms, varying from pure shared-memory to totally distributed models, passing through hybrid approaches such as distributed shared-memory architectures. Parallel and distributed computation differs from classical sequential computation in some of the following major aspects: the number of processing units, an independent local clock for each unit, the number of memory units, and the programming model. To represent this diversity, and depending on the level at which we view the problem, researchers have proposed models that abstract the main characteristics or parameters (physical components or logical mechanisms) of parallel computers. The problem of establishing a suitable model is to find a reasonable trade-off among simplicity, power of expression, and universality, so as to be able to study and analyze more precisely the behavior of parallel applications.
The Internet of Things (IoT) is an interconnection of several devices, networks, technologies, and human resources to achieve a common goal. A variety of IoT-based applications are being used in different sectors and have succeeded in providing huge benefits to users. As a revolution, IoT has swept across the global landscape, with a presence in almost every sector, including smart cities, the smart grid, intelligent transportation, healthcare, and education. This technological revolution has also extended to machines, converting them into intelligent computers that can make real-time decisions and communicate with each other, forming an Internet of Systems/Machines. The use of secure lightweight protocols will help in developing environment-friendly and energy-efficient IoT systems. IoT is an emerging and active area of research, adopted for many applications, and further challenges remain to be investigated in all its aspects. This book provides information on fundamentals, architectures, communication protocols, the use of AI, existing applications, and emerging research trends in IoT. It follows a theoretical approach to describe the fundamentals for beginners, as well as a practical approach, with the implementation of case studies, for intermediate and advanced readers. The book will be beneficial for academicians, researchers, developers, and engineers who work in or are interested in fields related to IoT. It also serves as a reference for graduate and postgraduate courses in computer science, computer engineering, and information technology streams.
Now in a thoroughly revised second edition, this practical guide for practitioners provides a comprehensive overview of the SoC design process. It explains end-to-end system-on-chip (SoC) design processes and includes updated coverage of design methodology, the design environment, EDA tool flow, design decisions, choice of design intellectual property (IP) cores, sign-off procedures, and design infrastructure requirements. The second edition provides new information on SoC trends and updated design cases. Coverage also includes critical advanced guidance on the latest UPF-based low-power design flow, the challenges of deep-submicron technologies, and 3D design fundamentals, which will prepare readers for the challenges of working at the nanotechnology scale. A Practical Approach to VLSI System on Chip (SoC) Design: A Comprehensive Guide, Second Edition provides engineers who aspire to become VLSI designers with all the necessary information and details of EDA tools. It will be a valuable professional reference for those working on VLSI design and verification portfolios in complex SoC designs.
This book discusses the implementation of digital circuits using MCML gates. Although digital circuits can be implemented with other elements, such as CMOS gates, MCML implementations can provide superior performance in certain applications. This book provides a complete automation methodology for the implementation of digital circuits in MCML and gives an extensive explanation of the technical details of MCML design. A systematic methodology is presented to build efficient MCML standard-cell libraries, and a complete top-down design flow is shown to implement complex systems using such building blocks.
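A back-of-the-envelope comparison illustrates why MCML can win in certain applications: a CMOS gate's dynamic power grows with frequency, while an MCML gate burns a roughly constant bias current. The numbers below are invented, purely to show the trade-off:

```python
# CMOS dynamic power grows as C*V^2*f; MCML dissipates a constant V*I_bias.
# Above the crossover frequency the MCML gate is the cheaper one.
C_LOAD = 10e-15   # F, switched capacitance of the CMOS gate (illustrative)
VDD = 1.0         # V
I_BIAS = 20e-6    # A, MCML tail current (illustrative)

def p_cmos(f):    # dynamic power of a CMOS gate switching at frequency f
    return C_LOAD * VDD**2 * f

P_MCML = VDD * I_BIAS                 # frequency-independent static power
f_cross = P_MCML / (C_LOAD * VDD**2)  # where the two curves meet
print(f"crossover at {f_cross / 1e9:.1f} GHz")
for f in (0.5e9, 2e9, 5e9):
    print(f"{f/1e9:.1f} GHz: CMOS {p_cmos(f)*1e6:.1f} uW"
          f" vs MCML {P_MCML*1e6:.1f} uW")
```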
Within the Smart Grid, the combination of automation equipment, communication technology, and IT is crucial. Interoperability of devices and systems can be seen as the key enabler of smart grids. Therefore, international initiatives have been started to identify interoperability core standards for Smart Grids. IEC 62357, the so-called Seamless Integration Architecture, is one of these core standards, identified by recent Smart Grid initiatives and roadmaps as essential for building and managing intelligent power systems. The Seamless Integration Architecture provides an overview of the interoperability and relations between further standards from IEC TC 57, such as IEC 61970/61968: the Common Information Model (CIM). CIM has proven to be a mature standard for interoperability and engineering; consequently, it is a cornerstone of the IEC Smart Grid Standardization Roadmap. This book provides an overview of how CIM developed, shows in which international projects and roadmaps it has already been covered, and describes the basic use cases for CIM. It has been written both for Power Engineers trying to get to know the EMS and business-IT part of the Smart Grid and for Computer Scientists finding out where ICT technology is applied in EMS and DMS systems. The book is divided into two parts, dealing with the theoretical foundations and with a practical part describing tools and use cases for CIM.
- Expert advice from industry professionals who have worked for some of the world's biggest tech and interactive companies.
- Best practices that not only prepare writers to apply their craft to new fields, but also prepare them for the ambiguity they will commonly find in corporate and start-up environments.
- A breakdown of platforms that shows how tech capabilities can fulfill content expectations, and how content can fulfill tech expectations.
- Basic storytelling mechanics customized to today's popular technologies and traditional gaming platforms.
This book describes analytical models and estimation methods to enhance performance estimation of pipelined multiprocessor systems-on-chip (MPSoCs). A framework is introduced for both design-time and run-time optimizations. For design space exploration, several algorithms are presented to minimize the area footprint of a pipelined MPSoC under a latency or a throughput constraint. A novel adaptive pipelined MPSoC architecture is described, where idle processors are transitioned into low-power states at run-time to reduce energy consumption. Multi-mode pipelined MPSoCs are introduced, where multiple separately optimized pipelined MPSoCs are merged into a single pipelined MPSoC, enabling further reduction of the area footprint by sharing processors and communication buffers. Readers will benefit from the authors' combined use of analytical models, estimation methods, and exploration algorithms, enabling them to explore billions of design points in a few minutes.
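To illustrate what such design space exploration looks like in miniature, the sketch below brute-forces one processor choice per pipeline stage to minimize area under a throughput constraint (stage options and numbers are invented; the book's algorithms prune the space far more cleverly):

```python
# Hypothetical brute-force DSE for a pipelined MPSoC: choose one processor
# option per stage so total area is minimal while the slowest stage (which
# sets pipeline throughput) stays within the latency budget.
from itertools import product

# per stage: list of (name, area_mm2, stage_latency_cycles) options
STAGES = [
    [("tiny", 0.8, 900), ("mid", 1.5, 500), ("big", 3.0, 250)],
    [("tiny", 0.8, 700), ("mid", 1.5, 350)],
    [("tiny", 0.8, 400), ("big", 3.0, 120)],
]
MAX_STAGE_LATENCY = 500  # throughput constraint on the slowest stage

best = None
for combo in product(*STAGES):
    if max(lat for _, _, lat in combo) > MAX_STAGE_LATENCY:
        continue  # violates the throughput constraint
    area = sum(a for _, a, _ in combo)
    if best is None or area < best[0]:
        best = (area, [name for name, _, _ in combo])

print("minimal area:", best[0], "mm^2, mapping:", best[1])
```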
This book comprehensively covers the state-of-the-art security applications of machine learning techniques. The first part explains emerging solutions for anti-tamper design, IC counterfeit detection, and hardware Trojan identification. It also explains the latest developments in deep-learning-based modeling attacks on physically unclonable functions (PUFs) and outlines the design principles of more resilient PUF architectures. The second part discusses the use of machine learning to mitigate the risks of security attacks on cyber-physical systems, with a particular focus on power plants. The third part provides an in-depth insight into the principles of malware analysis in embedded systems and describes how supervised learning techniques provide an effective approach to tackling software vulnerabilities.
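As an illustration of the modeling attacks discussed in the first part, the sketch below simulates an arbiter PUF with the standard additive delay model and learns it with logistic regression (sizes and parameters are illustrative):

```python
# Machine-learning modelling attack on a simulated arbiter PUF: the PUF is
# modelled as sign(w . phi(challenge)) with the standard parity features,
# and logistic regression recovers an accurate model from CRPs.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_bits, n_crps = 64, 6000
w = rng.normal(size=n_bits + 1)                 # secret stage delays

def features(challenges):
    c = 1 - 2 * challenges                      # {0,1} -> {+1,-1}
    # parity transform: phi_i = product of c_j for j >= i
    phi = np.cumprod(c[:, ::-1], axis=1)[:, ::-1]
    return np.hstack([phi, np.ones((len(challenges), 1))])

challenges = rng.integers(0, 2, size=(n_crps, n_bits))
X = features(challenges)
y = (X @ w > 0).astype(int)                     # simulated PUF responses

train, test = slice(0, 5000), slice(5000, None)
model = LogisticRegression(max_iter=2000).fit(X[train], y[train])
print("prediction accuracy:", model.score(X[test], y[test]))
```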
This monograph evolved from my Ph.D. dissertation completed at the Laboratory for Computer Science, MIT, during the summer of 1986. In my dissertation I proposed a pipelined code mapping scheme for array operations on static dataflow architectures. The main addition to this work is found in Chapter 12, reflecting new research results developed during the last three years since I joined McGill University, results based upon the principles in my dissertation. The term dataflow software pipelining has been consistently used since publication of our 1988 paper on the argument-fetching dataflow architecture model at McGill University [43]. In the first part of this book we describe the static dataflow graph model as an operational model for concurrent computation. We look at timing considerations for program graph execution on an ideal static dataflow computer, examine the notion of pipelining, and characterize its performance. We discuss balancing techniques used to transform certain graphs into fully pipelined dataflow graphs. In particular, we show how optimal balancing of an acyclic dataflow graph can be formulated as a linear programming problem for which an optimal solution exists. As a major result, we show that the optimal balancing problem for acyclic dataflow graphs is reducible to a class of linear programming problems, the network flow problems, for which well-known efficient algorithms exist. This result disproves the conjecture that such problems are computationally hard.
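The balancing-as-linear-programming idea can be sketched compactly: assign each node a level and insert buffers on edges so that every path between two nodes carries equal delay, minimizing total buffers. The tiny graph below is invented for illustration:

```python
# Sketch of dataflow-graph balancing as an LP: pick a level t(v) for every
# node and insert t(v) - t(u) - 1 buffers on each edge (u, v). Minimising
# total buffers equalises path delays; the book shows this LP reduces to
# a network-flow problem with fast algorithms.
from scipy.optimize import linprog

nodes = ["a", "b", "c", "d"]
edges = [("a", "b"), ("b", "c"), ("c", "d"), ("a", "d")]
idx = {v: i for i, v in enumerate(nodes)}

# objective: minimise sum over edges of (t_v - t_u - 1); the constant -1
# per edge does not affect the minimiser, so it is dropped here
c = [0.0] * len(nodes)
for u, v in edges:
    c[idx[v]] += 1.0
    c[idx[u]] -= 1.0

# constraints: t_u - t_v <= -1 (every edge carries at least one delay unit)
A_ub, b_ub = [], []
for u, v in edges:
    row = [0.0] * len(nodes)
    row[idx[u]], row[idx[v]] = 1.0, -1.0
    A_ub.append(row)
    b_ub.append(-1.0)

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * len(nodes))
levels = dict(zip(nodes, res.x.round().astype(int)))
buffers = {(u, v): levels[v] - levels[u] - 1 for u, v in edges}
print("levels:", levels)
print("buffers per edge:", buffers)  # the short path a->d gets 2 buffers
```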
With the continual development of professional industries in today's modernized world, certain technologies have become increasingly applicable. Cyber-physical systems, in particular, have seen rapid implementation across numerous fields. This technology is constantly evolving, so specialists need a handbook of research that keeps pace with the advancements and methodologies of these devices. Tools and Technologies for the Development of Cyber-Physical Systems is an essential reference source that discusses recent advancements in cyber-physical systems and their application within the health, information, and computer science industries. Featuring research on topics such as autonomous agents, power supply methods, and software assessment, this book is ideally designed for data scientists, technology developers, medical practitioners, computer engineers, researchers, academicians, and students seeking coverage of the development and various applications of cyber-physical systems.
This book presents a wide-band, technology-independent, SPICE-compatible RLC model for through-silicon vias (TSVs) in 3D integrated circuits. The model accounts for a variety of effects, including the skin effect, depletion capacitance, and nearby contact effects. Readers will benefit from in-depth coverage of concepts and technology such as 3D integration, macromodeling, dimensional analysis and compact modeling, as well as closed-form equations for the TSV parasitics. The concepts covered are demonstrated by using TSVs in applications such as a spiral inductor, an inductive-based communication system, and bandpass filtering.
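To give a flavor of such closed-form parasitic equations, the sketch below evaluates common textbook approximations for a copper TSV's resistance, oxide capacitance, and self-inductance (dimensions are illustrative, and these are generic formulas, not the book's wide-band model):

```python
# Textbook approximations for TSV parasitics: a copper cylinder of height h
# and radius r with an oxide liner of thickness t_ox.
import math

RHO_CU = 1.68e-8          # copper resistivity, ohm*m
MU0 = 4e-7 * math.pi      # vacuum permeability, H/m
EPS_OX = 3.9 * 8.854e-12  # SiO2 permittivity, F/m

def tsv_parasitics(h=50e-6, r=2.5e-6, t_ox=0.1e-6, f=1e9):
    # DC resistance of a solid cylindrical conductor
    r_dc = RHO_CU * h / (math.pi * r**2)
    # skin depth; at high frequency current crowds into an outer annulus
    delta = math.sqrt(RHO_CU / (math.pi * f * MU0))
    area_ac = math.pi * (r**2 - max(r - delta, 0.0)**2)
    r_ac = max(r_dc, RHO_CU * h / area_ac)
    # oxide-liner capacitance, modelled as a coaxial capacitor
    c_ox = 2 * math.pi * EPS_OX * h / math.log((r + t_ox) / r)
    # partial self-inductance of a cylindrical rod (Rosa's approximation)
    l_self = MU0 * h / (2 * math.pi) * (math.log(2 * h / r) - 0.75)
    return r_dc, r_ac, c_ox, l_self

r_dc, r_ac, c_ox, l_self = tsv_parasitics()
print(f"R_dc={r_dc*1e3:.1f} mOhm  R_ac(1GHz)={r_ac*1e3:.1f} mOhm  "
      f"C_ox={c_ox*1e15:.0f} fF  L={l_self*1e12:.1f} pH")
```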
This book unites studies in different fields related to the development of the relations between logic, law, and legal reasoning. It combines historical and philosophical studies on legal reasoning in Civil and Common Law, and on the often-neglected Arabic and Talmudic traditions of jurisprudence, with recent technical developments in computer science. This combination has resulted in renewed interest in deontic logic and the logic of norms, stemming from the interaction between artificial intelligence and law and their applications to these areas of logic. The book also aims to motivate and launch a more intense interaction between the historical and philosophical work of Arabic, Talmudic, and European jurisprudence. It discusses new insights into the interaction between logic and law, and more precisely the study of different answers to the question: what role does logic play in legal reasoning? Perspectives range from foundational studies (such as logical principles and frameworks) to applications and historical perspectives.
With the fast pace of change in today's business environment, the need to transform organizations into agile enterprises that can respond quickly to change has never been greater. Methods and computer technologies are needed to enable rapid business and system change, and this practical book shows professionals how to achieve this agility. The solution lies in Enterprise Integration (both business and technology integration). For business integration, the book explains how to use enterprise architecture methods to integrate data, processes, locations, people, events and business plans throughout an organization.
This book covers several aspects of the operational amplifier and includes theoretical explanations with simplified expressions and derivations. It is designed to serve as a textbook for courses offered to undergraduate and postgraduate students enrolled in electronics and communication engineering. The topics included are the DC amplifier, AC/DC analysis of the DC amplifier, relevant derivations, a block diagram of the operational amplifier, positive and negative feedback, the amplitude modulator, current-to-voltage and voltage-to-current converters, DAC and ADC, the integrator, the differentiator, active filters, comparators, sinusoidal and non-sinusoidal waveform generators, the phase-locked loop (PLL), etc. The book contains two parts: Sections A and B. Section A includes theory, methodology, circuit design, and derivations. Section B explains the design and study of experiments for laboratory practice. Laboratory experiments enable students to perform practical activities that demonstrate applications of the operational amplifier. A simplified description of the circuits, their working principles, and a practical approach towards understanding the concepts is a unique feature of this book. Simple methods, easy derivation steps, and lucid presentation make the book accessible to readers who do not have any background in electronics. This book is student-centric towards the basics of the operational amplifier and its applications. The detailed coverage and pedagogical tools make this an ideal textbook for students and researchers enrolled in senior undergraduate and beginning postgraduate electronics and communication engineering courses.
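As a worked example of two circuits from the topic list, the ideal op-amp relations for the inverting amplifier (gain -Rf/Rin) and the inverting integrator can be checked numerically (component values are illustrative):

```python
# Ideal op-amp relations (infinite gain, virtual short) for the inverting
# amplifier and the inverting integrator.
import numpy as np

R_IN, R_F = 1e3, 10e3        # ohms: inverting amp, gain = -R_F/R_IN = -10
R_INT, C_INT = 10e3, 100e-9  # integrator time constant R*C = 1 ms

def inverting_amp(v_in):
    return -(R_F / R_IN) * v_in

def integrator(v_in, t):
    # v_out(t) = -(1/(R*C)) * integral of v_in dt for an ideal integrator
    dt = np.diff(t, prepend=t[0])
    return -np.cumsum(v_in * dt) / (R_INT * C_INT)

t = np.linspace(0, 2e-3, 2001)
square = np.where((t // 1e-3) % 2 == 0, 1.0, -1.0)  # 1 V square wave
print("amp output for 0.2 V input:", inverting_amp(0.2), "V")
print("integrator peak |output|:", abs(integrator(square, t)).max(), "V")
# Integrating +/-1 V over 1 ms with RC = 1 ms ramps the output to ~1 V,
# so the square wave comes out as a triangle wave.
```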
An Interdisciplinary Approach to Modern Network Security presents the latest methodologies and trends in detecting and preventing network threats. Investigating the potential of current and emerging security technologies, this publication is an all-inclusive reference source for academicians, researchers, students, professionals, practitioners, network analysts, and technology specialists interested in the simulation and application of computer network protection. It presents theoretical frameworks and the latest research findings in network security technologies, while analyzing malicious threats that can compromise network integrity. It discusses the security and optimization of computer networks for use in a variety of disciplines and fields. Touching on such matters as mobile and VPN security, IP spoofing, and intrusion detection, this edited collection supports the efforts of researchers, academics, and network administrators working in both the public and private sectors. It includes chapters covering topics such as attacks and countermeasures, mobile wireless networking, intrusion detection systems, next-generation firewalls, web security, and much more. Information and communication systems are an essential component of our society, forcing us to become dependent on these infrastructures. At the same time, these systems are undergoing a convergence and interconnection process that has its benefits, but also raises specific threats to user interests. Citizens and organizations must feel safe when using cyberspace facilities in order to benefit from its advantages. The book is interdisciplinary in the sense that it covers a wide range of topics, including network security threats, attacks, tools and procedures to mitigate the effects of malware and common network attacks, network security architecture, and deep learning methods for intrusion detection.
This book provides an introduction to digital storage for consumer electronics. It discusses the various types of digital storage, including emerging non-volatile solid-state storage technologies, and their advantages and disadvantages. It presents best practices for selecting, integrating, and using storage devices for various applications. It explores the networking of devices into an overall organization that results in always-available home storage, combined with digital storage in the cloud, to create an infrastructure that supports emerging consumer applications and the Internet of Things. It also looks at the role of digital storage devices in creating security and privacy in consumer products.
This book describes the benefits and drawbacks inherent in the use of virtual platforms (VPs) to perform fast and early soft error assessment of multicore systems. The authors show that VPs provide engineers with appropriate means to investigate new and more efficient fault injection and mitigation techniques. Coverage also includes the use of machine learning techniques (e.g., linear regression) to speed up the soft error evaluation process by pinpointing the parameters (e.g., architectural) with the most substantial impact on the dependability of the software stack. The book provides valuable information and insight drawn from more than 3 million individual scenarios and 2 million simulation-hours, and explores the use of machine learning techniques to navigate large fault injection datasets.
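A minimal sketch of the regression idea: fit a linear model to per-run fault injection outcomes and rank parameter sensitivities (the data below is synthetic; in practice each row would come from one VP simulation):

```python
# Linear regression over fault-injection campaign results to pinpoint
# which parameters most affect the silent-data-corruption (SDC) rate.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(42)
n_runs = 500
# hypothetical per-run parameters: cache size (KB), core count, code density
X = np.column_stack([
    rng.choice([16, 32, 64, 128], n_runs),   # cache_kb
    rng.choice([1, 2, 4], n_runs),           # cores
    rng.uniform(0.2, 0.9, n_runs),           # density
])
# synthetic ground truth: larger caches hold live state longer -> more SDCs
sdc_rate = (0.002 * X[:, 0] + 0.01 * X[:, 1] + 0.3 * X[:, 2]
            + rng.normal(0, 0.02, n_runs))

model = LinearRegression().fit(X, sdc_rate)
for name, coef in zip(["cache_kb", "cores", "density"], model.coef_):
    print(f"{name:10s} sensitivity: {coef:+.4f}")
# Ranking coefficients (ideally on standardised features) points the
# engineer to the parameters worth exploring with full fault campaigns.
```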
With the end of Dennard scaling and Moore's law, IC chips, especially large-scale ones, face growing reliability challenges, and reliability has become one of the principal figures of merit for VLSI designs. In this context, this book presents a built-in on-chip fault-tolerant computing paradigm that seeks to combine fault detection, fault diagnosis, and error recovery in large-scale VLSI design in a unified manner so as to minimize resource overhead and performance penalties. Following this computing paradigm, we propose a holistic solution based on three key components: self-test, self-diagnosis, and self-repair, or "3S" for short. We then explore the use of 3S for general IC designs, general-purpose processors, network-on-chip (NoC), and deep learning accelerators, and present prototypes to demonstrate how 3S responds to in-field silicon degradation and recovery under various runtime faults caused by aging, process variations, or radiation particles. Moreover, we demonstrate that 3S not only offers a powerful backbone for various on-chip fault-tolerant designs and implementations, but also has farther-reaching implications, such as maintaining graceful performance degradation, mitigating the impact of verification blind spots, and improving chip yield. This book is the outcome of extensive fault-tolerant computing research pursued at the State Key Lab of Processors, Institute of Computing Technology, Chinese Academy of Sciences over the past decade. The proposed built-in on-chip fault-tolerant computing paradigm has been verified in a broad range of scenarios, from small processors in satellite computers to large processors in HPCs. Hopefully, it will provide an alternative yet effective solution to the growing reliability challenges of large-scale VLSI designs.
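A toy model conveys the 3S flow at a high level: cores periodically self-test, a failing core is diagnosed, and a spare is mapped in. Everything here (fault model, test, spare pool) is invented for illustration; real designs use structural BIST and architectural reconfiguration:

```python
# Toy 3S (self-test / self-diagnosis / self-repair) loop over a core array
# with a small spare pool, illustrating graceful degradation once the
# spares run out.
import random

class Core:
    def __init__(self, cid):
        self.cid, self.faulty = cid, False
    def self_test(self) -> bool:
        # a real core would run BIST patterns; this toy test just reports
        # the injected fault state
        return not self.faulty

def run_3s(active, spares, cycles=5, fault_prob=0.3, seed=7):
    rng = random.Random(seed)
    for cycle in range(cycles):
        for core in active:
            if not core.faulty and rng.random() < fault_prob:
                core.faulty = True            # in-field degradation event
        for i, core in enumerate(active):     # self-test + self-diagnosis
            if not core.self_test():
                if spares:
                    active[i] = spares.pop()  # self-repair: map in a spare
                    print(f"cycle {cycle}: core {core.cid}"
                          f" replaced by spare {active[i].cid}")
                else:
                    print(f"cycle {cycle}: core {core.cid} failed,"
                          " running in degraded mode")

run_3s([Core(i) for i in range(4)], [Core(f"s{i}") for i in range(2)])
```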
High Performance Computational Methods for Biological Sequence Analysis presents biological sequence analysis using an interdisciplinary approach that integrates biological, mathematical and computational concepts. These concepts are presented so that computer scientists and biomedical scientists can obtain the necessary background for developing better algorithms and applying parallel computational methods. This book will enable both groups to develop the depth of knowledge needed to work in this interdisciplinary field. This work focuses on high performance computational approaches that are used to perform computationally intensive biological sequence analysis tasks: pairwise sequence comparison, multiple sequence alignment, and sequence similarity searching in large databases. These computational methods are becoming increasingly important to the molecular biology community, allowing researchers to explore the increasingly large amounts of sequence data generated by the Human Genome Project and other related biological projects. The approaches presented by the authors are state-of-the-art and show how to reduce analysis times significantly, sometimes from days to minutes. High Performance Computational Methods for Biological Sequence Analysis is tremendously important to biomedical science students and researchers who are interested in applying sequence analyses to their studies, and to computational science students and researchers who are interested in applying new computational approaches to biological sequence analyses.
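As a pointer to what the pairwise-comparison kernel involves, here is a minimal Smith-Waterman local alignment in Python (scoring parameters are illustrative; high-performance versions vectorize or distribute the anti-diagonal computation):

```python
# Minimal Smith-Waterman local alignment score: the dynamic-programming
# kernel behind computationally intensive pairwise sequence comparison.
def smith_waterman(a: str, b: str, match=2, mismatch=-1, gap=-2) -> int:
    rows, cols = len(a) + 1, len(b) + 1
    h = [[0] * cols for _ in range(rows)]
    best = 0
    for i in range(1, rows):
        for j in range(1, cols):
            s = match if a[i - 1] == b[j - 1] else mismatch
            h[i][j] = max(0,                    # local alignment can restart
                          h[i - 1][j - 1] + s,  # align a[i-1] with b[j-1]
                          h[i - 1][j] + gap,    # gap in b
                          h[i][j - 1] + gap)    # gap in a
            best = max(best, h[i][j])
    return best

print(smith_waterman("GGTTGACTA", "TGTTACGG"))  # best local-alignment score
```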