Within the Smart Grid, the combination of automation equipment, communication technology and IT is crucial. Interoperability of devices and systems can be seen as the key enabler of smart grids. Therefore, international initiatives have been started in order to identify interoperability core standards for Smart Grids. IEC 62357, the so-called Seamless Integration Architecture, is one of these core standards, which has been identified by recent Smart Grid initiatives and roadmaps as essential for building and managing intelligent power systems. The Seamless Integration Architecture provides an overview of the interoperability and relations between further standards from IEC TC 57, like IEC 61970/61968: the Common Information Model (CIM). CIM has proven to be a mature standard for interoperability and engineering; consequently, it is a cornerstone of the IEC Smart Grid Standardization Roadmap. This book provides an overview of how CIM developed, shows in which international projects and roadmaps it has already been covered, and describes the basic use cases for CIM. It has been written both for power engineers trying to get to know the EMS and business IT side of the Smart Grid and for computer scientists finding out where ICT technology is applied in EMS and DMS systems. The book is divided into two parts, dealing with the theoretical foundations and a practical part describing tools and use cases for CIM.
This book discusses the implementation of digital circuits using MCML gates. Although digital circuit implementation is possible with other elements, such as CMOS gates, MCML implementations can provide superior performance in certain applications. This book provides a complete automation methodology for the implementation of digital circuits in MCML and an extensive explanation of the technical details of MCML design. A systematic methodology is presented to build efficient MCML standard-cell libraries, and a complete top-down design flow is shown to implement complex systems using such building blocks.
This book describes analytical models and estimation methods to enhance performance estimation of pipelined multiprocessor systems-on-chip (MPSoCs). A framework is introduced for both design-time and run-time optimizations. For design space exploration, several algorithms are presented to minimize the area footprint of a pipelined MPSoC under a latency or a throughput constraint. A novel adaptive pipelined MPSoC architecture is described, where idle processors are transitioned into low-power states at run-time to reduce energy consumption. Multi-mode pipelined MPSoCs are introduced, where multiple pipelined MPSoCs optimized separately are merged into a single pipelined MPSoC, enabling further reduction of the area footprint by sharing the processors and communication buffers. Readers will benefit from the authors' combined use of analytical models, estimation methods and exploration algorithms and will be enabled to explore billions of design points in a few minutes.
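To make the design-space-exploration idea concrete, here is a minimal C sketch of the kind of search such algorithms perform: enumerate a processor choice per pipeline stage and keep the smallest-area pipeline whose slowest stage still meets a throughput floor. All numbers (areas, latencies, the constraint) are invented for illustration; the book's algorithms prune this space far more cleverly than brute force.

```c
/* Brute-force pipelined-MPSoC design-space exploration (illustrative).
 * Each of 3 stages picks one of 3 processor configurations; pipeline
 * throughput is limited by the slowest stage.  Minimize total area
 * subject to a throughput floor.  All numbers are invented. */
#include <stdio.h>

#define STAGES 3
#define CONFIGS 3

static const double area_mm2[CONFIGS]   = { 0.8, 1.5, 2.6 };
static const double latency_us[CONFIGS] = { 12.0, 7.0, 4.0 };

int main(void) {
    const double min_throughput = 0.2; /* items per microsecond (assumed) */
    double best_area = -1.0;
    int best[STAGES] = { 0 };

    for (int c0 = 0; c0 < CONFIGS; c0++)
    for (int c1 = 0; c1 < CONFIGS; c1++)
    for (int c2 = 0; c2 < CONFIGS; c2++) {
        const int pick[STAGES] = { c0, c1, c2 };
        double slowest = 0.0, total = 0.0;
        for (int s = 0; s < STAGES; s++) {
            if (latency_us[pick[s]] > slowest) slowest = latency_us[pick[s]];
            total += area_mm2[pick[s]];
        }
        /* steady-state pipeline rate is set by the slowest stage */
        if (1.0 / slowest >= min_throughput &&
            (best_area < 0.0 || total < best_area)) {
            best_area = total;
            for (int s = 0; s < STAGES; s++) best[s] = pick[s];
        }
    }

    if (best_area < 0.0)
        printf("no feasible design\n");
    else
        printf("min area %.1f mm^2 with configs %d %d %d\n",
               best_area, best[0], best[1], best[2]);
    return 0;
}
```

Replacing the throughput test with an end-to-end latency bound gives the dual formulation the blurb mentions.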
This second edition focuses on the thought process of digital design and implementation in the context of VLSI and system design. It covers the Verilog 2001 and Verilog 2005 RTL design styles and constructs, and optimization at the RTL and synthesis levels. The book also covers logic synthesis, low-power and multiple-clock-domain design concepts, and design performance improvement techniques. It includes 250 design examples/illustrations and 100 exercise questions. This volume can be used as a core or supplementary text in undergraduate courses on logic design and as a text for professional and vocational coursework. In addition, it will be a hands-on professional reference and a self-study aid for hobbyists.
Now in a thoroughly revised second edition, this practical guide for practitioners provides a comprehensive overview of the SoC design process. It explains end-to-end system on chip (SoC) design processes and includes updated coverage of design methodology, the design environment, EDA tool flow, design decisions, choice of design intellectual property (IP) cores, sign-off procedures, and design infrastructure requirements. The second edition provides new information on SoC trends and updated design cases. Coverage also includes critical advanced guidance on the latest UPF-based low power design flow, challenges of deep submicron technologies, and 3D design fundamentals, which will prepare readers for the challenges of working at the nanotechnology scale. A Practical Approach to VLSI System on Chip (SoC) Design: A Comprehensive Guide, Second Edition provides engineers who aspire to become VLSI designers with all the necessary information and details of EDA tools. It will be a valuable professional reference for those working on VLSI design and verification portfolios in complex SoC designs.
This book presents a wide-band and technology-independent, SPICE-compatible RLC model for through-silicon vias (TSVs) in 3D integrated circuits. This model accounts for a variety of effects, including the skin effect, depletion capacitance and nearby contact effects. Readers will benefit from in-depth coverage of concepts and technology such as 3D integration, macro modeling, dimensional analysis and compact modeling, as well as closed-form equations for the through-silicon via parasitics. Concepts covered are demonstrated using TSVs in applications such as a spiral inductor, an inductive-based communication system, and bandpass filtering.
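The book's wide-band model itself is not reproduced here, but the flavor of closed-form TSV parasitics can be conveyed with standard first-order textbook approximations: the DC resistance of a copper cylinder and the coaxial capacitance of the oxide liner. All geometry and material values below are assumptions for illustration.

```c
/* First-order TSV parasitic estimates -- generic textbook formulas,
 * not the wide-band RLC model developed in the book.  Geometry and
 * material values are illustrative assumptions. */
#include <stdio.h>
#include <math.h>

int main(void) {
    const double PI     = 3.141592653589793;
    const double rho_cu = 1.68e-8;          /* copper resistivity, ohm*m */
    const double eps_ox = 3.9 * 8.854e-12;  /* SiO2 permittivity, F/m    */
    const double h      = 50e-6;            /* TSV height, m (assumed)   */
    const double r      = 2.5e-6;           /* TSV radius, m (assumed)   */
    const double t_ox   = 0.2e-6;           /* oxide liner thickness, m  */

    /* DC resistance of a solid cylinder: R = rho * h / (pi * r^2) */
    const double R = rho_cu * h / (PI * r * r);

    /* oxide liner as a coaxial capacitor:
       C = 2 * pi * eps * h / ln((r + t_ox) / r) */
    const double C = 2.0 * PI * eps_ox * h / log((r + t_ox) / r);

    printf("R ~= %.1f mOhm, C_ox ~= %.0f fF\n", R * 1e3, C * 1e15);
    return 0;
}
```

At gigahertz frequencies the skin effect raises the resistance and the depletion region lowers the effective capacitance, which is precisely why a frequency-dependent model such as the book's is needed.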
With the fast pace of change in today's business environment, the need to transform organizations into agile enterprises that can respond quickly to change has never been greater. Methods and computer technologies are needed to enable rapid business and system change, and this practical book shows professionals how to achieve this agility. The solution lies in Enterprise Integration (both business and technology integration). For business integration, the book explains how to use enterprise architecture methods to integrate data, processes, locations, people, events and business plans throughout an organization.
This book comprehensively covers the state-of-the-art security applications of machine learning techniques. The first part explains the emerging solutions for anti-tamper design, IC counterfeit detection and hardware Trojan identification. It also explains the latest developments in deep-learning-based modeling attacks on physically unclonable functions (PUFs) and outlines the design principles of more resilient PUF architectures. The second part discusses the use of machine learning to mitigate the risks of security attacks on cyber-physical systems, with a particular focus on power plants. The third part provides an in-depth insight into the principles of malware analysis in embedded systems and describes how the use of supervised learning techniques provides an effective approach to tackling software vulnerabilities.
With the continual development of professional industries in today's modernized world, certain technologies have become increasingly applicable. Cyber-physical systems, specifically, have seen rapid implementation across numerous fields. This is a technology that is constantly evolving, so specialists need a handbook of research that keeps pace with the advancements and methodologies of these devices. Tools and Technologies for the Development of Cyber-Physical Systems is an essential reference source that discusses recent advancements in cyber-physical systems and their application within the health, information, and computer science industries. Featuring research on topics such as autonomous agents, power supply methods, and software assessment, this book is ideally designed for data scientists, technology developers, medical practitioners, computer engineers, researchers, academicians, and students seeking coverage of the development and various applications of cyber-physical systems.
This book provides an introduction to digital storage for consumer electronics. It discusses the various types of digital storage, including emerging non-volatile solid-state storage technologies and their advantages and disadvantages. It discusses the best practices for selecting, integrating, and using storage devices for various applications. It explores the networking of devices into an overall organization that results in always-available home storage combined with digital storage in the cloud to create an infrastructure to support emerging consumer applications and the Internet of Things. It also looks at the role of digital storage devices in creating security and privacy in consumer products.
Securing Cloud Services - A pragmatic guide gives an overview of security architecture processes and explains how they may be used to derive an appropriate set of security controls to manage the risks associated with working in the Cloud. Manage the risks associated with Cloud computing - buy this book today!
High Performance Computational Methods for Biological Sequence Analysis presents biological sequence analysis using an interdisciplinary approach that integrates biological, mathematical and computational concepts. These concepts are presented so that computer scientists and biomedical scientists can obtain the necessary background for developing better algorithms and applying parallel computational methods. This book will enable both groups to develop the depth of knowledge needed to work in this interdisciplinary field. This work focuses on high performance computational approaches that are used to perform computationally intensive biological sequence analysis tasks: pairwise sequence comparison, multiple sequence alignment, and sequence similarity searching in large databases. These computational methods are becoming increasingly important to the molecular biology community allowing researchers to explore the increasingly large amounts of sequence data generated by the Human Genome Project and other related biological projects. The approaches presented by the authors are state-of-the-art and show how to reduce analysis times significantly, sometimes from days to minutes. High Performance Computational Methods for Biological Sequence Analysis is tremendously important to biomedical science students and researchers who are interested in applying sequence analyses to their studies, and to computational science students and researchers who are interested in applying new computational approaches to biological sequence analyses.
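As a concrete anchor for "pairwise sequence comparison", here is a minimal, purely illustrative dynamic-programming scorer in the Needleman-Wunsch style; the scoring scheme (match +1, mismatch -1, gap -1) and the two test sequences are arbitrary assumptions, and the book's contribution is accelerating recurrences of exactly this O(nm) shape.

```c
/* Pairwise global alignment score (Needleman-Wunsch style), the
 * O(n*m) recurrence that high-performance methods parallelize.
 * Scoring scheme (match +1, mismatch -1, gap -1) is an assumption. */
#include <stdio.h>
#include <string.h>

#define MAXLEN 64

static int max3(int a, int b, int c) {
    int m = a > b ? a : b;
    return m > c ? m : c;
}

static int nw_score(const char *a, const char *b) {
    const int n = (int)strlen(a), m = (int)strlen(b);
    static int dp[MAXLEN + 1][MAXLEN + 1];

    for (int i = 0; i <= n; i++) dp[i][0] = -i;   /* leading gaps */
    for (int j = 0; j <= m; j++) dp[0][j] = -j;

    for (int i = 1; i <= n; i++)
        for (int j = 1; j <= m; j++) {
            const int sub = (a[i-1] == b[j-1]) ? 1 : -1;
            dp[i][j] = max3(dp[i-1][j-1] + sub,   /* align both  */
                            dp[i-1][j] - 1,       /* gap in b    */
                            dp[i][j-1] - 1);      /* gap in a    */
        }
    return dp[n][m];
}

int main(void) {
    printf("score = %d\n", nw_score("GATTACA", "GCATGCU"));
    return 0;
}
```

Cells on each anti-diagonal of the table are independent and can be computed in parallel, which is the main handle parallel sequence-comparison methods exploit.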
This book intends to unite studies in different fields related to the development of the relations between logic, law and legal reasoning. Combining historical and philosophical studies on legal reasoning in Civil and Common Law, and on the often neglected Arabic and Talmudic traditions of jurisprudence, this project unites these areas with recent technical developments in computer science. This combination has resulted in renewed interest in deontic logic and the logic of norms that stems from the interaction between artificial intelligence and law and their applications to these areas of logic. The book also aims to motivate and launch a more intense interaction between the historical and philosophical work of Arabic, Talmudic and European jurisprudence. The publication discusses new insights into the interaction between logic and law, and more precisely the study of different answers to the question: what role does logic play in legal reasoning? Perspectives range from foundational studies (such as logical principles and frameworks) to applications, complemented by historical perspectives.
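For orientation only (this is textbook material, not the book's own system): the standard deontic logic such work builds on is the modal logic KD, whose characteristic axioms are the following.

```latex
% Standard deontic logic (the modal logic KD), stated for orientation.
% O = "it is obligatory that"; permission is defined dually:
%   P(phi) := not O(not phi)
\[
\textbf{(K)}\quad O(\varphi \to \psi) \to (O\varphi \to O\psi)
\qquad
\textbf{(D)}\quad O\varphi \to P\varphi
\]
```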
The advent of very large scale integrated circuit technology has enabled the construction of very complex and large interconnection networks. By most accounts, the next generation of supercomputers will achieve its gains by increasing the number of processing elements, rather than by using faster processors. The most difficult technical problem in constructing a supercomputer will be the design of the interconnection network through which the processors communicate. Selecting an appropriate and adequate topological structure of interconnection networks will become a critical issue, on which many research efforts have been made over the past decade. The book aims to attract the reader's attention to such an important research area. Graph theory is a fundamental and powerful mathematical tool for designing and analyzing interconnection networks, since the topological structure of an interconnection network is a graph. This fact has been universally accepted by computer scientists and engineers. This book provides the most basic problems, concepts and well-established results on the topological structure and analysis of interconnection networks in the language of graph theory. The material originates from a vast amount of literature, but the theory presented is developed carefully and skillfully. The treatment is generally self-contained, and most stated results are proved. No exercises are explicitly exhibited, but there are some stated results whose proofs are left to the reader to consolidate his understanding of the material.
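A small worked example of "interconnection network as graph": in the n-dimensional hypercube, nodes carry n-bit labels, edges join labels differing in exactly one bit, and shortest-path distance equals Hamming distance. The sketch below (node choices are arbitrary assumptions) also walks the classic dimension-ordered route.

```c
/* The n-dimensional hypercube as a graph: nodes are n-bit labels,
 * edges join labels differing in exactly one bit, so shortest-path
 * distance equals Hamming distance.  Node choices are arbitrary. */
#include <stdio.h>

static int hamming(unsigned a, unsigned b) {
    unsigned x = a ^ b;
    int d = 0;
    while (x) { d += (int)(x & 1u); x >>= 1; }
    return d;
}

int main(void) {
    const int n = 3;              /* 3-cube: 8 nodes, each of degree 3 */
    const unsigned src = 0u;      /* 000 */
    const unsigned dst = 5u;      /* 101 */

    printf("distance(%u,%u) = %d\n", src, dst, hamming(src, dst));

    /* dimension-ordered (e-cube) routing: fix differing bits in order */
    unsigned cur = src;
    for (int bit = 0; bit < n; bit++)
        if ((cur ^ dst) & (1u << bit)) {
            cur ^= 1u << bit;
            printf("hop to node %u\n", cur);
        }
    return 0;
}
```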
The proliferation of multicore processors in the embedded market for the Internet of Things (IoT) and Cyber-Physical Systems (CPS) makes developing real-time embedded applications increasingly difficult. What is the underlying theory that makes multicore real-time possible? How does theory influence application design? When is a real-time operating system (RTOS) useful? What RTOS features do applications need? How does a mature RTOS help manage the complexity of multicore hardware? Real-Time Systems Development with RTEMS and Multicore Processors answers these questions and more, using the exemplar Real-Time Executive for Multiprocessor Systems (RTEMS) RTOS to provide concrete advice and examples for constructing useful, feature-rich applications. RTEMS is free, open-source software that supports multiprocessor systems for over a dozen CPU architectures and over 150 specific system boards in applications spanning the range of IoT and CPS domains, such as satellites, particle accelerators, robots, racing motorcycles, building controls, medical devices, and more. The focus of this book is on enabling real-time embedded software engineering while providing sufficient theoretical foundations and hardware background to understand the rationale for key decisions in RTOS and application design and implementation. The topics covered in this book include:
- Cross-compilation for embedded systems development
- Concurrent programming models used in real-time embedded software
- Real-time scheduling theory and algorithms used in wide practice
- Usage and comparison of two application programmer interfaces (APIs) in real-time embedded software: POSIX and the RTEMS Classic APIs
- Design and implementation in RTEMS of commonly found RTOS features for schedulers, task management, time-keeping, inter-task synchronization, inter-task communication, and networking
- The challenges introduced by multicore hardware, advances in multicore real-time theory, and software engineering of multicore real-time systems with RTEMS
All the authors of this book are experts in the academic field of real-time embedded systems. Two of the authors are primary open-source maintainers of the RTEMS software project.
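To ground the POSIX API thread of the book, here is a hedged, generic POSIX sketch (not RTEMS-specific code, and the priority value is an assumption) that creates a fixed-priority SCHED_FIFO task; on a desktop OS this typically requires elevated privileges.

```c
/* Generic POSIX sketch (not RTEMS-specific): create a thread with a
 * fixed real-time priority under SCHED_FIFO.  The priority value is
 * an assumption; on a desktop OS this usually needs privileges. */
#include <pthread.h>
#include <sched.h>
#include <stdio.h>

static void *rt_task(void *arg) {
    (void)arg;
    printf("fixed-priority task running\n");
    return NULL;
}

int main(void) {
    pthread_t tid;
    pthread_attr_t attr;
    struct sched_param sp = { .sched_priority = 50 }; /* assumed */

    pthread_attr_init(&attr);
    /* use the attributes below instead of inheriting the creator's */
    pthread_attr_setinheritsched(&attr, PTHREAD_EXPLICIT_SCHED);
    pthread_attr_setschedpolicy(&attr, SCHED_FIFO);
    pthread_attr_setschedparam(&attr, &sp);

    if (pthread_create(&tid, &attr, rt_task, NULL) != 0) {
        perror("pthread_create");  /* often EPERM without privileges */
        return 1;
    }
    pthread_join(tid, NULL);
    return 0;
}
```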
This book describes the benefits and drawbacks inherent in the use of virtual platforms (VPs) to perform fast and early soft error assessment of multicore systems. The authors show that VPs provide engineers with appropriate means to investigate new and more efficient fault injection and mitigation techniques. Coverage also includes the use of machine learning techniques (e.g., linear regression) to speed up the soft error evaluation process by pinpointing the parameters (e.g., architectural) with the most substantial impact on software stack dependability. This book provides valuable information and insight drawn from more than 3 million individual scenarios and 2 million simulation-hours. Further, it explores the use of machine learning techniques to navigate large fault injection datasets.
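A minimal sketch of the regression idea mentioned above, with invented data points: fit the observed failure fraction as a linear function of one architectural parameter using closed-form least squares. The book applies the same principle to multi-million-scenario datasets.

```c
/* Illustrative use of linear regression in soft-error assessment:
 * fit error rate as a linear function of one architectural parameter
 * from a handful of fault-injection campaigns.  All data invented. */
#include <stdio.h>

int main(void) {
    /* x: parameter setting, y: observed failure fraction (assumed) */
    const double x[] = { 1, 2, 4, 8, 16 };
    const double y[] = { 0.02, 0.03, 0.06, 0.11, 0.20 };
    const int n = 5;

    double sx = 0, sy = 0, sxx = 0, sxy = 0;
    for (int i = 0; i < n; i++) {
        sx += x[i];  sy += y[i];
        sxx += x[i] * x[i];  sxy += x[i] * y[i];
    }
    /* closed-form least squares: slope and intercept */
    const double slope = (n * sxy - sx * sy) / (n * sxx - sx * sx);
    const double intercept = (sy - slope * sx) / n;

    printf("rate ~= %.4f * param + %.4f\n", slope, intercept);
    return 0;
}
```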
Fault-Tolerant Parallel Computation presents recent advances in algorithmic ways of introducing fault-tolerance in multiprocessors under the constraint of preserving efficiency. The difficulty associated with combining fault-tolerance and efficiency is that the two have conflicting means: fault-tolerance is achieved by introducing redundancy, while efficiency is achieved by removing redundancy. This monograph demonstrates how in certain models of parallel computation it is possible to combine efficiency and fault-tolerance, and shows how it is possible to develop efficient algorithms without concern for fault-tolerance, and then correctly and efficiently execute these algorithms on parallel machines whose processors are subject to arbitrary dynamic fail-stop errors. The efficient algorithmic approaches to multiprocessor fault-tolerance presented in this monograph make a contribution towards bridging the gap between the abstract models of parallel computation and realizable parallel architectures. Fault-Tolerant Parallel Computation presents the state of the art in algorithmic approaches to fault-tolerance in efficient parallel algorithms. The monograph synthesizes work that was presented in recent symposia and published in refereed journals by the authors and other leading researchers. This is the first text that takes the reader on a grand tour of this new field, summarizing major results and identifying hard open problems. This monograph will be of interest to academic and industrial researchers and graduate students working in the areas of fault-tolerance, algorithms and parallel computation and may also be used as a text in a graduate course on parallel algorithmic techniques and fault-tolerance.
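The tension the monograph studies can be seen in a toy simulation (the task counts and failure pattern below are invented): a few fail-stop processors cooperate on a pool of unit tasks, and surviving processors re-scan for unfinished work, so the computation completes whenever at least one processor survives; the price is redundant scanning, i.e., lost efficiency.

```c
/* Toy fail-stop model in the spirit of the "Write-All" problem:
 * P simulated processors mark N cells done; some crash mid-way,
 * and survivors pick up the remaining work.  Numbers invented. */
#include <stdio.h>
#include <stdbool.h>

#define N 12  /* unit tasks */
#define P 4   /* processors */

int main(void) {
    bool done[N]  = { false };
    bool alive[P] = { true, true, true, true };
    int steps = 0, remaining = N;

    while (remaining > 0) {
        steps++;
        if (steps == 2) alive[1] = alive[3] = false; /* fail-stop crash */
        for (int p = 0; p < P; p++) {
            if (!alive[p]) continue;
            for (int t = p; t < N; t++)   /* scan from own offset */
                if (!done[t]) { done[t] = true; remaining--; break; }
        }
    }
    printf("all %d tasks done in %d steps\n", N, steps);
    return 0;
}
```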
In Symbolic Analysis for Parallelizing Compilers the author presents an excellent demonstration of the effectiveness of symbolic analysis in tackling important optimization problems, some of which inhibit loop parallelization. The framework that Haghighat presents has proved extremely successful in induction and wraparound variable analysis, strength reduction, dead code elimination and symbolic constant propagation. The approach can be applied to any program transformation or optimization problem that relies on compile-time information about the properties and value ranges of program variables; this covers the majority of, if not all, optimization and parallelization techniques. The book makes a compelling case for the potential of symbolic analysis, applying it for the first time - and with remarkable results - to a number of classical optimization problems: loop scheduling, static timing or size analysis, and dependence analysis. It demonstrates how symbolic analysis can solve these problems faster and more accurately than existing hybrid techniques.
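One of the classical transformations named above, strength reduction on an induction variable, fits in a few lines; this before/after sketch is generic compiler folklore, not code from the book.

```c
/* Before/after sketch of strength reduction: the multiplication by
 * the loop induction variable is replaced by a running addition,
 * which is exactly the rewrite symbolic induction-variable analysis
 * justifies at compile time. */
#include <stdio.h>

#define N 8

int main(void) {
    int a[N], b[N];

    /* before: each iteration pays for a multiply (i * 4) */
    for (int i = 0; i < N; i++)
        a[i] = i * 4;

    /* after: the symbolic fact "i*4 grows by 4 per trip" turns the
     * multiply into an addition on a new induction variable t */
    int t = 0;
    for (int i = 0; i < N; i++) {
        b[i] = t;
        t += 4;
    }

    for (int i = 0; i < N; i++)
        printf("%d %d\n", a[i], b[i]);  /* identical sequences */
    return 0;
}
```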
Efficient parallel solutions have been found to many problems. Some of them can be obtained automatically from sequential programs, using compilers. However, there is a large class of problems - irregular problems - that lack efficient solutions. IRREGULAR '94 - a workshop and summer school organized in Geneva - addressed the problems associated with the derivation of efficient solutions to irregular problems. This book, which is based on the workshop, draws on the contributions of outstanding scientists to present the state of the art in irregular problems, covering aspects ranging from scientific computing and discrete optimization to the automatic extraction of parallelism. Audience: This first book on parallel algorithms for irregular problems is of interest to advanced graduate students and researchers in parallel computer science.
An Interdisciplinary Approach to Modern Network Security presents the latest methodologies and trends in detecting and preventing network threats. Investigating the potential of current and emerging security technologies, this publication is an all-inclusive reference source for academicians, researchers, students, professionals, practitioners, network analysts and technology specialists interested in the simulation and application of computer network protection. It presents theoretical frameworks and the latest research findings in network security technologies, while analyzing malicious threats which can compromise network integrity. It discusses the security and optimization of computer networks for use in a variety of disciplines and fields. Touching on such matters as mobile and VPN security, IP spoofing and intrusion detection, this edited collection emboldens the efforts of researchers, academics and network administrators working in both the public and private sectors. This edited compilation includes chapters covering topics such as attacks and countermeasures, mobile wireless networking, intrusion detection systems, next-generation firewalls, web security and much more. Information and communication systems are an essential component of our society, forcing us to become dependent on these infrastructures. At the same time, these systems are undergoing a convergence and interconnection process that has its benefits, but also raises specific threats to user interests. Citizens and organizations must feel safe when using cyberspace facilities in order to benefit from its advantages. This book is interdisciplinary in the sense that it covers a wide range of topics like network security threats, attacks, tools and procedures to mitigate the effects of malware and common network attacks, network security architecture and deep learning methods of intrusion detection.
It has been widely recognized that artificial intelligence computations offer large potential for distributed and parallel processing. Unfortunately, not much is known about designing parallel AI algorithms and efficient, easy-to-use parallel computer architectures for AI applications. The field of parallel computation and computers for AI is in its infancy, but some significant ideas have appeared and initial practical experience has become available. The purpose of this book has been to collect in one volume contributions from several leading researchers and pioneers of AI that represent a sample of these ideas and experiences. This sample does not include all schools of thought nor contributions from all leading researchers, but it covers a relatively wide variety of views and topics and in this sense can be helpful in assessing the state of the art. We hope that the book will serve, at least, as a pointer to more specialized literature and that it will stimulate interest in the area of parallel AI processing. It has been a great pleasure and a privilege to cooperate with all contributors to this volume. They have my warmest thanks and gratitude. Mrs. Birgitta Knapp has assisted me in the editorial task and demonstrated a great deal of skill and patience. Janusz S. Kowalik
This book covers several aspects of the operational amplifier and includes theoretical explanations with simplified expressions and derivations. The book is designed to serve as a textbook for courses offered to undergraduate and postgraduate students enrolled in electronics and communication engineering. The topics included are the DC amplifier, AC/DC analysis of the DC amplifier, relevant derivations, a block diagram of the operational amplifier, positive and negative feedback, the amplitude modulator, current-to-voltage and voltage-to-current converters, DAC and ADC, the integrator, the differentiator, active filters, comparators, sinusoidal and non-sinusoidal waveform generators, the phase-locked loop (PLL), etc. This book contains two parts: Sections A and B. Section A includes theory, methodology, circuit design and derivations. Section B explains the design and study of experiments for laboratory practice. Laboratory experiments enable students to perform practical activities that demonstrate applications of the operational amplifier. A simplified description of the circuits, the working principles and a practical approach towards understanding the concepts are unique features of this book. Simple methods, easy derivation steps and a lucid presentation further aid readers who do not have any background in electronics. This book is student-centric towards the basics of the operational amplifier and its applications. The detailed coverage and pedagogical tools make this an ideal textbook for students and researchers enrolled in senior undergraduate and beginning postgraduate electronics and communication engineering courses.
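As a taste of the kind of derivation such a Section A works through (standard textbook identities, not excerpts from the book): the ideal inverting amplifier, and the integrator obtained by replacing the feedback resistor with a capacitor.

```latex
% Ideal inverting amplifier: the virtual ground at the inverting
% input forces the input and feedback currents to balance.
\[
\frac{V_{in}}{R_{in}} = -\frac{V_{out}}{R_f}
\quad\Longrightarrow\quad
V_{out} = -\frac{R_f}{R_{in}}\,V_{in}
\]
% Replacing R_f with a capacitor C yields the integrator:
\[
V_{out}(t) = -\frac{1}{R_{in}C}\int_0^{t} V_{in}(\tau)\,d\tau
\]
```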
Based on the lectures given during the Eurocourse on 'Computing with Parallel Architectures' held at the Joint Research Centre, Ispra, Italy, September 10-14, 1990.
With the end of Dennard scaling and Moore's law, IC chips, especially large-scale ones, now face more reliability challenges, and reliability has become one of the mainstay merits of VLSI designs. In this context, this book presents a built-in on-chip fault-tolerant computing paradigm that seeks to combine fault detection, fault diagnosis, and error recovery in large-scale VLSI design in a unified manner so as to minimize resource overhead and performance penalties. Following this computing paradigm, we propose a holistic solution based on three key components: self-test, self-diagnosis and self-repair, or "3S" for short. We then explore the use of 3S for general IC designs, general-purpose processors, network-on-chip (NoC) and deep learning accelerators, and present prototypes to demonstrate how 3S responds to in-field silicon degradation and recovery under various runtime faults caused by aging, process variations, or radiation particles. Moreover, we demonstrate that 3S not only offers a powerful backbone for various on-chip fault-tolerant designs and implementations, but also has farther-reaching implications such as maintaining graceful performance degradation, mitigating the impact of verification blind spots, and improving chip yield. This book is the outcome of extensive fault-tolerant computing research pursued at the State Key Lab of Processors, Institute of Computing Technology, Chinese Academy of Sciences over the past decade. The proposed built-in on-chip fault-tolerant computing paradigm has been verified in a broad range of scenarios, from small processors in satellite computers to large processors in HPCs. Hopefully, it will provide an alternative yet effective solution to the growing reliability challenges for large-scale VLSI designs.
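For context on the fault-tolerance primitives such designs build on, here is a classic triple modular redundancy (TMR) majority voter; it is a generic illustration, not the book's 3S mechanism, and the fault pattern is an assumption.

```c
/* Classic fault-tolerance primitive: a triple modular redundancy
 * (TMR) bitwise majority voter.  Generic illustration only, not the
 * book's 3S (self-test/self-diagnosis/self-repair) design. */
#include <stdio.h>
#include <stdint.h>

/* each output bit is 1 iff at least two of the three inputs agree */
static uint32_t vote3(uint32_t a, uint32_t b, uint32_t c) {
    return (a & b) | (a & c) | (b & c);
}

int main(void) {
    const uint32_t golden = 0xDEADBEEF;
    const uint32_t faulty = golden ^ 0x00000010;  /* single-bit upset */

    /* one of three replicated results is corrupted; the vote masks it */
    printf("voted = 0x%08X\n", vote3(golden, faulty, golden));
    return 0;
}
```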
This book puts in focus various techniques for checking the modeling fidelity of Cyber-Physical Systems (CPS) with respect to the physical world they represent. The authors present modeling and analysis techniques representing different communities, from very different angles, discuss their possible interactions, and examine the commonalities and differences between their practices. Coverage includes model-driven development, resource-driven development, statistical analysis, proofs of simulator implementation, compiler construction, power/temperature modeling of digital devices, high-level performance analysis, and code/device certification. Several industrial contexts are covered, including modeling of computing and communication, proofs over architecture models, and statistics-based validation techniques.