Computer architecture & logic design
This two-volume set focuses on fundamental concepts and design goals (i.e., a switch/router's key features), architectures, and practical applications of switch/routers in IP networks. The discussion includes practical design examples to illustrate how switch/routers are designed and how the key features are implemented. Designing Switch/Routers: Fundamental Concepts, Design Methods, Architectures, and Applications begins with an introductory-level discussion covering the functions and architectures of the switch/router. The first volume considers the switch/router as a generic Layer 2 and Layer 3 forwarding device without placing emphasis on any particular manufacturer's device. The underlying concepts and design methods apply not only to this generic switch/router but also to the typical switch/router seen in the industry. The discussion provides deeper insight into the protocols, methods, processes, and tools that go into designing switch/routers. The second volume explains the design and architectural considerations, as well as the typical processes and steps used to build practical switch/routers. It then discusses the advantages of using Ethernet in today's networks and why Ethernet continues to play a bigger role in Local Area Network (LAN), Metropolitan Area Network (MAN), and Wide Area Network (WAN) design. The set is written in a style that appeals to undergraduate and graduate-level students, engineers, and researchers in the networking and telecom industry, as well as academics and other industry professionals. The material and discussion are structured so that they can serve as standalone teaching material for networking and telecom courses or as supplementary material for such courses.
This book presents a design methodology that is practically applicable to the architectural design of a broad range of systems. It is based on fundamental design concepts to conceive and specify the required functional properties of a system, while abstracting from the specific implementation functions and technologies that can be chosen to build the system. Abstraction and precision are indispensable for understanding complex systems and for creating and representing them precisely at a high functional level. Once understood, these concepts appear natural, self-evident and extremely powerful, since they can directly, precisely and concisely reflect what is considered essential for the functional behavior of a system. The first two chapters present the global views on how to design systems and how to interpret terms and meta-concepts. This informal introduction provides the general context for the remainder of the book. On a more formal level, Chapters 3 through 6 present the main basic design concepts, illustrating them with examples. Language notations are introduced along with the basic design concepts. Lastly, Chapters 7 to 12 discuss the more intricate basic design concepts of interactive systems by focusing on their common functional goal. These chapters are recommended to readers who have a particular interest in the design of protocols and interfaces for various systems. The didactic approach makes the book suitable for graduate students who want to develop insights into and skills in developing complex systems, as well as for practitioners in industry and large organizations who are responsible for the design and development of large and complex systems. It includes numerous tangible examples from various fields, and several appealing exercises with their solutions.
This book precisely formulates and simplifies the presentation of Instruction Level Parallelism (ILP) compilation techniques. It uniquely offers consistent and uniform descriptions of the code transformations involved. Because ILP is ubiquitous in virtually every processor built today, from general-purpose CPUs to application-specific and embedded processors, this book is useful to students, practitioners and researchers of advanced compilation techniques. With an emphasis on fine-grain instruction level parallelism, the book will also prove interesting to researchers and students of parallelism at large, inasmuch as the techniques described yield insights that go beyond compilation for superscalar and VLIW (Very Long Instruction Word) machines and are more widely applicable to optimizing compilers in general. ILP techniques have found wide and crucial application in Design Automation, where they have been used extensively to optimize performance and to minimize the area and power of computer designs.
This book presents the state of the art of one of the main concerns with microprocessors today, a phenomenon known as "dark silicon". Readers will learn how power constraints (both leakage and dynamic power) limit the extent to which large portions of a chip can be powered up at a given time, i.e. how much actual performance and functionality the microprocessor can provide. The authors describe their research toward the future of microprocessor development in the dark silicon era, covering a variety of important aspects of dark silicon-aware architectures, including design, management, reliability, and test. Readers will benefit from specific recommendations for mitigating the dark silicon phenomenon, including energy-efficient, dedicated solutions and technologies to maximize the utilization and reliability of microprocessors.
Computational Frameworks: Systems, Models and Applications provides an overview of advanced perspectives that bridges the gap between frontline research and practical efforts. It is unique in showing the interdisciplinary nature of this area and the way in which it interacts with emerging technologies and techniques. As computational systems are a dominant part of daily life and a required support for most of the engineering sciences, this book explores their usage (e.g. big data, high performance clusters, databases and information systems, integrated and embedded hardware/software components, smart devices, mobile and pervasive networks, cyber physical systems, etc.).
This book describes the integrated circuit supply chain flow and discusses security issues across the flow, which can undermine the trustworthiness of the final design. The author discusses and analyzes the complexity of the flow, along with the vulnerabilities of digital circuits to malicious modifications (i.e. hardware Trojans) at the register-transfer level, gate level and layout level. Various metrics are discussed to quantify circuit vulnerabilities to hardware Trojans at different levels. Readers are introduced to design techniques for preventing hardware Trojan insertion and for facilitating hardware Trojan detection. Trusted testing is also discussed, enabling design trustworthiness at different steps of the integrated circuit design flow. Coverage also includes hardware Trojans in mixed-signal circuits.
This book describes new and effective methodologies for modeling, analyzing and mitigating cell-internal signal electromigration in nanoCMOS, with significant circuit lifetime improvements and no impact on performance, area and power. The authors are the first to analyze and propose a solution for the electromigration effects inside the logic cells of a circuit. They show that an interconnect inside a cell can fail, considerably reducing the circuit lifetime, and they demonstrate a methodology to optimize the lifetime of circuits by placing the output, Vdd and Vss pins of the cells in less critical regions, where the electromigration effects are reduced. Readers will be able to apply this methodology only to the critical cells in the circuit, avoiding any impact on circuit delay, area and performance, thus increasing the lifetime of the circuit without loss in other characteristics.
The relevant techniques, vocabulary, currently available hardware architectures, and programming languages that provide the basic concepts of parallel computing are introduced in this book. In the future, we can expect to see massively parallel teraflop machines. These machines will be supported by gigabit networks, which will allow grand-challenge problems to be solved by using several supercomputers and parallel machines concurrently.
This book describes RTL design using Verilog, synthesis and timing closure for System On Chip (SOC) design blocks. It covers complex RTL design scenarios and challenges for SOC designs and provides practical information on performance improvements in SOC as well as Application Specific Integrated Circuit (ASIC) designs. Prototyping using modern high-density Field Programmable Gate Arrays (FPGAs) is discussed with practical examples and case studies. The book discusses SOC design, performance improvement techniques, testing and system-level verification, while also describing modern Intel FPGA/Xilinx FPGA architectures and their use in SOC prototyping. Further, the book covers Synopsys Design Compiler (DC) and PrimeTime (PT) commands, and how they can be used to optimize complex ASIC/SOC designs. The contents of this book will be useful to students and professionals alike.
Our society continues to depend on systems that are built in ways that leave them inflexible and intolerant of change. There is therefore an urgent need to investigate innovations and approaches to the management of adaptive and dependable systems. These studies are usually carried out through the design, development, and evaluation of techniques and models for structuring computer systems as adaptive systems. Innovations and Approaches for Resilient and Adaptive Systems is a comprehensive collection of knowledge on advancing the notions and models of adaptive and dependable systems. This book aims to enhance awareness of the role of adaptability and resilience in system environments for researchers, practitioners, educators, and professionals alike.
This book describes a flexible and largely automated methodology for adding the estimation of power consumption to high-level simulations at the electronic system level (ESL). The method enables the inclusion of power consumption considerations from the very start of a design, which can help designers of electronic systems to create devices with low power consumption. The authors also demonstrate the implementation of the method using the popular ESL language SystemC. This implementation extends most existing SystemC ESL simulations to support power estimation with very little manual work. Extensive case studies of a Network-on-Chip communication architecture and a dual-core ARM Cortex-A9 application processor showcase the applicability and accuracy of the method for different types of electronic devices. The evaluation compares various trade-offs regarding the amount of manual work, the types of ESL models, the achieved estimation accuracy and the impact on simulation speed. In short, the book describes a flexible and largely automated ESL power estimation method, shows its implementation in SystemC, and uses two extensive case studies to demonstrate the method.
In the last few years, courses on parallel computation have been developed and offered in many institutions in the UK, Europe and US in recognition of the growing significance of this topic in mathematics and computer science. There is a clear need for texts that serve both students and lecturers, and this book, based on the author's lectures at ETH Zurich, is an ideal practical student guide to scientific computing on parallel computers, working up from the hardware instruction level, to shared-memory machines, and finally to distributed-memory machines. Aimed at advanced undergraduate and graduate students in applied mathematics, computer science, and engineering, the subjects covered include linear algebra, the fast Fourier transform, and Monte Carlo simulations, with examples in C and, in some cases, Fortran. This book is also ideal for practitioners and programmers.
Managing the Web of Things: Linking the Real World to the Web presents a consolidated and holistic coverage of the engineering, management, and analytics of the Internet of Things. The web has gone through many transformations, from the traditional linking and sharing of computers and documents (i.e., Web of Data), to the current connection of people (i.e., Web of People), and to the emerging connection of billions of physical objects (i.e., Web of Things). With increasing numbers of electronic devices and systems providing different services to people, Web of Things applications present numerous challenges to research institutions, companies, governments, international organizations, and others. This book compiles the newest developments and advances in the area of the Web of Things, ranging from modeling, searching, and data analytics, to software building, applications, and social impact. Its coverage will enable the effective exploration, understanding, assessment, comparison, and selection of WoT models, languages, techniques, platforms, and tools. Readers will gain an up-to-date understanding of Web of Things systems that will accelerate their research.
This book provides readers with a comprehensive introduction to the formal verification of hardware and software. World-leading experts from the domain of formal proof techniques show the latest developments starting from electronic system level (ESL) descriptions down to the register transfer level (RTL). The authors demonstrate at different abstraction layers how formal methods can help to ensure functional correctness. Coverage includes the latest academic research results, as well as descriptions of industrial tools and case studies.
This book provides a comprehensive analysis of the most important topics in parallel computation. It is written so that it may be used as a self-study guide to the field, and researchers in parallel computing will find it a useful reference for many years to come. The first half of the book consists of an introduction to many fundamental issues in parallel computing. The second half provides lists of P-complete and open problems. These lists will have lasting value to researchers in both industry and academia. The lists of problems, with their corresponding remarks, the thorough index, and the hundreds of references add to the exceptional value of this resource. While the exciting field of parallel computation continues to expand rapidly, this book serves as a guide to research done through 1994 and also describes the fundamental concepts that new workers will need to know in coming years. It is intended for anyone interested in parallel computing, including senior-level undergraduate students, graduate students, faculty, and people in industry. As an essential reference, the book will be needed in all academic libraries.
* The ELS model of enterprise security is endorsed by the Secretary of the Air Force for Air Force computing systems and is a candidate for DoD systems under the Joint Information Environment Program.
* The book is intended for enterprise IT architecture developers, application developers, and IT security professionals.
* This is a unique approach to end-to-end security and fills a niche in the market.
This book focuses on two of the most relevant problems related to power management on multicore and manycore systems. Specifically, one part of the book focuses on maximizing/optimizing computational performance under power or thermal constraints, while another part focuses on minimizing energy consumption under performance (or real-time) constraints.
Massively Parallel Systems (MPSs), with their promise of scalable computation and storage space, are becoming increasingly important for high-performance computing. The growing acceptance of MPSs in academia is clearly apparent. In industrial companies, however, their usage remains low. The programming of MPSs is still the big obstacle, and solving this software problem is sometimes referred to as one of the most challenging tasks of the 1990s. The 1994 working conference on "Programming Environments for Massively Parallel Systems" was the latest event of the working group WG 10.3 of the International Federation for Information Processing (IFIP) in this field. It succeeded the 1992 conference in Edinburgh on "Programming Environments for Parallel Computing". The research and development work discussed at the conference addresses the entire spectrum of software problems, including virtual machines that are less cumbersome to program, more convenient programming models, advanced programming languages, and especially more sophisticated programming tools, as well as algorithms and applications.
This book describes state-of-the-art techniques for designing real-time computer systems. The author shows how to estimate precisely the effect of cache architecture on the execution time of a program, how to dispatch workload on multicore processors to optimize resources, while meeting deadline constraints, and how to use closed-form mathematical approaches to characterize highly variable workloads and their interaction in a networked environment. Readers will learn how to deal with unpredictable timing behaviors of computer systems on different levels of system granularity and abstraction.
This book examines some of the underlying processes behind different forms of information management, including how we store information in our brains, the impact of new technologies such as computers and robots on our efficiency in storing information, and how information is stored in families and in society. The editors brought together experts from a variety of disciplines. While it is generally agreed that information reduces uncertainties and that the ability to store it safely is of vital importance, these authors are open to different meanings of "information": computer science considers the bit as the information block; neuroscience emphasizes the importance of information as sensory inputs that are processed and transformed in the brain; theories in psychology focus more on individual learning and on the acquisition of knowledge; and finally sociology looks at how interpersonal processes within groups or society itself come to the fore. The book will be of value to researchers and students in the areas of information theory, artificial intelligence, and computational neuroscience.
This book discusses the design and performance analysis of SDRAM controllers that cater to both real-time and best-effort applications, i.e. mixed-time-criticality memory controllers. The authors describe the state of the art, and then focus on an architecture template for reconfigurable memory controllers that addresses effectively the quickly evolving set of SDRAM standards, in terms of worst-case timing and power analysis, as well as implementation. A prototype implementation of the controller in SystemC and synthesizable VHDL for an FPGA development board are used as a proof of concept of the architecture template.
This volume gives an overview of the state-of-the-art with respect to the development of all types of parallel computers and their application to a wide range of problem areas.
Modern applications of logic, in mathematics, theoretical computer science, and linguistics, require combined systems involving many different logics working together. In this book the author offers a basic methodology for combining - or fibring - systems. This means that many existing complex systems can be broken down into simpler components, hence making them much easier to manipulate.
This book introduces a new level of abstraction that closes the gap between the textual specification of embedded systems and the executable model at the Electronic System Level (ESL). Readers will be enabled to operate at this new Formal Specification Level (FSL), using models which not only allow significant verification tasks in this early stage of the design flow, but can also be extracted semi-automatically from the textual specification in an interactive manner. The authors explain how to use these verification tasks to check conceptual properties, e.g. whether requirements are in conflict, as well as dynamic behavior, in terms of execution traces.