
Protecting Chips Against Hold Time Violations Due to Variability (Hardcover, 2012)
Gustavo Neuberger, Gilson Wirth, Ricardo Reis
R2,854 Discovery Miles 28 540 Ships in 10 - 15 working days

With the development of very-deep-sub-micron technologies, process variability is becoming an increasingly important issue in the design of complex circuits. Process variability is the statistical variation of process parameters, meaning that these parameters do not always have the same value but become random variables, each with a given mean and standard deviation. This effect can lead to several issues in digital circuit design.

The logical consequence of this parameter variation is that circuit characteristics, such as delay and power, also become random variables. Because of delay variability, not all circuits will have the same performance: some will be faster and some slower. The slowest circuits may be too slow to be suitable for sale, while the fastest circuits, which could be sold at a higher price, may be so leaky that they too are unsuitable. A main consequence of power variability is that the power consumption of some circuits will differ from what was expected, reducing the reliability, average life expectancy and warranty coverage of products. Sometimes the circuits will not work at all, for reasons associated with process variations. In the end, these effects result in lower yield and lower profitability.

To understand these effects, it is necessary to study the consequences of variability in several aspects of circuit design, such as logic gates, storage elements, clock distribution, and any other element that can be affected by process variations. The main focus of this book is storage elements.
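
As a rough illustration of how parameter variability turns into yield loss (a sketch with invented numbers, not an example from the book): model a die's critical path as a sum of gate delays, each a Gaussian random variable, and estimate the fraction of dies meeting an assumed timing spec.

```python
import random

# Hypothetical numbers for illustration only: 50 gates in series,
# each delay ~ Gaussian(mean 20 ps, sigma 2 ps) due to variability.
N_GATES, MEAN_PS, SIGMA_PS = 50, 20.0, 2.0
SPEC_PS = 1040.0  # assumed timing spec for a sellable part

def sample_critical_path() -> float:
    # One die's critical-path delay.
    return sum(random.gauss(MEAN_PS, SIGMA_PS) for _ in range(N_GATES))

delays = [sample_critical_path() for _ in range(100_000)]
yield_frac = sum(d <= SPEC_PS for d in delays) / len(delays)
print(f"estimated parametric yield: {yield_frac:.1%}")
```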

Dual Quaternions and Their Associated Clifford Algebras (Paperback)
Ronald Goldman
R1,451 Discovery Miles 14 510 Ships in 12 - 19 working days

Amid recent interest in Clifford algebra for dual quaternions as a more suitable method for Computer Graphics than standard matrix algebra, this book presents dual quaternions and their associated Clifford algebras in a new light, accessible to and geared towards the Computer Graphics community. Collating all the associated formulas and theorems in one place, this book provides an extensive and rigorous treatment of dual quaternions, as well as showing how two models of Clifford algebras emerge naturally from the theory of dual quaternions. Each chapter comes complete with a set of exercises to help readers sharpen and practice their knowledge. This book is accessible to anyone with a basic knowledge of quaternion algebra and is of particular use to forward-thinking members of the Computer Graphics community.

Modelling Distributed Systems (Hardcover, 2007 ed.)
Wan Fokkink
R1,514 Discovery Miles 15 140 Ships in 10 - 15 working days

This textbook guides students through the algebraic specification and verification of distributed systems, covering some of the most prominent formal verification techniques. The author employs μCRL as the vehicle, a language developed to combine process algebra and abstract data types. The book evolved from introductory courses on protocol verification taught to undergraduate and graduate students of computer science, and the text is supported throughout with examples and exercises. Full solutions are provided in an appendix, while exercise sheets, lab exercises, example specifications and lecturer slides are available on the author's website.

Digital Blood on Their Hands - The Ukraine Cyberwar Attacks (Paperback)
Andrew Jenkinson
R933 Discovery Miles 9 330 Ships in 9 - 17 working days

- Totally unique and incredibly damning information on, and an overview of, the world's first Cyberwar.
- The first ever Cyberwar and the precursor to the first war in Europe since 1945, it will be discussed for decades to come and will go down in history as a defining point.
- Will be of interest to citizens all over the world.

Handbook on Enterprise Architecture (Hardcover, 2003 ed.)
Peter Bernus, Laszlo Nemes, Gunter Schmidt
R8,454 Discovery Miles 84 540 Ships in 10 - 15 working days

This Handbook is about methods, tools and examples of how to architect an enterprise through considering all life-cycle aspects of Enterprise Entities (such as individual enterprises, enterprise networks, virtual enterprises, projects and other complex systems, including mixtures of automated and human processes). The book is based on ISO 15704:2000, the GERAM Framework (Generalised Enterprise Reference Architecture and Methodology), which generalises the requirements of Enterprise Reference Architectures. Various architecture frameworks (PERA, CIMOSA, GRAI-GIM, Zachman, C4ISR/DoDAF) are shown in the light of GERAM to allow a deeper understanding of their contributions and therefore their correct and knowledgeable use. The handbook addresses a wide variety of audiences, and covers the methods and tools necessary to design or redesign enterprises, as well as to structure the implementation into manageable projects.

Johan van Benthem on Logic and Information Dynamics (Hardcover, 2014 ed.)
Alexandru Baltag, Sonja Smets
R5,816 Discovery Miles 58 160 Ships in 10 - 15 working days

This book illustrates the program of Logical-Informational Dynamics. Rational agents exploit the information available in the world in delicate ways, adopt a wide range of epistemic attitudes, and in that process, constantly change the world itself. Logical-Informational Dynamics is about logical systems putting such activities at center stage, focusing on the events by which we acquire information and change attitudes. Its contributions show many current logics of information and change at work, often in multi-agent settings where social behavior is essential, and often stressing Johan van Benthem's pioneering work in establishing this program. However, this is not a Festschrift, but a rich tapestry for a field with a wealth of strands of its own. The reader will see the state of the art in such topics as information update, belief change, preference, learning over time, and strategic interaction in games. Moreover, no tight boundary has been enforced, and some chapters add more general mathematical or philosophical foundations or links to current trends in computer science.


The theme of this book lies at the interface of many disciplines. Logic is the main methodology, but the various chapters cross easily between mathematics, computer science, philosophy, linguistics, cognitive and social sciences, while also ranging from pure theory to empirical work. Accordingly, the authors of this book represent a wide variety of original thinkers from different research communities. And their interconnected themes challenge at the same time how we think of logic, philosophy and computation.

Thus, very much in line with van Benthem's work over many decades, the volume shows how all these disciplines form a natural unity in the perspective of dynamic logicians (broadly conceived) exploring their new themes today. And at the same time, in doing so, it offers a broader conception of logic with a certain grandeur, moving its horizons beyond the traditional study of consequence relations.

Stream Processor Architecture (Hardcover, 2001 ed.)
Scott Rixner
R2,923 Discovery Miles 29 230 Ships in 10 - 15 working days

Media processing applications, such as three-dimensional graphics, video compression, and image processing, currently demand 10-100 billion operations per second of sustained computation. Fortunately, hundreds of arithmetic units can easily fit on a modestly sized 1 cm² chip in modern VLSI. The challenge is to provide these arithmetic units with enough data to enable them to meet the computation demands of media processing applications. Conventional storage hierarchies, which frequently include caches, are unable to bridge the data bandwidth gap between modern DRAM and tens to hundreds of arithmetic units. A data bandwidth hierarchy, however, can bridge this gap by scaling the provided bandwidth across the levels of the storage hierarchy.

The stream programming model enables media processing applications to exploit a data bandwidth hierarchy effectively. Media processing applications can naturally be expressed as a sequence of computation kernels that operate on data streams. This programming model exposes the locality and concurrency inherent in these applications and enables them to be mapped efficiently to the data bandwidth hierarchy. Stream programs are able to use inexpensive local data bandwidth when possible and consume expensive global data bandwidth only when necessary.

Stream Processor Architecture presents the architecture of the Imagine streaming media processor, which delivers a peak performance of 20 billion floating-point operations per second. Imagine efficiently supports 48 arithmetic units with a three-tiered data bandwidth hierarchy. At the base of the hierarchy, the streaming memory system employs memory access scheduling to maximize the sustained bandwidth of external DRAM. At the center of the hierarchy, the global stream register file enables streams of data to be recirculated directly from one computation kernel to the next without returning data to memory. Finally, local distributed register files that directly feed the arithmetic units enable temporary data to be stored locally so that it does not need to consume costly global register bandwidth. The bandwidth hierarchy enables Imagine to achieve up to 96% of the performance of a stream processor with infinite bandwidth from memory and the global register file.
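
As a toy rendering of that stream programming model (kernel names and data are invented for illustration; real stream processors compile kernels to hardware, not Python), a media pipeline can be written as a chain of kernels applied element-wise to a stream:

```python
def luma(pixel):
    # Kernel 1: convert an RGB pixel to luminance.
    r, g, b = pixel
    return 0.299 * r + 0.587 * g + 0.114 * b

def threshold(y, cutoff=128):
    # Kernel 2: binarize a luminance value.
    return 1 if y > cutoff else 0

def run_pipeline(stream, kernels):
    # Chain kernels lazily: each value flows from one kernel to the next
    # without returning to a backing store, loosely mirroring how a stream
    # register file recirculates data between kernels.
    for kernel in kernels:
        stream = map(kernel, stream)
    return list(stream)

pixels = [(255, 0, 0), (10, 10, 10), (200, 200, 200)]
print(run_pipeline(pixels, [luma, threshold]))  # -> [0, 0, 1]
```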

Post-Silicon Validation and Debug (Hardcover, 1st ed. 2019)
Prabhat Mishra, Farimah Farahmandi
R4,258 Discovery Miles 42 580 Ships in 12 - 19 working days

This book provides comprehensive coverage of System-on-Chip (SoC) post-silicon validation and debug challenges and state-of-the-art solutions, with contributions from SoC designers, academic researchers and SoC verification experts. Readers will get a clear understanding of the existing debug infrastructure and how it can be effectively utilized to verify and debug SoCs.

Time-Constrained Transaction Management - Real-Time Constraints in Database Transaction Systems (Hardcover, 1996 ed.)
Nandit R. Soparkar, Henry F. Korth, Abraham Silberschatz
R2,959 Discovery Miles 29 590 Ships in 10 - 15 working days

Transaction processing is an established technique for the concurrent and fault-tolerant access of persistent data. While this technique has been successful in standard database systems, factors such as time-critical applications, emerging technologies, and a re-examination of existing systems suggest that the performance, functionality and applicability of transactions may be substantially enhanced if temporal considerations are taken into account. That is, transactions should not only execute in a "legal" (i.e., logically correct) manner, but they should meet certain constraints with regard to their invocation and completion times. Typically, these logical and temporal constraints are application-dependent, and we address some fundamental issues for the management of transactions in the presence of such constraints. Our model for transaction processing is based on extensions to established models, and we briefly outline how logical and temporal constraints may be expressed in it. For scheduling the transactions, we describe how legal schedules differ from one another in terms of meeting the temporal constraints. Existing scheduling mechanisms do not differentiate among legal schedules, and are thereby inadequate with regard to meeting temporal constraints. This provides the basis for seeking scheduling strategies that attempt to meet the temporal constraints while continuing to produce legal schedules.
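
A minimal sketch of the temporal side of that model (types and numbers invented; the book's actual formalism is richer): a scheduled transaction is acceptable only if, besides being legal, it respects its invocation and completion-time constraints.

```python
from dataclasses import dataclass

@dataclass
class Txn:
    name: str
    invocation: float  # earliest permitted start time
    deadline: float    # latest permitted commit time

def meets_temporal_constraints(txn: Txn, start: float, commit: float) -> bool:
    # Legality (logical correctness) is assumed to be checked separately;
    # here we test only the temporal constraints.
    return txn.invocation <= start <= commit <= txn.deadline

t = Txn("T1", invocation=0.0, deadline=10.0)
print(meets_temporal_constraints(t, start=1.0, commit=9.5))   # True
print(meets_temporal_constraints(t, start=1.0, commit=12.0))  # False: deadline missed
```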

Low Power Interconnect Design (Hardcover, 2012)
Sandeep Saini
R2,879 Discovery Miles 28 790 Ships in 10 - 15 working days

This book provides practical solutions for delay and power reduction in on-chip interconnects and buses. It gives an in-depth description of the problems of signal delay and extra power consumption, and of possible solutions for delay and glitch removal that also consider the power reduction of the total system. Coverage focuses on the use of the Schmitt trigger as an alternative to buffer insertion for delay and power reduction in VLSI interconnects. In the last section of the book, various bus coding techniques are discussed for minimizing delay and power in address and data buses.

Theory of Digital Automata (Hardcover, 2013 ed.)
Bohdan Borowik, Mykola Karpinskyy, Valery Lahno, Oleksandr Petrov
R4,410 R3,553 Discovery Miles 35 530 Save R857 (19%) Ships in 12 - 19 working days

This book serves a dual purpose: firstly, to combine the treatment of circuits and digital electronics and, secondly, to establish a strong connection with the contemporary world of digital systems. The need for this approach arises from the observation that introducing digital electronics through a course in traditional circuit analysis is fast becoming obsolete. Our world has gone digital. Automata theory helps with the design of digital circuits such as parts of computers, telephone systems and control systems. A complete perspective is emphasized, because even the most elegant computer architecture will not function without adequate supporting circuits. The focus is on explaining the real-world implementation of complete digital systems. In doing so, the reader is prepared to immediately begin design and implementation work. This work serves as a bridge to take readers from the theoretical world to the everyday design world, where solutions must be complete to be successful.
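
As a small illustration of the kind of automaton the description alludes to (invented for this page, not an example from the book), here is a Moore machine that detects the serial bit pattern 101, the sort of specification that maps directly onto a small synchronous circuit:

```python
# States encode how much of "101" has been seen; output 1 when S3 is reached.
TRANSITIONS = {  # (state, input_bit) -> next_state
    ("S0", 0): "S0", ("S0", 1): "S1",
    ("S1", 0): "S2", ("S1", 1): "S1",
    ("S2", 0): "S0", ("S2", 1): "S3",  # S3: "101" just seen
    ("S3", 0): "S2", ("S3", 1): "S1",
}
OUTPUT = {"S0": 0, "S1": 0, "S2": 0, "S3": 1}

def run(bits):
    state, outs = "S0", []
    for b in bits:
        state = TRANSITIONS[(state, b)]
        outs.append(OUTPUT[state])
    return outs

print(run([1, 0, 1, 0, 1]))  # -> [0, 0, 1, 0, 1] (overlapping matches detected)
```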

Architecture and Design of Distributed Embedded Systems - IFIP WG10.3/WG10.4/WG10.5 International Workshop on Distributed and Parallel Embedded Systems (DIPES 2000) October 18-19, 2000, Schloss Eringerfeld, Germany (Hardcover, 2001 ed.)
Bernd Kleinjohann
R4,489 Discovery Miles 44 890 Ships in 10 - 15 working days

Due to the decreasing production costs of IT systems, applications that formerly had to be realised as expensive PCBs can now be realised as a system-on-chip. Furthermore, low-cost broadband communication media are available, both for wide-area communication and for the realisation of local distributed systems. Typically the market requires IT systems that realise a set of specific features for the end user in a given environment, so-called embedded systems. Some examples of such embedded systems are control systems in cars, airplanes, houses or plants, information and communication devices like digital TV and mobile phones, and autonomous systems like service or edutainment robots. For the design of embedded systems, the designer has to tackle three major aspects: the application itself, including the man-machine interface; the (target) architecture of the system, including all functional and non-functional constraints; and the design methodology, including modelling, specification, synthesis, test and validation. The last two points are a major focus of this book. This book documents the high-quality approaches and results that were presented at the International Workshop on Distributed and Parallel Embedded Systems (DIPES 2000), which was sponsored by the International Federation for Information Processing (IFIP) and organised by IFIP working groups WG10.3, WG10.4 and WG10.5. The workshop took place on October 18-19, 2000, in Schloss Eringerfeld near Paderborn, Germany. Architecture and Design of Distributed Embedded Systems is organised similarly to the workshop. Chapters 1 and 4 (Methodology I and II) deal with different modelling and specification paradigms and the corresponding design methodologies. Generic system architectures for different classes of embedded systems are presented in Chapter 2. In Chapter 3, several design environments for the support of specific design methodologies are presented. Problems concerning test and validation are discussed in Chapter 5. The last two chapters cover distribution and communication aspects (Chapter 6) and synthesis techniques for embedded systems (Chapter 7). This book is essential reading for computer science researchers and application developers.

Design for Testability, Debug and Reliability - Next Generation Measures Using Formal Techniques (Hardcover, 1st ed. 2021)
Sebastian Huhn, Rolf Drechsler
R3,119 Discovery Miles 31 190 Ships in 10 - 15 working days

This book introduces several novel approaches that pave the way for the next generation of integrated circuits, which can be successfully and reliably integrated even in safety-critical applications. The authors describe new measures to address the rising challenges in the field of design for testability, debug, and reliability, as strictly required for state-of-the-art circuit designs. In particular, the book combines formal techniques, such as Boolean satisfiability (SAT) solving and Bounded Model Checking (BMC), to address the growing challenges of test data volume and test application time, as well as the required reliability. All methods are discussed in detail and evaluated extensively against industry-relevant benchmark candidates. All measures have been integrated into a common framework, which implements standardized software/hardware interfaces.
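
As a toy version of the SAT idea in a test context (circuit, fault and method invented for illustration; a real flow such as the book describes would hand the formula to a SAT solver rather than enumerating), a test pattern for a stuck-at fault is any satisfying assignment of the miter of the good and faulty circuits:

```python
from itertools import product

def good(a, b, c):
    # Reference circuit: (a AND b) OR c.
    return (a and b) or c

def faulty(a, b, c):
    # Same circuit with the AND gate's output stuck at 0.
    return False or c

# "Solve" good(x) XOR faulty(x) by exhaustive enumeration; any satisfying
# assignment is an input pattern that exposes the fault.
tests = [bits for bits in product([False, True], repeat=3)
         if good(*bits) != faulty(*bits)]
print(tests)  # -> [(True, True, False)]
```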

The Architecture of Information - Architecture, Interaction Design and the Patterning of Digital Information (Hardcover)
Martyn Dade-Robertson
R5,825 Discovery Miles 58 250 Ships in 12 - 19 working days

This book looks at relationships between the organisation of physical objects in space and the organisation of ideas. Historical, philosophical, psychological and architectural knowledge are united to develop an understanding of the relationship between information and its representation. Despite its potential to break the mould, digital information has relied on metaphors from a pre-digital era. In particular, architectural ideas have pervaded discussions of digital information, from the urbanisation of cyberspace in science fiction, through to the adoption of spatial visualisations in the design of graphical user interfaces. This book tackles:
* the historical importance of physical places to the organisation and expression of knowledge
* the limitations of using the physical organisation of objects as the basis for systems of categorisation and taxonomy
* the emergence of digital technologies and the 20th century's new conceptual understandings of knowledge and its organisation
* the concept of disconnecting the storage of information objects from their presentation and retrieval
* ideas surrounding 'semantic space'
* the realities of the types of user interface which now dominate modern computing.

VHDL: A logic synthesis approach (Hardcover, 1997 ed.)
D. Naylor, S. Jones
R4,552 Discovery Miles 45 520 Ships in 10 - 15 working days

This book is structured in a practical, example-driven manner. The use of VHDL for constructing logic synthesisers is one of the aims of the book; the second is the application of the tools to the design process. Worked examples, questions and answers are provided, together with the dos and don'ts of good practice. An appendix on logic design and the source code are available free of charge over the Internet.

A High Performance Architecture for Prolog (Hardcover, 1990 ed.)
T.P. Dobry
R3,008 Discovery Miles 30 080 Ships in 10 - 15 working days

Artificial Intelligence is entering the mainstream of computer applications, and as techniques are developed and integrated into a wide variety of areas they are beginning to tax the processing power of conventional architectures. To meet this demand, specialized architectures providing support for the unique features of symbolic processing languages are emerging. The goal of the research presented here is to show that an architecture specialized for Prolog can achieve a ten-fold improvement in performance over conventional, general-purpose architectures. This book presents such an architecture for high-performance execution of Prolog programs. The architecture is based on the abstract machine description introduced by David H.D. Warren, known as the Warren Abstract Machine (WAM). The execution model of the WAM is described and extended to provide a complete Instruction Set Architecture (ISA) for Prolog known as the PLM. This ISA is then realized in a microarchitecture and finally in a hardware design. The work described here represents one of the first efforts to implement the WAM model in hardware. The approach taken is that of direct implementation of the high-level WAM instruction set in hardware, resulting in a CISC-style architecture.

Hardware Protection through Obfuscation (Hardcover, 1st ed. 2017)
Domenic Forte, Swarup Bhunia, Mark M. Tehranipoor
R4,716 Discovery Miles 47 160 Ships in 12 - 19 working days

This book introduces readers to the various threats faced by today's integrated circuits (ICs) and systems during design and fabrication. The authors discuss key issues, including illegal manufacturing of ICs ("IC overproduction"), insertion of malicious circuits (referred to as "Hardware Trojans") that cause in-field chip/system malfunction, and reverse engineering and piracy of hardware intellectual property (IP). The authors provide a timely discussion of these threats, along with techniques for IC protection based on hardware obfuscation, which makes reverse engineering an IC design infeasible for adversaries and untrusted parties with any reasonable amount of resources. This exhaustive study includes a review of the hardware obfuscation methods developed at each level of abstraction (RTL, gate, and layout) for conventional IC manufacturing, new forms of obfuscation for emerging integration strategies (split manufacturing, 2.5D ICs, and 3D ICs), and the on-chip infrastructure needed for secure exchange of obfuscation keys, arguably the most critical element of hardware obfuscation.

Paraconsistent Intelligent-Based Systems - New Trends in the Applications of Paraconsistency (Hardcover, 2015 ed.)
Jair Minoro Abe
R4,494 R3,636 Discovery Miles 36 360 Save R858 (19%) Ships in 12 - 19 working days

This book presents some of the latest applications of new theories based on the concept of paraconsistency and correlated topics in informatics, such as pattern recognition (bioinformatics), robotics, decision-making themes, and sample size. Each chapter is self-contained, and an introductory chapter covering the logic-theoretical basis is also included. The aim of the text is twofold: to serve as an introductory text on the theories and applications of new logics, and as a textbook for undergraduate or graduate-level courses in AI. Today AI frequently has to cope with problems of vagueness and of incomplete and conflicting (inconsistent) information. One of the most notable formal theories for addressing them is paraconsistent (paracomplete and non-alethic) logic.
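
A minimal sketch of the paraconsistent intuition (thresholds and names invented, and only loosely in the spirit of the annotated logics this line of work builds on): each proposition carries independent degrees of belief and disbelief, so contradictory evidence is classified and handled rather than trivializing the system.

```python
def classify(belief: float, disbelief: float, thresh: float = 0.5) -> str:
    # Unlike classical logic, where one contradiction entails everything,
    # conflicting evidence here is simply labelled and can be reasoned about.
    if belief >= thresh and disbelief >= thresh:
        return "inconsistent"   # strong evidence both for and against
    if belief < thresh and disbelief < thresh:
        return "paracomplete"   # not enough evidence either way
    return "true" if belief >= thresh else "false"

print(classify(0.9, 0.8))  # inconsistent
print(classify(0.9, 0.1))  # true
print(classify(0.2, 0.1))  # paracomplete
```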

Twenty Five Years of Constructive Type Theory (Hardcover)
Giovanni Sambin, Jan M Smith
R2,905 Discovery Miles 29 050 Ships in 12 - 19 working days

Per Martin-Löf's work on the development of constructive type theory has been of huge significance in the fields of logic and the foundations of mathematics. It is also of broader philosophical significance, and has important applications in areas such as computing science and linguistics. This volume draws together contributions from researchers whose work builds on the theory developed by Martin-Löf over the last twenty-five years. As well as celebrating the anniversary of the birth of the subject it covers many of the diverse fields which are now influenced by type theory. It is an invaluable record of areas of current activity, but also contains contributions from N. G. de Bruijn and William Tait, both important figures in the early development of the subject. Also published for the first time is one of Per Martin-Löf's earliest papers.

The Interaction of Compilation Technology and Computer Architecture (Hardcover, 1994 ed.)
David J. Lilja, Peter L. Bird
R3,046 Discovery Miles 30 460 Ships in 10 - 15 working days

In brief summary, the following results were presented in this work:
* A linear-time approach was developed to find register requirements for any specified CS schedule or filled MRT.
* An algorithm was developed for finding register requirements for any kernel whose dependence graph is acyclic and has no data reuse, on machines with depth-independent instruction templates.
* We presented an efficient method of estimating register requirements as a function of pipeline depth.
* We developed a technique for efficiently finding bounds on register requirements as a function of pipeline depth.
* We presented experimental data to verify these new techniques.
* We discussed some interesting design points for register file size on a number of different architectures.

Quality-Driven SystemC Design (Hardcover, 2010 ed.)
Daniel Grosse, Rolf Drechsler
R2,979 Discovery Miles 29 790 Ships in 10 - 15 working days

A quality-driven design and verification flow for digital systems is developed and presented in Quality-Driven SystemC Design. Two major enhancements characterize the new flow: first, dedicated verification techniques are integrated which target the different levels of abstraction; second, each verification technique is complemented by an approach to measure the achieved verification quality. The new flow distinguishes three levels of abstraction (namely system level, top level and block level) and can be incorporated into existing approaches. After reviewing the preliminary concepts, the following chapters consider the three levels for modeling and verification in detail. At each level the verification quality is measured. In summary, following the new design and verification flow yields a high overall quality.

Physical Assurance - For Electronic Devices and Systems (Hardcover, 1st ed. 2021)
Navid Asadizanjani, Mir Tanjidur Rahman, Mark Tehranipoor
R3,125 Discovery Miles 31 250 Ships in 10 - 15 working days

This book provides readers with a comprehensive introduction to physical inspection-based approaches for electronics security. The authors explain the principles of physical inspection techniques including invasive, non-invasive and semi-invasive approaches and how they can be used for hardware assurance, from IC to PCB level. Coverage includes a wide variety of topics, from failure analysis and imaging, to testing, machine learning and automation, reverse engineering and attacks, and countermeasures.

Worst-Case Execution Time Aware Compilation Techniques for Real-Time Systems (Hardcover, 2011)
Paul Lokuciejewski, Peter Marwedel
R4,507 Discovery Miles 45 070 Ships in 10 - 15 working days

For real-time systems, the worst-case execution time (WCET) is the key objective to be considered. Traditionally, code for real-time systems is generated without taking this objective into account and the WCET is computed only after code generation. Worst-Case Execution Time Aware Compilation Techniques for Real-Time Systems presents the first comprehensive approach integrating WCET considerations into the code generation process. Based on the proposed reconciliation between a compiler and a timing analyzer, a wide range of novel optimization techniques is provided. Among others, the techniques cover source code and assembly level optimizations, exploit machine learning techniques and address the design of modern systems that have to meet multiple objectives.

Using these optimizations, the WCET of real-time applications can be reduced by about 30% to 45% on average. This opens opportunities for decreasing clock speeds, costs and energy consumption of embedded processors. The proposed techniques can be used for all types of real-time systems, including automotive and avionics IT systems.

SystemVerilog Assertions and Functional Coverage - Guide to Language, Methodology and Applications (Hardcover, 2014 ed.)
Ashok B. Mehta
R4,748 Discovery Miles 47 480 Ships in 12 - 19 working days

This book provides a hands-on, application-oriented guide to the language and methodology of both SystemVerilog Assertions and SystemVerilog Functional Coverage. Readers will benefit from the step-by-step approach to functional hardware verification, which will enable them to uncover hidden and hard-to-find bugs, point directly to the source of a bug, model complex timing checks in a clean and easy way, and objectively answer the question 'have we functionally verified everything?'. Written by a professional end-user of both SystemVerilog Assertions and SystemVerilog Functional Coverage, this book explains each concept with easy-to-understand examples, simulation logs and applications derived from real projects. Readers will be empowered to tackle the modeling of complex checkers for functional verification, thereby drastically reducing their time to design and debug.
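
As a rough software analogue of the kind of timing check such an assertion expresses (written in Python since this page carries no HDL; the signal names, window and trace are invented), the property "every req is followed by an ack within 3 cycles" can be checked over a simulation trace, alongside a functional-coverage-style count of how often the property was exercised:

```python
def check_req_ack(trace, window=3):
    failures, covered = 0, 0
    for i, cycle in enumerate(trace):
        if cycle.get("req"):
            covered += 1  # coverage: the antecedent actually occurred
            if not any(t.get("ack") for t in trace[i + 1 : i + 1 + window]):
                failures += 1
    return failures, covered

trace = [{"req": 1}, {}, {"ack": 1}, {"req": 1}, {}, {}, {}]
print(check_req_ack(trace))  # -> (1, 2): the second req is never acknowledged
```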

System Specification and Design Languages - Selected Contributions from FDL 2010 (Hardcover, 2012 ed.)
Tom J. Kazmierski, Adam Morawiec
R4,371 Discovery Miles 43 710 Ships in 10 - 15 working days

This book brings together a selection of the best papers from the thirteenth edition of the Forum on specification and Design Languages (FDL), which was held in Southampton, UK, in September 2010. FDL is a well-established international forum devoted to the dissemination of research results, practical experiences and new ideas in the application of specification, design and verification languages to the design, modelling and verification of integrated circuits, complex hardware/software embedded systems, and mixed-technology systems.
