PIC32 Microcontrollers and the Digilent chipKIT: Introductory to Advanced Projects will teach you about the architecture of 32-bit processors and the hardware details of the chipKIT development boards, with a focus on the chipKIT MX3 microcontroller development board. Once the basics are covered, the book moves on to describe the MPLAB and MPIDE packages, using the C language for program development. The final part of the book is devoted to project development, applying the techniques learned in earlier chapters to example projects. Each project takes a practical approach, with an in-depth description, a program flow chart, a block diagram, a circuit diagram, a full program listing and a follow-up on testing and further development. With this book you will learn:
- State-of-the-art PIC32 32-bit microcontroller architecture
- How to program 32-bit PIC microcontrollers using MPIDE, MPLAB and the C language (see the sketch below)
- Core features of the chipKIT series development boards
- How to develop simple projects using the chipKIT MX3 development board and Pmod interface cards
- How to develop advanced projects using the chipKIT MX3 development boards
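By way of flavour, here is a minimal MPIDE-style blink sketch in C for a chipKIT board. It is an illustrative placeholder rather than one of the book's projects; PIN_LED1 is assumed to be the on-board LED constant provided by the chipKIT board support files.

    // Minimal MPIDE-style blink sketch (illustrative; not from the book).
    // PIN_LED1 is assumed to be the on-board LED pin defined by the chipKIT
    // board definition; substitute the correct pin number for your board.
    void setup() {
        pinMode(PIN_LED1, OUTPUT);       // configure the LED pin as an output
    }

    void loop() {
        digitalWrite(PIN_LED1, HIGH);    // LED on
        delay(500);                      // wait 500 ms
        digitalWrite(PIN_LED1, LOW);     // LED off
        delay(500);
    }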
This book introduces state-of-the-art verification techniques for real-time embedded systems, based on the inverse method for parametric timed automata. It reviews popular formalisms for the specification and verification of timed concurrent systems, in particular timed automata as well as several extensions such as timed automata equipped with stopwatches, linear hybrid automata and affine hybrid automata. The inverse method is introduced, and its benefits for guaranteeing robustness in real-time systems are shown. It is then shown how iterating the inverse method can solve the good parameters problem for parametric timed automata by computing a behavioral cartography of the system. Different extensions are proposed, particularly for hybrid systems, together with applications to scheduling problems using timed automata with stopwatches. Various examples, both from the literature and from industry, illustrate the techniques throughout the book. Various parametric verifications are performed, in particular of abstractions of a memory circuit sold by the chipset manufacturer STMicroelectronics, as well as of the prospective flight control system of the next generation of spacecraft designed by ASTRIUM Space Transportation. Contents: 1. Parametric Timed Automata. 2. The Inverse Method for Parametric Timed Automata. 3. The Inverse Method in Practice: Application to Case Studies. 4. Behavioral Cartography of Timed Automata. 5. Parameter Synthesis for Hybrid Automata. 6. Application to the Robustness Analysis of Scheduling Problems. 7. Conclusion and Perspectives. About the authors: Étienne André is Associate Professor in the Laboratoire d'Informatique de Paris Nord at the University of Paris 13 (Sorbonne Paris Cité), France. His current research interests focus on the verification of real-time systems. Romain Soulat is currently completing his PhD at the LSV laboratory at ENS Cachan, France, focusing on the modeling and verification of hybrid temporal systems.
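In rough terms (an illustrative summary in generic notation, not a quotation from the book), the guarantee provided by the inverse method can be stated as follows: given a parametric timed automaton A and a reference parameter valuation pi_0, it synthesizes a constraint K_0 on the parameters such that

    \pi_0 \models K_0
    \quad\text{and}\quad
    \forall \pi \models K_0 :\;
    \mathit{Traces}\bigl(A[\pi]\bigr) \;=\; \mathit{Traces}\bigl(A[\pi_0]\bigr)

that is, every parameter valuation satisfying K_0 yields the same time-abstract trace set as the reference valuation, which is the robustness guarantee mentioned above.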
This Expert Guide gives you the techniques and technologies in embedded multicore needed to optimally design and implement your embedded system. Written by experts with a solutions focus, this encyclopedic reference is an indispensable aid to tackling the day-to-day problems encountered when building and managing multicore embedded systems. Following an embedded system design path from start to finish, our team of experts takes you from architecture, through hardware implementation, to software programming and debug. With this book you will learn:
- What motivates multicore
- The architectural options and tradeoffs; when to use what
- How to deal with the unique hardware challenges that multicore presents
- How to manage the software infrastructure in a multicore environment
- How to write effective multicore programs (see the sketch below)
- How to port legacy code into a multicore system and partition legacy software
- How to optimize both the system and software
- The particular challenges of debugging multicore hardware and software
- Examples demonstrating timeless implementation details
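As a hedged illustration of the "effective multicore programs" point (a generic POSIX-threads work-partitioning sketch, not code from the book):

    /* Illustrative sketch: partitioning a loop across POSIX threads, one
     * common way to exploit multiple cores in C. Not from the book. */
    #include <pthread.h>
    #include <stdio.h>

    #define N_THREADS 4
    #define N         1000000

    static double data[N];
    static double partial[N_THREADS];

    static void *worker(void *arg) {
        long id = (long)arg;                 /* thread index 0..N_THREADS-1 */
        long lo = id * (N / N_THREADS);
        long hi = (id + 1) * (N / N_THREADS);
        double sum = 0.0;
        for (long i = lo; i < hi; i++)
            sum += data[i];
        partial[id] = sum;                   /* each thread writes its own slot */
        return NULL;
    }

    int main(void) {
        pthread_t t[N_THREADS];
        for (long i = 0; i < N; i++) data[i] = 1.0;
        for (long i = 0; i < N_THREADS; i++)
            pthread_create(&t[i], NULL, worker, (void *)i);
        double total = 0.0;
        for (long i = 0; i < N_THREADS; i++) {
            pthread_join(t[i], NULL);
            total += partial[i];
        }
        printf("sum = %f\n", total);
        return 0;
    }

Keeping each thread's partial result in its own slot avoids sharing and locking on the hot path, a recurring theme when writing multicore code.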
An epic account of the decades-long battle to control what has emerged as the world's most critical resource—microchip technology—with the United States and China increasingly in conflict. You may be surprised to learn that microchips are the new oil—the scarce resource on which the modern world depends. Today, military, economic, and geopolitical power are built on a foundation of computer chips. Virtually everything—from missiles to microwaves, smartphones to the stock market—runs on chips. Until recently, America designed and built the fastest chips and maintained its lead as the #1 superpower. Now, America's edge is slipping, undermined by competitors in Taiwan, Korea, Europe, and, above all, China. Today, as Chip War reveals, China, which spends more money each year importing chips than it spends importing oil, is pouring billions into a chip-building initiative to catch up to the US. At stake is America's military superiority and economic prosperity. Economic historian Chris Miller explains how the semiconductor came to play a critical role in modern life and how the U.S. became dominant in chip design and manufacturing and applied this technology to military systems. America's victory in the Cold War and its global military dominance stem from its ability to harness computing power more effectively than any other power. But here, too, China is catching up, with its chip-building ambitions and military modernization going hand in hand. America has let key components of the chip-building process slip out of its grasp, contributing not only to a worldwide chip shortage but also to a new Cold War with a superpower adversary that is desperate to bridge the gap. Illuminating, timely, and fascinating, Chip War shows that, to make sense of the current state of politics, economics, and technology, we must first understand the vital role played by chips.
"Models of Computation for Heterogeneous Embedded Systems" presents a model of computation for heterogeneous embedded systems called DFCharts. It targets heterogeneous systems by combining finite state machines (FSM) with synchronous dataflow graphs (SDFG). FSMs are connected in the same way as in Argos (a Statecharts variant with purely synchronous semantics) using three operators: synchronous parallel, refinement and hiding. The fourth operator, called asynchronous parallel, is introduced in DFCharts to connect FSMs with SDFGs. In the formal semantics of DFCharts, the operation of an SDFG is represented as an FSM. Using this representation, SDFGs are merged with FSMs so that the behaviour of a complete DFCharts specification can be expressed as a single, flat FSM. This allows system properties to be verified globally. The practical application of DFCharts has been demonstrated by linking it to widely used system-level languages Java, Esterel and SystemC.
This book offers readers broad coverage of techniques to model, verify and validate the behavior and performance of complex distributed embedded systems. The authors attempt to bridge the gap between the three disciplines of model-based design, real-time analysis and model-driven development, for a better understanding of the ways in which new development flows can be constructed, going from system-level modeling to the correct and predictable generation of a distributed implementation, leveraging current and future research results.
The Heinz Nixdorf Museum Forum (HNF) is the world's largest computer museum and is dedicated to portraying the past, present and future of information technology. In the "Year of Informatics 2006" the HNF was particularly keen to examine the history of this still quite young discipline. The short-lived nature of information technologies means that individuals, inventions, devices, institutes and companies "age" more rapidly than in many other specialties. And in the nature of things the group of computer pioneers from the early days is growing smaller all the time. To supplement a planned new exhibit on "Software and Informatics" at the HNF, the idea arose of recording the history of informatics in an accompanying publication. My search for suitable sources and authors very quickly came up with the right answer, the very first name in Germany: Friedrich L. Bauer, Professor Emeritus of Mathematics at the TU in Munich, one of the fathers of informatics in Germany and for decades the indefatigable author of the "Historical Notes" column of the journal Informatik Spektrum. Friedrich L. Bauer was already the author of two works on the history of informatics, published in different decades and in different books. Both of them are notable for their knowledgeable, extremely comprehensive and yet compact style. My obvious course was to motivate this author to amalgamate, supplement and illustrate his previous work.
Wafer-scale integration has long been the dream of system designers. Instead of chopping a wafer into a few hundred or a few thousand chips, one would just connect the circuits on the entire wafer. What an enormous capability wafer-scale integration would offer: all those millions of circuits connected by high-speed on-chip wires. Unfortunately, the best known optical systems can provide suitably fine resolution only over an area much smaller than a whole wafer. There is no known way to pattern a whole wafer with transistors and wires small enough for modern circuits. Statistical defects present a firmer barrier to wafer-scale integration. Flaws appear regularly in integrated circuits; the larger the circuit area, the more probable there is a flaw. If such flaws were the result only of dust one might reduce their numbers, but flaws are also the inevitable result of small scale. Each feature on a modern integrated circuit is carved out by only a small number of photons in the lithographic process. Each transistor gets its electrical properties from only a small number of impurity atoms in its tiny area. Inevitably, the quantized nature of light and the atomic nature of matter produce statistical variations in both the number of photons defining each tiny shape and the number of atoms providing the electrical behavior of tiny transistors. No known way exists to eliminate such statistical variation, nor may any be possible.
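The area/defect relationship mentioned here is often captured by the classical Poisson yield model (a standard textbook formula, quoted as general background rather than from this preface):

    % Poisson yield model (illustrative notation):
    % Y   = probability that a circuit of area A is defect-free,
    % D_0 = average defect density per unit area.
    Y \;=\; e^{-A\,D_{0}}

Because yield falls exponentially with area, a circuit covering an entire wafer is almost certain to contain flaws.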
This book provides the foundations for understanding hardware security and trust, which have become major concerns for national security over the past decade. Coverage includes security and trust issues in all types of electronic devices and systems such as ASICs, COTS, FPGAs, microprocessors/DSPs, and embedded systems. This serves as an invaluable reference to the state-of-the-art research that is of critical significance to the security of, and trust in, modern society's microelectronic-supported infrastructures.
This book is intended as a system engineer's compendium, explaining the dependencies and technical interactions between the onboard computer hardware, the onboard software and the spacecraft operations from ground. After a brief introduction to how all three fields develop over the spacecraft engineering phases, each of the main topics is treated in depth in a separate part. The features of today's onboard computers are explained in terms of their historic evolution over the decades, from the early days of spaceflight up to today. The latest system-on-chip processor architectures are treated, as well as all major onboard computer components. After the onboard computer hardware, the corresponding software is treated in a separate part. Both the static and the dynamic software architecture are covered, and development technologies as well as software verification approaches are included. Following these two parts on the onboard architecture, the last part covers the concepts of spacecraft operations from ground. This includes the nominal operations concepts, the redundancy concept and the topic of failure detection, isolation and recovery. The baseline examples in the book are taken from the domain of satellites and deep space probes. The principles and the many cited standards on spacecraft commanding, hardware and software, however, also apply to other space applications such as launchers. The book is equally suitable for students and for system engineers in the space industry.
Industrial machines, automobiles, airplanes and robots are among the myriad possible hosts of embedded systems. The author researches robotic vehicles and remotely operated vehicles (ROVs), especially Underwater Robotic Vehicles (URVs), used for a wide range of applications such as exploring oceans, monitoring environments, and supporting operations in extreme environments. Embedded Mechatronics System Design for Uncertain Environments has been prepared for those who seek to easily develop and design embedded systems for control purposes in robotic vehicles. It reflects the multidisciplinary nature of embedded systems, from initial concepts (mechanical and electrical), through modelling and simulation (mathematical relationships) and the creation of a graphical user interface (software), to actual implementation (mechatronics system testing). The author proposes new solutions for the prototyping, simulation, testing, and design of real-time systems using standard PC hardware, including Linux (R), Raspbian (R), ARDUINO (R), and MATLAB (R) xPC Target.
The Fibre Channel Association is an international organization devoted to promoting the Fibre Channel standard and educating users about it.
I love virtual machines (VMs) and I have done for a long time. If that makes me "sad" or an "anorak," so be it. I love them because they are so much fun, as well as being so useful. They have an element of original sin (writing assembly programs and being in control of an entire machine), while still being able to claim that one is being a respectable member of the community (being structured, modular, high-level, object-oriented, and so on). They also allow one to design machines of one's own, unencumbered by the restrictions of a particular physical processor (at least, until one starts optimising it for some processor or other). I have been building virtual machines, on and off, since 1980 or thereabouts. It has always been something of a hobby for me; it has also turned out to be a technique of great power and applicability. I hope to continue working on them, perhaps on some of the ideas outlined in the last chapter (I certainly want to do some more work with register-based VMs and concurrency). I originally wanted to write the book from a purely semantic viewpoint.
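For readers who have not met one before, the heart of a virtual machine is a fetch-decode-execute loop over bytecode. The following is a minimal, hedged sketch in C of such a loop for a tiny stack machine (illustrative only, not an excerpt from the book):

    /* A minimal bytecode-VM dispatch loop: a tiny stack machine with PUSH,
     * ADD and HALT opcodes. Illustrative only. */
    #include <stdio.h>

    enum { OP_PUSH, OP_ADD, OP_HALT };

    static int run(const int *code) {
        int stack[64];
        int sp = 0;                        /* stack pointer */
        for (int pc = 0; ; ) {             /* fetch-decode-execute loop */
            switch (code[pc++]) {
            case OP_PUSH:
                stack[sp++] = code[pc++];  /* operand follows the opcode */
                break;
            case OP_ADD: {
                int b = stack[--sp];
                int a = stack[--sp];
                stack[sp++] = a + b;
                break;
            }
            case OP_HALT:
                return stack[sp - 1];      /* result is on top of the stack */
            }
        }
    }

    int main(void) {
        const int program[] = { OP_PUSH, 2, OP_PUSH, 40, OP_ADD, OP_HALT };
        printf("%d\n", run(program));      /* prints 42 */
        return 0;
    }

A register-based VM, mentioned above, replaces the stack with a small array of virtual registers and encodes register indices in each instruction.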
This year, the IFIP Working Conference on Distributed and Parallel Embedded Systems (DIPES 2008) is held as part of the IFIP World Computer Congress, held in Milan on September 7-10, 2008. The embedded systems world has a great deal of experience with parallel and distributed computing. Many embedded computing systems require the high performance that can be delivered by parallel computing. Parallel and distributed computing are often the only ways to deliver adequate real-time performance at low power levels. This year's conference attracted 30 submissions, of which 21 were accepted. Prof. Jörg Henkel of the University of Karlsruhe graciously contributed a keynote address on embedded computing and reliability. We would like to thank all of the program committee members for their diligence. Wayne Wolf, Bernd Kleinjohann, and Lisa Kleinjohann. Acknowledgements: We would like to thank all people involved in the organization of the IFIP World Computer Congress 2008, especially the IPC Co-Chairs Judith Bishop and Ivo De Lotto, the Organization Chair Giulio Occhini, as well as the Publications Chair John Impagliazzo. Further thanks go to the authors for their valuable contributions to DIPES 2008. Last but not least we would like to acknowledge the considerable amount of work and enthusiasm spent by our colleague Claudius Stern in preparing the proceedings of DIPES 2008. He made it possible to produce them in their current professional and homogeneous style.
Automated and semi-automated manipulation of so-called labelled transition systems has become an important means of discovering flaws in software and hardware systems. Process algebra has been developed to express such labelled transition systems algebraically, which enhances the ways of manipulating them by means of equational logic and term rewriting. The theory of process algebra has developed rapidly over the last twenty years, and verification tools have been developed on the basis of process algebra, often in cooperation with techniques related to model checking. This textbook gives a thorough introduction to the basics of process algebra and its applications.
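To give a flavour of the algebraic style (a standard fragment in generic notation, not necessarily the book's exact axiom names), the core axioms of basic process algebra over alternative composition + and sequential composition . are:

    % Axioms of Basic Process Algebra (BPA), illustrative notation:
    x + y = y + x                              % commutativity of choice
    (x + y) + z = x + (y + z)                  % associativity of choice
    x + x = x                                  % idempotence of choice
    (x + y) \cdot z = x \cdot z + y \cdot z    % right distributivity over choice
    (x \cdot y) \cdot z = x \cdot (y \cdot z)  % associativity of sequencing

Equational reasoning with axioms like these, mechanized by term rewriting, is what allows two process terms, and hence two labelled transition systems, to be proved behaviourally equivalent.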
Real-time systems are of importance to a large number of university laboratories and research institutes worldwide, and without the proper integration of real-time into distributed computing, institutions simply could not function. Achieving Real-Time in Distributed Computing: From Grids to Clouds offers over 400 accounts from a wide range of specific research efforts. Major focus is given to the need for methodologies, tools, and architectures for complex distributed systems that address the practical issues of performance guarantees, timed execution, real-time management of resources, synchronized communication under various load conditions, satisfaction of QoS constraints, and dealing with the trade-offs between these aspects.
This book describes hybrid fault tolerance techniques that combine software-based and hardware-based approaches. When associated with applications implemented in embedded processors, these techniques reduce overall performance degradation and increase error detection. Coverage begins with an extensive discussion of the current state of the art in fault tolerance techniques. The authors then discuss the best trade-off between software-based and hardware-based techniques and introduce novel hybrid techniques. The proposed techniques increase existing fault detection rates to up to 100%, while maintaining low overheads in area and application execution time.
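As a hedged illustration of what a purely software-based detection mechanism can look like (a generic duplication-with-comparison sketch in C, not the authors' specific technique):

    /* Illustrative software-based fault detection: duplicate a computation
     * and compare the results. A mismatch indicates a soft error in one of
     * the copies. Not the authors' exact method. */
    #include <stdio.h>
    #include <stdlib.h>

    static int compute(int x)     { return 3 * x + 7; }  /* original copy   */
    static int compute_dup(int x) { return 3 * x + 7; }  /* duplicated copy */

    int checked_compute(int x) {
        int a = compute(x);
        int b = compute_dup(x);
        if (a != b) {              /* results disagree: a fault was detected */
            fprintf(stderr, "soft error detected, aborting\n");
            abort();               /* or trigger recovery / re-execution     */
        }
        return a;
    }

    int main(void) {
        printf("%d\n", checked_compute(5));
        return 0;
    }

Hybrid techniques of the kind introduced in the book move part of this duplication and comparison into hardware to cut the execution-time overhead.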
This textbook serves as an introduction to the subject of embedded systems design, using microcontrollers as core components. It develops concepts from the ground up, covering the development of embedded systems technology, architectural and organizational aspects of controllers and systems, processor models, and peripheral devices. Since microprocessor-based embedded systems tightly blend hardware and software components in a single application, the book also introduces the subjects of data representation formats, data operations, and programming styles. The practical component of the book is tailored around the architecture of a widely used
Logic Synthesis for Low Power VLSI Designs presents a systematic and comprehensive treatment of power modeling and optimization at the logic level. More precisely, this book provides a detailed presentation of methodologies, algorithms and CAD tools for power modeling, estimation and analysis, synthesis and optimization at the logic level. Logic Synthesis for Low Power VLSI Designs contains detailed descriptions of technology-dependent logic transformations and optimizations, technology decomposition and mapping, and post-mapping structural optimization techniques for low power. It also emphasizes the trade-off techniques for two-level and multi-level logic circuits that involve power dissipation and circuit speed, in the hope that the readers can better understand the issues and ways of achieving their power dissipation goal while meeting the timing constraints. Logic Synthesis for Low Power VLSI Designs is written for VLSI design engineers, CAD professionals, and students who have had a basic knowledge of CMOS digital design and logic synthesis.
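For orientation, the power being modeled and optimized at the logic level is dominated by dynamic switching power, commonly estimated with the standard formula below (quoted as general background, not as the book's specific model):

    % Average dynamic (switching) power of a CMOS circuit, illustrative notation:
    % \alpha_i = switching activity of node i, C_i = capacitance of node i,
    % V_{dd}   = supply voltage, f = clock frequency.
    P_{\mathrm{dyn}} \;=\; \tfrac{1}{2}\, V_{dd}^{2}\, f \sum_{i} \alpha_{i}\, C_{i}

Logic-level optimizations of the kind described in the book mostly target the switching activity and capacitive load terms while respecting timing constraints.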
The major thrust of this book is the realisation of an all-optical computer. To that end it discusses optoelectronic devices and applications, transmission systems, integrated optoelectronic systems and, of course, all-optical computers. The chapters on 'heterostructure light emitting devices' and 'quantum well carrier transport optoelectronic devices' present the most recent advances in device physics, together with modern devices and their applications. The chapter on 'microcavity lasers' is essential to the discussion of present and future developments in solid-state laser physics and technology, and puts into perspective the present state of research into, and the technology of, optoelectronic devices within the context of their use in advanced systems. A significant part of the book deals with problems of propagation in quantum structures. 'Soliton-based switching, gating and transmission systems' presents the basics of controlling the propagation of photons in solids and the use of this control in devices. The chapters on 'optoelectronic processing using smart pixels' and 'all optical computers' are preceded by introductory material in 'fundamentals of quantum structures for optoelectronic devices and systems' and 'linear and nonlinear absorption and reflection in quantum well structures'. It is clear that new architectures will be necessary if we are to fully utilise the potential of electrooptic devices in computing, but even current architectures and structures demonstrate the feasibility of the all-optical computer: one that is possible today.
This book presents the cyber culture of micro, macro, cosmological, and virtual computing. The book shows how these work to formulate, explain, and predict the current processes and phenomena of monitoring and controlling technology in physical and virtual space. The authors posit a basic proposal to transform the description of a function's truth table and a structure's adjacency matrix into a qubit vector, with a focus on memory-driven computing based on the performance of parallel logic operations. The authors offer a metric for the measurement of processes and phenomena in cyberspace, as well as an architecture of logic associative computing for decision-making and big data analysis. The book outlines an innovative theory and practice of design, test, simulation, and diagnosis of digital systems based on the use of a qubit coverage-vector to describe the functional components and structures. The authors provide a description of the technology for SoC HDL-model diagnosis, based on a Test Assertion Blocks Activated Graph. Examples of cyber-physical systems for digital monitoring and cloud management of social objects and transport are proposed. A presented automaton model of cosmological computing explains the cyclical and harmonious evolution of matter-energy essence, as well as the space-time form of the Universe.
DAPSYS (International Conference on Distributed and Parallel Systems) is an international biannual conference series dedicated to all aspects of distributed and parallel computing. DAPSYS 2008, the 7th International Conference on Distributed and Parallel Systems was held in September 2008 in Hungary. Distributed and Parallel Systems: Desktop Grid Computing, based on DAPSYS 2008, presents original research, novel concepts and methods, and outstanding results. Contributors investigate parallel and distributed techniques, algorithms, models and applications; present innovative software tools, environments and middleware; focus on various aspects of grid computing; and introduce novel methods for development, deployment, testing and evaluation. This volume features a special focus on desktop grid computing as well. Designed for a professional audience composed of practitioners and researchers in industry, this book is also suitable for advanced-level students in computer science.
I am very pleased to play even a small part in the publication of this book on the SIGNAL language and its environment POLYCHRONY. I am sure it will be a significant milestone in the development of the SIGNAL language, of synchronous computing in general, and of the dataflow approach to computation. In dataflow, the computation takes place in a producer-consumer network of independent processing stations. Data travels in streams and is transformed as these streams pass through the processing stations (often called filters). Dataflow is an attractive model for many reasons, not least because it corresponds to the way production, transportation, and communication are typically organized in the real world (outside cyberspace). I myself stumbled into dataflow almost against my will. In the mid-1970s, Ed Ashcroft and I set out to design a "super" structured programming language that, we hoped, would radically simplify proving assertions about programs. In the end, we decided that it had to be declarative. However, we also were determined that iterative algorithms could be expressed directly, without circumlocutions such as the use of a tail-recursive function. The language that resulted, which we named LUCID, was much less traditional than we would have liked. LUCID statements are equations in a kind of executable temporal logic that specify the (time) sequences of the variables involved in an iteration.
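To make that last sentence concrete (a generic illustration in plain mathematical notation, not an excerpt from the foreword or from LUCID itself), an iteration such as a running sum can be written declaratively as equations over time-indexed streams:

    % Streams as time-indexed sequences (illustrative notation):
    % x_t = the t-th input value, s_t = the running sum after t inputs.
    s_{0} = 0, \qquad s_{t+1} = s_{t} + x_{t}

Each equation holds for every t; the "program" is just this set of equations, which is the flavour of an executable temporal logic over the sequences of values that the variables of an iteration take.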
No other area of biology has grown as fast and become as relevant over the last decade as virology. It is with no little amount of amazement that, the more we learn about fundamental biological questions and mechanisms of diseases, the more obvious it becomes that viruses permeate all facets of our lives. While on one hand viruses are known to cause acute and chronic, mild and fatal, focal and generalized diseases, on the other hand they are used as tools for gaining an understanding of the structure and function of higher organisms, and as vehicles for carrying protective or curative therapies. The wide scope of approaches to different biological and medical virological questions was well represented by the speakers that participated in this year's Symposium. While the epidemic by the human immunodeficiency virus type 1 continues to spread without hope for much relief in sight, intriguing questions and answers in the area of diagnostics, clinical manifestations and therapeutical approaches to viral infections are unveiled daily. Let us hope that, with the increasing awareness by our society of the role played by viruses, not only as causative agents of diseases but also as models for better understanding basic biological principles, more efforts and resources are placed into their study. Luis M. de la Maza, Irvine, California; Ellena M.
Hardware verification is a hot topic in circuit and system design due to rising circuit complexity. This advanced textbook presents an almost complete overview of techniques for hardware verification. It covers all approaches used in existing tools, such as binary and word-level decision diagrams, symbolic methods for equivalence checking, and temporal logic model checking, and introduces the use of higher-order logic theorem proving for verifying circuit correctness. It enables the reader to understand the advantages and limitations of each technique. Each chapter contains an introduction and a summary as well as a section for the advanced reader. Thus a broad audience is addressed, from beginners in system design to experts.
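To illustrate the equivalence-checking idea in its simplest possible form (a toy exhaustive-simulation sketch in C; real tools rely on decision diagrams, SAT or symbolic methods, as the book describes):

    /* Toy combinational equivalence check (illustrative only): compare two
     * implementations of the same 3-input function over all input vectors. */
    #include <stdio.h>

    static int spec(int a, int b, int c) { return (a & b) | (a & c) | (b & c); } /* majority    */
    static int impl(int a, int b, int c) { return (a & (b | c)) | (b & c); }     /* rewritten   */

    int main(void) {
        for (int v = 0; v < 8; v++) {                /* enumerate all input vectors */
            int a = (v >> 2) & 1, b = (v >> 1) & 1, c = v & 1;
            if (spec(a, b, c) != impl(a, b, c)) {
                printf("counterexample: a=%d b=%d c=%d\n", a, b, c);
                return 1;
            }
        }
        printf("equivalent\n");
        return 0;
    }

Exhaustive simulation explodes exponentially with the number of inputs, which is exactly why the symbolic techniques surveyed in this textbook are needed in practice.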