Grid and cloud computing both facilitate an increase in computing resources through the development of new connections to existing systems. Evolving Developments in Grid and Cloud Computing: Advancing Research contains investigations of grid and cloud evolution, workflow management, and the impact new computing systems have on education and industry. Targeted at both researchers and IT professionals, this book presents current trends and emerging issues in cloud and grid architectures, standards and performance analysis.
This book presents new methods and tools for the integration and simulation of smart devices. The design approach described in this book explicitly accounts for the integration of smart system components and subsystems as a specific constraint. It includes methodologies and EDA tools to enable multi-disciplinary and multi-scale modeling and design, simulation of multi-domain systems, subsystems and components at all levels of abstraction, and system integration and exploration for the optimization of functional and non-functional metrics. By covering theoretical and practical aspects of smart device design, this book targets readers working on or studying hardware/software modelling, component integration and simulation in a variety of roles (system integrators, designers, developers, researchers, teachers, students, etc.). In particular, it is a good introduction for anyone interested in managing heterogeneous components efficiently and effectively across different domains and abstraction levels. People active in smart device development can understand both the current state of practice and future research directions.
* Provides a comprehensive overview of smart systems design, focusing on design challenges and cutting-edge solutions;
* Enables development of a co-simulation and co-design environment that accounts for the peculiarities of the basic subsystems and components to be integrated;
* Describes development of modeling and design techniques, methods and tools that enable multi-domain simulation and optimization at various levels of abstraction and across different technological domains.
With concerns about global energy consumption at an all-time high, improving the energy efficiency of computer networks is becoming an increasingly important topic. Large-Scale Distributed Systems and Energy Efficiency: A Holistic View addresses innovations in technology relating to the energy efficiency of a wide variety of contemporary computer systems and networks. After an introductory overview of the energy demands of current Information and Communications Technology (ICT), individual chapters offer in-depth analyses of such topics as cloud computing, green networking (both wired and wireless), mobile computing, power modeling, the rise of green data centers and high-performance computing, resource allocation, and energy efficiency in peer-to-peer (P2P) computing networks.
* Discusses measurement and modeling of energy consumption
* Includes methods for reducing energy consumption in diverse computing environments
* Features a variety of case studies and examples of energy reduction and assessment
Timely and important, Large-Scale Distributed Systems and Energy Efficiency is an invaluable resource for ways of increasing the energy efficiency of computing systems and networks while simultaneously reducing the carbon footprint.
This book introduces state-of-the-art verification techniques for real-time embedded systems, based on the inverse method for parametric timed automata. It reviews popular formalisms for the specification and verification of timed concurrent systems and, in particular, timed automata as well as several extensions such as timed automata equipped with stopwatches, linear hybrid automata and affine hybrid automata. The inverse method is introduced, and its benefits for guaranteeing robustness in real-time systems are shown. Then, it is shown how an iteration of the inverse method can solve the good parameters problem for parametric timed automata by computing a behavioral cartography of the system. Different extensions are proposed, particularly for hybrid systems and applications to scheduling problems using timed automata with stopwatches. Various examples, both from the literature and from industry, illustrate the techniques throughout the book. Various parametric verifications are performed, in particular of abstractions of a memory circuit sold by the chipset manufacturer STMicroelectronics, as well as of the prospective flight control system of the next generation of spacecraft designed by ASTRIUM Space Transportation. Contents: 1. Parametric Timed Automata. 2. The Inverse Method for Parametric Timed Automata. 3. The Inverse Method in Practice: Application to Case Studies. 4. Behavioral Cartography of Timed Automata. 5. Parameter Synthesis for Hybrid Automata. 6. Application to the Robustness Analysis of Scheduling Problems. 7. Conclusion and Perspectives. About the Authors: Étienne André is Associate Professor in the Laboratoire d'Informatique de Paris Nord at the University of Paris 13 (Sorbonne Paris Cité) in France. His current research interests focus on the verification of real-time systems. Romain Soulat is currently completing his PhD at the LSV laboratory at ENS Cachan in France, focusing on the modeling and verification of hybrid temporal systems.
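To give a feel for what a behavioral cartography is, here is a brute-force sketch of one for a toy timed system. The two-parameter system, the grid sampling and the trace function are illustrative assumptions of ours; the book's inverse method instead works symbolically, computing such tiles as constraints over parametric timed automata.

    # Toy "behavioral cartography": group parameter valuations by the
    # discrete behavior they induce (brute-force sampling stands in for
    # the book's symbolic computation).
    def trace(a, b):
        """Discrete trace of a trivial timed system: event A fires at
        time a, event B at time b; ties fire A first."""
        return "AB" if a <= b else "BA"

    def cartography(grid):
        """Partition valuations into tiles of identical discrete behavior."""
        tiles = {}
        for a, b in grid:
            tiles.setdefault(trace(a, b), []).append((a, b))
        return tiles

    grid = [(a, b) for a in range(1, 4) for b in range(1, 4)]
    for behavior, points in cartography(grid).items():
        print(behavior, points)

Each printed tile is a set of valuations sharing one discrete behavior; the inverse method generalizes a single reference valuation into the constraint describing its whole tile.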
"Models of Computation for Heterogeneous Embedded Systems" presents a model of computation for heterogeneous embedded systems called DFCharts. It targets heterogeneous systems by combining finite state machines (FSM) with synchronous dataflow graphs (SDFG). FSMs are connected in the same way as in Argos (a Statecharts variant with purely synchronous semantics) using three operators: synchronous parallel, refinement and hiding. The fourth operator, called asynchronous parallel, is introduced in DFCharts to connect FSMs with SDFGs. In the formal semantics of DFCharts, the operation of an SDFG is represented as an FSM. Using this representation, SDFGs are merged with FSMs so that the behaviour of a complete DFCharts specification can be expressed as a single, flat FSM. This allows system properties to be verified globally. The practical application of DFCharts has been demonstrated by linking it to widely used system-level languages Java, Esterel and SystemC.
This book offers readers broad coverage of techniques to model, verify and validate the behavior and performance of complex distributed embedded systems. The authors attempt to bridge the gap between the three disciplines of model-based design, real-time analysis and model-driven development, for a better understanding of the ways in which new development flows can be constructed, going from system-level modeling to the correct and predictable generation of a distributed implementation, leveraging current and future research results.
Managing Systems Migrations and Upgrades is the perfect book for technology managers who want a rational guide to evaluating the business aspects of various possible technical solutions. Enterprises today are in the middle of the R&D race for technology leadership, with providers who increasingly need to create markets for new technologies while shortening development, implementation, and life cycles. The cost of the current tempo of technology life cycles is endless change-management controls, organizational chaos, production use of high-risk beta products, and greater potential for failure of existing systems during migration.
The Heinz Nixdorf Museum Forum (HNF) is the world's largest computer museum and is dedicated to portraying the past, present and future of information technology. In the "Year of Informatics 2006" the HNF was particularly keen to examine the history of this still quite young discipline. The short-lived nature of information technologies means that individuals, inventions, devices, institutes and companies "age" more rapidly than in many other specialties. And in the nature of things the group of computer pioneers from the early days is growing smaller all the time. To supplement a planned new exhibit on "Software and Informatics" at the HNF, the idea arose of recording the history of informatics in an accompanying publication. My search for suitable sources and authors very quickly came up with the right answer, the very first name in Germany: Friedrich L. Bauer, Professor Emeritus of Mathematics at the TU in Munich, one of the fathers of informatics in Germany and for decades the indefatigable author of the "Historical Notes" column of the journal Informatik Spektrum. Friedrich L. Bauer was already the author of two works on the history of informatics, published in different decades and in different books. Both of them are notable for their knowledgeable, extremely comprehensive and yet compact style. My obvious course was to motivate this author to amalgamate, supplement and illustrate his previous work.
Wafer-scale integration has long been the dream of system designers. Instead of chopping a wafer into a few hundred or a few thousand chips, one would just connect the circuits on the entire wafer. What an enormous capability wafer-scale integration would offer: all those millions of circuits connected by high-speed on-chip wires. Unfortunately, the best known optical systems can provide suitably fine resolution only over an area much smaller than a whole wafer. There is no known way to pattern a whole wafer with transistors and wires small enough for modern circuits. Statistical defects present a firmer barrier to wafer-scale integration. Flaws appear regularly in integrated circuits; the larger the circuit area, the more probable a flaw. If such flaws were the result only of dust one might reduce their numbers, but flaws are also the inevitable result of small scale. Each feature on a modern integrated circuit is carved out by only a small number of photons in the lithographic process. Each transistor gets its electrical properties from only a small number of impurity atoms in its tiny area. Inevitably, the quantized nature of light and the atomic nature of matter produce statistical variations in both the number of photons defining each tiny shape and the number of atoms providing the electrical behavior of tiny transistors. No known way exists to eliminate such statistical variation, nor may any be possible.
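To make the statistical barrier concrete, here is a back-of-the-envelope Poisson estimate of our own (not a calculation quoted from the book). If a feature is defined by N photons, or a transistor's behavior by N impurity atoms, counting statistics give fluctuations of order \sqrt{N}, so the relative variation is

    \frac{\sigma}{N} = \frac{\sqrt{N}}{N} = \frac{1}{\sqrt{N}}

For instance, a transistor whose threshold is set by roughly 100 dopant atoms sees on the order of 10% device-to-device variation, which no refinement of the process can remove.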
This book provides the foundations for understanding hardware security and trust, which have become major concerns for national security over the past decade. Coverage includes security and trust issues in all types of electronic devices and systems such as ASICs, COTS, FPGAs, microprocessors/DSPs, and embedded systems. This serves as an invaluable reference to the state-of-the-art research that is of critical significance to the security of, and trust in, modern society's microelectronic-supported infrastructures.
This book is intended as a system engineer's compendium, explaining the dependencies and technical interactions between the onboard computer hardware, the onboard software and the spacecraft operations from ground. After a brief introduction on the development in all three fields over the spacecraft engineering phases, each of the main topics is treated in depth in a separate part. The features of today's onboard computers are explained through their historical evolution over the decades, from the early days of spaceflight up to today. The latest system-on-chip processor architectures are treated, as well as all major onboard computer components. After the onboard computer hardware, the corresponding software is treated in a separate part. Both the static and the dynamic software architecture are covered, and development technologies as well as software verification approaches are included. Following these two parts on the onboard architecture, the last part covers the concepts of spacecraft operations from ground. This includes the nominal operations concept, the redundancy concept and the topic of failure detection, isolation and recovery. The baseline examples in the book are taken from the domain of satellites and deep space probes. The principles and many cited standards on spacecraft commanding, hardware and software, however, also apply to other space applications such as launchers. The book is equally suitable for students and for system engineers in the space industry.
Industrial machines, automobiles, airplanes, and robots are among the myriad possible hosts of embedded systems. The author researches robotic vehicles and remotely operated vehicles (ROVs), especially Underwater Robotic Vehicles (URVs), used for a wide range of applications such as exploring oceans, monitoring environments, and supporting operations in extreme environments. Embedded Mechatronics System Design for Uncertain Environments has been prepared for those who seek to easily develop and design embedded systems for control purposes in robotic vehicles. It reflects the multidisciplinarity of embedded systems, from initial concepts (mechanical and electrical) through modelling and simulation (mathematical relationships) and the creation of graphical user interfaces (software) to actual implementation (mechatronics system testing). The author proposes new solutions for the prototyping, simulation, testing, and design of real-time systems using standard PC hardware, including Linux(R), Raspbian(R), Arduino(R), and MATLAB(R) xPC Target.
The Fibre Channel Association is an international organization devoted to educating users about and promoting the Fibre Channel standard.
I love virtual machines (VMs) and I have done for a long time. If that makes me "sad" or an "anorak," so be it. I love them because they are so much fun, as well as being so useful. They have an element of original sin (writing assembly programs and being in control of an entire machine), while still allowing one to claim to be a respectable member of the community (being structured, modular, high-level, object-oriented, and so on). They also allow one to design machines of one's own, unencumbered by the restrictions of a particular physical processor (at least, until one starts optimising it for some processor or other). I have been building virtual machines, on and off, since 1980 or thereabouts. It has always been something of a hobby for me; it has also turned out to be a technique of great power and applicability. I hope to continue working on them, perhaps on some of the ideas outlined in the last chapter (I certainly want to do some more work with register-based VMs and concurrency). I originally wanted to write the book from a purely semantic viewpoint.
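For readers who have never built one, the charm the author describes fits in a few lines: a dispatch loop over a toy instruction set. The opcodes below are our own minimal example, not one of the machines developed in the book.

    # A minimal stack-based VM: (opcode, operand) pairs dispatched in a loop.
    def run(program):
        """Execute the program; return the final stack."""
        stack, pc = [], 0
        while pc < len(program):
            op, arg = program[pc]
            pc += 1
            if op == "PUSH":
                stack.append(arg)
            elif op == "ADD":
                b, a = stack.pop(), stack.pop()
                stack.append(a + b)
            elif op == "JNZ":          # jump to arg if top of stack non-zero
                if stack.pop() != 0:
                    pc = arg
            elif op == "HALT":
                break
        return stack

    print(run([("PUSH", 2), ("PUSH", 3), ("ADD", None), ("HALT", None)]))  # [5]

A register-based VM, which the author mentions wanting to explore further, would replace the stack with a register file and encode source and destination registers in each instruction.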
This year, the IFIP Working Conference on Distributed and Parallel Embedded Systems (DIPES 2008) is held as part of the IFIP World Computer Congress, held in Milan on September 7-10, 2008. The embedded systems world has a great deal of experience with parallel and distributed computing. Many embedded computing systems require the high performance that can be delivered by parallel computing. Parallel and distributed computing are often the only ways to deliver adequate real-time performance at low power levels. This year's conference attracted 30 submissions, of which 21 were accepted. Prof. Jörg Henkel of the University of Karlsruhe graciously contributed a keynote address on embedded computing and reliability. We would like to thank all of the program committee members for their diligence. Wayne Wolf, Bernd Kleinjohann, and Lisa Kleinjohann. Acknowledgements: We would like to thank all people involved in the organization of the IFIP World Computer Congress 2008, especially the IPC Co-Chairs Judith Bishop and Ivo De Lotto, the Organization Chair Giulio Occhini, as well as the Publications Chair John Impagliazzo. Further thanks go to the authors for their valuable contributions to DIPES 2008. Last but not least, we would like to acknowledge the considerable amount of work and enthusiasm spent by our colleague Claudius Stern in preparing the proceedings of DIPES 2008. He made it possible to produce them in their current professional and homogeneous style.
Automated and semi-automated manipulation of so-called labelled transition systems has become an important means of discovering flaws in software and hardware systems. Process algebra has been developed to express such labelled transition systems algebraically, which enhances the ways they can be manipulated by means of equational logic and term rewriting. The theory of process algebra has developed rapidly over the last twenty years, and verification tools have been developed on the basis of process algebra, often in cooperation with techniques related to model checking. This textbook gives a thorough introduction to the basics of process algebra and its applications.
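As a concrete illustration of the objects involved (our toy example, not material from the textbook): a labelled transition system is just states with action-labelled edges, and even a crude depth-bounded trace comparison already distinguishes superficially similar systems.

    # A labelled transition system as {state: [(action, next_state), ...]},
    # with a depth-bounded trace comparison. Process algebra studies finer
    # equivalences (e.g. bisimulation) and algebraic laws over such systems.
    def traces(lts, state, depth):
        """All action sequences of length <= depth from `state`."""
        if depth == 0:
            return {()}
        result = {()}
        for action, nxt in lts.get(state, []):
            result |= {(action,) + t for t in traces(lts, nxt, depth - 1)}
        return result

    machine_a = {"s0": [("coin", "s1")], "s1": [("coffee", "s0")]}
    machine_b = {"t0": [("coin", "t1")], "t1": [("coffee", "t0"), ("tea", "t0")]}
    print(traces(machine_a, "s0", 2) == traces(machine_b, "t0", 2))  # False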
Real-time systems are of importance to a large number of university laboratories and research institutes worldwide, and without the proper integration of real-time into distributed computing, institutions simply could not function. Achieving Real-Time in Distributed Computing: From Grids to Clouds offers over 400 accounts from a wide range of specific research efforts. Major focus is given to the need for methodologies, tools, and architectures for complex distributed systems that address the practical issues of performance guarantees, timed execution, real-time management of resources, synchronized communication under various load conditions, satisfaction of QoS constraints, and dealing with the trade-offs between these aspects.
This book describes how software-based and hardware-based fault tolerance techniques can be combined into hybrid techniques. These are able to reduce overall performance degradation and increase error detection when associated with applications implemented in embedded processors. Coverage begins with an extensive discussion of the current state of the art in fault tolerance techniques. The authors then discuss the best trade-off between software-based and hardware-based techniques and introduce novel hybrid techniques. The proposed techniques increase existing fault detection rates up to 100%, while maintaining low performance overheads in area and application execution time.
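One classic software-based building block of the kind such hybrid schemes start from is redundant execution with majority voting; the sketch below is a generic textbook example of ours, not one of the authors' proposed techniques.

    # Software-implemented fault tolerance by redundant execution and voting.
    from collections import Counter

    def vote(results):
        """Majority vote over redundant results; raise if no majority."""
        value, count = Counter(results).most_common(1)[0]
        if count <= len(results) // 2:
            raise RuntimeError("no majority: fault detected, not masked")
        return value

    def redundant(f, x, copies=3):
        """Run f(x) `copies` times and vote on the outcome."""
        return vote([f(x) for _ in range(copies)])

    print(redundant(lambda v: v * v, 7))  # 49

Hardware-based counterparts (watchdogs, checkers, lockstep cores) catch faults this scheme misses; hybrid techniques aim at the coverage of both at lower cost.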
This textbook serves as an introduction to the subject of embedded systems design, using microcontrollers as core components. It develops concepts from the ground up, covering the development of embedded systems technology, architectural and organizational aspects of controllers and systems, processor models, and peripheral devices. Since microprocessor-based embedded systems tightly blend hardware and software components in a single application, the book also introduces the subjects of data representation formats, data operations, and programming styles. The practical component of the book is tailored around the architecture of a widely used
Logic Synthesis for Low Power VLSI Designs presents a systematic and comprehensive treatment of power modeling and optimization at the logic level. More precisely, this book provides a detailed presentation of methodologies, algorithms and CAD tools for power modeling, estimation and analysis, synthesis and optimization at the logic level. Logic Synthesis for Low Power VLSI Designs contains detailed descriptions of technology-dependent logic transformations and optimizations, technology decomposition and mapping, and post-mapping structural optimization techniques for low power. It also emphasizes the trade-off techniques for two-level and multi-level logic circuits that involve power dissipation and circuit speed, in the hope that readers can better understand the issues and ways of achieving their power dissipation goal while meeting the timing constraints. Logic Synthesis for Low Power VLSI Designs is written for VLSI design engineers, CAD professionals, and students who have a basic knowledge of CMOS digital design and logic synthesis.
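The quantity all of these logic-level optimizations target is, to first order, dynamic switching power; as a standard reference point (our addition, not a formula quoted from the book):

    P_{dyn} = \alpha \, C_L \, V_{dd}^2 \, f_{clk}

where \alpha is the switching activity, C_L the switched load capacitance, V_{dd} the supply voltage and f_{clk} the clock frequency. Logic transformations, technology decomposition and mapping chiefly attack \alpha and C_L, since V_{dd} and f_{clk} are usually fixed at this stage of the flow.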
The major thrust of this book is the realisation of an all-optical computer. To that end it discusses optoelectronic devices and applications, transmission systems, integrated optoelectronic systems and, of course, all-optical computers. The chapters on 'heterostructure light emitting devices' and 'quantum well carrier transport optoelectronic devices' present the most recent advances in device physics, together with modern devices and their applications. The chapter on 'microcavity lasers' is essential to the discussion of present and future developments in solid-state laser physics and technology, and puts into perspective the present state of research into, and the technology of, optoelectronic devices within the context of their use in advanced systems. A significant part of the book deals with problems of propagation in quantum structures. The chapter on 'soliton-based switching, gating and transmission systems' presents the basics of controlling the propagation of photons in solids and the use of this control in devices. The chapters on 'optoelectronic processing using smart pixels' and 'all optical computers' are preceded by introductory material in 'fundamentals of quantum structures for optoelectronic devices and systems' and 'linear and nonlinear absorption and reflection in quantum well structures'. It is clear that new architectures will be necessary if we are to fully utilise the potential of electrooptic devices in computing, but even current architectures and structures demonstrate the feasibility of the all-optical computer: one that is possible today.
This book presents the cyber culture of micro, macro, cosmological, and virtual computing. It shows how these work to formulate, explain, and predict the processes and phenomena of monitoring and controlling technology in physical and virtual space. The authors posit a basic proposal to transform the description of a function truth table and a structure adjacency matrix into a qubit vector, which focuses on memory-driven computing based on the parallel performance of logic operations. The authors offer a metric for the measurement of processes and phenomena in cyberspace, and also an architecture of logic associative computing for decision-making and big data analysis. The book outlines an innovative theory and practice of design, test, simulation, and diagnosis of digital systems based on the use of a qubit coverage-vector to describe the functional components and structures. The authors provide a description of the technology for SoC HDL-model diagnosis, based on a Test Assertion Blocks Activated Graph. Examples of cyber-physical systems for digital monitoring and cloud management of social objects and transport are proposed. A presented automaton model of cosmological computing explains the cyclical and harmonious evolution of matter-energy essence, and also a space-time form of the Universe.
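To make the truth-table-to-vector transformation tangible, here is a plain-bit analogue of our own devising; the packing below is a simple stand-in for the book's qubit coverage-vector, not its actual definition, but it shows why such encodings enable parallel logic operations over whole functions.

    # Pack a Boolean function's truth table into one integer, so questions
    # about the whole function become parallel bitwise-vector operations.
    def truth_vector(f, n_inputs):
        """Bit `row` of the result is f evaluated on that input row."""
        vec = 0
        for row in range(2 ** n_inputs):
            bits = [(row >> i) & 1 for i in range(n_inputs)]
            if f(*bits):
                vec |= 1 << row
        return vec

    AND = truth_vector(lambda a, b: a & b, 2)   # 0b1000
    OR  = truth_vector(lambda a, b: a | b, 2)   # 0b1110
    # one vector operation checks AND -> OR on every input row at once
    print(AND & ~OR & 0b1111 == 0)  # True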
DAPSYS (International Conference on Distributed and Parallel Systems) is an international biannual conference series dedicated to all aspects of distributed and parallel computing. DAPSYS 2008, the 7th International Conference on Distributed and Parallel Systems, was held in September 2008 in Hungary. Distributed and Parallel Systems: Desktop Grid Computing, based on DAPSYS 2008, presents original research, novel concepts and methods, and outstanding results. Contributors investigate parallel and distributed techniques, algorithms, models and applications; present innovative software tools, environments and middleware; focus on various aspects of grid computing; and introduce novel methods for development, deployment, testing and evaluation. This volume features a special focus on desktop grid computing as well. Designed for a professional audience composed of practitioners and researchers in industry, this book is also suitable for advanced-level students in computer science.