High Performance Scientific And Engineering Computing: Hardware/Software Support contains selected chapters on hardware/software support for high performance scientific and engineering computing, drawn from prestigious workshops in the field such as PACT-SHPSEC, IPDPS-PDSECA and ICPP-HPSECA. This edited volume is divided into six main sections and includes invited material from prominent researchers around the world. We believe these contributed chapters not only provide novel ideas, new results and state-of-the-art techniques in the field, but also stimulate future research in high performance computing for science and engineering applications.
A one-of-a-kind survey of the field of reconfigurable computing. It gives a comprehensive introduction to a discipline that offers a 10X-100X acceleration of algorithms over microprocessors, and discusses the impact of reconfigurable hardware on a wide range of applications: signal and image processing, network security, bioinformatics, and supercomputing. The book includes the history of the field as well as recent advances, together with an extensive bibliography of primary sources.
Grid Middleware and Services: Challenges and Solutions is the eighth volume of the CoreGRID series, which publishes the proceedings of the premier European event on Grid computing. This book aims to strengthen and advance scientific and technological excellence in the area of Grid computing. The main focus of this volume is on Grid middleware and service level agreements: Grid middleware and Grid services are two pillars of Grid computing systems and applications. The book includes high-level contributions by leading researchers in both areas and presents current solutions together with future challenges. It includes sections on knowledge and data management on Grids; Grid resource management and scheduling; Grid information, resource and workflow monitoring services; and service level agreements. Grid Middleware and Services: Challenges and Solutions is designed for a professional audience of researchers and practitioners in industry, and is also suitable for graduate-level students in computer science.
Effective compilers allow more efficient execution of application programs on a given computer architecture, while well-conceived architectural features can support more effective compiler optimization techniques. A well-thought-out strategy of trade-offs between compilers and computer architectures is the key to the successful design of highly efficient and effective computer systems. From embedded micro-controllers to large-scale multiprocessor systems, it is important to understand the interaction between compilers and computer architectures. The goal of the Annual Workshop on Interaction between Compilers and Computer Architectures (INTERACT) is to promote new ideas and to present recent developments in compiler techniques and computer architectures that enhance each other's capabilities and performance. Interaction Between Compilers and Computer Architectures is an updated and revised volume consisting of seven papers originally presented at the Fifth Workshop on Interaction between Compilers and Computer Architectures (INTERACT-5), held in conjunction with IEEE HPCA-7 in Monterrey, Mexico in 2001. The volume explores recent developments and ideas for better integrating compilers and computer architectures in the design of modern processors and computer systems. It is suitable as a secondary text for a graduate-level course, and as a reference for researchers and practitioners in industry.
The SCAN conference, the International Symposium on Scientific Computing, Computer Arithmetic and Validated Numerics, takes place biennially under the joint auspices of GAMM (Gesellschaft für Angewandte Mathematik und Mechanik) and IMACS (International Association for Mathematics and Computers in Simulation). SCAN-98 attracted more than 100 participants from 21 countries all over the world. During the four days from September 22 to 25, nine highlighted plenary lectures and over 70 contributed talks were given. These figures indicate a large participation, owing partly to the attraction of the organizing country, Hungary, and partly to the effective support system. The conference was substantially supported by the Hungarian Research Fund OTKA, GAMM, the National Technology Development Board OMFB and the József Attila University. Thanks to this funding, it was possible to subsidize the participation of over 20 scientists, mainly from Eastern European countries, and to enable what was possibly the first conference participation of 6 young researchers. The number of East-European participants was relatively high. These results are especially valuable since, in contrast to the usual two-year period, the present meeting was organized just one year after the previous SCAN conference.
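For readers new to validated numerics, the core idea is easy to sketch: compute with intervals that are guaranteed to enclose the true result, so uncertainty is tracked rather than ignored. The Python class below is a minimal illustration, not taken from the proceedings, and it omits the directed rounding a real implementation needs.

```python
# A minimal interval-arithmetic sketch in the spirit of validated numerics:
# every operation returns an interval guaranteed to enclose the true result.
# (Illustrative only: real validated numerics also controls rounding modes.)

class Interval:
    def __init__(self, lo, hi):
        assert lo <= hi
        self.lo, self.hi = lo, hi

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __mul__(self, other):
        # The product interval is spanned by the endpoint products.
        products = [self.lo * other.lo, self.lo * other.hi,
                    self.hi * other.lo, self.hi * other.hi]
        return Interval(min(products), max(products))

    def __repr__(self):
        return f"[{self.lo}, {self.hi}]"

x = Interval(1.0, 2.0)
y = Interval(-0.5, 0.5)
print(x + y)   # [0.5, 2.5]
print(x * y)   # [-1.0, 1.0]
```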
How do you design personalized user experiences that delight and provide value to the customers of an eCommerce site? Personalization does not guarantee a high-quality user experience: a personalized user experience has the best chance of success if it is developed using a set of best practices in HCI. In this book 35 experts from academia, industry and government focus on issues in the design of personalized web sites. The topics range from the design and evaluation of user interfaces and tools to information architecture and computer programming for commercial web sites. The book covers four main areas.
Developing a NoC-based interconnect tailored to a particular application domain, one that satisfies the application's performance constraints with minimum power-area overhead, is a major challenge. With technology scaling, as the geometries of on-chip devices reach the physical limits of operation, another important design challenge for NoCs is to provide dynamic (run-time) protection against the permanent and intermittent faults that can occur in the system. The purpose of Designing Reliable and Efficient Networks on Chips is to provide state-of-the-art methods for solving some of the most important and time-intensive problems encountered during NoC design.
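As a flavour of the routing decisions an NoC designer faces, the Python sketch below implements deterministic XY routing on a 2D mesh. It is an illustrative stand-in, not a method from the book: a packet travels fully along the X dimension first and then along Y, a simple scheme that is deadlock-free on a mesh.

```python
# Hypothetical sketch of deterministic XY routing on a 2D-mesh NoC.

def xy_route(src, dst):
    """Return the list of (x, y) router coordinates a packet visits."""
    x, y = src
    path = [(x, y)]
    while x != dst[0]:                 # route in X first
        x += 1 if dst[0] > x else -1
        path.append((x, y))
    while y != dst[1]:                 # then route in Y
        y += 1 if dst[1] > y else -1
        path.append((x, y))
    return path

print(xy_route((0, 0), (2, 3)))
# [(0, 0), (1, 0), (2, 0), (2, 1), (2, 2), (2, 3)]
```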
Functional Design Errors in Digital Circuits Diagnosis covers a wide spectrum of innovative methods to automate the debugging process throughout the design flow: from Register-Transfer Level (RTL) all the way to the silicon die. In particular, this book describes: (1) techniques for bug trace minimization that simplify debugging; (2) an RTL error diagnosis method that identifies the root cause of errors directly; (3) a counterexample-guided error-repair framework to automatically fix errors in gate-level and RTL designs; (4) a symmetry-based rewiring technology for fixing electrical errors; (5) an incremental verification system for physical synthesis; and (6) an integrated framework for post-silicon debugging and layout repair. The solutions provided in this book can greatly reduce debugging effort, enhance design quality, and ultimately enable the design and manufacture of more reliable electronic devices.
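The first of those techniques, bug trace minimization, can be illustrated with a generic delta-debugging-style loop (the book's own algorithms are more sophisticated): repeatedly drop chunks of the trace and keep any shortened trace that still reproduces the bug. The Python sketch below, including its toy oracle, is hypothetical.

```python
# A hedged sketch of bug-trace minimization: greedily remove chunks of the
# trace as long as the bug still reproduces under the given oracle.

def minimize_trace(trace, reproduces_bug):
    """Greedy shrink; `reproduces_bug(trace) -> bool` is the oracle."""
    chunk = len(trace) // 2
    while chunk >= 1:
        i = 0
        while i < len(trace):
            candidate = trace[:i] + trace[i + chunk:]
            if candidate and reproduces_bug(candidate):
                trace = candidate        # chunk was irrelevant; drop it
            else:
                i += chunk               # chunk needed; keep it, move on
        chunk //= 2
    return trace

# Toy oracle: the "bug" fires whenever events 3 and 7 are both present.
buggy = lambda t: 3 in t and 7 in t
print(minimize_trace(list(range(10)), buggy))   # [3, 7]
```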
This book presents the most recent concerns and research results in industrial fault diagnosis using intelligent techniques. It focuses on computational-intelligence approaches to fault diagnosis, with real-world applications used in the different chapters to validate the diagnosis methods. The book includes one chapter dealing with a novel coherent distributed methodology for fault diagnosis of complex systems.
Many-valued logics are becoming increasingly important in all areas of computer science. This is the second volume of an authoritative two-volume handbook on many-valued logics by two leading figures in the field. While the first volume was mainly concerned with theoretical foundations, this volume emphasizes automated reasoning, practical applications, and the latest developments in fuzzy logic and rough set theory. Among the applications presented are those in software specification and electronic circuit verification.
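To make the subject concrete for readers who have not met it, here is a small worked example (not taken from the handbook) of many-valued connectives in Python, with truth values drawn from [0, 1] rather than {0, 1}: the Gödel and Łukasiewicz t-norms generalize conjunction, and the Łukasiewicz implication generalizes material implication.

```python
# Many-valued (fuzzy) connectives over truth values in [0, 1].

def and_godel(a, b):
    return min(a, b)                    # Gödel t-norm

def and_lukasiewicz(a, b):
    return max(0.0, a + b - 1.0)        # Łukasiewicz t-norm

def implies_lukasiewicz(a, b):
    return min(1.0, 1.0 - a + b)        # Łukasiewicz implication

a, b = 0.7, 0.4
print(and_godel(a, b))            # 0.4
print(and_lukasiewicz(a, b))      # ~0.1  (0.7 + 0.4 - 1.0)
print(implies_lukasiewicz(a, b))  # ~0.7  (1.0 - 0.7 + 0.4)
```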
In the world of information technology, it is no longer the computer in the classical sense on which the majority of IT applications are executed; computing is everywhere. More than 20 billion processors have already been fabricated and the majority of them can be assumed to still be operational. At the same time, virtually every PC worldwide is connected via the Internet. This combination of traditional and embedded computing creates an artifact of a complexity, heterogeneity, and volatility unmanageable by classical means. Each of our technical artifacts with a built-in processor can be seen as a "Thing that Thinks," a term introduced by MIT's Thinglab. It can be expected that in the near future these billions of Things that Think will become an "Internet of Things," a term originating from ETH Zurich. This means that we will be constantly surrounded by a virtual "organism" of Things that Think. This organism needs novel, adequate means of design, evolution, and management, which is also one of the core challenges addressed by the recent German priority research program on Organic Computing.
The authors of this Festschrift prepared these papers to honour and express their friendship to Klaus Ritter on the occasion of his sixtieth birthday. Because of Ritter's many friends and his international reputation among mathematicians, finding contributors was easy. In fact, constraints on the size of the book required us to limit the number of papers. Klaus Ritter has done important work in a variety of areas, especially in various applications of linear and nonlinear optimization and also in connection with statistics and parallel computing. For the latter we have to mention Ritter's development of transputer workstation hardware. The wide scope of his research is reflected by the breadth of the contributions in this Festschrift. After several years of scientific research in the U.S., Klaus Ritter was appointed as full professor at the University of Stuttgart. Since then, his name has become inextricably connected with the regularly scheduled conferences on optimization in Oberwolfach. In 1981 he became full professor of Applied Mathematics and Mathematical Statistics at the Technical University of Munich. In addition to his university teaching duties, he has made the application of mathematical methods to problems of industry centrally important in his work.
Integrating formal property verification (FPV) into an existing design process raises several interesting questions. Have I written enough properties? Have I written a consistent set of properties? What should I do when the FPV tool runs into capacity issues? This book develops the answers to these questions and fits them into a roadmap for formal property verification: a roadmap that shows how to glue FPV technology into the traditional validation flow. A Roadmap for Formal Property Verification explores the key issues in this powerful technology through simple examples; you do not need any background in formal methods to read most parts of this book.
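For orientation, the kernel of what an FPV tool does for a safety property can be sketched in a few lines: exhaustively explore the reachable states of a model and report any state that violates the property. The Python sketch below is a generic illustration of that idea, not the book's method; real tools use far more scalable symbolic techniques.

```python
# Explicit-state safety check: breadth-first search over all reachable
# states, returning a violating state (a counterexample) or None.

from collections import deque

def check_safety(initial, successors, is_safe):
    seen, queue = {initial}, deque([initial])
    while queue:
        state = queue.popleft()
        if not is_safe(state):
            return state                # property violated here
        for nxt in successors(state):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return None                         # property holds on all states

# Toy model: a counter mod 8 that must never reach 5.
violation = check_safety(0, lambda s: [(s + 1) % 8], lambda s: s != 5)
print(violation)    # 5 -- the property fails, with 5 as the witness
```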
A genuinely useful text that gives an overview of the state-of-the-art in system-level design trade-off exploration for concurrent tasks running on embedded heterogeneous multiple processors. The targeted application domain covers complex embedded real-time multimedia and communication applications. The material is mainly based on research at IMEC and its international university network partners in this area over the last decade. Those in the digital signal processing industry will find the material here fully up to date.
The memory system is increasingly turning into a bottleneck in the design of embedded systems. The speed improvements of memory systems are lower than the speed improvements of processors, eventually leading to embedded systems whose performance is limited by the memory. This problem is known as the "memory wall" problem. Furthermore, memory systems may consume the largest share of the system's energy budget and may be the source of unpredictable timing behaviour. Hence, the design of the memory system deserves an increasing amount of attention. Fast, Efficient and Predictable Memory Accesses presents techniques for designing fast, energy-efficient and timing-predictable memory systems. By using a careful combination of compiler optimizations and architectural improvements, more can be achieved than would be feasible at either level in isolation. The described optimization algorithms achieve the goals of high performance and low energy consumption. In addition to these benefits, the use of scratchpad memories significantly improves the timing predictability of the entire system, leading to tighter worst-case execution time (WCET) bounds. The WCET is a relevant design parameter for all timing-critical systems. In addition, the book covers algorithms to exploit the power-down modes of main memories in SDRAM technology, as well as the execute-in-place feature of Flash memories. The final chapter considers the impact of the register file, which is also part of the memory hierarchy.
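One classic compiler-side technique in this area, static scratchpad allocation, can be cast as a 0/1 knapsack problem: each memory object has a size and an estimated gain (say, energy saved) if placed in the scratchpad, and the allocator maximizes total gain within the scratchpad's capacity. The Python sketch below is a hedged illustration of that formulation, not the book's algorithm; all object names, sizes and gains are invented.

```python
# Static scratchpad allocation as a 0/1 knapsack, solved by standard
# dynamic programming over the scratchpad byte budget.

def allocate_scratchpad(objects, capacity):
    """objects: list of (name, size_bytes, gain). Returns (gain, names)."""
    best = [(0.0, ())] * (capacity + 1)     # (total gain, chosen) per budget
    for name, size, gain in objects:
        for cap in range(capacity, size - 1, -1):
            g, chosen = best[cap - size]
            if g + gain > best[cap][0]:
                best[cap] = (g + gain, chosen + (name,))
    return best[capacity]

# Made-up memory objects of a hypothetical application.
objects = [("stack_hot", 256, 9.0), ("lut_sin", 512, 7.5),
           ("buf_dma", 1024, 8.0), ("coeffs", 128, 3.0)]
print(allocate_scratchpad(objects, 1024))
# (19.5, ('stack_hot', 'lut_sin', 'coeffs')) -- buf_dma alone would gain 8.0
```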
This is the first book dedicated to direct continuous-time model identification in 15 years. It cuts down on time spent hunting through journals by providing an overview of much recent research in an increasingly busy field. The CONTSID toolbox discussed in the final chapter summarizes these developments and gives practical examples in which MATLAB® can be used for direct time-domain identification of continuous-time systems. This is a valuable reference for a broad audience.
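As a toy illustration of what direct continuous-time identification means (not the CONTSID algorithms themselves), the Python sketch below estimates the parameters a and b of a hypothetical first-order system dx/dt = -a*x + b*u from sampled data, using finite differences and ordinary least squares; since the data are generated noise-free by the same discretization, the estimates come out essentially exact.

```python
import numpy as np

a_true, b_true, dt = 2.0, 1.5, 0.01
t = np.arange(0.0, 5.0, dt)
u = np.sign(np.sin(2 * np.pi * 0.5 * t))          # square-wave input

# Simulate the "true" system with forward Euler to generate measurements.
x = np.zeros_like(t)
for k in range(len(t) - 1):
    x[k + 1] = x[k] + dt * (-a_true * x[k] + b_true * u[k])

# Direct estimation: regress the finite-difference derivative on (x, u).
dxdt = np.diff(x) / dt
phi = np.column_stack([-x[:-1], u[:-1]])          # regressor matrix
(a_hat, b_hat), *_ = np.linalg.lstsq(phi, dxdt, rcond=None)
print(a_hat, b_hat)    # close to 2.0 and 1.5 (exact here, noise-free data)
```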
This book introduces the areas of image processing and data-parallel processing. It covers a number of standard algorithms in image processing and describes their parallel implementation. The programming language chosen for all examples is a structured parallel programming language that is ideal for educational purposes. It has a number of advantages over C, and since all image processing tasks are inherently parallel, using a parallel language for presentation actually simplifies the subject matter, resulting in shorter source code and better understanding. Sample programs and a free compiler are available on an accompanying Web site.
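The book uses its own parallel language; as a stand-in, the numpy sketch below expresses the same inherent data parallelism for one standard image-processing algorithm, a 3x3 mean filter, by operating on whole pixel arrays at once instead of looping over individual pixels.

```python
import numpy as np

def mean_filter3x3(img):
    """Average each interior pixel with its 8 neighbours, array-at-a-time."""
    out = np.zeros_like(img, dtype=float)
    acc = np.zeros_like(out)
    # Accumulate the nine shifted copies of the image; the only loop is
    # over the 9 offsets, never over pixels -- that is the data parallelism.
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            acc[1:-1, 1:-1] += img[1 + dy:img.shape[0] - 1 + dy,
                                   1 + dx:img.shape[1] - 1 + dx]
    out[1:-1, 1:-1] = acc[1:-1, 1:-1] / 9.0
    return out

img = np.arange(25, dtype=float).reshape(5, 5)
print(mean_filter3x3(img))
```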
One of the most important parts of any digital system is the control unit, which coordinates the interplay of the other system blocks. As a rule, control units have an irregular structure, which makes the design of their logic circuits very sophisticated. In the case of complex logic controllers, the problem of system design reduces practically to the design of control units. We are currently witnessing a real technical boom connected with achievements in semiconductor technology. One of these is the development of the integrated circuits known as "systems-on-a-programmable-chip" (SoPC), where the number of elements approaches one billion. Because of the extreme complexity of microchips, it is very important to develop effective design methods oriented towards the particular properties of logical elements. Solving this problem permits improving the functional capabilities of the target digital system inside a single SoPC chip. As the majority of researchers point out, design methods used with industrial packages are, in the case of complex digital system design, far from optimal. Similar problems concern the design of control units with standard field-programmable logic devices (FPLD), such as PLA, PAL, GAL, CPLD, and FPGA. Let us point out that modern SoPC are based on CPLD or FPGA technology. Thus, the development of effective design methods oriented towards FPLD implementation of the logic circuits used in control units remains a problem of great importance.
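For readers who have not met the term, a control unit is essentially a finite-state machine: a transition table maps the current state and input condition to the next state, and each state asserts a set of control signals. The Python model below is a purely hypothetical Moore-style example (states and signals invented), given only to make the idea concrete before it is mapped onto FPLD logic.

```python
# A control unit modelled as a finite-state machine with a transition table.

TRANSITIONS = {            # hypothetical washing-cycle controller
    ("IDLE",  "start"): "FILL",
    ("FILL",  "full"):  "WASH",
    ("WASH",  "done"):  "DRAIN",
    ("DRAIN", "empty"): "IDLE",
}
OUTPUTS = {                # Moore-style: outputs depend on the state only
    "IDLE": set(), "FILL": {"valve_in"},
    "WASH": {"motor"}, "DRAIN": {"pump"},
}

def step(state, condition):
    state = TRANSITIONS.get((state, condition), state)  # else stay put
    return state, OUTPUTS[state]

state = "IDLE"
for cond in ["start", "full", "done", "empty"]:
    state, signals = step(state, cond)
    print(state, signals)
```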
Introduction to Reconfigurable Computing provides a comprehensive study of the field of reconfigurable computing. It provides an entry point for the novice willing to move into the research field of reconfigurable computing, FPGA and system-on-programmable-chip design. The book can also be used as a teaching reference for a graduate course in computer engineering, or as a reference for advanced electrical and computer engineers. It provides a very strong theoretical and practical background to the field of reconfigurable computing, from Estrin's early machine to very modern architectures such as coarse-grained reconfigurable devices and embedded logic devices. Apart from the introduction and the conclusion, the main chapters of the book are the following:
This book contains papers presented at the fifth and sixth Teraflop Workshops. It presents the state-of-the-art in high performance computing and simulation on modern supercomputer architectures, covering trends in hardware and software development in general, and specifically the future of vector-based systems and heterogeneous architectures. The application contributions cover computational fluid dynamics, fluid-structure interaction, physics, chemistry, astrophysics, and climate research.
Web caching and content delivery technologies provide the infrastructure on which systems are built for the scalable distribution of information. These proceedings of the eighth annual workshop capture a cross-section of the latest issues and techniques of interest to network architects and researchers in large-scale content delivery. Topics covered include the distribution of streaming multimedia, edge caching and computation, multicast, delivery of dynamic content, enterprise content delivery, streaming proxies and servers, content transcoding, replication and caching strategies, peer-to-peer content delivery, and Web prefetching.
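As one concrete data point among those caching strategies, the Python sketch below implements a least-recently-used (LRU) replacement policy for cached web objects on top of OrderedDict; it is a generic illustration, not an algorithm from the proceedings.

```python
# LRU cache for web objects: OrderedDict keeps entries in recency order,
# so eviction simply removes the item at the front (least recently used).

from collections import OrderedDict

class LRUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.store = OrderedDict()

    def get(self, url):
        if url not in self.store:
            return None                       # cache miss
        self.store.move_to_end(url)           # mark as most recently used
        return self.store[url]

    def put(self, url, body):
        if url in self.store:
            self.store.move_to_end(url)
        self.store[url] = body
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)    # evict least recently used

cache = LRUCache(2)
cache.put("/a", "A"); cache.put("/b", "B")
cache.get("/a")                               # /a is now most recent
cache.put("/c", "C")                          # evicts /b
print(cache.get("/b"), cache.get("/a"))       # None A
```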
"Introduction to Embedded System Design Using Field Programmable Gate Arrays" provides a starting point for the use of field programmable gate arrays in the design of embedded systems. The text considers a hypothetical robot controller as an embedded application and weaves around it related concepts of FPGA-based digital design. The book details: use of FPGA vis-a-vis general purpose processor and microcontroller; design using Verilog hardware description language; digital design synthesis using Verilog and Xilinx(r) SpartanTM 3 FPGA; FPGA-based embedded processors and peripherals; overview of serial data communications and signal conditioning using FPGA; FPGA-based motor drive controllers; and prototyping digital systems using FPGA. The book is a good introductory text for FPGA-based design for both students and digital systems designers. Its end-of-chapter exercises and frequent use of example can be used for teaching or for self-study."
Natural Language Processing and Text Mining not only discusses applications of Natural Language Processing techniques to certain Text Mining tasks, but also the converse: the use of Text Mining to assist NLP. It assembles diverse views from internationally recognized researchers and emphasizes caveats in the attempt to apply Natural Language Processing to text mining. This state-of-the-art survey is a must-have for advanced students, professionals, and researchers.
The development of any software-intensive (industrial) system, e.g. critical embedded software, requires both different notations and a strong development process. Different notations are mandatory because different aspects of the software system have to be tackled. A strong development process is mandatory as well, because without a strong organization we cannot guarantee that the system will meet its requirements. Unfortunately, much more is needed! The different notations that are used must all possess at least one property: formality. The development process must also have important properties: an exhaustive coverage of the development phases, and a set of well-integrated support tools. In computer science it is now widely accepted that only formal notations can guarantee a perfectly defined meaning. This becomes a more and more important issue as software systems tend to be distributed, both in large systems (for instance in safe public transportation systems) and in small ones (for instance the numerous processors in luxury cars). Distribution increases the complexity of embedded software while safety criteria get harder to meet. On the other hand, during the past decade software engineering techniques have improved considerably and are now routinely used to conduct systematic and rigorous development of large software systems. UML has become the de facto standard notation for documenting software engineering projects, and is supported by many CASE tools that offer graphical means for the UML notation.
This book is concerned with studying the co-design methodology in general, and with how to determine the most suitable interface mechanism in a co-design system in particular, based on the characteristics of the application and those of the target architecture. Guidelines are provided to support the designer's choice of the interface mechanism. Some new trends in co-design and system acceleration are also introduced.