Multiprocessor Execution of Logic Programs addresses the problem of efficient implementation of logic programming languages, specifically Prolog, on multiprocessor architectures. The approaches and implementations developed attempt to take full advantage of sequential implementation technology developed for Prolog (such as the WAM) while exploiting all forms of control parallelism present in logic programs, namely, or-parallelism, independent and-parallelism and dependent and-parallelism. Coverage includes a thorough survey of parallel implementation techniques and parallel systems developed for Prolog. Multiprocessor Execution of Logic Programs is recommended for people implementing parallel logic programming systems, parallel symbolic systems, parallel AI systems, and parallel theorem proving systems. It will also be useful to people who wish to learn about the implementation of parallel logic programming systems.
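To make the forms of parallelism mentioned above more concrete, here is a minimal Python sketch (not taken from the book) that mimics or-parallelism for a hypothetical parent/2 predicate: each clause is an independent "or" branch of the search tree, so the branches can be explored in separate worker processes and every branch that succeeds contributes a solution.

```python
# A minimal sketch of or-parallelism, assuming a hypothetical parent/2
# predicate with two clauses: the alternative clauses are explored in
# parallel processes, analogous to exploring different branches of the
# Prolog search tree concurrently.
from concurrent.futures import ProcessPoolExecutor

def clause_parent_tom(child):
    # parent(tom, bob).  Succeeds only for the matching child.
    return [("tom", child)] if child == "bob" else []

def clause_parent_ann(child):
    # parent(ann, bob).
    return [("ann", child)] if child == "bob" else []

CLAUSES = [clause_parent_tom, clause_parent_ann]

def solve_parent(child):
    # Or-parallelism: try every clause concurrently and gather all answers.
    with ProcessPoolExecutor() as pool:
        futures = [pool.submit(clause, child) for clause in CLAUSES]
        return [answer for f in futures for answer in f.result()]

if __name__ == "__main__":
    print(solve_parent("bob"))   # [('tom', 'bob'), ('ann', 'bob')]
```

In a real or-parallel Prolog system the branches would share binding environments and the WAM-level machinery handles backtracking; the sketch only illustrates the idea of evaluating alternative clauses concurrently.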
During the last three decades, breakthroughs in computer technology have made a tremendous impact on optimization. In particular, parallel computing has made it possible to solve larger and computationally more difficult problems. The book covers recent developments in novel programming and algorithmic aspects of parallel computing as well as technical advances in parallel optimization. Each contribution is essentially expository in nature but written with scholarly rigor, and each chapter includes a collection of carefully selected problems. The first two chapters discuss theoretical models for parallel algorithm design and their complexity. The next chapter gives the perspective of the programmer practicing parallel algorithm development on real-world platforms. Solving systems of linear equations efficiently is of great importance, not only because such systems arise in many scientific and engineering applications but also because algorithms for many optimization problems need to call system solvers as subroutines (chapters four and five). Chapters six through thirteen are dedicated to optimization problems and methods. They include parallel algorithms for network problems, parallel branch-and-bound techniques, parallel heuristics for discrete and continuous problems, decomposition methods, parallel algorithms for variational inequality problems, parallel algorithms for stochastic programming, and neural networks. Audience: Parallel Computing in Optimization is addressed not only to researchers in mathematical programming, but to all scientists in various disciplines who use optimization methods in parallel and multiprocessing environments to model and solve problems.
Grid Computing: Achievements and Prospects, the 9th edited volume of the CoreGRID series, includes selected papers from the CoreGRID Integration Workshop, held in April 2008 in Heraklion, Crete, Greece. This event brings together representatives of the academic and industrial communities performing Grid research in Europe. The workshop was organized in the context of the CoreGRID Network of Excellence in order to provide a forum for the presentation and exchange of views on the latest developments in grid technology research. Grid Computing: Achievements and Prospects is designed for a professional audience, composed of researchers and practitioners in industry. This volume is also suitable for graduate-level students in computer science.
A compact guide to knowledge management, this book makes the subject accessible without oversimplifying it. Organizational issues like strategy and culture are discussed in the context of typical knowledge management processes. The focus is always on pointing out all the issues that need to be taken into account in order to make knowledge management a success. The book then goes on to explore the role of information technology as an enabler of knowledge management, relating various technologies to the knowledge management processes and showing the reader what can, and what cannot, be achieved through technology. Throughout the book, references to lessons learned from past projects underline the arguments. Managers will find this book a valuable guide for implementing their own initiatives, while researchers and system designers will find plenty of ideas for future work.
This book is a comprehensive guide to assertion-based verification of hardware designs using SystemVerilog Assertions (SVA). It enables readers to minimize the cost of verification by using assertion-based techniques in simulation testing, coverage collection and formal analysis. The book provides detailed descriptions of all the language features of SVA, accompanied by step-by-step examples of how to employ them to construct powerful and reusable sets of properties. The book also shows how SVA fits into the broader SystemVerilog language, demonstrating the ways that assertions can interact with other SystemVerilog components. The reader new to hardware verification will benefit from general material describing the nature of design models and behaviors, how they are exercised, and the different roles that assertions play. This second edition covers the features introduced by the recent IEEE 1800-2012 SystemVerilog standard, explaining in detail the new and enhanced assertion constructs. The book makes SVA usable and accessible for hardware designers, verification engineers, formal verification specialists and EDA tool developers. With numerous exercises, ranging in depth and difficulty, the book is also suitable as a text for students.
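As a rough, language-agnostic illustration of what an assertion checks (a toy Python checker over a recorded trace, not SVA itself and not an example from the book), the sketch below verifies a request/acknowledge property of the kind one might express in SVA as `assert property (@(posedge clk) req |-> ##[1:3] ack);`.

```python
# Toy assertion checker: every request must be acknowledged within
# max_delay clock cycles.  The trace is a list of (req, ack) samples,
# one per clock cycle; the function returns the cycles where the
# property fails.
def check_req_ack(trace, max_delay=3):
    failures = []
    for t, (req, _) in enumerate(trace):
        if req:
            window = trace[t + 1 : t + 1 + max_delay]
            if not any(ack for _, ack in window):
                failures.append(t)
    return failures

trace = [(1, 0), (0, 0), (0, 1),              # request at cycle 0, ack at cycle 2: OK
         (1, 0), (0, 0), (0, 0), (0, 0)]      # request at cycle 3 is never acknowledged
print(check_req_ack(trace))                   # [3]
```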
This book describes fault tolerance techniques that combine software and hardware to create hybrid techniques, which can reduce overall performance degradation and increase error detection when applied to applications running on embedded processors. Coverage begins with an extensive discussion of the current state of the art in fault tolerance techniques. The authors then discuss the best trade-off between software-based and hardware-based techniques and introduce novel hybrid techniques. The proposed techniques increase existing fault detection rates up to 100%, while maintaining low overheads in area and application execution time.
Information security concerns the confidentiality, integrity, and availability of information processed by a computer system. With an emphasis on prevention, traditional information security research has focused little on the ability to survive successful attacks, which can seriously impair the integrity and availability of a system. Trusted Recovery and Defensive Information Warfare uses database trusted recovery as an example to illustrate the principles of trusted recovery in defensive information warfare. Traditional database recovery mechanisms do not address trusted recovery, except for complete rollbacks, which undo the work of benign transactions as well as malicious ones, and compensating transactions, whose utility depends on application semantics. Database trusted recovery faces a set of unique challenges. In particular, trusted database recovery is complicated mainly by (a) the presence of benign transactions that depend, directly or indirectly, on malicious transactions; and (b) the requirement by many mission-critical database applications that trusted recovery be done on the fly without blocking the execution of new user transactions. The book proposes a new model and a set of innovative algorithms for database trusted recovery: both read-write-dependency-based and semantics-based algorithms are proposed, as are both static and dynamic algorithms. These algorithms can typically preserve a substantial amount of work done by innocent users and can satisfy a variety of attack recovery requirements of real-world database applications. Trusted Recovery and Defensive Information Warfare is suitable as a secondary text for a graduate-level course in computer science, and as a reference for researchers and practitioners in information security.
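To illustrate the read-write dependency idea in the blurb above (a simplified sketch, not the book's algorithms), the fragment below marks as "tainted" every transaction that is malicious or that read a value last written by an already tainted transaction; only the tainted set needs to be undone, so the work of unaffected benign transactions survives recovery.

```python
# Simplified read-write-dependency-based trusted recovery: given a committed
# history and a set of known malicious transactions, compute the transactions
# whose effects must be undone (the malicious ones plus everything that
# depends on them, directly or indirectly).
def affected_transactions(history, malicious):
    """history: list of (txn_id, reads, writes) in commit order.
    malicious: set of txn_ids known to be malicious."""
    last_writer = {}             # data item -> transaction that last wrote it
    tainted = set(malicious)
    for txn, reads, writes in history:
        # Tainted if malicious, or if it read a value produced by a
        # transaction that is already tainted.
        if txn in malicious or any(last_writer.get(x) in tainted for x in reads):
            tainted.add(txn)
        for x in writes:
            last_writer[x] = txn
    return tainted

history = [
    ("T1", {"a"}, {"b"}),   # malicious
    ("T2", {"b"}, {"c"}),   # reads T1's write -> must be undone
    ("T3", {"d"}, {"e"}),   # independent -> its work is preserved
]
print(affected_transactions(history, {"T1"}))   # {'T1', 'T2'}
```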
This book describes recent findings in the domain of Boolean logic and Boolean algebra, covering application domains in circuit and system design, but also basic research in mathematics and theoretical computer science. Content includes invited chapters and a selection of the best papers presented at the 13th annual International Workshop on Boolean Problems. Provides a single-source reference to the state-of-the-art research in the field of logic synthesis and Boolean techniques; Includes a selection of the best papers presented at the 13th annual International Workshop on Boolean Problems; Covers Boolean algebras, Boolean logic, Boolean modeling, Combinatorial Search, Boolean and bitwise arithmetic, Software and tools for the solution of Boolean problems, Applications of Boolean logic and algebras, Applications to real-world problems, Boolean constraint solving, and Extensions of Boolean logic.
Pipelined ADCs have seen phenomenal improvements in performance over the last few years. As such, when designing a pipelined ADC, a clear understanding of the design trade-offs and state-of-the-art techniques is required to implement today's high-performance, low-power ADCs.
This state-of-the-art survey gives a systematic presentation of recent advances in the design and validation of computer architectures. Based on advanced research ideas and approaches, and written by eminent researchers in the field, seven chapters cover the whole range from computer aided high-level design of VLSI circuits and systems to layout and testable design, including modeling and synthesis of behavior, of control, and of dataflow, cell based logic optimization, machine assisted verification, and virtual machine design. The chapters presuppose only basic familiarity with computer architecture. They are self-contained and lead the reader gently and informatively to the forefront of current research. A special feature of the book is the comprehensive range of architecture design and validation topics covered, giving the reader a clear view of the problems and of advanced techniques for their solution.
This book presents the theory behind software-implemented hardware fault tolerance, as well as the practical aspects needed to put it to work on real examples. By accurately evaluating the advantages and disadvantages of the approaches already available, the book provides a guide for developers wishing to adopt software-implemented hardware fault tolerance in their applications. Moreover, the book identifies open issues for researchers willing to improve the already available techniques.
Real-time systems are of importance to a large number of university laboratories and research institutes worldwide, and without the proper integration of real-time into distributed computing, institutions simply could not function. Achieving Real-Time in Distributed Computing: From Grids to Clouds offers over 400 accounts from a wide range of specific research efforts. Major focus is given to the need for methodologies, tools, and architectures for complex distributed systems that address the practical issues of performance guarantees, timed execution, real-time management of resources, synchronized communication under various load conditions, satisfaction of QoS constraints, and dealing with the trade-offs between these aspects.
This book analyzes the challenges in verifying Dynamically Reconfigurable Systems (DRS) with respect to the user design and the physical implementation of such systems. The authors describe the use of a simulation-only layer to emulate the behavior of target FPGAs and accurately model the characteristic features of reconfiguration. This simulation-only layer enables readers to maintain verification productivity by abstracting away the physical details of the FPGA fabric. Two implementations of the simulation-only layer are included: Extended ReChannel is a SystemC library that can be used to check DRS designs at a high level; ReSim is a library to support RTL simulation of a DRS reconfiguring both its logic and state. Through a number of case studies, the authors demonstrate how their approach integrates seamlessly with existing, mainstream DRS design flows and with well-established verification methodologies such as top-down modeling and coverage-driven verification.
Motivation: Modern enterprises rely on database management systems (DBMS) to collect, store and manage corporate data, which is considered a strategic corporate resource. Recently, with the proliferation of personal computers and departmental computing, the trend has been towards the decentralization and distribution of the computing infrastructure, with autonomy and responsibility for data now residing at the departmental and workgroup level of the organization. Users want their data delivered to their desktops, allowing them to incorporate data into their personal databases, spreadsheets, word processing documents, and most importantly, into their daily tasks and activities. They want to be able to share their information while retaining control over its access and distribution. There are also pressures from corporate leaders who wish to use information technology as a strategic resource in offering specialized value-added services to customers. Database technology is being used to manage the data associated with corporate processes and activities. Increasingly, the data being managed are not simply formatted tables in relational databases, but all types of objects, including unstructured text, images, audio, and video. Thus, database management providers are being asked to extend the capabilities of DBMS to include object-relational models as well as full object-oriented database management systems.
Computer interfaces and documentation are notoriously difficult for any user, regardless of his or her level of experience. Advances in technology are not making applications more friendly. Introducing concepts from linguistics and language teaching, Language and Communication proposes a new approach to computer interface design. The book explains for the first time why the much-hyped user-friendly interface is treated with such derision by the user community. The author argues that software and hardware designers should consider such fundamental language concepts as meaning, context, function, variety, and equivalence. She goes on to show how imagining an interface as a new language can be an invaluable design exercise, calling into question deeply held beliefs and assumptions about what users will or will not understand. Written for a wide range of computer scientists and professionals, and presuming no prior knowledge of language-related terminology, this volume is a key step in the ongoing information revolution.
Logic circuits are becoming increasingly susceptible to probabilistic behavior caused by external radiation and process variation. In addition, inherently probabilistic quantum- and nano-technologies are on the horizon as we approach the limits of CMOS scaling. Ensuring the reliability of such circuits despite their probabilistic behavior is a key challenge in IC design, one that necessitates a fundamental, probabilistic reformulation of synthesis and testing techniques. This monograph presents techniques for analyzing, designing, and testing logic circuits with probabilistic behavior.
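As a simple illustration of the kind of analysis involved (a toy Monte-Carlo model, not the monograph's techniques; the circuit and error model are made up for the example), the sketch below lets every gate flip its output with a small error probability and estimates how often a tiny circuit's output deviates from its fault-free value.

```python
# Toy probabilistic gate model: each gate produces the correct value but
# flips it with probability p_err.  Monte-Carlo simulation estimates the
# probability that the circuit output differs from the fault-free output.
import random

def noisy_and(a, b, p_err):
    out = a & b
    return out ^ 1 if random.random() < p_err else out

def noisy_xor(a, b, p_err):
    out = a ^ b
    return out ^ 1 if random.random() < p_err else out

def circuit(a, b, c, p_err=0.0):
    # Example circuit: out = (a AND b) XOR c
    return noisy_xor(noisy_and(a, b, p_err), c, p_err)

def output_error_rate(a, b, c, p_err, trials=100_000):
    golden = circuit(a, b, c, p_err=0.0)           # fault-free reference
    errors = sum(circuit(a, b, c, p_err) != golden for _ in range(trials))
    return errors / trials

# For this input the output is wrong when exactly one of the two gates
# misfires, so the estimate should be close to 2 * p * (1 - p) ~ 0.0198.
print(output_error_rate(1, 1, 0, p_err=0.01))
```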
This book is the product of Research Study Group (RSG) 13 on "Human Engineering Evaluation on the Use of Colour in Electronic Displays," of Panel 8, "Defence Applications of Human and Biomedical Sciences," of the NATO Defence Research Group. RSG 13 was chaired by Heino Widdel (Germany) and consisted of Jeffrey Grossman (United States), Jean-Pierre Menu (France), Giampaolo Noja (Italy, point of contact), David Post (United States), and Jan Walraven (Netherlands). Initially, Christopher Gibson (United Kingdom) and Sharon McFaddon (Canada) also participated. Most of these representatives served previously on the NATO program committee that produced Proceedings of a Workshop on Colour Coded vs. Monochrome Displays (edited by Christopher Gibson and published by the Royal Aircraft Establishment, Farnborough, England) in 1984. RSG 13 can be regarded as a descendant of that program committee. RSG 13 was formed in 1987 for the purpose of developing and distributing guidance regarding the use of color on electronic displays. During our first meeting, we discussed the fact that, although there is a tremendous amount of information available concerning color vision, color perception, colorimetry, and color displays, much of it relevant to display design, it is scattered across numerous texts, journals, conference proceedings, and technical reports. We decided that we could best fulfill the RSG's purpose by producing a book that consolidates and summarizes this information, emphasizing those aspects that are most applicable to display design.
High Performance Computing Systems and Applications contains the fully refereed papers from the 13th Annual Symposium on High Performance Computing, held in Kingston, Canada, in June 1999. This book presents the latest research in HPC architectures, distributed and shared memory performance, algorithms and solvers, with special sessions on atmospheric science, computational chemistry and physics. High Performance Computing Systems and Applications is suitable as a secondary text for graduate level courses, and as a reference for researchers and practitioners in industry.
Logic and Complexity looks at basic logic as it is used in Computer Science, and provides students with a logical approach to complexity theory. With plenty of exercises, this book presents classical notions of mathematical logic, such as decidability, completeness and incompleteness, as well as new ideas brought by complexity theory, such as NP-completeness, randomness and approximations, providing a better understanding of efficient algorithmic solutions to problems. Divided into three parts, it covers: - Model Theory and Recursive Functions - introducing the basic model theory of propositional logic, first-order logic, inductive definitions and second-order logic; recursive functions, Turing computability and decidability are also examined. - Descriptive Complexity - looking at the relationship between definitions of problems, queries, properties of programs and their computational complexity. - Approximation - explaining how some optimization problems and counting problems can be approximated according to their logical form. Logic is important in Computer Science, particularly for verification problems and database query languages such as SQL. Students and researchers in this field will find this book of great interest.
This book describes the life cycle process of IP cores, from specification to production, including IP modeling, verification, optimization, and protection. Various trade-offs in the design process are discussed, including those associated with many of the most common memory cores, controller IPs and system-on-chip (SoC) buses. Readers will also benefit from the author's practical coverage of new verification methodologies, such as bug localization, UVM, and scan-chain. A SoC case study is presented to compare traditional verification with the new verification methodologies. Discusses the entire life cycle process of IP cores, from specification to production, including IP modeling, verification, optimization, and protection; Provides an in-depth introduction to Verilog from both the implementation and the verification points of view; Demonstrates how to use IP in applications such as memory controllers and SoC buses; Describes a new verification methodology called bug localization; Presents a novel scan-chain methodology for RTL debugging; Enables readers to employ UVM methodology in straightforward, practical terms.
Effective compilers allow for a more efficient execution of application programs for a given computer architecture, while well-conceived architectural features can support more effective compiler optimization techniques. A well thought-out strategy of trade-offs between compilers and computer architectures is the key to the successful designing of highly efficient and effective computer systems. From embedded micro-controllers to large-scale multiprocessor systems, it is important to understand the interaction between compilers and computer architectures. The goal of the Annual Workshop on Interaction between Compilers and Computer Architectures (INTERACT) is to promote new ideas and to present recent developments in compiler techniques and computer architectures that enhance each other's capabilities and performance. Interaction Between Compilers and Computer Architectures is an updated and revised volume consisting of seven papers originally presented at the Fifth Workshop on Interaction between Compilers and Computer Architectures (INTERACT-5), which was held in conjunction with the IEEE HPCA-7 in Monterrey, Mexico in 2001. This volume explores recent developments and ideas for better integration of the interaction between compilers and computer architectures in designing modern processors and computer systems. Interaction Between Compilers and Computer Architectures is suitable as a secondary text for a graduate level course, and as a reference for researchers and practitioners in industry.
This book provides readers with a valuable reference on cyber weapons and, in particular, viruses, software and hardware Trojans. The authors discuss in detail the most dangerous computer viruses, software Trojans and spyware, models of Trojans affecting computers, methods of implementation and mechanisms of their interaction with an attacker (a hacker, an intruder or an intelligence agent). Coverage includes Trojans in electronic equipment such as telecommunication systems, computers, mobile communication systems, cars and even consumer electronics. The evolutionary path of development of hardware Trojans from "cabinets", "crates" and "boxes" to microcircuits (ICs) is also discussed. Readers will benefit from the detailed review of the major known types of hardware Trojans in chips, the principles of their design, the mechanisms of their functioning, the methods of their introduction, the means of camouflaging and detecting them, as well as methods of protection and counteraction.
Companies must confront an increasingly competitive environment with lean, flexible and market-oriented structures. Therefore companies organize themselves according to their business processes. These processes are more and more often designed, implemented and managed based on standard software, mostly ERP or SCM packages. This is the first book delivering a complete description of a business-driven implementation of standard software packages, accelerated by the use of reference models and other information models. The use of those models ensures best-quality results and speeds up the software implementation. The book discusses how companies can optimize business processes and realize strategic goals with the implementation of software like SAP R/3, Oracle, Baan or PeopleSoft. It also covers post-implementation activities. The book cites numerous case studies and outlines each step of a process-oriented implementation, including the goals, procedures and necessary methods and tools.
This book describes algorithmic methods and parallelization techniques used to design a parallel sparse direct solver specifically targeted at integrated circuit simulation problems. The authors describe a complete flow and detailed parallel algorithms of the sparse direct solver, and show how to improve its performance with simple but effective numerical techniques. The sparse direct solver techniques described can be applied to any SPICE-like integrated circuit simulator and have proven to deliver high performance in actual circuit simulation. Readers will benefit from the state-of-the-art parallel integrated circuit simulation techniques described in this book, especially the latest parallel sparse matrix solution techniques.
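For readers unfamiliar with sparse direct solvers, the fragment below is a minimal illustration using SciPy's SuperLU wrapper (a sequential library routine, not the parallel solver developed in the book; the tiny conductance matrix is invented for the example): the sparse circuit matrix is factored once, and the LU factors are then reused for repeated solves as the right-hand side changes across Newton iterations and time steps.

```python
# Sparse direct solve of a small conductance-style system G v = i, the kind
# of linear system a SPICE-like simulator solves at every iteration.
import numpy as np
from scipy.sparse import csc_matrix
from scipy.sparse.linalg import splu

G = csc_matrix(np.array([[ 2.0, -1.0,  0.0],
                         [-1.0,  3.0, -1.0],
                         [ 0.0, -1.0,  2.0]]))   # sparse circuit matrix
i = np.array([1.0, 0.0, 0.0])                    # right-hand side (currents)

lu = splu(G)        # one-time sparse LU factorization
v = lu.solve(i)     # cheap triangular solves, reusable for new right-hand sides
print(v)
```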
You may like...
International Symposium on Mathematics… by Tsuyoshi Takagi, Masato Wakayama, … (Hardcover, R1,671)
The Definitive Guide to CentOS by Peter Membrey, Tim Verhoeven, … (Paperback)
Evolutionary Multi-Agent Systems - From… by Aleksander Byrski, Marek Kisiel-Dorohinicki (Hardcover, R4,556)
Apache HTTP Server Documentation Version… by Apache Software Foundation (Hardcover, R1,795)