This book provides readers with a valuable reference on cyber weapons, in particular viruses and software and hardware Trojans. The authors discuss in detail the most dangerous computer viruses, software Trojans and spyware; models of Trojans affecting computers; methods of implementation; and the mechanisms of their interaction with an attacker, whether a hacker, an intruder or an intelligence agent. Coverage includes Trojans in electronic equipment such as telecommunication systems, computers, mobile communication systems, cars and even consumer electronics. The evolutionary path of hardware Trojans from "cabinets", "crates" and "boxes" to microcircuits (ICs) is also discussed. Readers will benefit from the detailed review of the major known types of hardware Trojans in chips, the principles of their design, the mechanisms of their functioning, methods of their introduction, means of camouflaging and detecting them, as well as methods of protection and counteraction.
Artificial Intelligence for Capital Market throws light on the application of AI/ML techniques in the financial capital markets. This book discusses the challenges posed by AI/ML techniques, which are prone to the "black box" syndrome: the difficulty of understanding the underlying dynamics behind the results these methods generate is one of the major concerns highlighted in this book. Features: showcases artificial intelligence in the financial services industry; explains credit and risk analysis; elaborates on cryptocurrencies and blockchain technology; focuses on the optimal choice of asset pricing model; introduces testing of market efficiency and forecasting in the Indian stock market. This book serves as a reference for academicians, industry professionals, traders, finance managers and stock brokers. It may also be used as a textbook for graduate-level courses in financial services and financial analytics.
This book is for researchers in computer science, mathematical logic, and philosophical logic. It shows the state of the art in current investigations of process calculi, with two major paradigms at work: linear logic and modal logic. The combination of approaches and pointers for further integration also suggests a grander vision for the field.
This book analyzes the challenges in verifying Dynamically Reconfigurable Systems (DRS) with respect to the user design and the physical implementation of such systems. The authors describe the use of a simulation-only layer to emulate the behavior of target FPGAs and accurately model the characteristic features of reconfiguration. This simulation-only layer enables readers to maintain verification productivity by abstracting away the physical details of the FPGA fabric. Two implementations of the simulation-only layer are included: Extended ReChannel is a SystemC library that can be used to check DRS designs at a high level; ReSim is a library that supports RTL simulation of a DRS reconfiguring both its logic and state. Through a number of case studies, the authors demonstrate how their approach integrates seamlessly with existing, mainstream DRS design flows and with well-established verification methodologies such as top-down modeling and coverage-driven verification.
Grid Computing: Achievements and Prospects, the 9th edited volume of the CoreGRID series, includes selected papers from the CoreGRID Integration Workshop, held in April 2008 in Heraklion, Crete, Greece. This event brought together representatives of the academic and industrial communities performing Grid research in Europe. The workshop was organized in the context of the CoreGRID Network of Excellence in order to provide a forum for the presentation and exchange of views on the latest developments in grid technology research. Grid Computing: Achievements and Prospects is designed for a professional audience of researchers and practitioners in industry. This volume is also suitable for graduate-level students in computer science.
This IMA Volume in Mathematics and its Applications, ALGORITHMS FOR PARALLEL PROCESSING, is based on the proceedings of a workshop that was an integral part of the 1996-97 IMA program on "MATHEMATICS IN HIGH-PERFORMANCE COMPUTING." The workshop brought together algorithm developers from theory, combinatorics, and scientific computing. The topics ranged over models, linear algebra, sorting, randomization, and graph algorithms and their analysis. We thank Michael T. Heath of the University of Illinois at Urbana (Computer Science), Abhiram Ranade of the Indian Institute of Technology (Computer Science and Engineering), and Robert S. Schreiber of Hewlett Packard Laboratories for their excellent work in organizing the workshop and editing the proceedings. We also take this opportunity to thank the National Science Foundation (NSF) and the Army Research Office (ARO), whose financial support made the workshop possible. Avner Friedman, Robert Gulliver. The Workshop on Algorithms for Parallel Processing was held at the IMA September 16-20, 1996; it was the first workshop of the IMA year dedicated to the mathematics of high performance computing. The workshop organizers were Abhiram Ranade of the Indian Institute of Technology, Bombay, Michael Heath of the University of Illinois, and Robert Schreiber of Hewlett Packard Laboratories. Our idea was to bring together researchers who do innovative, exciting, parallel algorithms research on a wide range of topics, and by sharing insights, problems, tools, and methods to learn something of value from one another.
This book describes the life cycle process of IP cores, from specification to production, including IP modeling, verification, optimization, and protection. Various trade-offs in the design process are discussed, including those associated with many of the most common memory cores, controller IPs and system-on-chip (SoC) buses. Readers will also benefit from the author's practical coverage of new verification methodologies, such as bug localization, UVM, and scan-chain. An SoC case study is presented to compare traditional verification with the new verification methodologies. Discusses the entire life cycle process of IP cores, from specification to production, including IP modeling, verification, optimization, and protection; introduces Verilog in depth from both the implementation and verification points of view; demonstrates how to use IP in applications such as memory controllers and SoC buses; describes a new verification methodology called bug localization; presents a novel scan-chain methodology for RTL debugging; enables readers to employ the UVM methodology in straightforward, practical terms.
Multiprocessor Execution of Logic Programs addresses the problem of efficient implementation of logic programming languages, specifically Prolog, on multiprocessor architectures. The approaches and implementations developed attempt to take full advantage of sequential implementation technology developed for Prolog (such as the WAM) while exploiting all forms of control parallelism present in logic programs, namely, or-parallelism, independent and-parallelism and dependent and-parallelism. Coverage includes a thorough survey of parallel implementation techniques and parallel systems developed for Prolog. Multiprocessor Execution of Logic Programs is recommended for people implementing parallel logic programming systems, parallel symbolic systems, parallel AI systems, and parallel theorem proving systems. It will also be useful to people who wish to learn about the implementation of parallel logic programming systems.
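As a toy illustration of the or-parallelism described above (and not of the WAM-based systems the book actually surveys), the following Python sketch explores alternative clause matches for a single goal concurrently. The predicate, facts, and function names are hypothetical and chosen only to show the idea of trying alternatives in parallel.

```python
# Toy illustration of or-parallelism: alternative candidate facts for the same
# goal are tried by separate workers, and every successful branch yields a
# binding. Conceptual only; real parallel Prolog systems manage bindings,
# backtracking and scheduling far more carefully.
from concurrent.futures import ProcessPoolExecutor

FACTS = {("parent", "tom", "bob"), ("parent", "bob", "ann"), ("parent", "bob", "pat")}

def try_clause(goal_args, fact):
    """One alternative: check whether a fact matches the goal pattern (None = unbound)."""
    _, x, y = fact
    gx, gy = goal_args
    if gx in (x, None) and gy in (y, None):
        return {"X": x, "Y": y}
    return None

def solve_parent(gx=None, gy=None):
    # Or-parallelism: each candidate fact is tried concurrently.
    with ProcessPoolExecutor() as pool:
        results = pool.map(try_clause, [(gx, gy)] * len(FACTS), sorted(FACTS))
    return [r for r in results if r is not None]

if __name__ == "__main__":
    print(solve_parent(gx="bob"))   # all children of bob, found in parallel
```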
During the last three decades, breakthroughs in computer technology have made a tremendous impact on optimization. In particular, parallel computing has made it possible to solve larger and computationally more difficult problems. The book covers recent developments in novel programming and algorithmic aspects of parallel computing as well as technical advances in parallel optimization. Each contribution is essentially expository in nature but scholarly in treatment. In addition, each chapter includes a collection of carefully selected problems. The first two chapters discuss theoretical models for parallel algorithm design and their complexity. The next chapter gives the perspective of the programmer practicing parallel algorithm development on real world platforms. Solving systems of linear equations efficiently is of great importance not only because they arise in many scientific and engineering applications but also because algorithms for solving many optimization problems need to call system solvers and subroutines (chapters four and five). Chapters six through thirteen are dedicated to optimization problems and methods. They include parallel algorithms for network problems, parallel branch and bound techniques, parallel heuristics for discrete and continuous problems, decomposition methods, parallel algorithms for variational inequality problems, parallel algorithms for stochastic programming, and neural networks. Audience: Parallel Computing in Optimization is addressed not only to researchers of mathematical programming, but to all scientists in various disciplines who use optimization methods in parallel and multiprocessing environments to model and solve problems.
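To make the decomposition theme concrete, here is a minimal Python sketch (not taken from the book) in which a block-diagonal linear system splits into independent sub-systems that are solved in parallel; the sizes and random data are illustrative assumptions.

```python
# Minimal sketch of a decomposition approach: a block-diagonal linear system
# splits into independent sub-systems that can be solved in parallel.
# Illustrative only; real parallel optimization codes must also handle coupling
# terms, load balancing and communication.
import numpy as np
from multiprocessing import Pool

def solve_block(block):
    A, b = block
    return np.linalg.solve(A, b)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    blocks = []
    for _ in range(4):                                       # four independent sub-problems
        A = rng.standard_normal((50, 50)) + 50 * np.eye(50)  # diagonally dominant, well-conditioned
        b = rng.standard_normal(50)
        blocks.append((A, b))
    with Pool() as pool:
        xs = pool.map(solve_block, blocks)                   # one block per worker
    print([x.shape for x in xs])
```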
Pipelined ADCs have seen phenomenal improvements in performance over the last few years. As such, when designing a pipelined ADC, a clear understanding of the design tradeoffs and state-of-the-art techniques is required to implement today's high-performance, low-power ADCs.
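As background for those tradeoffs, the standard textbook bound on the ideal signal-to-noise ratio of an N-bit converter with a full-scale sine input (quantization noise only) is:

```latex
\mathrm{SNR}_{\text{ideal}} \approx 6.02\,N + 1.76\ \mathrm{dB}
```

Real pipelined ADCs fall short of this bound due to thermal noise, stage gain errors and comparator offsets, which is precisely where the design tradeoffs mentioned above arise.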
Information security concerns the confidentiality, integrity, and availability of information processed by a computer system. With an emphasis on prevention, traditional information security research has focused little on the ability to survive successful attacks, which can seriously impair the integrity and availability of a system. Trusted Recovery And Defensive Information Warfare uses database trusted recovery as an example to illustrate the principles of trusted recovery in defensive information warfare. Traditional database recovery mechanisms do not address trusted recovery, except for complete rollbacks, which undo the work of benign transactions as well as malicious ones, and compensating transactions, whose utility depends on application semantics. Database trusted recovery faces a set of unique challenges. In particular, trusted database recovery is complicated mainly by (a) the presence of benign transactions that depend, directly or indirectly, on malicious transactions; and (b) the requirement by many mission-critical database applications that trusted recovery should be done on-the-fly without blocking the execution of new user transactions. Trusted Recovery And Defensive Information Warfare proposes a new model and a set of innovative algorithms for database trusted recovery. Both read-write dependency based and semantics based trusted recovery algorithms are proposed. Both static and dynamic database trusted recovery algorithms are proposed. These algorithms can typically preserve much of the work done by innocent users and can satisfy a variety of attack recovery requirements of real world database applications. Trusted Recovery And Defensive Information Warfare is suitable as a secondary text for a graduate level course in computer science, and as a reference for researchers and practitioners in information security.
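A minimal sketch of the read-write dependency idea, assuming a precomputed dependency graph and a known set of malicious transactions (both hypothetical here), is shown below; the book's algorithms go further, including on-the-fly recovery while new transactions keep executing.

```python
# Minimal sketch of dependency-based trusted recovery: starting from the
# transactions identified as malicious, follow dependency edges
# (T_j read data written by T_i) to find every affected transaction, then undo
# only those, preserving the work of independent, innocent transactions.
# Data structures are hypothetical placeholders, not the book's model.
from collections import deque

def affected_transactions(depends_on_me, malicious):
    """depends_on_me[t] = transactions that read values written by t."""
    to_undo = set(malicious)
    frontier = deque(malicious)
    while frontier:
        t = frontier.popleft()
        for dependent in depends_on_me.get(t, ()):
            if dependent not in to_undo:
                to_undo.add(dependent)
                frontier.append(dependent)
    return to_undo

# Example: T2 read data written by malicious T1; T3 read from T2; T4 is unrelated.
deps = {"T1": ["T2"], "T2": ["T3"], "T4": []}
print(affected_transactions(deps, {"T1"}))   # {'T1', 'T2', 'T3'}  (T4's work is kept)
```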
This volume reports new developments on work in the quantum flux parametron (QFP) project. It makes complete a series on Josephson supercomputers, which includes four earlier volumes, also published by World Scientific. QFP technology has great potential especially in the design of computer architecture. It is regarded as being able to go beyond the horizon of current technology, and is a leading direction for the advancement of computer technology in the next decade.
This book presents the theory behind software-implemented hardware fault tolerance, as well as the practical aspects needed to put it to work on real examples. By evaluating accurately the advantages and disadvantages of the already available approaches, the book provides a guide to developers willing to adopt software-implemented hardware fault tolerance in their applications. Moreover, the book identifies open issues for researchers willing to improve the already available techniques.
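As one illustration of the general idea (not a technique taken verbatim from the book), the sketch below shows time redundancy in software: a computation is executed twice and the results are compared to detect a transient hardware fault. The decorator and function names are hypothetical.

```python
# Illustration of one classic software-implemented fault tolerance technique:
# time redundancy, i.e. executing a computation twice and comparing the results
# to detect transient hardware faults. A sketch only; real approaches also use
# data/instruction duplication, control-flow checking, and similar mechanisms.
class FaultDetected(Exception):
    pass

def duplicated(func):
    def wrapper(*args, **kwargs):
        first = func(*args, **kwargs)
        second = func(*args, **kwargs)       # re-execute the same computation
        if first != second:                  # a mismatch hints at a transient fault
            raise FaultDetected(f"{func.__name__}: {first!r} != {second!r}")
        return first
    return wrapper

@duplicated
def checksum(data):
    return sum(data) & 0xFFFF

print(checksum([10, 20, 30]))   # normally both runs agree and the result is returned
```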
Logic circuits are becoming increasingly susceptible to probabilistic behavior caused by external radiation and process variation. In addition, inherently probabilistic quantum- and nano-technologies are on the horizon as we approach the limits of CMOS scaling. Ensuring the reliability of such circuits despite their probabilistic behavior is a key challenge in IC design, one that necessitates a fundamental, probabilistic reformulation of synthesis and testing techniques. This monograph presents techniques for analyzing, designing, and testing logic circuits with probabilistic behavior.
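A toy example of the kind of probabilistic analysis this reformulation calls for, under assumed (hypothetical) error parameters, is sketched below for a single 2-input AND gate whose inputs and output may flip independently.

```python
# Toy probabilistic analysis of a 2-input AND gate: inputs arrive with
# independent error probabilities p1, p2, and the gate output itself flips with
# probability eps (e.g. due to radiation). The code enumerates input-flip cases
# to obtain the probability that the observed output is wrong. Illustrative
# only; the monograph develops far more general analysis, synthesis and test methods.
from itertools import product

def and_output_error(p1, p2, eps, inputs=(1, 1)):
    a, b = inputs
    correct = a & b
    p_wrong = 0.0
    for fa, fb in product([0, 1], repeat=2):          # which inputs got flipped
        prob = (p1 if fa else 1 - p1) * (p2 if fb else 1 - p2)
        out = (a ^ fa) & (b ^ fb)
        # the observed output is wrong if the gate result (before a possible output
        # flip) matches `correct` but then flips, or differs and does not flip
        p_wrong += prob * (eps if out == correct else 1 - eps)
    return p_wrong

print(and_output_error(p1=0.01, p2=0.01, eps=0.001))
```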
This state-of-the-art survey gives a systematic presentation of recent advances in the design and validation of computer architectures. Based on advanced research ideas and approaches, and written by eminent researchers in the field, seven chapters cover the whole range from computer aided high-level design of VLSI circuits and systems to layout and testable design, including modeling and synthesis of behavior, of control, and of dataflow, cell based logic optimization, machine assisted verification, and virtual machine design. The chapters presuppose only basic familiarity with computer architecture. They are self-contained and lead the reader gently and informatively to the forefront of current research. A special feature of the book is the comprehensive range of architecture design and validation topics covered, giving the reader a clear view of the problems and of advanced techniques for their solution.
This book describes algorithmic methods and parallelization techniques to design a parallel sparse direct solver which is specifically targeted at integrated circuit simulation problems. The authors describe a complete flow and detailed parallel algorithms of the sparse direct solver. They also show how to improve the performance by simple but effective numerical techniques. The sparse direct solver techniques described can be applied to any SPICE-like integrated circuit simulator and have proven to deliver high performance in actual circuit simulation. Readers will benefit from the state-of-the-art parallel integrated circuit simulation techniques described in this book, especially the latest parallel sparse matrix solution techniques.
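For orientation only, the snippet below shows the generic factor-then-solve pattern that SPICE-like simulators rely on, using SciPy's sequential SuperLU wrapper on a tiny nodal conductance matrix; it is not the authors' parallel solver, and the matrix values are made up.

```python
# Generic sparse direct solve of a small circuit-like (nodal conductance) system
# using SciPy's SuperLU wrapper: factorize once, then solve, as SPICE-like
# simulators do repeatedly at each Newton iteration. Not the parallel solver
# described in the book.
import numpy as np
from scipy.sparse import csc_matrix
from scipy.sparse.linalg import splu

# Tiny 3-node conductance matrix G and current vector i  (G v = i)
G = csc_matrix(np.array([[ 2.0, -1.0,  0.0],
                         [-1.0,  3.0, -1.0],
                         [ 0.0, -1.0,  2.0]]))
i = np.array([1.0, 0.0, 0.0])

lu = splu(G)          # symbolic + numeric LU factorization (reusable across solves)
v = lu.solve(i)       # node voltages
print(v)
```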
This book discusses various aspects of cloud computing, in which trust and fault-tolerance models are included in a multilayered cloud architecture. The authors present a variety of trust and fault models used in the cloud, comparing them based on their functionality and the layer of the cloud to which they correspond. Various methods are discussed that can improve the performance of cloud architectures in terms of trust and fault tolerance, while providing better performance and quality of service to users. The discussion also includes new algorithms that overcome the drawbacks of existing methods, using a performance matrix for each functionality. This book provides readers with an overview of cloud computing and of how trust and faults in cloud datacenters affect the performance and quality of service assured to users. Discusses fundamental issues related to trust and fault tolerance in cloud computing; describes trust and fault management techniques in a multilayered cloud architecture to improve the security, reliability and performance of the system; includes methods to enhance power efficiency and network efficiency using trust- and fault-based resource allocation.
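To make "trust- and fault-based resource allocation" tangible, here is a hypothetical sketch (weights, fields and scoring are assumptions, not a model from the book) in which each candidate node is scored by its trust value minus a penalty for its observed failure rate.

```python
# Hypothetical illustration of trust- and fault-aware placement: each candidate
# node gets a score that rewards its trust value and penalizes its observed
# failure rate, and the request goes to the best-scoring node with enough
# capacity. Weights and fields are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    trust: float         # 0..1, e.g. derived from past behaviour / reputation
    failure_rate: float  # 0..1, observed fault frequency
    free_cpus: int

def place(nodes, cpus_needed, w_trust=0.7, w_fault=0.3):
    candidates = [n for n in nodes if n.free_cpus >= cpus_needed]
    if not candidates:
        return None
    return max(candidates, key=lambda n: w_trust * n.trust - w_fault * n.failure_rate)

nodes = [Node("n1", trust=0.90, failure_rate=0.10, free_cpus=8),
         Node("n2", trust=0.60, failure_rate=0.02, free_cpus=16),
         Node("n3", trust=0.95, failure_rate=0.30, free_cpus=4)]
print(place(nodes, cpus_needed=4).name)   # picks the most trustworthy, reliable node
```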
Motivation: Modern enterprises rely on database management systems (DBMS) to collect, store and manage corporate data, which is considered a strategic corporate resource. Recently, with the proliferation of personal computers and departmental computing, the trend has been towards the decentralization and distribution of the computing infrastructure, with autonomy and responsibility for data now residing at the departmental and workgroup level of the organization. Users want their data delivered to their desktops, allowing them to incorporate data into their personal databases, spreadsheets, word processing documents, and most importantly, into their daily tasks and activities. They want to be able to share their information while retaining control over its access and distribution. There are also pressures from corporate leaders who wish to use information technology as a strategic resource in offering specialized value-added services to customers. Database technology is being used to manage the data associated with corporate processes and activities. Increasingly, the data being managed are not simply formatted tables in relational databases, but all types of objects, including unstructured text, images, audio, and video. Thus, database management providers are being asked to extend the capabilities of DBMS to include object-relational models as well as full object-oriented database management systems.
This book is the product of Research Study Group (RSG) 13 on "Human Engineering Evaluation on the Use of Colour in Electronic Displays," of Panel 8, "Defence Applications of Human and Biomedical Sciences," of the NATO Defence Research Group. RSG 13 was chaired by Heino Widdel (Germany) and consisted of Jeffrey Grossman (United States), Jean-Pierre Menu (France), Giampaolo Noja (Italy, point of contact), David Post (United States), and Jan Walraven (Netherlands). Initially, Christopher Gibson (United Kingdom) and Sharon McFaddon (Canada) participated also. Most of these representatives served previously on the NATO program committee that produced Proceedings of a Workshop on Colour Coded vs. Monochrome Displays (edited by Christopher Gibson and published by the Royal Aircraft Establishment, Farnborough, England) in 1984. RSG 13 can be regarded as a descendant of that program committee. RSG 13 was formed in 1987 for the purpose of developing and distributing guidance regarding the use of color on electronic displays. During our first meeting, we discussed the fact that, although there is a tremendous amount of information available concerning color vision, color perception, colorimetry, and color displays, much of it relevant to display design, it is scattered across numerous texts, journals, conference proceedings, and technical reports. We decided that we could fulfill the RSG's purpose best by producing a book that consolidates and summarizes this information, emphasizing those aspects that are most applicable to display design.
This comprehensive textbook on the field programmable gate array (FPGA) covers its history, fundamental knowledge, architectures, device technologies, computer-aided design technologies, design tools, examples of application, and future trends. Programmable logic devices represented by FPGAs have developed rapidly in recent years and have become key electronic devices used in most IT products. This book provides both complete introductions suitable for students and beginners, and high-level techniques useful for engineers and researchers in this field. Developed differently from conventional integrated circuits, the FPGA has unique structures, design methodologies, and application techniques. By allowing programming by users, the device can dramatically reduce the rising cost of development in advanced semiconductor chips. The FPGA is now driving the most advanced semiconductor processes and is an all-in-one platform combining memory, CPUs, and various peripheral interfaces. This book introduces the FPGA from various aspects for readers of different levels. Novice learners can acquire a fundamental knowledge of the FPGA, including its history, from Chapter 1, the first half of Chapter 2, and Chapter 4. Professionals who are already familiar with the device will gain a deeper understanding of the structures and design methodologies from Chapters 3 and 5. Chapters 6-8 also provide advanced techniques and cutting-edge applications and trends useful for professionals. Although the first parts are mainly suitable for students, the advanced sections of the book will be valuable for professionals in acquiring an in-depth understanding of the FPGA to maximize the performance of the device.
High Performance Computing Systems and Applications contains the fully refereed papers from the 13th Annual Symposium on High Performance Computing, held in Kingston, Canada, in June 1999. This book presents the latest research in HPC architectures, distributed and shared memory performance, algorithms and solvers, with special sessions on atmospheric science, computational chemistry and physics. High Performance Computing Systems and Applications is suitable as a secondary text for graduate level courses, and as a reference for researchers and practitioners in industry.
Logic and Complexity looks at basic logic as it is used in Computer Science, and provides students with a logical approach to Complexity theory. With plenty of exercises, this book presents classical notions of mathematical logic, such as decidability, completeness and incompleteness, as well as new ideas brought by complexity theory such as NP-completeness, randomness and approximations, providing a better understanding of efficient algorithmic solutions to problems. Divided into three parts, it covers: - Model Theory and Recursive Functions - introducing the basic model theory of propositional, 1st order, inductive definitions and 2nd order logic. Recursive functions, Turing computability and decidability are also examined. - Descriptive Complexity - looking at the relationship between definitions of problems, queries, properties of programs and their computational complexity. - Approximation - explaining how some optimization problems and counting problems can be approximated according to their logical form. Logic is important in Computer Science, particularly for verification problems and database query languages such as SQL. Students and researchers in this field will find this book of great interest.
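As a small, self-contained illustration of the NP-completeness theme (not an excerpt from the book), the sketch below contrasts the polynomial-time verification of a satisfying assignment for a CNF formula with the exponential brute-force search for one; the formula and names are illustrative.

```python
# Small illustration of why SAT is in NP: verifying a proposed certificate
# (a truth assignment) against a CNF formula is a quick polynomial-time check,
# while finding one by exhaustive search may inspect 2^n assignments.
from itertools import product

# CNF as a list of clauses; a literal is (variable, polarity)
cnf = [[("x", True), ("y", True)],        # (x OR y)
       [("x", False), ("z", True)],       # (NOT x OR z)
       [("y", False), ("z", False)]]      # (NOT y OR NOT z)

def verify(cnf, assignment):
    """Polynomial-time check of a certificate: every clause has a true literal."""
    return all(any(assignment[v] == pol for v, pol in clause) for clause in cnf)

def brute_force(cnf, variables):
    """Exhaustive search: up to 2^n candidate assignments."""
    for values in product([False, True], repeat=len(variables)):
        assignment = dict(zip(variables, values))
        if verify(cnf, assignment):
            return assignment
    return None

print(brute_force(cnf, ["x", "y", "z"]))   # a satisfying assignment, if one exists
```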
Effective compilers allow for a more efficient execution of application programs for a given computer architecture, while well-conceived architectural features can support more effective compiler optimization techniques. A well thought-out strategy of trade-offs between compilers and computer architectures is the key to the successful designing of highly efficient and effective computer systems. From embedded micro-controllers to large-scale multiprocessor systems, it is important to understand the interaction between compilers and computer architectures. The goal of the Annual Workshop on Interaction between Compilers and Computer Architectures (INTERACT) is to promote new ideas and to present recent developments in compiler techniques and computer architectures that enhance each other's capabilities and performance. Interaction Between Compilers and Computer Architectures is an updated and revised volume consisting of seven papers originally presented at the Fifth Workshop on Interaction between Compilers and Computer Architectures (INTERACT-5), which was held in conjunction with the IEEE HPCA-7 in Monterrey, Mexico in 2001. This volume explores recent developments and ideas for better integration of the interaction between compilers and computer architectures in designing modern processors and computer systems. Interaction Between Compilers and Computer Architectures is suitable as a secondary text for a graduate level course, and as a reference for researchers and practitioners in industry.
This book describes a circuit architecture for converting real analog signals into a digital format, suitable for digital signal processors. This architecture, referred to as multi-stage noise-shaping (MASH) Continuous-Time Sigma-Delta Modulators (CT-ΣΔMs), has the potential to provide better digital data quality and achieve better data rate conversion with lower power consumption. The authors not only cover MASH continuous-time sigma-delta modulator fundamentals, but also provide a literature review that will allow students, professors, and professionals to catch up on the latest developments in related technology.
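For readers new to the topic, the sketch below simulates a plain first-order, discrete-time sigma-delta loop, far simpler than the MASH continuous-time architectures the book covers, just to show the noise-shaping principle; the signal parameters are arbitrary assumptions.

```python
# Very simplified first-order, discrete-time sigma-delta modulator, shown only
# to illustrate noise shaping: the 1-bit quantizer output is fed back and the
# integrator accumulates the error, pushing quantization noise to high
# frequencies. The MASH continuous-time architectures in the book cascade
# multiple stages with continuous-time loop filters.
import numpy as np

def sigma_delta_1st_order(x):
    integrator, y_prev = 0.0, 0.0
    bits = np.empty_like(x)
    for n, sample in enumerate(x):
        integrator += sample - y_prev            # accumulate error vs. the fed-back bit
        y = 1.0 if integrator >= 0 else -1.0     # 1-bit quantizer
        bits[n] = y
        y_prev = y
    return bits

fs, f = 1_000_000, 1_000                          # heavily oversampled 1 kHz sine
t = np.arange(4096) / fs
bits = sigma_delta_1st_order(0.5 * np.sin(2 * np.pi * f * t))
print(bits[:16], bits.mean())                     # bitstream average tracks the input mean
```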
Adiabatic logic is a potential successor to static CMOS circuit design when it comes to ultra-low-power energy consumption. Future developments such as the evolutionary shrinking of the minimum feature size, as well as revolutionary novel transistor concepts, will change the gate-level savings gained by adiabatic logic. In addition, the impact of worsening degradation effects has to be considered in the design of adiabatic circuits. The impact of these technology trends on the figures of merit of adiabatic logic, energy saving potential and optimum operating frequency, is investigated, as well as degradation-related issues. Adiabatic logic benefits from future devices, is not susceptible to Hot Carrier Injection, and shows less impact of Bias Temperature Instability than static CMOS circuits. Major interest also lies in the efficient generation of the applied power-clock signal. This oscillating power supply can be used to save energy in short idle times by disconnecting circuits. An efficient way to generate the power-clock is by means of the synchronous 2N2P LC oscillator, which is also robust with respect to pattern-induced capacitive variations. An easy-to-implement but powerful power-clock gating supplement is proposed by gating the synchronization signals. Diverse implementations to shut down the system are presented and rated for their applicability and for other aspects such as energy reduction capability and data retention. Advantageous usage of adiabatic logic requires compact and efficient arithmetic structures. A broad variety of adder structures and a Coordinate Rotation Digital Computer (CORDIC) are compared and rated according to energy consumption and area usage, and the resulting energy saving potential against static CMOS proves the ultra-low-power capability of adiabatic logic. In the end, a new circuit topology has to compete with static CMOS also in productivity. On a 130 nm test chip, a large-scale test vehicle containing an FIR filter was implemented in adiabatic logic using a standard, library-based design flow, fabricated, measured and compared to simulations of a static CMOS counterpart, with measured saving factors consistent with the values obtained by simulation. This leads to the conclusion that adiabatic logic is ready for productive design due to its compatibility not only with CMOS technology, but also with electronic design automation (EDA) tools developed for static CMOS system design.
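The well-known energy comparison behind this approach: charging a node capacitance C through an effective resistance R with a slow power-clock ramp of duration T >> RC dissipates only a fraction of the energy lost in conventional abrupt CMOS switching, which is also why the power-clock frequency trades off directly against the savings:

```latex
E_{\text{adiabatic}} \approx \frac{RC}{T}\, C V_{dd}^{2}
\qquad \text{vs.} \qquad
E_{\text{CMOS}} = \tfrac{1}{2}\, C V_{dd}^{2}
```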