Neural network and artificial intelligence algorithms and computing have increased not only in complexity but also in the number of applications. This in turn has created a tremendous need for computational power that conventional scalar processors may not be able to deliver efficiently, since those processors are oriented towards numeric and data manipulation. The neurocomputing requirements (such as non-programming and learning) and the artificial intelligence requirements (such as symbolic manipulation and knowledge representation) impose a different set of constraints and demands on the computer architectures and organizations for these applications. Research and development of new computer architectures and VLSI circuits for neural networks and artificial intelligence have increased in order to meet the new performance requirements. This book presents novel approaches and trends in VLSI implementations of machines for these applications. Papers have been drawn from a number of research communities; the subjects span analog and digital VLSI design, computer design, computer architectures, neurocomputing and artificial intelligence techniques. The book is organized into four subject areas that cover its two major categories: analog circuits for neural networks, digital implementations of neural networks, neural networks on multiprocessor systems and applications, and VLSI machines for artificial intelligence. The topics covered in each area are briefly introduced below.
Now in a thoroughly revised second edition, this practical guide provides a comprehensive overview of the SoC design process. It explains end-to-end system-on-chip (SoC) design processes and includes updated coverage of design methodology, the design environment, EDA tool flow, design decisions, choice of design intellectual property (IP) cores, sign-off procedures, and design infrastructure requirements. The second edition provides new information on SoC trends and updated design cases. Coverage also includes critical advanced guidance on the latest UPF-based low-power design flow, the challenges of deep-submicron technologies, and 3D design fundamentals, which will prepare readers for the challenges of working at the nanotechnology scale. A Practical Approach to VLSI System on Chip (SoC) Design: A Comprehensive Guide, Second Edition provides engineers who aspire to become VLSI designers with all the necessary information and details of EDA tools. It will be a valuable professional reference for those working on VLSI design and verification in complex SoC designs.
This book describes the most recent techniques for turbo decoder implementation, especially for 4G and beyond-4G applications. The authors reveal techniques for the design of high-throughput decoders for future telecommunication systems, enabling designers to reduce hardware cost and shorten processing time. Coverage includes an explanation of VLSI implementation of the turbo decoder, from basic functional units to advanced parallel architecture. The authors discuss both hardware architecture techniques and experimental results, showing the variations in area/throughput/performance with respect to several techniques. This book also illustrates turbo decoders for the 3GPP-LTE/LTE-A and IEEE 802.16e/m standards, which provide a low-complexity but high-flexibility circuit structure to support these standards in multiple parallel modes. Moreover, some solutions that overcome the limitation on the speedup of parallel architectures by modifying the turbo codec are presented here. Compared to traditional designs, these methods can yield up to a 33% gain in throughput with similar performance and similar cost.
Dynamic Reconfigurable Architectures and Transparent Optimization Techniques presents a detailed study on new techniques to cope with the aforementioned limitations. First, the characteristics of reconfigurable systems are discussed in detail, and a large number of case studies are shown. Then, a detailed analysis of several benchmarks demonstrates that such architectures need to attack a diverse range of applications with very different behaviours, besides supporting code compatibility. This requires the use of dynamic optimization techniques, such as binary translation and trace reuse. Finally, works that combine both reconfigurable systems and dynamic techniques are discussed, and a quantitative analysis of one of them, the DIM architecture, is presented.
This is "the Word" -- one man's word, certainly -- about the art (and artifice) of the state of our computer-centric existence. And considering that the "one man" is Neal Stephenson, "the hacker Hemingway" (Newsweek) -- acclaimed novelist, pragmatist, seer, nerd-friendly philosopher, and nationally bestselling author of groundbreaking literary works (Snow Crash, Cryptonomicon, etc., etc.) -- the word is well worth hearing. Mostly well-reasoned examination and partial rant, Stephenson's In the Beginning... was the Command Line is a thoughtful, irreverent, hilarious treatise on the cyber-culture past and present; on operating system tyrannies and downloaded popular revolutions; on the Internet, Disney World, Big Bangs, not to mention the meaning of life itself.
This book provides a comprehensive coverage of hardware security concepts, derived from the unique characteristics of emerging logic and memory devices and related architectures. The primary focus is on mapping device-specific properties, such as multi-functionality, runtime polymorphism, intrinsic entropy, nonlinearity, ease of heterogeneous integration, and tamper-resilience to the corresponding security primitives that they help realize, such as static and dynamic camouflaging, true random number generation, physically unclonable functions, secure heterogeneous and large-scale systems, and tamper-proof memories. The authors discuss several device technologies offering the desired properties (including spintronics switches, memristors, silicon nanowire transistors and ferroelectric devices) for such security primitives and schemes, while also providing a detailed case study for each of the outlined security applications. Overall, the book gives a holistic perspective of how the promising properties found in emerging devices, which are not readily afforded by traditional CMOS devices and systems, can help advance the field of hardware security.
The one instruction set computer (OISC) is the ultimate reduced instruction set computer (RISC). In OISC, the instruction set consists of only one instruction, and then, by composition, all other necessary instructions are synthesized. This is an approach completely opposite to that of a complex instruction set computer (CISC), which incorporates complex instructions as microprograms within the processor. Computer Architecture: A Minimalist Perspective examines computer architecture, computability theory, and the history of computers from the perspective of one instruction set computing - a novel approach in which the computer supports only one, simple instruction. This bold new paradigm offers significant promise in biological, chemical, optical, and molecular-scale computers. It provides a comprehensive study of computer architecture using computability theory as a base.
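The composition idea can be made concrete with subleq ("subtract and branch if less than or equal to zero"), the best-known single instruction; the interpreter and the synthesized ADD below are an illustrative sketch, not code taken from the book.

```python
# Minimal subleq interpreter: the classic one-instruction computer.
# A program is a flat list of address triples (a, b, c) meaning:
#   mem[b] -= mem[a]; jump to c if the result is <= 0, else fall through.
def run_subleq(mem, pc=0):
    while 0 <= pc < len(mem):
        a, b, c = mem[pc], mem[pc + 1], mem[pc + 2]
        mem[b] -= mem[a]
        pc = c if mem[b] <= 0 else pc + 3
    return mem

# ADD synthesized by composition: dst = x + y using only subleq triples.
# Data cells: x at 12, y at 13, dst at 14, scratch Z at 15 (starts at 0).
prog = [
    12, 15,  3,   # Z -= x         (Z = -x)
    13, 15,  6,   # Z -= y         (Z = -x - y)
    15, 14,  9,   # dst -= Z       (dst = x + y, since dst starts at 0)
    15, 15, -1,   # Z -= Z = 0; jumping to -1 halts the machine
     7,  5,  0,  0,   # data: x=7, y=5, dst=0, Z=0
]
run_subleq(prog)
print(prog[14])   # 12
```

Every other arithmetic and control-flow instruction can be built up the same way, which is exactly the sense in which the book's minimalist machine remains computationally universal.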
The primary goal of The Design and Implementation of Low-Power CMOS Radio Receivers is to explore techniques for implementing wireless receivers in an inexpensive complementary metal-oxide-semiconductor (CMOS) technology. Although the techniques developed apply somewhat generally across many classes of receivers, the specific focus of this work is on the Global Positioning System (GPS). Because GPS provides a convenient vehicle for examining CMOS receivers, a brief overview of the GPS system and its implications for consumer electronics is presented. The GPS system comprises 24 satellites in low earth orbit that continuously broadcast their position and local time. Through satellite range measurements, a receiver can determine its absolute position and time to within about 100m anywhere on Earth, as long as four satellites are within view. The deployment of this satellite network was completed in 1994 and, as a result, consumer markets for GPS navigation capabilities are beginning to blossom. Examples include automotive or maritime navigation, intelligent hand-off algorithms in cellular telephony, and cellular emergency services, to name a few. Of particular interest in the context of this book are embedded GPS applications where a GPS receiver is just one component of a larger system. Widespread proliferation of embedded GPS capability will require receivers that are compact, cheap and low-power. The Design and Implementation of Low-Power CMOS Radio Receivers will be of interest to professional radio engineers, circuit designers, professors and students engaged in integrated radio research and other researchers who work in the radio field.
In recent years, tremendous research has been devoted to the design of database systems for real-time applications, called real-time database systems (RTDBS), where transactions are associated with deadlines on their completion times, and some of the data objects in the database are associated with temporal constraints on their validity. Examples of important applications of RTDBS include stock trading systems, navigation systems and computer-integrated manufacturing. Different transaction scheduling algorithms and concurrency control protocols have been proposed to satisfy transaction timing and data temporal constraints. Other design issues important to the performance of a RTDBS are buffer management, index accesses and I/O scheduling. Real-Time Database Systems: Architecture and Techniques summarizes important research results in this area, and serves as an excellent reference for practitioners, researchers and educators of real-time systems and database systems.
This text offers complete information on the latest developments in the emerging technology of polymer thick film--from the mechanics to applications in telephones, radio and television, and smart cards. Readers discover how specific markets for PTF are growing and changing and how construction schemes can alter and improve performance. Each aspect of PTF technology is discussed in detail.
High Performance Computing Systems and Applications contains a selection of fully refereed papers presented at the 14th International Conference on High Performance Computing Systems and Applications held in Victoria, Canada, in June 2000. This book presents the latest research in HPC Systems and Applications, including distributed systems and architecture, numerical methods and simulation, network algorithms and protocols, computer architecture, distributed memory, and parallel algorithms. It also covers such topics as applications in astrophysics and space physics, cluster computing, numerical simulations for fluid dynamics, electromagnetics and crystal growth, networks and the Grid, and biology and Monte Carlo techniques. High Performance Computing Systems and Applications is suitable as a secondary text for graduate level courses, and as a reference for researchers and practitioners in industry.
This book describes a specification, microarchitecture, VHDL implementation and evaluation of a SPARC v8 CPU with fine-grain multi-threading, called micro-threading. The CPU, named UTLEON3, is an alternative platform for exploring CPU multi-threading that is compatible with the industry-standard GRLIB package. The processor microarchitecture was designed to map the data-flow scheme efficiently onto the classical von Neumann pipelined processing used in common processors, while retaining full binary compatibility with existing legacy programs.
With the rapid development of big data, it is necessary to transfer the massive data generated by end devices to the cloud under the traditional cloud computing model. However, the delays caused by massive data transmission no longer meet the requirements of various real-time mobile services. Therefore, edge computing has recently been developed as a new computing paradigm that can collect and process data at the edge of the network, which brings significant convenience to solving problems such as delay, bandwidth, and off-loading in the traditional cloud computing paradigm. By extending the functions of the cloud to the edge of the network, edge computing provides effective data access control, computation, processing and storage for end devices. Furthermore, edge computing optimizes the seamless connection from the cloud to devices, which is considered the foundation for realizing the interconnection of everything. However, due to the open features of edge computing, such as content awareness, real-time computing and parallel processing, the existing privacy problems in the edge computing environment have become more prominent. The access to multiple categories and large numbers of devices in edge computing also creates new privacy issues. In this book, we discuss the research background and current research progress of privacy protection in edge computing. In the first chapter, the state-of-the-art research in edge computing is reviewed. The second chapter discusses the data privacy issue and attack models in edge computing. Three categories of privacy-preserving schemes are further introduced in the following chapters. Chapter three introduces the context-aware privacy-preserving scheme. Chapter four further introduces a location-aware differential privacy-preserving scheme. Chapter five presents a new blockchain-based decentralized privacy-preserving scheme in edge computing.
Chapter six summarizes this monograph and proposes future research directions. In summary, this book introduces the following techniques in edge computing: 1) an MDP-based privacy-preserving model that addresses context-aware data privacy in the hierarchical edge computing paradigm; 2) an SDN-based clustering method that addresses the location-aware privacy problems in edge computing; 3) a novel blockchain-based decentralized privacy-preserving scheme in edge computing. These techniques enable the rapid development of privacy preservation in edge computing.
This book offers a straightforward guide to the fundamental work of governing bodies and the people who serve on them. The aim of the book is to help every member serving on a governing body understand and improve their contribution to the entity and governing body they serve. The book is rooted in research, including five years' work by the author as a Research Fellow of Nuffield College, Oxford.
Align IT projects strategically to achieve business goals and objectives. Project management and leadership to seize opportunities and manage threats. Build and follow a roadmap to implement strategic governance. Assess and improve project management capabilities. Includes templates and case studies.
Organizations cannot continue to blindly accept and introduce components into information systems without studying the effectiveness, feasibility and efficiency of the individual components of those systems. Information systems may be the only business area where it is automatically assumed that the latest, greatest and most powerful component is the right one for the organization; in fact, such components must be managed and developed like any other resource in organizations today. Human Computer Interaction Development and Management contains the most recent research articles concerning the management and development of information systems, so that organizations can effectively manage information systems growth and development. Not only must hardware, software, data, information, and networks be managed; people must be managed as well. Humans must be trained to use information systems, and systems must be developed so humans can use them as efficiently and effectively as possible.
An introduction to operating systems, covering processes, process states, synchronization, programming methods of synchronization, main memory, secondary storage and file systems. Although the book is short, it covers all the essentials and opens up synchronization through the producer-consumer metaphor that other authors have employed. The difference is that the concept is presented without the programming normally involved with it: the thinking is that using a warehouse, whose size is the shared variable in synchronization terms, without the programming will aid understanding of this difficult concept. The book also covers main memory and secondary storage with file systems, and concludes with a brief discussion of the client-server paradigm and the way in which client-server design impacts the World-Wide Web.
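For readers who later want to see the warehouse metaphor as code, it maps naturally onto a bounded buffer. The sketch below is illustrative and not from the book: Python's queue.Queue plays the warehouse, its maxsize is the warehouse size, and the blocking put/get calls hide the locking that textbook solutions program by hand.

```python
import queue
import threading

# The warehouse metaphor as a bounded buffer: capacity 3 is the shared
# "size" variable. The producer blocks when the warehouse is full; the
# consumer blocks when it is empty. queue.Queue does the synchronization.
warehouse = queue.Queue(maxsize=3)
results = []

def producer():
    for item in range(6):
        warehouse.put(item)              # blocks while the warehouse is full

def consumer():
    for _ in range(6):
        results.append(warehouse.get())  # blocks while the warehouse is empty

threads = [threading.Thread(target=producer), threading.Thread(target=consumer)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(results)   # [0, 1, 2, 3, 4, 5]
```

With a single producer and single consumer the FIFO queue delivers items in order, which is why no explicit condition variables appear in the example.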
This textbook aims to help the reader develop an in-depth understanding of logical reasoning and gain knowledge of the theory of computation. The book combines theoretical teaching and practical exercises; the latter are realised in Isabelle/HOL, a modern theorem prover, and PAT, an industry-scale model checker. I also give entry-level tutorials on the two tools to help the reader get started. By the end of the book, the reader should be proficient in both. Content-wise, this book focuses on the syntax, semantics and proof theory of various logics, as well as automata theory, formal languages, computability and complexity. The final chapter closes the gap with a discussion of the insight that links logic with computation. This book is written for a high-level undergraduate course or a Master's course. The hybrid skill set of practical theorem proving and model checking should serve readers well should they pursue a research or engineering career in formal methods.
This book intends to unite studies in different fields related to the development of the relations between logic, law and legal reasoning. Combining historical and philosophical studies on legal reasoning in Civil and Common Law, and on the often neglected Arabic and Talmudic traditions of jurisprudence, this project unites these areas with recent technical developments in computer science. This combination has resulted in renewed interest in deontic logic and the logic of norms that stems from the interaction between artificial intelligence and law and their applications to these areas of logic. The book also aims to motivate and launch a more intense interaction between the historical and philosophical work of Arabic, Talmudic and European jurisprudence. The publication discusses new insights in the interaction between logic and law, and more precisely the study of different answers to the question: what role does logic play in legal reasoning? Perspectives range from foundational studies (such as logical principles and frameworks) to applications, and include historical perspectives.
This book is intended for senior undergraduate and graduate students as well as practicing engineers who are involved in design and analysis of radio frequency (RF) circuits. Fully-solved, tutorial-like examples are used to put into practice major topics and to understand the underlying principles of the main sub-circuits required to design an RF transceiver and the whole communication system. Starting with a review of principles in electromagnetic (EM) transmission and signal propagation, through detailed practical analysis of RF amplifier, mixer, modulator, demodulator, and oscillator circuit topologies, as well as basics of communication system theory, this book systematically covers the most relevant aspects in a way that is suitable for a single-semester university level course. Readers will benefit from the author's sharp focus on radio receiver design, demonstrated through hundreds of fully-solved, realistic examples, as opposed to texts that cover many aspects of electronics and electromagnetics without making the required connection to wireless communication circuit design. Offers readers a complete, self-sufficient tutorial-style textbook; Includes all relevant topics required to study and design an RF receiver in a consistent, coherent way with appropriate depth for a one-semester course; Uses hundreds of fully-solved, realistic examples of radio design technology to demonstrate concepts; Explains necessary physical/mathematical concepts and their interrelationship.
This book covers several aspects of the operational amplifier and includes theoretical explanations with simplified expressions and derivations. The book is designed to serve as a textbook for courses offered to undergraduate and postgraduate students enrolled in electronics and communication engineering. The topics included are the DC amplifier, AC/DC analysis of the DC amplifier, relevant derivations, a block diagram of the operational amplifier, positive and negative feedback, amplitude modulator, current-to-voltage and voltage-to-current converters, DAC and ADC, integrator, differentiator, active filters, comparators, sinusoidal and non-sinusoidal waveform generators, phase lock loop (PLL), etc. This book contains two parts: Sections A and B. Section A includes theory, methodology, circuit design and derivations. Section B explains the design and study of experiments for laboratory practice. Laboratory experiments enable students to perform a practical activity that demonstrates applications of the operational amplifier. A simplified description of the circuits, working principle and practical approach towards understanding the concept is a unique feature of this book. Simple methods, easy derivation steps and lucid presentation are other traits of this book for readers who do not have any background in electronics. This book is student-centric towards the basics of the operational amplifier and its applications. The detailed coverage and pedagogical tools make this an ideal textbook for students and researchers enrolled in senior undergraduate and beginning postgraduate electronics and communication engineering courses.
This book provides an overview of the emerging smart connected world, and discusses the roles and the usage of underlying semantic computing and Internet-of-Things (IoT) technologies. The book comprises ten chapters overall, grouped in two parts. Part I "Smart Connected World: Overview and Technologies" consists of seven chapters and provides a holistic overview of the smart connected world and its supporting tools and technologies. Part II "Applications and Case Studies" consists of three chapters that describe applications and case studies in manufacturing, smart cities, health, and more. Each chapter is self-contained and can be read independently; taken together, readers get a bigger picture of the technological and application landscape of the smart connected world. This book is of interest for researchers, lecturers, and practitioners in Semantic Web, IoT and related fields. It can serve as a reference for instructors and students taking courses in hybrid computing, keeping abreast of cutting-edge and future directions of a connected ecosystem. It will also benefit industry professionals like software engineers or data scientists, by providing a synergy between Web technologies and applications. This book covers the most important topics on the emerging field of the smart connected world. The contributions from leading active researchers and practitioners in the field are thought-provoking and can help in learning and further research. The book is a valuable resource that will benefit academics and industry. It will lead to further research and advancement of the field. Bharat K. Bhargava, Professor of Computer Science, Purdue University, United States
Widespread use of parallel processing will become a reality only if the process of porting applications to parallel computers can be largely automated. Usually it is straightforward for a user to determine how an application can be mapped onto a parallel machine; however, the actual development of parallel code, if done by hand, is typically difficult and time consuming. Parallelizing compilers, which can generate parallel code automatically, are therefore a key technology for parallel processing. In this book, Ping-Sheng Tseng describes a parallelizing compiler for systolic arrays, called AL. Although parallelizing compilers are quite common for shared-memory parallel machines, the AL compiler is one of the first working parallelizing compilers for distributed-memory machines, of which systolic arrays are a special case. The AL compiler takes advantage of the fine-grain and high-bandwidth interprocessor communication capabilities of a systolic architecture to generate efficient parallel code. While capable of handling an important class of applications, AL is not intended to be a general-purpose parallelizing compiler.
Load Balancing in Parallel Computers: Theory and Practice is about the essential software technique of load balancing in distributed memory message-passing parallel computers, also called multicomputers. Each processor has its own address space and has to communicate with other processors by message passing. In general, a direct, point-to-point interconnection network is used for the communications. Many commercial parallel computers are of this class, including the Intel Paragon, the Thinking Machine CM-5, and the IBM SP2. Load Balancing in Parallel Computers: Theory and Practice presents a comprehensive treatment of the subject using rigorous mathematical analyses and practical implementations. The focus is on nearest-neighbor load balancing methods in which every processor at every step is restricted to balancing its workload with its direct neighbours only. Nearest-neighbor methods are iterative in nature because a global balanced state can be reached through processors' successive local operations. Since nearest-neighbor methods have a relatively relaxed requirement for the spread of local load information across the system, they are flexible in terms of allowing one to control the balancing quality, effective for preserving communication locality, and can be easily scaled in parallel computers with a direct communication network. Load Balancing in Parallel Computers: Theory and Practice serves as an excellent reference source and may be used as a text for advanced courses on the subject.
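As an illustration of the nearest-neighbor idea, the sketch below implements first-order diffusion on a ring of processors, one of the classic iterative schemes in this family; the ring topology and the diffusion coefficient are illustrative choices of mine, not parameters taken from the book.

```python
# First-order diffusion, an archetypal nearest-neighbor load-balancing
# scheme: at each step every processor exchanges load only with its direct
# neighbors, shifting a fixed fraction (alpha) of each pairwise difference.
def diffuse(load, alpha=0.25, steps=50):
    n = len(load)
    for _ in range(steps):
        new = load[:]
        for i in range(n):
            left, right = load[(i - 1) % n], load[(i + 1) % n]
            # each edge moves alpha * (difference) toward the lighter side
            new[i] += alpha * (left - load[i]) + alpha * (right - load[i])
        load = new
    return load

# Ring of 4 processors: total work is conserved at every step, and the
# loads converge iteratively to the global mean through local moves only.
final = diffuse([100.0, 0.0, 0.0, 0.0])
print([round(x, 2) for x in final])   # approaches [25.0, 25.0, 25.0, 25.0]
```

Because each processor needs only its neighbors' loads, the scheme scales with the direct interconnection networks described above, which is exactly the locality property the book emphasizes.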
The book provides the complete strategic understanding requisite to allow a person to create and use the RMF process recommendations for risk management. This will be the case both for applications of the RMF in corporate training situations, as well as for any individual who wants to obtain specialized knowledge in organizational risk management. It is an all-purpose roadmap of sorts aimed at the practical understanding and implementation of the risk management process as a standard entity. It will enable an "application" of the risk management process as well as the fundamental elements of control formulation within an applied context.