This book focuses on techniques for checking the modeling fidelity of Cyber-Physical Systems (CPS) with respect to the physical world they represent. The authors present modeling and analysis techniques from very different communities and angles, discuss their possible interactions, and examine the commonalities and differences between their practices. Coverage includes model-driven development, resource-driven development, statistical analysis, proofs of simulator implementation, compiler construction, power/temperature modeling of digital devices, high-level performance analysis, and code/device certification. Several industrial contexts are covered, including modeling of computing and communication, proof architectures and models, and statistics-based validation techniques.
This book describes automated debugging approaches for the bugs and faults that appear at different abstraction levels of a hardware system. The authors employ a transaction-based debug approach for systems at the transaction level, asserting the correct relation of transactions. The automated debug approach for design bugs finds the potential fault candidates at the RTL and gate level of a circuit. Debug techniques for logic bugs and synchronization bugs are demonstrated, enabling readers to localize the most difficult bugs. Debug automation for electrical faults (delay faults) finds the potentially failing speedpaths in a circuit at the gate level. The various debug approaches described achieve high diagnosis accuracy and reduce debugging time, shortening the IC development cycle and increasing designer productivity. The book describes a unified framework for debug automation used at both pre-silicon and post-silicon stages; provides approaches for debug automation of a hardware system at different levels of abstraction, i.e., chip, gate level, RTL and transaction level; and includes techniques for debug automation of design bugs and electrical faults, as well as an infrastructure to debug NoC-based multiprocessor SoCs.
Now in a thoroughly revised second edition, this practitioner's guide provides a comprehensive overview of the SoC design process. It explains end-to-end system-on-chip (SoC) design processes and includes updated coverage of design methodology, the design environment, EDA tool flow, design decisions, choice of design intellectual property (IP) cores, sign-off procedures, and design infrastructure requirements. The second edition provides new information on SoC trends and updated design cases. Coverage also includes critical advanced guidance on the latest UPF-based low-power design flow, the challenges of deep-submicron technologies, and 3D design fundamentals, which will prepare readers for the challenges of working at the nanotechnology scale. A Practical Approach to VLSI System on Chip (SoC) Design: A Comprehensive Guide, Second Edition provides engineers who aspire to become VLSI designers with all the necessary information and details of EDA tools. It will be a valuable professional reference for those working on VLSI design and verification in complex SoC designs.
This book describes the most recent techniques for turbo decoder implementation, especially for 4G and beyond-4G applications. The authors present techniques for the design of high-throughput decoders for future telecommunication systems, enabling designers to reduce hardware cost and shorten processing time. Coverage includes the VLSI implementation of the turbo decoder, from basic functional units to advanced parallel architectures. The authors discuss both hardware architecture techniques and experimental results, showing the variation in area, throughput and performance across several techniques. The book also illustrates turbo decoders for the 3GPP-LTE/LTE-A and IEEE 802.16e/m standards, which provide a low-complexity but highly flexible circuit structure that supports these standards in multiple parallel modes. Moreover, it presents solutions that overcome the limits on the speedup of parallel architectures through modifications to the turbo codec. Compared to traditional designs, these methods can yield up to a 33% gain in throughput at similar performance and cost.
Dynamic Reconfigurable Architectures and Transparent Optimization Techniques presents a detailed study of new techniques for coping with the limitations of traditional architectures. First, the characteristics of reconfigurable systems are discussed in detail, and a large number of case studies are presented. Then, a detailed analysis of several benchmarks demonstrates that such architectures must attack a diverse range of applications with very different behaviours, besides supporting code compatibility. This requires the use of dynamic optimization techniques, such as binary translation and trace reuse. Finally, works that combine reconfigurable systems with dynamic techniques are discussed, and a quantitative analysis of one of them, the DIM architecture, is presented.
This book provides comprehensive coverage of hardware security concepts derived from the unique characteristics of emerging logic and memory devices and related architectures. The primary focus is on mapping device-specific properties, such as multi-functionality, runtime polymorphism, intrinsic entropy, nonlinearity, ease of heterogeneous integration, and tamper-resilience, to the corresponding security primitives that they help realize, such as static and dynamic camouflaging, true random number generation, physically unclonable functions, secure heterogeneous and large-scale systems, and tamper-proof memories. The authors discuss several device technologies offering the desired properties (including spintronic switches, memristors, silicon nanowire transistors and ferroelectric devices) for such security primitives and schemes, while also providing a detailed case study for each of the outlined security applications. Overall, the book gives a holistic perspective of how the promising properties found in emerging devices, which are not readily afforded by traditional CMOS devices and systems, can help advance the field of hardware security.
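The challenge-response idea behind physically unclonable functions can be made concrete with a toy software model. This is only an illustrative sketch, not any device or scheme from the book: the hidden per-device randomness merely stands in for manufacturing variation that cannot be copied into another chip.

```python
# A toy, purely software model of the challenge-response view of a
# physically unclonable function (PUF). Illustrative only.
import hashlib, os

class ToyPUF:
    def __init__(self):
        self._variation = os.urandom(16)   # models process variation

    def respond(self, challenge: bytes) -> bytes:
        # device-specific mapping from challenge to a short response
        return hashlib.sha256(self._variation + challenge).digest()[:4]

device = ToyPUF()
challenge = os.urandom(8)
enrolled = device.respond(challenge)            # verifier stores the CRP
assert device.respond(challenge) == enrolled    # same device answers alike
assert ToyPUF().respond(challenge) != enrolled  # a "clone" fails (w.h.p.)
```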
The one instruction set computer (OISC) is the ultimate reduced instruction set computer (RISC). In OISC, the instruction set consists of only one instruction, and all other necessary instructions are then synthesized from it by composition. This approach is the complete opposite of a complex instruction set computer (CISC), which incorporates complex instructions as microprograms within the processor. Computer Architecture: A Minimalist Perspective examines computer architecture, computability theory, and the history of computers from the perspective of one instruction set computing, a novel approach in which the computer supports only one simple instruction. This bold new paradigm offers significant promise in biological, chemical, optical, and molecular-scale computers. It provides a comprehensive study of computer architecture using computability theory as a base.
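To make the composition idea concrete, here is a minimal sketch of a subleq ("subtract and branch if less than or equal to zero") machine, one well-known OISC instruction, with an ADD synthesized from subleq instructions. The sketch is illustrative and is not code from the book.

```python
# A minimal one-instruction computer built on SUBLEQ.

def run(mem, pc=0):
    """Execute SUBLEQ triples (a, b, c): mem[b] -= mem[a];
    if the result is <= 0, jump to c, else fall through."""
    while pc >= 0:
        a, b, c = mem[pc], mem[pc + 1], mem[pc + 2]
        mem[b] -= mem[a]
        pc = c if mem[b] <= 0 else pc + 3
    return mem

# ADD synthesized by composition: x += y via a scratch cell z,
# using x - (0 - y) = x + y.
Z, X, Y = 9, 10, 11        # addresses of the data cells below
prog = [Y, Z, 3,           # z -= y           (z becomes -y)
        Z, X, 6,           # x -= z           (x becomes x + y)
        Z, Z, -1]          # z -= z (clear z), then halt (pc = -1)
print(run(prog + [0, 5, 7])[X])   # cells z=0, x=5, y=7  ->  prints 12
```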
Organizations cannot continue to blindly accept and introduce components into information systems without studying the effectiveness, feasibility and efficiency of each component. Information systems may be the only business area where it is automatically assumed that the latest, greatest and most powerful component is the right one for the organization; in fact, information systems must be managed and developed like any other organizational resource today. Human Computer Interaction Development and Management contains the most recent research articles concerning the management and development of information systems, so that organizations can effectively manage information systems growth and development. Not only must hardware, software, data, information, and networks be managed; people must be managed as well. Humans must be trained to use information systems, and systems must be developed so that humans can use them as efficiently and effectively as possible.
The primary goal of The Design and Implementation of Low-Power CMOS Radio Receivers is to explore techniques for implementing wireless receivers in an inexpensive complementary metal-oxide-semiconductor (CMOS) technology. Although the techniques developed apply somewhat generally across many classes of receivers, the specific focus of this work is on the Global Positioning System (GPS). Because GPS provides a convenient vehicle for examining CMOS receivers, a brief overview of the GPS system and its implications for consumer electronics is presented. The GPS system comprises 24 satellites in medium earth orbit that continuously broadcast their position and local time. Through satellite range measurements, a receiver can determine its absolute position and time to within about 100 m anywhere on Earth, as long as four satellites are within view: three range measurements fix the position coordinates, and the fourth resolves the receiver's clock bias. The deployment of this satellite network was completed in 1994 and, as a result, consumer markets for GPS navigation capabilities are beginning to blossom. Examples include automotive and maritime navigation, intelligent hand-off algorithms in cellular telephony, and cellular emergency services, to name a few. Of particular interest in the context of this book are embedded GPS applications, where a GPS receiver is just one component of a larger system. Widespread proliferation of embedded GPS capability will require receivers that are compact, cheap and low-power. The Design and Implementation of Low-Power CMOS Radio Receivers will be of interest to professional radio engineers, circuit designers, professors and students engaged in integrated radio research, and other researchers who work in the radio field.
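As a hedged numerical sketch (mine, not the book's), the four-satellite fix can be written as four pseudorange equations in the unknowns (x, y, z, clock bias) and solved by Newton iteration; the satellite coordinates and receiver position below are illustrative numbers only.

```python
# Solving a four-satellite GPS fix by Newton iteration.
import numpy as np

# Illustrative satellite positions (km) and a made-up receiver truth.
sats = np.array([[15600.0,  7540.0, 20140.0],
                 [18760.0,  2750.0, 18610.0],
                 [17610.0, 14630.0, 13480.0],
                 [19170.0,   610.0, 18390.0]])
truth = np.array([-41.8, -16.8, 6370.0])     # receiver position (km)
bias = 0.9                                   # receiver clock bias (km)
rho = np.linalg.norm(sats - truth, axis=1) + bias   # pseudoranges

x = np.array([0.0, 0.0, 6370.0, 0.0])        # guess: [x, y, z, bias]
for _ in range(8):                           # Newton iterations
    d = np.linalg.norm(sats - x[:3], axis=1)         # geometric ranges
    J = np.hstack([(x[:3] - sats) / d[:, None],      # d(range)/d(position)
                   np.ones((4, 1))])                 # d(range)/d(bias)
    x -= np.linalg.solve(J, d + x[3] - rho)  # residual = range + bias - rho
print(x)   # converges to truth and bias: [-41.8, -16.8, 6370.0, 0.9]
```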
The definitive expert guide to Windows is now rewritten from the ground up to deliver the most valuable, detailed hands-on insights for maximizing your productivity with Windows 11. Legendary Windows expert Ed Bott reveals the full power of Windows 11's most innovative new features, and offers detailed guidance on making the most of Microsoft's new Windows with modern PC hardware and cloud services. Windows 11 isn't just an incremental update: it's a thorough and thoughtful reworking of Windows, from user experience to security, a new way of working for more than 250,000,000 new device owners every year. Now, backed with insider support from Microsoft's own Windows teams, Bott presents better, smarter ways to work with it: hundreds of timesaving tips, practical solutions, troubleshooting techniques, and easy workarounds you won't find anywhere else. In one supremely well-organized reference, you'll find authoritative coverage of all this, and much more: Windows 11's new user experience, from the reworked Start menu and Settings app to voice input; the brand-new Windows 365 option for running Windows 11 as a Cloud PC, accessible from anywhere; major security and privacy enhancements that leverage the latest PC hardware; expert insight and options for installation, configuration, deployment, and management, from the individual to the enterprise; getting more productivity out of Windows 11's built-in apps and the advanced Microsoft Edge browser; improving performance, maximizing power efficiency, troubleshooting, and backup/recovery; managing and automating Windows with PowerShell, Windows Terminal, and other pro tools; and running Android apps on Windows 11 and using the Windows Subsystem for Linux.
In recent years, tremendous research effort has been devoted to the design of database systems for real-time applications, called real-time database systems (RTDBS), in which transactions are associated with deadlines on their completion times, and some of the data objects in the database are associated with temporal constraints on their validity. Examples of important applications of RTDBS include stock trading systems, navigation systems and computer-integrated manufacturing. Different transaction scheduling algorithms and concurrency control protocols have been proposed to satisfy transaction timing and data temporal constraints. Other design issues important to the performance of an RTDBS are buffer management, index accesses and I/O scheduling. Real-Time Database Systems: Architecture and Techniques summarizes important research results in this area, and serves as an excellent reference for practitioners, researchers and educators in real-time systems and database systems.
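Two of the ingredients named above, deadlines on transactions and validity intervals on data, can be sketched in a few lines. This is a hedged toy model of earliest-deadline-first (EDF) admission, not any protocol from the book; all names and numbers are illustrative.

```python
# Toy EDF scheduling of transactions with temporal data constraints.
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class Txn:
    deadline: float                     # absolute completion deadline
    name: str = field(compare=False)
    reads: list = field(compare=False, default_factory=list)

# data object -> (value, last-update time, validity interval)
db = {"price": (101.5, 0.0, 5.0), "position": ((3, 4), 2.0, 1.0)}

def fresh(obj, now):
    """Temporal constraint: data is valid only within its interval."""
    _value, ts, avi = db[obj]
    return now - ts <= avi

def schedule(txns, now):
    """Pop transactions in EDF order, aborting any that has missed
    its deadline or would read temporally invalid data."""
    heapq.heapify(txns)                 # orders by deadline
    while txns:
        t = heapq.heappop(txns)
        if now > t.deadline:
            print(f"{t.name}: aborted (deadline missed)")
        elif not all(fresh(o, now) for o in t.reads):
            print(f"{t.name}: aborted (stale data)")
        else:
            print(f"{t.name}: committed")

schedule([Txn(4.0, "trade", ["price"]),
          Txn(2.5, "navigate", ["position"])], now=2.8)
# -> navigate: aborted (deadline missed); trade: committed
```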
This text offers complete information on the latest developments in the emerging technology of polymer thick film (PTF), from the mechanics to applications in telephones, radio and television, and smart cards. Readers discover how specific markets for PTF are growing and changing, and how construction schemes can alter and improve performance. Each aspect of PTF technology is discussed in detail.
High Performance Computing Systems and Applications contains a selection of fully refereed papers presented at the 14th International Conference on High Performance Computing Systems and Applications held in Victoria, Canada, in June 2000. This book presents the latest research in HPC Systems and Applications, including distributed systems and architecture, numerical methods and simulation, network algorithms and protocols, computer architecture, distributed memory, and parallel algorithms. It also covers such topics as applications in astrophysics and space physics, cluster computing, numerical simulations for fluid dynamics, electromagnetics and crystal growth, networks and the Grid, and biology and Monte Carlo techniques. High Performance Computing Systems and Applications is suitable as a secondary text for graduate level courses, and as a reference for researchers and practitioners in industry.
This book describes the specification, microarchitecture, VHDL implementation and evaluation of a SPARC V8 CPU with fine-grain multi-threading, called micro-threading. The CPU, named UTLEON3, is an alternative platform for exploring CPU multi-threading that is compatible with the industry-standard GRLIB package. The processor microarchitecture was designed to map the data-flow execution scheme efficiently onto the classical von Neumann pipeline used in common processors, while retaining full binary compatibility with existing legacy programs.
With the rapid development of big data, the massive data generated by end devices must, under the traditional cloud computing model, be transferred to the cloud. However, the delays caused by massive data transmission no longer meet the requirements of real-time mobile services. Edge computing has therefore emerged as a new computing paradigm that collects and processes data at the edge of the network, bringing significant benefits for problems such as delay, bandwidth, and off-loading in the traditional cloud computing paradigm. By extending the functions of the cloud to the edge of the network, edge computing provides effective data access control, computation, processing and storage for end devices. Furthermore, edge computing optimizes the seamless connection from the cloud to devices, which is considered the foundation for realizing the interconnection of everything. However, due to the open features of edge computing, such as content awareness, real-time computing and parallel processing, existing privacy problems become more prominent in the edge computing environment, and the access to multiple categories and large numbers of devices also creates new privacy issues. This book discusses the research background and current state of research on privacy protection in edge computing. The first chapter reviews the state of the art of edge computing. The second chapter discusses data privacy issues and attack models in edge computing. Three categories of privacy-preserving schemes are then introduced in the following chapters: chapter three introduces a context-aware privacy-preserving scheme; chapter four introduces a location-aware differential privacy preserving scheme; chapter five presents a new blockchain-based decentralized privacy-preserving scheme for edge computing; and chapter six summarizes the monograph and proposes future research directions. In summary, this book introduces the following techniques for edge computing: 1) an MDP-based privacy-preserving model that addresses context-aware data privacy in the hierarchical edge computing paradigm; 2) an SDN-based clustering method that addresses location-aware privacy problems in edge computing; and 3) a novel blockchain-based decentralized privacy-preserving scheme for edge computing. These techniques support the rapid development of privacy preservation in edge computing.
An introduction to operating systems, covering processes, process states, synchronization, programming methods for synchronization, main memory, secondary storage and file systems. Although the book is short, it covers all the essentials, and it opens up synchronization with the producer-consumer metaphor that other authors have also employed. The difference is that the concept is presented without the programming normally involved with it: the thinking is that using a warehouse, whose size is the shared variable in synchronization terms, without the programming will aid understanding of this difficult concept. The book also covers main memory and secondary storage with file systems, and concludes with a brief discussion of the client-server paradigm and the way in which client-server design shapes the World Wide Web.
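The warehouse metaphor maps directly onto the classic bounded-buffer code that the book deliberately omits. For readers who want to see it anyway, here is a minimal Python sketch: the warehouse's capacity is the shared state, and a full or empty warehouse blocks the producer or consumer respectively.

```python
# Producer-consumer with a bounded buffer (the "warehouse").
import threading, queue, time

warehouse = queue.Queue(maxsize=3)      # warehouse with 3 slots

def producer():
    for item in range(6):
        warehouse.put(item)             # blocks while the warehouse is full
        print(f"produced {item}")

def consumer():
    for _ in range(6):
        item = warehouse.get()          # blocks while the warehouse is empty
        time.sleep(0.01)                # consume more slowly than we produce
        print(f"consumed {item}")

threads = [threading.Thread(target=producer),
           threading.Thread(target=consumer)]
for t in threads: t.start()
for t in threads: t.join()
```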
This textbook aims to help the reader develop an in-depth understanding of logical reasoning and gain knowledge of the theory of computation. The book combines theoretical teaching and practical exercises; the latter are realised in Isabelle/HOL, a modern theorem prover, and PAT, an industry-scale model checker. I also give entry-level tutorials on the two tools to help the reader get started; by the end of the book, the reader should be proficient in both. Content-wise, this book focuses on the syntax, semantics and proof theory of various logics, together with automata theory, formal languages, computability and complexity. The final chapter closes the gap with a discussion of the insight that links logic with computation. This book is written for a high-level undergraduate course or a Master's course. The hybrid skill set of practical theorem proving and model checking should serve readers well should they pursue research or engineering careers in formal methods.
This book unites studies from different fields bearing on the development of the relations between logic, law and legal reasoning. It combines historical and philosophical studies of legal reasoning in the Civil and Common Law, and in the often-neglected Arabic and Talmudic traditions of jurisprudence, with recent technical developments in computer science. This combination has resulted in renewed interest in deontic logic and the logic of norms, stemming from the interaction between artificial intelligence and law and their applications to these areas of logic. The book also aims to motivate and launch a more intense interaction between the historical and philosophical work of Arabic, Talmudic and European jurisprudence. It discusses new insights into the interaction between logic and law, and more precisely different answers to the question: what role does logic play in legal reasoning? The perspectives range from foundational studies (such as logical principles and frameworks) to applications and historical analyses.
This book is intended for senior undergraduate and graduate students as well as practicing engineers who are involved in the design and analysis of radio frequency (RF) circuits. Fully solved, tutorial-like examples are used to put major topics into practice and to illuminate the underlying principles of the main sub-circuits required to design an RF transceiver and the whole communication system. Starting with a review of the principles of electromagnetic (EM) transmission and signal propagation, and proceeding through detailed practical analysis of RF amplifier, mixer, modulator, demodulator, and oscillator circuit topologies, as well as the basics of communication system theory, the book systematically covers the most relevant aspects in a way that is suitable for a single-semester university course. Readers will benefit from the author's sharp focus on radio receiver design, demonstrated through hundreds of fully solved, realistic examples, as opposed to texts that cover many aspects of electronics and electromagnetics without making the required connection to wireless communication circuit design. Offers readers a complete, self-sufficient, tutorial-style textbook; includes all relevant topics required to study and design an RF receiver in a consistent, coherent way with appropriate depth for a one-semester course; uses hundreds of fully solved, realistic examples of radio design technology to demonstrate concepts; explains the necessary physical and mathematical concepts and their interrelationships.
This book covers several aspects of the operational amplifier and includes theoretical explanations with simplified expressions and derivations. It is designed to serve as a textbook for courses offered to undergraduate and postgraduate students enrolled in electronics and communication engineering. The topics included are the DC amplifier, AC/DC analysis of the DC amplifier, relevant derivations, a block diagram of the operational amplifier, positive and negative feedback, the amplitude modulator, current-to-voltage and voltage-to-current converters, DAC and ADC, the integrator, the differentiator, active filters, comparators, sinusoidal and non-sinusoidal waveform generators, the phase-locked loop (PLL), etc. The book contains two parts, Sections A and B. Section A includes theory, methodology, circuit design and derivations. Section B explains the design and study of experiments for laboratory practice; the laboratory experiments enable students to perform practical activities that demonstrate applications of the operational amplifier. A simplified description of the circuits, the working principles and a practical approach to understanding the concepts are unique features of this book, and its simple methods, step-by-step derivations and lucid presentation also serve readers who have no background in electronics. The book is student-centred in its treatment of the basics of the operational amplifier and its applications. The detailed coverage and pedagogical tools make this an ideal textbook for students and researchers enrolled in senior undergraduate and beginning postgraduate electronics and communication engineering courses.
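As a flavor of the "simplified expressions" such a text derives, two standard ideal-op-amp results (stated here from general knowledge, not quoted from the book) are the inverting-amplifier gain and the integrator relation, both obtained from the virtual-ground approximation:

```latex
\[
  V_{\mathrm{out}} = -\frac{R_f}{R_{\mathrm{in}}}\,V_{\mathrm{in}},
  \qquad
  V_{\mathrm{out}}(t) = -\frac{1}{RC}\int_{0}^{t} V_{\mathrm{in}}(\tau)\,\mathrm{d}\tau .
\]
```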
This book provides an overview of the emerging smart connected world, and discusses the roles and usage of the underlying semantic computing and Internet-of-Things (IoT) technologies. The book comprises ten chapters, grouped in two parts. Part I, "Smart Connected World: Overview and Technologies", consists of seven chapters and provides a holistic overview of the smart connected world and its supporting tools and technologies. Part II, "Applications and Case Studies", consists of three chapters that describe applications and case studies in manufacturing, smart cities, health, and more. Each chapter is self-contained and can be read independently; taken together, they give readers a bigger picture of the technological and application landscape of the smart connected world. This book is of interest to researchers, lecturers, and practitioners in the Semantic Web, IoT and related fields. It can serve as a reference for instructors and students taking courses in hybrid computing who want to keep abreast of the cutting edge and future directions of a connected ecosystem. It will also benefit industry professionals such as software engineers and data scientists by providing a synergy between Web technologies and applications. "This book covers the most important topics in the emerging field of the smart connected world. The contributions from leading active researchers and practitioners in the field are thought-provoking and can help in learning and further research. The book is a valuable resource that will benefit academics and industry, and it will lead to further research and advancement of the field." Bharat K. Bhargava, Professor of Computer Science, Purdue University, United States
Widespread use of parallel processing will become a reality only if the process of porting applications to parallel computers can be largely automated. Usually it is straightforward for a user to determine how an application can be mapped onto a parallel machine; however, the actual development of parallel code, if done by hand, is typically difficult and time consuming. Parallelizing compilers, which can generate parallel code automatically, are therefore a key technology for parallel processing. In this book, Ping-Sheng Tseng describes a parallelizing compiler for systolic arrays, called AL. Although parallelizing compilers are quite common for shared-memory parallel machines, the AL compiler is one of the first working parallelizing compilers for distributed-memory machines, of which systolic arrays are a special case. The AL compiler takes advantage of the fine-grain and high-bandwidth interprocessor communication capabilities of a systolic architecture to generate efficient parallel code. While capable of handling an important class of applications, AL is not intended to be a general-purpose parallelizing compiler.
Load Balancing in Parallel Computers: Theory and Practice is about the essential software technique of load balancing in distributed-memory message-passing parallel computers, also called multicomputers. Each processor has its own address space and has to communicate with other processors by message passing. In general, a direct, point-to-point interconnection network is used for the communications. Many commercial parallel computers are of this class, including the Intel Paragon, the Thinking Machines CM-5, and the IBM SP2. The book presents a comprehensive treatment of the subject using rigorous mathematical analyses and practical implementations. The focus is on nearest-neighbor load balancing methods, in which every processor at every step is restricted to balancing its workload with its direct neighbours only. Nearest-neighbor methods are iterative in nature, because a globally balanced state can be reached through processors' successive local operations. Since nearest-neighbor methods have a relatively relaxed requirement for the spread of local load information across the system, they are flexible in controlling the balancing quality, effective at preserving communication locality, and easily scaled in parallel computers with a direct communication network. The book serves as an excellent reference source and may be used as a text for advanced courses on the subject.
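One classic nearest-neighbor scheme of the kind the book analyzes is first-order diffusion, sketched below on a ring of processors; the code is an illustrative toy, not an implementation from the book. Each processor repeatedly trades a fixed fraction of its load difference with each direct neighbor, and the iteration converges to the global average using only local information.

```python
# First-order diffusion load balancing on a ring of processors.

def diffuse(load, alpha=0.25, steps=60):
    """Each step, every node i exchanges alpha * (difference) of
    load with each of its two ring neighbors, and nothing else."""
    n = len(load)
    for _ in range(steps):
        new = load[:]
        for i in range(n):
            left, right = load[(i - 1) % n], load[(i + 1) % n]
            new[i] += alpha * (left - load[i]) + alpha * (right - load[i])
        load = new
    return load

work = [12, 0, 3, 9, 0, 6]                   # initial workload per processor
print([round(w, 2) for w in diffuse(work)])  # -> all near the mean, 5.0
```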
This textbook teaches students techniques for the design of advanced digital systems using Field Programmable Gate Arrays (FPGAs). The authors focus on communication between FPGAs and peripheral devices (such as EEPROM, analog-to-digital converters, sensors, digital-to-analog converters, and displays), and in particular on state machines and timed state machines for the implementation of serial communication protocols such as UART, SPI and I2C, and display protocols such as VGA and HDMI. VHDL is used as the programming language, and all topics are covered in a structured, step-by-step manner.
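The book develops such state machines in VHDL; purely as an illustration of the idea (not code from the book), here is a Python model of a UART transmit state machine that emits one 8N1 frame, one line level per baud tick:

```python
# State-machine model of a UART transmitter (start bit, 8 data
# bits least-significant-first, stop bit).
START, DATA, STOP = range(3)

def uart_tx_bits(byte):
    """Yield the TX line level for each baud tick of one frame."""
    state, i = START, 0
    while True:
        if state == START:
            yield 0                 # start bit pulls the idle-high line low
            state = DATA
        elif state == DATA:
            yield (byte >> i) & 1   # shift data out, LSB first
            i += 1
            if i == 8:
                state = STOP
        else:                       # STOP
            yield 1                 # stop bit; line returns to idle high
            return

print(list(uart_tx_bits(0x55)))     # -> [0, 1, 0, 1, 0, 1, 0, 1, 0, 1]
```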