This book describes the most recent techniques for turbo decoder implementation, especially for 4G and beyond-4G applications. The authors reveal techniques for the design of high-throughput decoders for future telecommunication systems, enabling designers to reduce hardware cost and shorten processing time. Coverage includes an explanation of VLSI implementation of the turbo decoder, from basic functional units to advanced parallel architecture. The authors discuss both hardware architecture techniques and experimental results, showing the variations in area/throughput/performance with respect to several techniques. This book also illustrates turbo decoders for the 3GPP-LTE/LTE-A and IEEE 802.16e/m standards, which provide a low-complexity but high-flexibility circuit structure to support these standards in multiple parallel modes. Moreover, solutions that overcome the speedup limitations of parallel architectures by modifying the turbo codec are also presented. Compared to traditional designs, these methods can yield up to a 33% gain in throughput with similar performance and similar cost.
System Center Configuration Manager Current Branch provides a total systems management solution for a people-centric world. It can deploy applications to individuals using virtually any device or platform, centralizing and automating management across on-premise, service provider, and Microsoft Azure environments. In System Center Configuration Manager Current Branch Unleashed, a team of world-renowned System Center experts shows you how to make the most of this powerful toolset. The authors begin by introducing modern systems management and offering practical strategies for coherently managing today's IT infrastructures. Drawing on their immense consulting experience, they offer expert guidance for ConfigMgr planning, architecture, and implementation. You'll walk through efficiently performing a wide spectrum of ConfigMgr operations, from managing clients, updates, and compliance to reporting. Finally, you'll find current best practices for administering ConfigMgr, from security to backups. Detailed information on how to:
* Successfully manage distributed, people-centric, cloud-focused IT environments
* Optimize ConfigMgr architecture, design, and deployment plans to reflect your environment
* Smoothly install ConfigMgr Current Branch and migrate from Configuration Manager 2012
* Save time and improve efficiency by automating system management
* Use the console to centralize control over infrastructure, software, users, and devices
* Discover and manage clients running Windows, macOS, Linux, and UNIX
* Define, monitor, enforce, remediate, and report on all aspects of configuration compliance
* Deliver the right software to the right people with ConfigMgr applications and deployment types
* Reliably manage patches and updates, including Office 365 client updates
* Integrate Intune to manage on-premise and mobile devices through a single console
* Secure access to corporate resources from mobile devices
* Manage Microsoft's enterprise antimalware platform with System Center Endpoint Protection
Using this guide's proven techniques and comprehensive reference information, you can maximize the value of ConfigMgr in your environment, no matter how complex it is or how quickly it's changing.
Dynamic Reconfigurable Architectures and Transparent Optimization Techniques presents a detailed study of new techniques to cope with the aforementioned limitations. First, the characteristics of reconfigurable systems are discussed in detail, and a large number of case studies are presented. Then, a detailed analysis of several benchmarks demonstrates that such architectures need to attack a diverse range of applications with very different behaviours, besides supporting code compatibility. This requires the use of dynamic optimization techniques, such as binary translation and trace reuse. Finally, works that combine both reconfigurable systems and dynamic techniques are discussed, and a quantitative analysis of one of them, the DIM architecture, is presented.
In recent years, tremendous research effort has been devoted to the design of database systems for real-time applications, called real-time database systems (RTDBS), where transactions are associated with deadlines on their completion times, and some of the data objects in the database are associated with temporal constraints on their validity. Examples of important applications of RTDBS include stock trading systems, navigation systems and computer-integrated manufacturing. Different transaction scheduling algorithms and concurrency control protocols have been proposed to satisfy both transaction timing constraints and data temporal constraints. Other design issues important to the performance of an RTDBS are buffer management, index accesses and I/O scheduling. Real-Time Database Systems: Architecture and Techniques summarizes important research results in this area, and serves as an excellent reference for practitioners, researchers and educators of real-time systems and database systems.
Organizations cannot continue to blindly accept and introduce components into information systems without studying the effectiveness, feasibility and efficiency of those individual components. Information systems may be the only business area where it is automatically assumed that the latest, greatest and most powerful component is the right one for the organization; in reality, information systems must be managed and developed like any other organizational resource. Human Computer Interaction Development and Management contains the most recent research articles concerning the management and development of information systems, so that organizations can effectively manage information systems growth and development. Not only must hardware, software, data, information, and networks be managed; people must be managed as well. Humans must be trained to use information systems, and systems must be developed so that humans can use them as efficiently and effectively as possible.
The one instruction set computer (OISC) is the ultimate reduced instruction set computer (RISC). In OISC, the instruction set consists of only one instruction, and all other necessary instructions are then synthesized by composition. This approach is the complete opposite of a complex instruction set computer (CISC), which incorporates complex instructions as microprograms within the processor. Computer Architecture: A Minimalist Perspective examines computer architecture, computability theory, and the history of computers from the perspective of one instruction set computing, a novel approach in which the computer supports only one, simple instruction. This bold, new paradigm offers significant promise in biological, chemical, optical, and molecular scale computers. It provides a comprehensive study of computer architecture using computability theory as a base.
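To make the idea concrete, here is a minimal, illustrative sketch of a one-instruction machine using SUBLEQ ("subtract and branch if less than or equal to zero"), a classic OISC instruction from the literature; the instruction actually chosen in the book may differ. The example synthesizes an ADD out of three SUBLEQ instructions through a scratch cell, showing how composition recovers richer instructions.

```python
def run_subleq(mem, pc=0, max_steps=10_000):
    """SUBLEQ: mem[b] -= mem[a]; if the result <= 0, jump to c, else pc += 3."""
    steps = 0
    while 0 <= pc <= len(mem) - 3 and steps < max_steps:
        a, b, c = mem[pc], mem[pc + 1], mem[pc + 2]
        mem[b] -= mem[a]
        pc = c if mem[b] <= 0 else pc + 3
        steps += 1
    return mem

# ADD synthesized from three SUBLEQ instructions through a scratch cell Z:
#   Z -= A ; B -= Z (i.e. B += A) ; Z -= Z (clears Z and, being <= 0, jumps to -1 = halt)
A, B, Z = 9, 10, 11                          # data cell addresses (illustrative layout)
program = [A, Z, 3,   Z, B, 6,   Z, Z, -1,   7, 5, 0]
result = run_subleq(program)
print(result[B])                             # prints 12, i.e. 7 + 5
```

Any other instruction (MOV, JMP, comparisons) can be built the same way, which is exactly the compositional synthesis the blurb describes.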
Amid recent interest in Clifford algebra for dual quaternions as a more suitable method for Computer Graphics than standard matrix algebra, this book presents dual quaternions and their associated Clifford algebras in a new light, accessible to and geared towards the Computer Graphics community. Collating all the associated formulas and theorems in one place, this book provides an extensive and rigorous treatment of dual quaternions, as well as showing how two models of Clifford algebras emerge naturally from the theory of dual quaternions. Each chapter comes complete with a set of exercises to help readers sharpen and practice their knowledge. This book is accessible to anyone with a basic knowledge of quaternion algebra and is of particular use to forward-thinking members of the Computer Graphics community.
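As a small illustration of the algebra the blurb refers to, the sketch below implements quaternion and dual quaternion multiplication and builds a rigid-motion dual quaternion q = r + (eps/2) t r from a rotation r and a translation t. This is a hedged example using one common convention, not code or notation taken from the book.

```python
# Illustrative dual quaternion arithmetic: q = qr + eps*qd with eps^2 = 0.
import math

def qmul(p, q):
    """Hamilton product of quaternions given as (w, x, y, z)."""
    pw, px, py, pz = p
    qw, qx, qy, qz = q
    return (pw*qw - px*qx - py*qy - pz*qz,
            pw*qx + px*qw + py*qz - pz*qy,
            pw*qy - px*qz + py*qw + pz*qx,
            pw*qz + px*qy - py*qx + pz*qw)

def dqmul(a, b):
    """Dual quaternion product: (ar + eps*ad)(br + eps*bd) = ar*br + eps*(ar*bd + ad*br)."""
    ar, ad = a
    br, bd = b
    real = qmul(ar, br)
    dual = tuple(x + y for x, y in zip(qmul(ar, bd), qmul(ad, br)))
    return (real, dual)

def rigid_dq(axis, angle, t):
    """Unit dual quaternion for a rotation about `axis` by `angle` followed by
    translation t, using the common convention q = r + eps*(1/2)*t_quat*r."""
    s = math.sin(angle / 2)
    r = (math.cos(angle / 2), axis[0]*s, axis[1]*s, axis[2]*s)
    t_quat = (0.0, t[0], t[1], t[2])
    half_tr = tuple(0.5 * c for c in qmul(t_quat, r))
    return (r, half_tr)

# Compose a 90-degree rotation about z plus a unit translation along x with a further rotation.
m1 = rigid_dq((0, 0, 1), math.pi / 2, (1, 0, 0))
m2 = rigid_dq((0, 0, 1), math.pi / 2, (0, 0, 0))
print(dqmul(m2, m1))   # dual quaternion of the composed rigid motion
```

Composing rigid motions reduces to one dual quaternion product, which is the practical appeal for graphics that the book builds on.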
This text offers complete information on the latest developments in the emerging technology of polymer thick film--from the mechanics to applications in telephones, radio and television, and smart cards. Readers discover how specific markets for PTF are growing and changing and how construction schemes can alter and improve performance. Each aspect of PTF technology is discussed in detail.
An introduction to operating systems, covering processes, states of processes, synchronization, programming methods of synchronization, main memory, secondary storage and file systems. Although the book is short, it covers all the essentials and opens up synchronization through the producer-consumer metaphor that other authors have also employed. The difference is that the concept is presented without the programming normally involved with it: the thinking is that using a warehouse, whose size is the shared variable in synchronization terms, will aid understanding of this difficult concept without the distraction of code. The book also covers main memory and secondary storage with file systems, and concludes with a brief discussion of the client-server paradigm and the way in which client-server impacts the design of the World-Wide Web.
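For readers who do want to see the warehouse metaphor as code, here is a minimal bounded-buffer sketch using semaphores; it is an illustrative Python example, not material from the book, and the names (warehouse, empty_slots, full_slots) are purely for exposition.

```python
# Bounded-buffer producer-consumer: the fixed warehouse size is the shared constraint.
import threading
from collections import deque

WAREHOUSE_SIZE = 5
warehouse = deque()
lock = threading.Lock()                              # protects the warehouse itself
empty_slots = threading.Semaphore(WAREHOUSE_SIZE)    # free space in the warehouse
full_slots = threading.Semaphore(0)                  # items waiting to be consumed

def producer(n_items):
    for i in range(n_items):
        empty_slots.acquire()      # wait until there is room
        with lock:
            warehouse.append(i)
        full_slots.release()       # signal that an item is available

def consumer(n_items):
    for _ in range(n_items):
        full_slots.acquire()       # wait until an item exists
        with lock:
            item = warehouse.popleft()
        empty_slots.release()      # free one slot
        print("consumed", item)

p = threading.Thread(target=producer, args=(10,))
c = threading.Thread(target=consumer, args=(10,))
p.start(); c.start(); p.join(); c.join()
```

The two semaphores count free and occupied slots, so the producer blocks when the warehouse is full and the consumer blocks when it is empty, which is exactly the behaviour the warehouse metaphor is meant to convey.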
Highlights developments, discoveries, and practical and advanced experiences related to responsive distributed computing and how it can support the deployment of trajectory-based applications in intelligent systems. The book:
* Presents metamodeling with new trajectory patterns that are very useful for intelligent transportation systems.
* Examines the processing of raw trajectories to derive other types of trajectories, such as semantic, activity-based and space-time path trajectories.
* Discusses Complex Event Processing (CEP), the Internet of Things (IoT), the Internet of Vehicles (IoV), V2X communication, Big Data analytics, distributed processing frameworks, and Cloud Computing.
* Presents a number of case studies to demonstrate smart trajectories related to spatio-temporal events such as traffic congestion, viral contamination, and pedestrian accidents.
This book was originally published in 1995. At the time of publication, distributed file systems were monolithic and supported only a single file abstraction. Network storage devices needed to be able to accommodate emerging information media such as digital audio and video, whose characteristics differ radically from the traditional text and binary data that file systems were optimised for. By combining emerging and traditional media, information could be recorded and presented in the most suitable way, and the value of a piece of information could be further enhanced by linking together related pieces. However, composite data and cross-references between data items raised a number of system issues that had not been addressed properly before. In this book Dr Lo defined a multi-service storage architecture that could meet the needs of existing and emerging applications and support multiple file abstractions. He also explored a number of related design issues.
With concerns about global energy consumption at an all-time high, improving the energy efficiency of computer networks is becoming an increasingly important topic. Large-Scale Distributed Systems and Energy Efficiency: A Holistic View addresses innovations in technology relating to the energy efficiency of a wide variety of contemporary computer systems and networks. After an introductory overview of the energy demands of current Information and Communications Technology (ICT), individual chapters offer in-depth analyses of such topics as cloud computing, green networking (both wired and wireless), mobile computing, power modeling, the rise of green data centers and high-performance computing, resource allocation, and energy efficiency in peer-to-peer (P2P) computing networks. The book discusses the measurement and modeling of energy consumption, includes methods for reducing energy consumption in diverse computing environments, and features a variety of case studies and examples of energy reduction and assessment. Timely and important, Large-Scale Distributed Systems and Energy Efficiency is an invaluable resource on ways of increasing the energy efficiency of computing systems and networks while simultaneously reducing the carbon footprint.
Almost all the systems in our world, including technical, social, economic, and environmental systems, are becoming interconnected and increasingly complex, and as such they are vulnerable to various risks. Because of this trend, resilience creation is becoming more important to system managers and decision makers who need to ensure sustained performance. Under such interconnectedness and complexity, creating resilience with a systems approach is a requirement, and mathematical modelling is the most common approach for creating system resilience. Mathematical Modelling of System Resilience covers resilience creation for various system aspects, including functional supply chain systems and overall supply chain systems; various methodologies for modeling system resilience; a satellite-based approach for addressing climate-related risks; a repair-based approach for the sustainable performance of an engineering system; and modeling reliability measures for a vertical take-off and landing system. Each chapter contributes state-of-the-art research on the resilience-related topic it covers. Technical topics covered in the book include:
1. Supply chain risk, vulnerability and disruptions
2. System resilience for containing failures and disruptions
3. Resiliency considering frequency and intensities of disasters
4. Resilience performance index
5. Resiliency of electric traction systems
6. Degree of resilience
7. Satellite observation and hydrological risk
8. Latitude of resilience
9. On-line repair for resilience
10. Reliability design for a vertical take-off and landing prototype
This book presents the cellular wireless network standard NB-IoT (Narrow Band-Internet of Things), which addresses many key requirements of the IoT. NB-IoT is a topic that is inspiring the industry to create new business cases and associated products. The author first introduces the technology and typical IoT use cases. He then explains NB-IoT extended network coverage and outstanding power saving features which are enabling the design of IoT devices (e.g. sensors) to work everywhere and for more than 10 years, in a maintenance-free way. The book explains to industrial users how to utilize NB-IoT features for their own IoT projects. Other system ingredients (e.g. IoT cloud services) and embedded security aspects are covered as well. The author takes an in-depth look at NB-IoT from an application engineering point of view, focusing on IoT device design. The target audience is technical-minded IoT project owners and system design engineers who are planning to develop an IoT application.
This book explores the most recent Edge and Distributed Cloud computing research and industrial advances, laying the basis for Advanced Swarm Computing developments. It presents the Swarm computing concept and realizes it as an Ad-hoc Edge Cloud architecture. Unlike current techniques in Edge and Cloud computing that view IoT-connected devices solely as sources of data, Swarm computing aims at using the compute capabilities of IoT-connected devices in coordination with current Edge and Cloud computing innovations. In addition to being more widely available, IoT-connected devices are also quickly becoming more sophisticated in their ability to carry considerable compute and storage resources. Swarm computing and the Ad-hoc Edge Cloud take full advantage of this trend to create on-demand, autonomic and decentralized self-managed computing infrastructures. Focusing on cognitive resource and service management, the book examines the specific research challenges of the Swarm computing approach related to the characteristics of the IoT-connected devices that form the infrastructure. It also offers academics and practitioners insights for future research in the fields of Edge and Swarm computing.
The primary goal of The Design and Implementation of Low-Power CMOS Radio Receivers is to explore techniques for implementing wireless receivers in an inexpensive complementary metal-oxide-semiconductor (CMOS) technology. Although the techniques developed apply somewhat generally across many classes of receivers, the specific focus of this work is on the Global Positioning System (GPS). Because GPS provides a convenient vehicle for examining CMOS receivers, a brief overview of the GPS system and its implications for consumer electronics is presented. The GPS system comprises 24 satellites in medium Earth orbit that continuously broadcast their position and local time. Through satellite range measurements, a receiver can determine its absolute position and time to within about 100m anywhere on Earth, as long as four satellites are within view. The deployment of this satellite network was completed in 1994 and, as a result, consumer markets for GPS navigation capabilities are beginning to blossom. Examples include automotive or maritime navigation, intelligent hand-off algorithms in cellular telephony, and cellular emergency services, to name a few. Of particular interest in the context of this book are embedded GPS applications where a GPS receiver is just one component of a larger system. Widespread proliferation of embedded GPS capability will require receivers that are compact, cheap and low-power. The Design and Implementation of Low-Power CMOS Radio Receivers will be of interest to professional radio engineers, circuit designers, professors and students engaged in integrated radio research and other researchers who work in the radio field.
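The "four satellites" requirement follows from the unknowns involved: three position coordinates plus the receiver clock bias make four unknowns, so four pseudorange measurements are needed. The sketch below illustrates this with a small Gauss-Newton solver on synthetic numbers; it is an illustrative example only, not the receiver architecture or algorithms described in the book, and the satellite positions are invented for the demonstration.

```python
# Pseudorange positioning sketch: solve for (x, y, z) and clock bias from 4 satellites.
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def solve_position(sats, pseudoranges, iters=10):
    """Estimate receiver position (m) and clock bias (s) by Gauss-Newton iteration."""
    est = np.zeros(4)  # [x, y, z, c*bias], starting from Earth's centre
    for _ in range(iters):
        pos, cb = est[:3], est[3]
        ranges = np.linalg.norm(sats - pos, axis=1)
        residuals = pseudoranges - (ranges + cb)
        # Jacobian: negated unit line-of-sight vectors, plus 1 for the bias column
        H = np.hstack([-(sats - pos) / ranges[:, None], np.ones((len(sats), 1))])
        delta, *_ = np.linalg.lstsq(H, residuals, rcond=None)
        est += delta
    return est[:3], est[3] / C

# Synthetic scenario: receiver on the surface, 100-microsecond clock bias.
truth = np.array([6_371_000.0, 0.0, 0.0])
bias = 1e-4
sats = np.array([
    [26_000_000.0,  5_000_000.0,  2_000_000.0],
    [20_000_000.0, 15_000_000.0,  8_000_000.0],
    [18_000_000.0, -9_000_000.0, 14_000_000.0],
    [22_000_000.0,  3_000_000.0, -12_000_000.0],
])
rho = np.linalg.norm(sats - truth, axis=1) + C * bias
pos, est_bias = solve_position(sats, rho)
print(pos, est_bias)  # should recover roughly [6.371e6, 0, 0] and 1e-4
```

With more than four satellites the same least-squares step simply becomes overdetermined, which is how practical receivers improve accuracy.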
This text presents novel smart network systems, wireless telecommunications infrastructures, and computing capabilities that help healthcare systems, using techniques such as IoT, cloud computing, machine and deep learning, and Big Data together with smart wireless networks. It discusses important topics, including robotics manipulation and analysis in smart healthcare industries, smart telemedicine frameworks using machine learning and deep learning, the role of UAVs and drones in smart hospitals, virtual reality based on 5G/6G and augmented reality in healthcare systems, data privacy and security, nanomedicine, and cloud-based artificial intelligence in healthcare systems. The book:
* Discusses intelligent computing through IoT and Big Data in secure and smart healthcare systems.
* Covers algorithms, including deterministic, randomized, iterative, and recursive algorithms.
* Discusses remote sensing devices in hospitals and local health facilities for patient evaluation and care.
* Covers wearable technology applications such as weight control and physical activity tracking for disease prevention and smart healthcare.
It also covers Internet of Things (IoT) implementation and challenges in healthcare industries, wireless networks, and communication-based optimization algorithms for smart healthcare devices. Discussing the concepts of smart networks, advanced wireless communication, and the technologies used in setting up smart healthcare services, this text will be useful for senior undergraduate and graduate students and academic researchers in areas such as electrical engineering, electronics and communication engineering, computer science, and information technology.
The dramatic increase in computer performance has been extraordinary, but not for all computations: it has key limits and structure. Software architects, developers, and even data scientists need to understand how to exploit the fundamental structure of computer performance to harness it for future applications. Ideal for upper-level undergraduates, Computer Architecture for Scientists covers four key pillars of computer performance and imparts a high-level basis for reasoning with and understanding these concepts:
* Small is fast: how size scaling drives performance
* Implicit parallelism: how a sequential program can be executed faster with parallelism
* Dynamic locality: skirting physical limits by arranging data in a smaller space
* Parallelism: increasing performance with teams of workers
These principles and models provide approachable high-level insights and quantitative modelling without distracting low-level detail. Finally, the text covers the GPU and machine-learning accelerators that have become increasingly important for mainstream applications.
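As one standard quantitative model of the parallelism pillars (a textbook illustration, not a formula quoted from the book), Amdahl's law bounds the speedup obtainable when only a fraction p of a program's work can be spread across N workers:

```latex
% Amdahl's law: a standard model of parallel speedup (illustrative, not from the book)
S(N) = \frac{1}{(1 - p) + \dfrac{p}{N}},
\qquad
\lim_{N \to \infty} S(N) = \frac{1}{1 - p}.
```

For example, if p = 0.9, even an unlimited number of workers gives at most a 10x speedup, which is why the serial fraction and data locality matter as much as raw parallelism.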
Blockchain technology is an emerging distributed, decentralized architecture and computing paradigm that has accelerated the development and application of cloud, fog and edge computing; artificial intelligence; cyber-physical systems; social networking; crowdsourcing and crowdsensing; 5G; trust management; finance; and many other sectors. Today, the primary use of blockchain technology is in information systems, to keep information secure and private. However, blockchain has faced many threats and vulnerabilities over the past decade, such as 51% attacks and double-spending attacks. The popularity and rapid development of blockchain bring many technical and regulatory challenges for research and academic communities. The main goal of this book is to encourage both researchers and practitioners of blockchain technology to share and exchange their experiences and recent studies between academia and industry. The reader is provided with the most up-to-date knowledge of blockchain in the mainstream areas of security and privacy in the decentralized domain, which is timely and essential, since distributed and P2P applications are increasing day by day and attackers adopt new mechanisms to threaten the security and privacy of users in those environments. This book provides a detailed explanation of security and privacy with respect to blockchain for information systems, and will be an essential resource for students, researchers and scientists studying blockchain uses in information systems and those wanting to explore the current state of play.
High Performance Computing Systems and Applications contains a selection of fully refereed papers presented at the 14th International Conference on High Performance Computing Systems and Applications held in Victoria, Canada, in June 2000. This book presents the latest research in HPC Systems and Applications, including distributed systems and architecture, numerical methods and simulation, network algorithms and protocols, computer architecture, distributed memory, and parallel algorithms. It also covers such topics as applications in astrophysics and space physics, cluster computing, numerical simulations for fluid dynamics, electromagnetics and crystal growth, networks and the Grid, and biology and Monte Carlo techniques. High Performance Computing Systems and Applications is suitable as a secondary text for graduate level courses, and as a reference for researchers and practitioners in industry.
This book describes the specification, microarchitecture, VHDL implementation and evaluation of a SPARC v8 CPU with fine-grain multi-threading, called micro-threading. The CPU, named UTLEON3, is an alternative platform for exploring CPU multi-threading that is compatible with the industry-standard GRLIB package. The processor microarchitecture was designed to map the data-flow execution scheme efficiently onto the classical von Neumann pipelined processing used in common processors, while retaining full binary compatibility with existing legacy programs.
This book provides a comprehensive theory of mono- and multi-fractal traffic, including the basics of long-range dependent (LRD) time series and 1/f noise, ergodicity and predictability of traffic, traffic modeling and simulation, stationarity tests of traffic, traffic measurement and the anomaly detection of traffic in communications networks. Proving that mono-fractal LRD time series are ergodic, the book shows that LRD traffic is stationary. The author shows that the stationarity of multi-fractal traffic relies on observation time scales, and proposes multi-fractional generalized Cauchy processes and modified multi-fractional Gaussian noise. The book also establishes a set of guidelines for determining the record length of traffic in measurement. Moreover, it presents an approach to traffic simulation, as well as the anomaly detection of traffic under distributed denial-of-service attacks. Scholars and graduates studying network traffic in computer science will find the book beneficial.
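For orientation, the standard textbook characterization of long-range dependence (given here for reference, not in the book's own notation) ties the hyperbolic decay of the autocorrelation function to a 1/f-type spectrum near the origin:

```latex
% Standard definition of long-range dependence with Hurst parameter H (illustrative)
r(k) \sim c \, k^{2H - 2} \quad (k \to \infty), \qquad \tfrac{1}{2} < H < 1,
\qquad \sum_{k} r(k) = \infty,
\qquad S(f) \sim c_f \, |f|^{1 - 2H} \quad (f \to 0).
```

The non-summable autocorrelations and the spectral singularity at f = 0 are the two equivalent signatures of the 1/f-noise behaviour the book analyses in traffic records.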
This book provides a comprehensive introduction to embedded flash memory, describing the history, current status, and future projections for technology, circuits, and systems applications. The authors describe current mainstream embedded flash technologies, from floating-gate 1Tr, floating-gate with split-gate (1.5Tr), and 1Tr/1.5Tr SONOS flash technologies, and their successful use in various applications. Comparisons of these embedded flash technologies and future projections are also provided. The authors demonstrate a variety of embedded applications for automotive, smart-IC card, and low-power uses, representing the leading-edge technology developments for eFlash. The discussion also includes insights into the future prospects of application-driven non-volatile memory technology in the era of smart advanced automotive systems, such as ADAS (Advanced Driver Assistance System), and the IoE (Internet of Everything). Trials on technology convergence and future prospects of embedded non-volatile memory in the new memory hierarchy are also described. The book:
* Introduces the history of embedded flash memory technology for micro-controller products and how embedded flash innovations developed;
* Includes comprehensive and detailed descriptions of current mainstream embedded flash memory technologies, sub-system designs and applications;
* Explains why embedded flash memory requirements are different from those of stand-alone flash memory and how to achieve specific goals with technology development and circuit designs;
* Describes a mature and stable floating-gate 1Tr cell technology imported from stand-alone flash memory products, and then introduces embedded-specific split-gate memory cell technologies based on the floating-gate storage structure and charge-trapping SONOS technology, together with their eFlash sub-system designs;
* Describes automotive and smart-IC card application requirements and achievements in advanced eFlash beyond the 40nm node.