Real-time systems are used in a wide range of applications, including control, sensing, and multimedia. Scheduling is a central problem for these computing/communication systems, since it ensures that software executes in a timely manner. This book, the second of two volumes on the subject, brings together knowledge on specific topics and discusses recent advances in some of them. It addresses foundations as well as the latest advances and findings in real-time scheduling, giving comprehensive references to important papers, yet the chapters are short and not overloaded with confusing details. Coverage includes scheduling approaches for networks and for energy-autonomous systems. Other sophisticated issues, such as feedback control scheduling and probabilistic scheduling, are also addressed. This book can serve as a textbook for courses on the topic in bachelor's and more advanced master's degree programs. It also provides a reference for computer scientists and engineers involved in the design or development of cyber-physical systems, which require up-to-date real-time scheduling solutions.
This volume presents new directions and solutions in broadly perceived intelligent systems. The urgent need for this volume arose from the vivid discussions and presentations at "IEEE-IS 2006: The Third International IEEE Conference on Intelligent Systems," held in London, UK, in September 2006. This book is a compilation of valuable and inspiring works written by both the conference participants and other experts in this new and challenging field.
This book provides a comprehensive picture of fog computing technology, including fog architectures, latency-aware application management issues with real-time requirements, security and privacy issues, and fog analytics, in wide-ranging application scenarios such as M2M device communication, smart homes, smart vehicles, augmented reality, and transportation management. This book explores the research issues involved in the application of traditional shallow machine learning and deep learning techniques to big data analytics. It surveys global research advances in extending conventional unsupervised or clustering algorithms, supervised and semi-supervised algorithms, and association rule mining algorithms to big data scenarios. Further, it discusses the deep learning applications of big data analytics in the fields of computer vision and speech processing, and describes applications such as semantic indexing and data tagging. Lastly, it identifies 25 unsolved research problems and research directions in fog computing, as well as in the context of applying deep learning techniques to big data analytics, such as dimensionality reduction in high-dimensional data and improved formulation of data abstractions, along with possible directions for their solutions.
The newest addition to the Harris and Harris family of Digital Design and Computer Architecture books, this RISC-V Edition covers the fundamentals of digital logic design and reinforces logic concepts through the design of a RISC-V microprocessor. Combining an engaging and humorous writing style with an updated and hands-on approach to digital design, this book takes the reader from the fundamentals of digital logic to the actual design of a processor. By the end of this book, readers will be able to build their own RISC-V microprocessor and will have a top-to-bottom understanding of how it works. Beginning with digital logic gates and progressing to the design of combinational and sequential circuits, this book uses these fundamental building blocks as the basis for designing a RISC-V processor. SystemVerilog and VHDL are integrated throughout the text in examples illustrating the methods and techniques for CAD-based circuit design. The companion website includes a chapter on I/O systems with practical examples that show how to use SparkFun's RED-V RedBoard to communicate with peripheral devices such as LCDs, Bluetooth radios, and motors. This book will be a valuable resource for students taking a course that combines digital logic and computer architecture or students taking a two-quarter sequence in digital logic and computer organization/architecture.
About PowerPoint 2000 Traditionally, presenters have had to travel to reach audiences in different parts of the world. With today's technologies, this is no longer necessary. Using Microsoft PowerPoint® 2000, presenters can now easily and inexpensively collaborate on presentations and show them to remote audiences without leaving their offices. PowerPoint 2000 offers new ease-of-use features that speed users through presentation development and help users deliver Web-based presentations to remote audiences. Let this new Made Simple book guide you through the new features of PowerPoint 2000 and help you make the most of the product.
Supercomputers are the largest and fastest computers available at any point in time. The term was first used in the New York World in March 1920 to describe "new statistical machines with the mental power of 100 skilled mathematicians in solving even highly complex algebraic problems." Invented by Mendenhall and Warren, these machines were used at Columbia University's Statistical Bureau. More recently, supercomputers have been used primarily to solve large-scale problems in science and engineering. Solutions of systems of partial differential equations, such as those found in nuclear physics, meteorology, and computational fluid dynamics, account for the majority of supercomputer use today. The early computers, such as EDVAC, SSEC, 701, and UNIVAC, demonstrated the feasibility of building fast electronic computing machines which could become commercial products. The next generation of computers focused on attaining the highest possible computational speeds. This book discusses the architectural approaches used to yield significantly higher computing speeds while preserving the conventional, von Neumann, machine organization (Chapters 2-4). Subsequent improvements depended on developing a new generation of computers employing a new model of computation: single-instruction multiple-data (SIMD) processors (Chapters 5-7). Later machines refined SIMD architecture and technology (Chapters 8-9). Supercomputers -- the largest and fastest computers available at any point in time -- have been the products of a complex interplay among technological, architectural, and algorithmic developments.
This book describes techniques to verify the authenticity of integrated circuits (ICs). It focuses on hardware Trojan detection and prevention and counterfeit detection and prevention. The authors discuss a variety of detection schemes and design methodologies for improving Trojan detection techniques, as well as various attempts at developing hardware Trojans in IP cores and ICs. While describing existing Trojan detection methods, the authors also analyze their effectiveness in disclosing various types of Trojans, and demonstrate several architecture-level solutions.
This book is about information systems development failures and how
to avoid them.
Addresses a wide selection of multimedia applications, programmable and custom architectures for the implementations of multimedia systems, and arithmetic architectures and design methodologies. The book covers recent applications of digital signal processing algorithms in multimedia, presents high-speed and low-priority binary and finite field arithmetic architectures, details VHDL-based implementation approaches, and more.
The object oriented paradigm has become one of the dominant forces in the computing world. According to a recent survey, by the year 2000, more than 80% of development organizations are expected to use object technology as the basis for their distributed development strategies.
Like a car's anti-lock braking system, real-time systems are time-critical technologies put in place to react under a certain set of circumstances, often vital to the security of data, information, or other resources. Innovations in Embedded and Real-Time Systems Engineering for Communication has collected the latest research within the field of real-time systems engineering, and will serve as a vital reference compendium for practitioners and academics. Drawn from a wide variety of fields and countries, the authors of this collection are the respective experts in their areas of concentration, giving the latest case studies, methodologies, frameworks, architectures, best practices, and research as it relates to real-time systems engineering for communication.
Prepare for Microsoft Exam 70-697--and help demonstrate your real-world mastery of configuring Windows 10 devices in the enterprise. Designed for experienced IT pros ready to advance their status, this Exam Ref focuses on the critical-thinking and decision-making acumen needed for success as a Microsoft specialist. Focus on the expertise measured by these objectives: manage identity; plan desktop and device deployment; plan and implement a Microsoft Intune device management solution; configure networking and storage; manage data access and protection; manage remote access; manage apps; and manage updates and recovery. This Microsoft Exam Ref organizes its coverage by exam objectives; features strategic, what-if scenarios to challenge you; and assumes you have experience with Windows desktop administration, maintenance, and troubleshooting, basic experience and understanding of Windows networking, and introductory-level knowledge of Active Directory and Microsoft Intune.
The Maintenance Management Framework describes and reviews the concept, process, and framework of modern maintenance management of complex systems, concentrating specifically on modern modelling tools (deterministic and empirical) for maintenance planning and scheduling. It will be of value to engineers and professionals involved in maintenance management, maintenance engineering, operations management, and quality, as well as to graduate students and researchers in this field.
Over the last fifteen years GIS has become a fully-fledged technology, deployed across a range of application areas. However, although computer advances in performance appear to continue unhindered, data volumes and the growing sophistication of analysis procedures mean that performance will increasingly become a serious concern in GIS. Parallel computing offers a potential solution. However, traditional algorithms may not run effectively in a parallel environment, so utilization of parallel technology is not entirely straightforward. This groundbreaking book examines some of the current strategies facing scientists and engineers at this crucial interface of parallel computing and GIS. The book begins with an introduction to the concepts, terminology and techniques of parallel processing, with particular reference to GIS. High-level programming paradigms and software engineering issues underlying parallel software developments are considered, and emphasis is given to designing modular reusable software libraries. The book continues with problems in designing parallel software for GIS applications, covers potential vector and raster data structures, and details the algorithmic design for some major GIS operations. An implementation case study is included, based around a raster generalization problem, which illustrates some of the principles involved. Subsequent chapters review progress in parallel database technology in a GIS environment and the use of parallel techniques in various application areas, dealing with both algorithmic and implementation issues. "Parallel Processing Algorithms for GIS" should be a useful text for a new generation of GIS professionals whose principal concern is the challenge of embracing major computer performance enhancements via parallel computing. Similarly, it should be an important volume for parallel computing professionals who are increasingly aware that GIS offers a major application domain for their technology.
Whether you're taking the CPHIMS exam or simply want the most current and comprehensive overview in healthcare information and management systems today, this completely revised and updated fourth edition has it all. But for those preparing for the CPHIMS exam, this book is also an ideal study partner. The content reflects the outline of exam topics covering healthcare and technology environments; clinical informatics; analysis, design, selection, implementation, support, maintenance, testing, evaluation, privacy and security; and management and leadership. Candidates can challenge themselves with the sample multiple-choice questions given at the end of the book. The benefits of CPHIMS certification are broad and far-reaching. Certification is a process that is embraced in many industries, including healthcare information and technology. CPHIMS is recognized as the 'gold standard' in healthcare IT because it is developed by HIMSS, has a global focus and is valued by clinicians and non-clinicians, management and staff positions and technical and nontechnical individuals. Certification, specifically CPHIMS certification, provides a means by which employers can evaluate potential new hires, analyze job performance, evaluate employees, market IT services and motivate employees to enhance their skills and knowledge. Certification also provides employers with the evidence that the certificate holders have demonstrated an established level of job-related knowledge, skills and abilities and are competent practitioners of healthcare IT.
Developing today's complex systems requires "more" than just good
software engineering solutions. Many are faced with complex systems
projects, incomplete or inaccurate requirements, canceled projects,
or cost overruns, and have their systems' users in revolt and
demanding more. Others want to build user-centric systems, but fear
managing the process. This book describes an approach that brings
the engineering process together with human performance engineering
and business process reengineering. The result is a manageable
user-centered process for gathering, analyzing, and evaluating
requirements that can vastly improve the success rate in the
development of medium-to-large size systems and applications.
Content distribution, i.e., distributing digital content from one node to another node or multiple nodes, is the most fundamental function of the Internet. Since Amazon's launch of EC2 in 2006 and Apple's release of the iPhone in 2007, Internet content distribution has shown a strong trend toward polarization. On the one hand, considerable investments have been made in creating heavyweight, integrated data centers ("heavy-cloud") all over the world, in order to achieve economies of scale and high flexibility/efficiency of content distribution. On the other hand, end-user devices ("light-end") have become increasingly lightweight, mobile and heterogeneous, creating new demands concerning traffic usage, energy consumption, bandwidth, latency, reliability, and/or the security of content distribution. Based on comprehensive real-world measurements at scale, we observe that existing content distribution techniques often perform poorly under the abovementioned new circumstances. Motivated by the trend of "heavy-cloud vs. light-end," this book is dedicated to uncovering the root causes of today's mobile networking problems and designing innovative cloud-based solutions to practically address such problems. Our work has produced not only academic papers published in prestigious conference proceedings like SIGCOMM, NSDI, MobiCom and MobiSys, but also concrete effects on industrial systems such as Xiaomi Mobile, MIUI OS, Tencent App Store, Baidu PhoneGuard, and WiFi.com. A series of practical takeaways and easy-to-follow testimonials are provided to researchers and practitioners working in mobile networking and cloud computing. In addition, we have released as much code and data used in our research as possible to benefit the community.
Until now, business systems have focused on selected data within a certain context to produce information. A better approach, says Thierauf, is to take information accompanied by experience over time to generate knowledge. He demonstrates that knowledge management systems can be used as a source of power to outmaneuver business competitors. Knowledge discovery tools enable decision makers to extract the patterns, trends, and correlations that underlie the inner (and inter-) workings of a company. His book is the first comprehensive text to define this important new direction in computer technology and will be essential reading for MIS practitioners, systems analysts, and academics researching and teaching the theory and applications of knowledge management systems. Thierauf centers on leveraging a company's knowledge capital. Indeed, knowledge is power--the power to improve customer satisfaction, marketing and production methods, financial operations, and other functions. Thierauf shows how knowledge, when developed and renewed, can be applied to a company's functional areas and provide an important competitive advantage. By utilizing some form of internal and external computer networks and providing some type of knowledge discovery software that encapsulates usable knowledge, Thierauf shows how to create an infrastructure to capture knowledge, store it, improve it, clarify it, and disseminate it throughout the organization, then how to use it regularly. His book demonstrates clearly how knowledge management systems focus on making knowledge available to company employees in the right format, at the right time, and in the right place. The result is inevitably a higher order of intelligence in decision making, more so now than could ever have been possible in even the most recent past.
Memory Architecture Exploration for Programmable Embedded Systems
addresses efficient exploration of alternative memory
architectures, assisted by a "compiler-in-the-loop" that allows
effective matching of the target application to the
processor-memory architecture. This new approach for memory
architecture exploration replaces the traditional black-box view of
the memory system and allows for aggressive co-optimization of the
programmable processor together with a customized memory system.
Blockchain technology is an emerging distributed, decentralized architecture and computing paradigm, which has accelerated the development and application of cloud, fog and edge computing; artificial intelligence; cyber-physical systems; social networking; crowdsourcing and crowdsensing; 5G; trust management and finance; and many other sectors. Nowadays, the primary use of blockchain technology is in information systems, to keep information secure and private. However, blockchain has faced many threats and vulnerabilities over the past decade, such as 51% attacks and double-spending attacks. The popularity and rapid development of blockchain bring many technical and regulatory challenges for research and academic communities. The main goal of this book is to encourage both researchers and practitioners of blockchain technology to share and exchange their experiences and recent studies between academia and industry. The reader will be provided with the most up-to-date knowledge of blockchain in mainstream areas of security and privacy in the decentralized domain, which is timely and essential (this is due to the fact that distributed and p2p applications are increasing day by day, and attackers adopt new mechanisms to threaten the security and privacy of users in those environments). This book provides a detailed explanation of security and privacy with respect to blockchain for information systems, and will be an essential resource for students, researchers and scientists studying blockchain uses in information systems and those wanting to explore the current state of play.
Based on a symposium honoring the extensive work of Allen Newell --
one of the founders of artificial intelligence, cognitive science,
human-computer interaction, and the systematic study of
computational architectures -- this volume demonstrates how
unifying themes may be found in the diversity that characterizes
current research on computers and cognition. The subject matter
includes:
The third edition of Digital Logic Techniques provides a clear and comprehensive treatment of the representation of data, operations on data, combinational logic design, sequential logic, computer architecture, and practical digital circuits. A wealth of exercises and worked examples in each chapter give students valuable experience in applying the concepts and techniques discussed. Beginning with an objective comparison between analogue and digital representation of data, the author presents the Boolean algebra framework for digital electronics, develops combinational logic design from first principles, and presents cellular logic as an alternative structure more relevant than canonical forms to VLSI implementation. He then addresses sequential logic design and develops a strategy for designing finite state machines, giving students a solid foundation for more advanced studies in automata theory. The second half of the book focuses on the digital system as an entity. Here the author examines the implementation of logic systems in programmable hardware, outlines the specification of a system, explores arithmetic processors, and elucidates fault diagnosis. The final chapter examines the electrical properties of logic components, compares the different logic families, and highlights the problems that can arise in constructing practical hardware systems.
This book highlights the capabilities and limitations of radar and air navigation. It discusses issues related to the physical principles of an electromagnetic field, the structure of radar information, and ways to transmit it. Attention is paid to the classification of radio waves used for transmitting radar information, as well as to the physical description of their propagation media. The third part of the book addresses issues related to the current state of navigation systems used in civil aviation and the prospects for their development in the future, as well as the history of satellite radio navigation systems. The book may also be useful for schoolchildren interested in the problems of radar and air navigation.
Fast, Efficient and Predictable Memory Accesses presents techniques for designing fast, energy-efficient and timing-predictable memory systems. By using a careful combination of compiler optimizations and architectural improvements, we can achieve more than what would be feasible at either level in isolation. The described optimization algorithms achieve the goals of high performance and low energy consumption. In addition to these benefits, the use of scratchpad memories significantly improves the timing predictability of the entire system, leading to tighter worst-case execution time (WCET) bounds. The WCET is a relevant design parameter for all timing-critical systems. In addition, the book covers algorithms to exploit the power-down modes of main memories in SDRAM technology, as well as the execute-in-place feature of Flash memories. The final chapter considers the impact of the register file, which is also part of the memory hierarchy.