The go-to guide to getting started with micro:bit and exploring all of the mini-computer's amazing capabilities.
The micro:bit is a pocket-sized electronic development platform built with education in mind. It was developed by the BBC in partnership with Microsoft and other major tech companies to provide kids with a fun, easy, inexpensive way to develop their digital skills. With it, kids (and grownups) can learn basic programming and coding while having fun making virtual pets, developing games, and a whole lot more. Written by internationally bestselling tech author Gareth Halfacree and endorsed by the Micro:bit Foundation, the micro:bit User Guide contains what you need to know to get up and running fast with the micro:bit. Learn everything from taking your first steps with the software to writing your own programs. You'll also learn how to expand its capabilities with add-ons through easy-to-follow, step-by-step instructions.
* Configure your micro:bit and develop your digital skills
* Write code in Microsoft PXT, Python, JavaScript, and more
* Discover the motion detector and compass
* Connect the micro:bit to a computer, Raspberry Pi, or your smartphone
* Build your own circuits and create hardware
The micro:bit User Guide is your go-to source for learning all the secrets of the micro:bit. Whether you're just beginning or have some experience, this book allows you to dive right in and experience everything the micro:bit has to offer.
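As a taste of the Python route the book covers, here is a minimal MicroPython sketch (ours, not the book's) that scrolls the compass heading on a button press and reacts to a shake gesture. It assumes a micro:bit flashed with standard MicroPython firmware and uses only the stock microbit module.

```python
# Minimal micro:bit MicroPython sketch (runs on the device, not on a PC).
from microbit import display, accelerometer, button_a, compass

compass.calibrate()  # the board asks you to tilt it until calibration completes

while True:
    if button_a.is_pressed():
        display.scroll(str(compass.heading()))   # compass heading in degrees
    elif accelerometer.was_gesture("shake"):
        display.scroll("Shake!")                 # the motion detector in action
```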
Pervasive healthcare is the conceptual system of providing healthcare to anyone, at any time, and anywhere by removing restraints of time and location while increasing both the coverage and the quality of healthcare. Pervasive Healthcare Monitoring is at the forefront of this research, and presents the ways in which mobile and wireless technologies can be used to implement the vision of pervasive healthcare. This vision includes prevention, healthcare maintenance and checkups; short-term monitoring (home healthcare monitoring), long-term monitoring (nursing home), and personalized healthcare monitoring; and incidence detection and management, emergency intervention, and transportation and treatment. The pervasive healthcare applications include pervasive health monitoring, intelligent emergency management system, pervasive healthcare data access, and ubiquitous mobile telemedicine. Pervasive Healthcare Monitoring fills the need for a research-oriented book on the wide array of emerging healthcare applications and services, including the treatment of several new wireless technologies and the ways in which they will implement the vision of pervasive healthcare. This book is written primarily for university faculty and graduate students in the field of healthcare technologies, and industry professionals involved in healthcare IT research, design, and development.
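To make the incidence-detection piece of this vision concrete, here is a minimal, hypothetical sketch (not from the book): stream vital-sign readings and flag those that leave a safe band, handing them off to emergency management. The thresholds, data, and names are illustrative assumptions.

```python
# Toy incidence detection for pervasive health monitoring (illustrative only).
SAFE_HR = (50, 120)   # assumed safe heart-rate band, beats per minute

def monitor(heart_rates):
    """Yield (time, reading) pairs that fall outside the safe band."""
    for t, hr in enumerate(heart_rates):
        if not SAFE_HR[0] <= hr <= SAFE_HR[1]:
            yield (t, hr)   # would be handed off to emergency management

readings = [72, 75, 138, 71, 44]
for t, hr in monitor(readings):
    print(f"alert at t={t}: heart rate {hr} bpm outside {SAFE_HR}")
```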
With the multiple overwrite feature, rewritable optical discs have found application in consumer DVD+RW video recorders, professional archiving systems and computer drives for data storage, replacing the floppy disc in the latter case. Optical Data Storage provides an overview of the recording principles, materials aspects, and application areas of phase-change optical storage. Some theoretical background is given to familiarize the reader with the basics of the phase-change processes. Elements of data recording, including mark formation, erasability, direct overwrite strategies, data quality and data stability, are explained and extensively discussed. A mark formation model is described and used throughout the whole book to back up measurement results and support the applications discussed. Two major aspects - high-speed and dual-layer recording - are considered in depth and solutions to achieve higher performance are analyzed.
High Performance Computational Methods for Biological Sequence Analysis presents biological sequence analysis using an interdisciplinary approach that integrates biological, mathematical and computational concepts. These concepts are presented so that computer scientists and biomedical scientists can obtain the necessary background for developing better algorithms and applying parallel computational methods. This book will enable both groups to develop the depth of knowledge needed to work in this interdisciplinary field. This work focuses on high performance computational approaches that are used to perform computationally intensive biological sequence analysis tasks: pairwise sequence comparison, multiple sequence alignment, and sequence similarity searching in large databases. These computational methods are becoming increasingly important to the molecular biology community, allowing researchers to explore the increasingly large amounts of sequence data generated by the Human Genome Project and other related biological projects. The approaches presented by the authors are state-of-the-art and show how to reduce analysis times significantly, sometimes from days to minutes. High Performance Computational Methods for Biological Sequence Analysis is tremendously important to biomedical science students and researchers who are interested in applying sequence analyses to their studies, and to computational science students and researchers who are interested in applying new computational approaches to biological sequence analyses.
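As an illustration of the pairwise-comparison task the book accelerates, here is a plain sequential Needleman-Wunsch scoring sketch (ours, not the book's parallel method); the match/mismatch/gap scoring scheme is an illustrative assumption.

```python
# Global pairwise alignment score via Needleman-Wunsch dynamic programming.
def nw_score(a, b, match=1, mismatch=-1, gap=-2):
    rows, cols = len(a) + 1, len(b) + 1
    dp = [[0] * cols for _ in range(rows)]
    for i in range(1, rows):             # aligning a prefix against nothing
        dp[i][0] = i * gap
    for j in range(1, cols):
        dp[0][j] = j * gap
    for i in range(1, rows):
        for j in range(1, cols):
            diag = dp[i-1][j-1] + (match if a[i-1] == b[j-1] else mismatch)
            dp[i][j] = max(diag, dp[i-1][j] + gap, dp[i][j-1] + gap)
    return dp[-1][-1]

print(nw_score("GATTACA", "GCATGCU"))
```

Each cell depends only on its three upper-left neighbors, which is exactly the structure (anti-diagonal wavefronts) that parallel methods like those in the book exploit.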
In this book, fundamental theories and engineering designs of NOMA are organically blended, with comprehensive performance evaluations from both link-level and system-level simulations.
Digital Systems Design and Prototyping: Using Field Programmable Logic and Hardware Description Languages, Second Edition covers the subject of digital systems design using two important technologies: Field Programmable Logic Devices (FPLDs) and Hardware Description Languages (HDLs). These two technologies are combined to aid in the design, prototyping, and implementation of a whole range of digital systems from very simple ones replacing traditional glue logic to very complex ones customized as the applications require. Three HDLs are presented: VHDL and Verilog, the widely used standard languages, and the proprietary Altera HDL (AHDL). The chapters on these languages serve as tutorials and comparisons are made that show the strengths and weaknesses of each language. A large number of examples are used in the description of each language providing insight for the design and implementation of FPLDs. The CD-ROM included with the book contains the Altera MAX+PLUS II development environment which is ready to compile and simulate all examples. With the addition of the Altera UP-1 prototyping board, all examples can be tested and verified in a real FPLD. Digital Systems Design and Prototyping: Using Field Programmable Logic and Hardware Description Languages, Second Edition is designed as an advanced level textbook as well as a reference for the professional engineer.
The advent of very large scale integrated circuit technology has enabled the construction of very complex and large interconnection networks. By most accounts, the next generation of supercomputers will achieve its gains by increasing the number of processing elements, rather than by using faster processors. The most difficult technical problem in constructing a supercomputer will be the design of the interconnection network through which the processors communicate. Selecting an appropriate and adequate topological structure of interconnection networks will become a critical issue, on which many research efforts have been made over the past decade. This book aims to draw the reader's attention to this important research area. Graph theory is a fundamental and powerful mathematical tool for designing and analyzing interconnection networks, since the topological structure of an interconnection network is a graph. This fact has been universally accepted by computer scientists and engineers. This book presents the most basic problems, concepts and well-established results on the topological structure and analysis of interconnection networks in the language of graph theory. The material originates from a vast amount of literature, but the theory presented is developed carefully and skillfully. The treatment is generally self-contained, and most stated results are proved. No exercises are explicitly provided, but there are some stated results whose proofs are left to the reader to consolidate their understanding of the material.
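As a small taste of the graph-theoretic viewpoint, the sketch below (ours, not from the book) builds the n-dimensional hypercube Q_n, a classic interconnection topology, as an adjacency map and confirms by breadth-first search that its diameter equals n.

```python
# The n-dimensional hypercube as a graph: vertices are bit strings 0..2^n-1,
# and two vertices are adjacent iff they differ in exactly one bit.
from collections import deque

def hypercube(n):
    return {v: [v ^ (1 << i) for i in range(n)] for v in range(1 << n)}

def diameter(adj):
    def eccentricity(src):
        dist = {src: 0}
        q = deque([src])
        while q:
            u = q.popleft()
            for w in adj[u]:
                if w not in dist:
                    dist[w] = dist[u] + 1
                    q.append(w)
        return max(dist.values())
    return max(eccentricity(v) for v in adj)

print(diameter(hypercube(4)))  # prints 4: the diameter of Q_n is n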
Fault-Tolerant Parallel Computation presents recent advances in algorithmic ways of introducing fault-tolerance in multiprocessors under the constraint of preserving efficiency. The difficulty in combining fault-tolerance and efficiency is that the two rely on conflicting means: fault-tolerance is achieved by introducing redundancy, while efficiency is achieved by removing redundancy. This monograph demonstrates how in certain models of parallel computation it is possible to combine efficiency and fault-tolerance, and shows how to develop efficient algorithms without concern for fault-tolerance, and then correctly and efficiently execute these algorithms on parallel machines whose processors are subject to arbitrary dynamic fail-stop errors. The efficient algorithmic approaches to multiprocessor fault-tolerance presented in this monograph make a contribution towards bridging the gap between the abstract models of parallel computation and realizable parallel architectures. Fault-Tolerant Parallel Computation presents the state of the art in algorithmic approaches to fault-tolerance in efficient parallel algorithms. The monograph synthesizes work that was presented in recent symposia and published in refereed journals by the authors and other leading researchers. This is the first text that takes the reader on a grand tour of this new field, summarizing major results and identifying hard open problems. This monograph will be of interest to academic and industrial researchers and graduate students working in the areas of fault-tolerance, algorithms and parallel computation, and may also be used as a text in a graduate course on parallel algorithmic techniques and fault-tolerance.
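To make the fail-stop setting concrete, here is a toy simulation (ours, not one of the book's algorithms) in which a pool of independent tasks must all complete even though workers halt permanently at random; surviving workers re-claim unfinished work each round.

```python
# Naive fault-tolerant execution under fail-stop failures (illustrative only;
# the book's algorithms schedule redundancy far more cleverly than this).
import random

def run_with_failstop(num_tasks, num_workers, fail_prob=0.2, seed=1):
    rng = random.Random(seed)
    done = [False] * num_tasks
    alive = num_workers
    rounds = 0
    while not all(done) and alive:
        rounds += 1
        pending = [t for t, d in enumerate(done) if not d]
        for t in pending[:alive]:     # each live worker claims one task
            done[t] = True
        # fail-stop: some workers halt for good at the round boundary
        alive = sum(1 for _ in range(alive) if rng.random() > fail_prob)
    return rounds, all(done)

print(run_with_failstop(num_tasks=100, num_workers=16))
```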
Blockchain technology is an emerging distributed, decentralized architecture and computing paradigm, which has accelerated the development and application of cloud, fog and edge computing; artificial intelligence; cyber-physical systems; social networking; crowdsourcing and crowdsensing; 5G; trust management and finance; and many other sectors. Nowadays, blockchain technology is used primarily in information systems to keep information secure and private. However, blockchain has faced many threats and vulnerabilities over the past decade, such as 51% attacks and double-spending attacks. The popularity and rapid development of blockchain brings many technical and regulatory challenges for research and academic communities. The main goal of this book is to encourage both researchers and practitioners of blockchain technology to share and exchange their experiences and recent studies between academia and industry. The reader will be provided with the most up-to-date knowledge of blockchain in mainstream areas of security and privacy in the decentralized domain, which is timely and essential (distributed and P2P applications are increasing day by day, and attackers adopt new mechanisms to threaten the security and privacy of users in those environments). This book provides a detailed explanation of security and privacy with respect to blockchain for information systems, and will be an essential resource for students, researchers and scientists studying blockchain uses in information systems and those wanting to explore the current state of play.
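As a minimal illustration of the tamper-evidence underlying these security properties, the sketch below (ours, not from the book) chains blocks by hash: each block commits to its predecessor's hash, so altering any block invalidates every later link. Consensus, networking, and proof-of-work are deliberately omitted.

```python
# A minimal hash-linked chain demonstrating blockchain tamper-evidence.
import hashlib, json

def block_hash(block):
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def make_chain(records):
    chain, prev = [], "0" * 64
    for i, data in enumerate(records):
        block = {"index": i, "data": data, "prev": prev}
        prev = block_hash(block)
        chain.append(block)
    return chain

def verify(chain):
    prev = "0" * 64
    for block in chain:
        if block["prev"] != prev:
            return False
        prev = block_hash(block)
    return True

chain = make_chain(["alice->bob:5", "bob->carol:2"])
print(verify(chain))              # True
chain[0]["data"] = "alice->bob:50"
print(verify(chain))              # False: tampering breaks every later link
```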
This book presents the cellular wireless network standard NB-IoT (Narrow Band-Internet of Things), which addresses many key requirements of the IoT. NB-IoT is a topic that is inspiring the industry to create new business cases and associated products. The author first introduces the technology and typical IoT use cases. He then explains NB-IoT's extended network coverage and outstanding power-saving features, which enable IoT devices (e.g. sensors) to work everywhere, for more than 10 years, in a maintenance-free way. The book explains to industrial users how to utilize NB-IoT features for their own IoT projects. Other system ingredients (e.g. IoT cloud services) and embedded security aspects are covered as well. The author takes an in-depth look at NB-IoT from an application engineering point of view, focusing on IoT device design. The target audience is technical-minded IoT project owners and system design engineers who are planning to develop an IoT application.
In Symbolic Analysis for Parallelizing Compilers the author presents an excellent demonstration of the effectiveness of symbolic analysis in tackling important optimization problems, some of which inhibit loop parallelization. The framework that Haghighat presents has proved extremely successful in induction and wraparound variable analysis, strength reduction, dead code elimination and symbolic constant propagation. The approach can be applied to any program transformation or optimization problem that relies on compile-time information about the properties and value ranges of program variables; this covers the majority of, if not all, optimization and parallelization techniques. The book makes a compelling case for the potential of symbolic analysis, applying it for the first time - and with remarkable results - to a number of classical optimization problems: loop scheduling, static timing or size analysis, and dependence analysis. It demonstrates how symbolic analysis can solve these problems faster and more accurately than existing hybrid techniques.
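For readers new to these transformations, here is a toy sketch (ours, not Haghighat's framework) of constant propagation, constant folding, and strength reduction over a made-up three-address IR; the IR format is an assumption for illustration.

```python
# Toy pass: propagate constants, fold constant expressions, and
# strength-reduce x*2 into x+x over (dest, op, arg1, arg2) tuples.
def optimize(ir):
    env, out = {}, []
    for dst, op, a, b in ir:
        a = env.get(a, a)                 # substitute known constants
        b = env.get(b, b)
        if op == "const":
            env[dst] = a                  # record the constant
        elif isinstance(a, int) and isinstance(b, int):
            env[dst] = a * b if op == "mul" else a + b   # fold at compile time
        elif op == "mul" and b == 2:
            out.append((dst, "add", a, a))               # strength reduction
        else:
            out.append((dst, op, a, b))
    return out

ir = [("n", "const", 4, None),
      ("t1", "mul", "n", 3),     # folded to 12, so no runtime code emitted
      ("t2", "mul", "x", 2)]     # becomes x + x
print(optimize(ir))              # [('t2', 'add', 'x', 'x')]
```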
Efficient parallel solutions have been found to many problems. Some of them can be obtained automatically from sequential programs, using compilers. However, there is a large class of problems - irregular problems - that lack efficient solutions. IRREGULAR 94 - a workshop and summer school organized in Geneva - addressed the problems associated with the derivation of efficient solutions to irregular problems. This book, which is based on the workshop, draws on the contributions of outstanding scientists to present the state of the art in irregular problems, covering aspects ranging from scientific computing and discrete optimization to the automatic extraction of parallelism. Audience: This first book on parallel algorithms for irregular problems is of interest to advanced graduate students and researchers in parallel computer science.
This book puts in focus various techniques for checking the modeling fidelity of Cyber-Physical Systems (CPS) with respect to the physical world they represent. The authors present modeling and analysis techniques from very different communities and angles, discuss their possible interactions, and examine the commonalities and differences between their practices. Coverage includes model-driven development, resource-driven development, statistical analysis, proofs of simulator implementation, compiler construction, power/temperature modeling of digital devices, high-level performance analysis, and code/device certification. Several industrial contexts are covered, including modeling of computing and communication, proof architecture models, and statistically based validation techniques.
This book describes the design and implementation of energy-efficient smart (digital output) temperature sensors in CMOS technology. To accomplish this, a new readout topology, namely the zoom-ADC, is presented. It combines a coarse SAR-ADC with a fine Sigma-Delta (SD) ADC. The digital result obtained from the coarse ADC is used to set the reference levels of the SD-ADC, thereby zooming its full-scale range into a small region around the input signal. This technique considerably reduces the SD-ADC's full-scale range, and notably relaxes the number of clock cycles needed for a given resolution, as well as the DC-gain and swing of the loop-filter. Both conversion time and power-efficiency can be improved, which results in a substantial improvement in energy-efficiency. Two BJT-based sensor prototypes based on 1st-order and 2nd-order zoom-ADCs are presented. They both achieve inaccuracies of less than ±0.2°C over the military temperature range (-55°C to 125°C). A prototype capable of sensing temperatures up to 200°C is also presented. As an alternative to BJTs, sensors based on dynamic threshold MOSTs (DTMOSTs) are also presented. It is shown that DTMOSTs are capable of achieving low inaccuracy (±0.4°C over the military temperature range) as well as sub-1V operation, making them well suited for use in modern CMOS processes.
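A rough numeric sketch of the zoom principle (ours, not the book's circuit-level model): a coarse pass picks the sub-range, and the fine converter only has to resolve within it. The bit widths and the ideal, non-overlapping ranges are simplifying assumptions; practical zoom-ADCs typically add overlap between the two passes to tolerate coarse errors.

```python
# Idealized two-step zoom conversion: coarse SAR-style pass selects a
# sub-range, then a fine pass resolves within that zoomed range.
def zoom_adc(vin, vref=1.0, coarse_bits=4, fine_bits=8):
    coarse_levels = 1 << coarse_bits
    step = vref / coarse_levels
    coarse = min(int(vin / step), coarse_levels - 1)   # coarse pass
    lo = coarse * step                                  # zoomed reference
    fine_levels = 1 << fine_bits
    fine = min(int((vin - lo) / step * fine_levels), fine_levels - 1)
    code = (coarse << fine_bits) | fine                 # combined output code
    return code, code / (coarse_levels * fine_levels) * vref

code, approx = zoom_adc(0.637)
print(code, round(approx, 5))   # 12-bit-equivalent result close to 0.637 V
```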
This book constitutes the refereed post-conference proceedings of the Fourth IFIP International Cross-Domain Conference on Internet of Things, IFIPIoT 2021, held virtually in November 2021. The 15 full papers presented were carefully reviewed and selected from 33 submissions. Also included is a summary of two panel sessions held at the conference. The papers are organized in the following topical sections: Challenges in IoT Applications and Research; Modernizing Agricultural Practice Using IoT; Cyber-Physical IoT Systems in Wildfire Context; IoT for Smart Health; Security; and Methods.
Welcome to IM 2003, the eighth in the premier series of international technical conferences in this field. As IT management has become mission-critical to the economies of the developed world, our technical program has grown in relevance, strength and quality. Over the next few years, leading IT organizations will gradually move from identifying infrastructure problems to providing business services via automated, intelligent management systems. To be successful, these future management systems must provide global scalability, for instance, to support Grid computing and large numbers of pervasive devices. In Grid environments, organizations can pool desktops and servers, dynamically creating a virtual environment with huge processing power, and new management challenges. As the number, type, and criticality of devices connected to the Internet grows, new innovative solutions are required to address this unprecedented scale and management complexity. The growing penetration of technologies, such as WLANs, introduces new management challenges, particularly for performance and security. Management systems must also support the management of business processes and their supporting technology infrastructure as integrated entities. They will need to significantly reduce the amount of extraneous, low-value data thrown at consoles, delivering instead a cogent view of the system state, while leaving the handling of lower-level events to self-managed, heterogeneous systems and devices. There is a new emphasis on "autonomic" computing, building systems that can perform routine tasks without administrator intervention and take preemptive actions to rapidly recover from potential software or hardware failures.
Peer-to-peer (P2P) technology, or peer computing, is a paradigm that is viewed as a potential technology for redesigning distributed architectures and, consequently, distributed processing. Yet the scale and dynamism that characterize P2P systems demand that we reexamine traditional distributed technologies. A paradigm shift that includes self-reorganization, adaptation and resilience is called for. On the other hand, the increased computational power of such networks opens up completely new applications, such as in digital content sharing, scientific computation, gaming, or collaborative work environments. In this book, Vu, Lupu and Ooi present the technical challenges offered by P2P systems, and the means that have been proposed to address them. They provide a thorough and comprehensive review of recent advances on routing and discovery methods; load balancing and replication techniques; security, accountability and anonymity, as well as trust and reputation schemes; programming models; and P2P systems and projects. Besides surveying existing methods and systems, they also compare and evaluate some of the more promising schemes. The need for such a book is evident. It provides a single source for practitioners, researchers and students on the state of the art. For practitioners, this book explains best practice, guiding selection of appropriate techniques for each application. For researchers, this book provides a foundation for the development of new and more effective methods. For students, it is an overview of the wide range of advanced techniques for realizing effective P2P systems, and it can easily be used as a text for an advanced course on Peer-to-Peer Computing and Technologies, or as a companion text for courses on various subjects, such as distributed systems, and grid and cluster computing.
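As one concrete example of the routing and load-balancing techniques surveyed, here is a minimal consistent-hashing ring (ours, not from the book): keys map to the nearest node clockwise on a hash ring, so a peer joining or leaving relocates only a small share of the keys; structured P2P overlays (DHTs) build on this idea.

```python
# Minimal consistent-hashing ring, a building block of P2P routing.
import hashlib, bisect

def h(s):
    return int(hashlib.sha1(s.encode()).hexdigest(), 16)

class Ring:
    def __init__(self, nodes):
        # each node owns the arc of the ring preceding its hash point
        self.points = sorted((h(n), n) for n in nodes)

    def lookup(self, key):
        keys = [p for p, _ in self.points]
        i = bisect.bisect(keys, h(key)) % len(self.points)  # wrap around
        return self.points[i][1]

ring = Ring(["peer-a", "peer-b", "peer-c"])
for k in ["file1", "file2", "file3"]:
    print(k, "->", ring.lookup(k))
```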
It has been widely recognized that artificial intelligence computations offer large potential for distributed and parallel processing. Unfortunately, not much is known about designing parallel AI algorithms and efficient, easy-to-use parallel computer architectures for AI applications. The field of parallel computation and computers for AI is in its infancy, but some significant ideas have appeared and initial practical experience has become available. The purpose of this book has been to collect in one volume contributions from several leading researchers and pioneers of AI that represent a sample of these ideas and experiences. This sample does not include all schools of thought nor contributions from all leading researchers, but it covers a relatively wide variety of views and topics and in this sense can be helpful in assessing the state of the art. We hope that the book will serve, at least, as a pointer to more specialized literature and that it will stimulate interest in the area of parallel AI processing. It has been a great pleasure and a privilege to cooperate with all contributors to this volume. They have my warmest thanks and gratitude. Mrs. Birgitta Knapp has assisted me in the editorial task and demonstrated a great deal of skill and patience. Janusz S. Kowalik
Linux is for everyone! Linux All-in-One For Dummies breaks down the ever-popular operating system to its basics and trains users on the art of Linux. This handy reference covers all the latest updates and operating system features. It presents content on Linux desktops, applications, and more. With eight books in one, you’ll have access to the most comprehensive overview of Linux around. Explore the inner workings of Linux machines, so you’ll know Linux front to back. This all-inclusive handbook also walks you through solving Linux problems―complete with hands-on examples―so you’ll be a Linux whiz before you know it.
This book is a massive source of support for beginning and intermediate Linux users, as well as those looking to brush up on their knowledge for certification. And, thanks to the signature Dummies approach, it’s also a lot of fun.
Based on the lectures given during the Eurocourse on 'Computing with Parallel Architectures', held at the Joint Research Centre, Ispra, Italy, September 10-14, 1990.
This book describes automated debugging approaches for the bugs and faults that appear at different abstraction levels of a hardware system. The authors employ a transaction-based debug approach for systems at the transaction level, asserting the correct relation of transactions. The automated debug approach for design bugs finds the potential fault candidates at the RTL and gate level of a circuit. Debug techniques for logic bugs and synchronization bugs are demonstrated, enabling readers to localize the most difficult bugs. Debug automation for electrical faults (delay faults) finds the potentially failing speedpaths in a circuit at gate level. The various debug approaches described achieve high diagnosis accuracy and reduce the debugging time, shortening the IC development cycle and increasing the productivity of designers.
* Describes a unified framework for debug automation used at both pre-silicon and post-silicon stages
* Provides approaches for debug automation of a hardware system at different levels of abstraction, i.e., chip, gate level, RTL and transaction level
* Includes techniques for debug automation of design bugs and electrical faults, as well as an infrastructure to debug NoC-based multiprocessor SoCs
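To illustrate the flavor of gate-level debug automation, here is a toy single-stuck-at candidate search (ours, not the authors' framework): given one failing test and the bad output actually observed, it lists the faults that reproduce the observation. The two-gate netlist and the fault model are illustrative assumptions.

```python
# Toy gate-level fault localization by single stuck-at fault injection.
NETLIST = [                     # (output_net, gate_type, input_nets)
    ("n1", "AND", ("a", "b")),
    ("n2", "OR", ("n1", "c")),
]

def simulate(inputs, stuck=None):
    nets = dict(inputs)
    for out, gate, ins in NETLIST:
        x, y = (nets[i] for i in ins)
        nets[out] = (x and y) if gate == "AND" else (x or y)
        if stuck and stuck[0] == out:
            nets[out] = stuck[1]          # inject the stuck-at fault
    return nets["n2"]

test_inputs = {"a": 1, "b": 1, "c": 0}    # fault-free output would be 1
bad_output = 0                            # but 0 was observed on silicon
candidates = [(net, v)
              for net in ("n1", "n2") for v in (0, 1)
              if simulate(test_inputs, stuck=(net, v)) == bad_output]
print(candidates)   # [('n1', 0), ('n2', 0)]: the suspect nets to probe
```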
This book presents the best selected papers from the 4th International Conference on Smart Computing and Informatics (SCI 2020), held at the Department of Computer Science and Engineering, Vasavi College of Engineering (Autonomous), Hyderabad, Telangana, India. It presents advanced and multi-disciplinary research towards the design of smart computing and informatics. The theme is broad, focusing on various innovation paradigms in system knowledge, intelligence and sustainability that may be applied to provide realistic solutions to varied problems in society, environment and industries. The scope also extends to the deployment of emerging computational and knowledge-transfer approaches, optimizing solutions in various disciplines of science, technology and health care.
Analog Integrated Circuits deals with the design and analysis of modern analog circuits using integrated bipolar and field-effect transistor technologies. This book is suitable as a text for a one-semester course for senior-level or first-year graduate students as well as a reference work for practicing engineers. Advanced students will also find the text useful in that some of the material presented here is not covered in many first courses on analog circuits. Included in this is an extensive coverage of feedback amplifiers, current-mode circuits, and translinear circuits. Suitable background would be fundamental courses in electronic circuits and semiconductor devices. This book contains numerous examples, many of which include commercial analog circuits. End-of-chapter problems are given, many illustrating practical circuits. Chapter 1 discusses the models commonly used to represent devices used in modern analog integrated circuits. Presented are models for bipolar junction transistors, junction diodes, junction field-effect transistors, and metal-oxide semiconductor field-effect transistors. Both large-signal and small-signal models are developed as well as their implementation in the SPICE circuit simulation program. The basic building blocks used in a large variety of analog circuits are analyzed in Chapter 2; these consist of current sources, dc level-shift stages, single-transistor gain stages, two-transistor gain stages, and output stages. Both bipolar and field-effect transistor implementations are presented. Chapter 3 deals with operational amplifier circuits. The four basic op-amp circuits are analyzed: (1) voltage-feedback amplifiers, (2) current-feedback amplifiers, (3) current-differencing amplifiers, and (4) transconductance amplifiers. Selected applications are also presented.
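As a sample of the kind of small-signal result the early chapters build on, here is a worked numeric sketch of the common-emitter voltage gain, A_v = -g_m * R_C with g_m = I_C / V_T; the bias values are assumptions chosen for illustration, not figures from the book.

```python
# Small-signal gain of a single-BJT common-emitter stage.
IC = 1e-3       # assumed collector bias current, 1 mA
VT = 0.02585    # thermal voltage kT/q at ~300 K, in volts
RC = 5e3        # assumed collector resistor, 5 kOhm

gm = IC / VT            # transconductance: ~38.7 mA/V
Av = -gm * RC           # voltage gain: ~ -193 V/V
print(f"gm = {gm*1e3:.1f} mA/V, Av = {Av:.0f} V/V")
```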
In a world that is awash in ubiquitous technology, even the least tech-savvy know that we must take care how that technology affects individuals and society. That governments and organizations around the world now focus on these issues, that universities and research institutes in many different languages dedicate significant resources to study the issues, and that international professional organizations have adopted standards and directed resources toward ethical issues in technology is in no small part the result of the work of Simon Rogerson. - Chuck Huff, Professor of Social Psychology at Saint Olaf College, Northfield, Minnesota
In 1995, Apple launched its first WWW server, Quick Time On-line. It was the year Microsoft released Internet Explorer and sold 7 million copies of Windows 95 in just 2 months. In March 1995, the author Simon Rogerson opened the first ETHICOMP conference with these words: We live in a turbulent society where there is social, political, economic and technological turbulence ... it is causing a vast amount of restructuring within all these organisations which impacts on individuals, which impacts on the way departments are set up, organisational hierarchies, job content, span of control, social interaction and so on and so forth. ... Information is very much the fuel of modern technological change. Almost anything now can be represented by the technology and transported to somewhere else. It's a situation where the more information a computer can process, the more of the world it can actually turn into information. That may well be very exciting, but it is also very concerning. That could be describing today. More than 25 years later, these issues are still at the forefront of how ethical digital technology can be developed and utilised. This book is an anthology of the author's work over the past 25 years of pioneering research in digital ethics. It is structured into five themes: Journey, Process, Product, Future and Education. Each theme commences with an introductory explanation of the papers, their relevance and their interrelationship. The anthology finishes with a concluding chapter which summarises the key messages and suggests what might happen in the future. Included in this chapter are insights from some younger leading academics who are part of the community charged with ensuring that ethical digital technology is realised.
You may like...
Edsger Wybe Dijkstra - His Life, Work… by Krzysztof R. Apt, Tony Hoare (Hardcover, R3,075)
Constraint Decision-Making Systems in… by Santosh Kumar Das, Nilanjan Dey (Hardcover, R7,041)