This book provides a comprehensive guide to the design of sustainable and green computing systems (GSC). Coverage includes important breakthroughs in various aspects of GSC, including multi-core architectures, interconnection technology, data centers, high performance computing (HPC), and sensor networks. The authors address the challenges of power efficiency and sustainability in various contexts, including system design, computer architecture, programming languages, compilers and networking.
These are the proceedings of the 20th International Conference on Domain Decomposition Methods in Science and Engineering. Domain decomposition methods are iterative methods for solving the often very large linear or nonlinear systems of algebraic equations that arise when various problems in continuum mechanics are discretized using finite elements. They are designed for massively parallel computers and take the memory hierarchy of such systems into account. This is essential for approaching peak floating-point performance. There is an increasingly well-developed theory which is having a direct impact on the development and improvement of these algorithms.
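The core idea can be sketched in a few lines. The example below is a minimal sketch of the classical alternating (multiplicative) Schwarz method applied to a 1D Poisson model problem: repeated local solves on overlapping subdomains drive down the global residual. The grid size, overlap width and tolerance are illustrative choices, not values taken from the proceedings.

```python
import numpy as np

# 1D Poisson model problem: -u'' = 1 on (0,1), u(0) = u(1) = 0,
# discretized with second-order finite differences.
n = 99
h = 1.0 / (n + 1)
A = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2
f = np.ones(n)

# Two overlapping subdomains, given as index sets into the unknowns.
overlap = 4
dom1 = np.arange(0, n // 2 + overlap)
dom2 = np.arange(n // 2 - overlap, n)

u = np.zeros(n)
for sweep in range(200):
    for dom in (dom1, dom2):            # alternating (multiplicative) Schwarz sweep
        r = f - A @ u                   # global residual
        u[dom] += np.linalg.solve(A[np.ix_(dom, dom)], r[dom])  # local subdomain solve
    if np.linalg.norm(f - A @ u) < 1e-10 * np.linalg.norm(f):
        break

print(f"converged after {sweep + 1} sweeps")
```

Updating the global residual between the two local solves is what makes the sweep multiplicative; the additive variant instead combines independent local corrections, which is what maps naturally onto massively parallel machines.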
Cryptographic applications, such as the RSA algorithm, ElGamal cryptography, elliptic curve cryptography, the Rabin cryptosystem, the Diffie-Hellman key exchange algorithm, and the Digital Signature Standard, use modular exponentiation extensively. The performance of all these applications strongly depends on the efficient implementation of modular exponentiation and modular multiplication. Since 1984, when Montgomery first introduced a method to evaluate modular multiplications, many algorithmic modifications have been made to improve the efficiency of modular multiplication, but far less work has been done on improving the efficiency of modular exponentiation itself. This research monograph addresses the question: how can the performance of modular exponentiation, which is the crucial operation of many public-key cryptographic techniques, be improved? The book focuses on energy-efficient modular exponentiation for cryptographic hardware. Spread across five chapters, this well-researched text focuses in detail on Bit Forwarding Techniques and the corresponding hardware realizations. Readers will also discover advanced performance improvement techniques based on high-radix multiplication and cryptographic hardware based on multi-core architectures.
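For reference, the operation at the heart of these systems can be written in a few lines. The sketch below is a plain left-to-right square-and-multiply modular exponentiation; it is not one of the bit-forwarding or Montgomery-based techniques treated in the book, but it shows why the cost is dominated by modular multiplications.

```python
def mod_exp(base: int, exponent: int, modulus: int) -> int:
    """Left-to-right binary (square-and-multiply) modular exponentiation."""
    result = 1
    base %= modulus
    for bit in bin(exponent)[2:]:               # scan exponent bits, MSB first
        result = (result * result) % modulus    # square for every bit
        if bit == '1':
            result = (result * base) % modulus  # extra multiply for set bits
    return result

# Sanity check against Python's built-in three-argument pow().
assert mod_exp(7, 560, 561) == pow(7, 560, 561)
```

Each exponent bit costs one modular squaring plus, for set bits, one extra modular multiplication, so any speed-up of modular multiplication (Montgomery's method, high-radix multipliers) translates directly into faster exponentiation.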
1 The economic significance of protective measures for EDP installations.- 2 Prerequisites and requirements for the security-oriented design of a high-security area.- 3 Different threats to data centers.- 3.1 Break-in/theft, sabotage and vandalism.- 3.2 Fire and smoke.- 3.3 Malfunctions of the air conditioning.- 3.4 Water ingress.- 3.5 Electrical supply.- 3.5.1 Maintaining the power supply.- 3.5.2 Overvoltages and lightning strikes.- 3.6 Data loss.- 3.7 Other hazards.- 4 Possible analysis methods.- 5 Scheme for the concrete determination of risk and protection levels.- 5.1 Measures against break-in, theft, sabotage and vandalism.- 5.2 Measures against fire and smoke.- 5.3 Measures against air-conditioning malfunctions.- 5.4 Measures against damage from water or a faulty supply.- 5.5 Measures for maintaining a constant power supply.- 5.6 Measures against data loss.- 5.7 Other security-relevant criteria.- 5.8 Summary rating of the analyzed risks.- 6 Security management: organization and implementation of the technical security measures.- 7 Security-compliant EDP operation.- 8 Organizational steps for permanently maintaining the level of the originally designed security concept.- 8.1 Human aspects.- 8.2 Technical measures.- 9 Disaster preparedness.- 9.1 Disaster plan.- 9.2 Backup concepts.- 9.3 Insurance concepts for high-security areas.- 10 Closing remarks and outlook.
Each day, new applications and methods are developed for utilizing technology in the field of medical sciences, both as diagnostic tools and as methods for patients to access their medical information through their personal gadgets. However, the maximum potential for the application of new technologies within the medical field has not yet been realized. Mobile Devices and Smart Gadgets in Medical Sciences is a pivotal reference source that explores different mobile applications, tools, software, and smart gadgets and their applications within the field of healthcare. Covering a wide range of topics such as artificial intelligence, telemedicine, and oncology, this book is ideally designed for medical practitioners, mobile application developers, technology developers, software experts, computer engineers, programmers, ICT innovators, policymakers, researchers, academicians, and students.
Given the widespread use of real-time multitasking systems, there are tremendous optimization opportunities if reconfigurable computing can be effectively incorporated while maintaining performance and other design constraints of typical applications. The focus of this book is to describe the dynamic reconfiguration techniques that can be safely used in real-time systems. The book provides comprehensive approaches that consider the synergistic effects of computation, communication and storage together to significantly improve overall performance, power, energy and temperature.
The Heinz Nixdorf Museum Forum (HNF) is the world's largest computer museum and is dedicated to portraying the past, present and future of information technology. In the "Year of Informatics 2006" the HNF was particularly keen to examine the history of this still quite young discipline. The short-lived nature of information technologies means that individuals, inventions, devices, institutes and companies "age" more rapidly than in many other specialties. And in the nature of things the group of computer pioneers from the early days is growing smaller all the time. To supplement a planned new exhibit on "Software and Informatics" at the HNF, the idea arose of recording the history of informatics in an accompanying publication. My search for suitable sources and authors very quickly came up with the right answer, the very first name in Germany: Friedrich L. Bauer, Professor Emeritus of Mathematics at the TU in Munich, one of the fathers of informatics in Germany and for decades the indefatigable author of the "Historical Notes" column of the journal Informatik Spektrum. Friedrich L. Bauer was already the author of two works on the history of informatics, published in different decades and in different books. Both of them are notable for their knowledgeable, extremely comprehensive and yet compact style. My obvious course was to motivate this author to amalgamate, supplement and illustrate his previous work.
This book addresses the topic of exploiting enterprise-linked data with a particular focus on knowledge construction and accessibility within enterprises. It identifies the gaps between the requirements of enterprise knowledge consumption and "standard" data consuming technologies by analysing real-world use cases, and proposes the enterprise knowledge graph to fill such gaps. It provides concrete guidelines for effectively deploying linked-data graphs within and across business organizations. It is divided into three parts, focusing on the key technologies for constructing, understanding and employing knowledge graphs. Part 1 introduces basic background information and technologies, and presents a simple architecture to elucidate the main phases and tasks required during the lifecycle of knowledge graphs. Part 2 focuses on technical aspects; it starts with state-of-the-art knowledge-graph construction approaches, and then discusses exploration and exploitation techniques as well as advanced question-answering topics concerning knowledge graphs. Lastly, Part 3 demonstrates examples of successful knowledge graph applications in the media industry, healthcare and cultural heritage, and offers conclusions and future visions.
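For readers unfamiliar with the underlying data model, the toy sketch below represents a knowledge graph as subject-predicate-object triples and answers a question by chaining two wildcard pattern queries. The entity and predicate names are invented for illustration; enterprise deployments of the kind discussed in the book would use RDF stores and SPARQL rather than an in-memory set.

```python
# Toy "knowledge graph": a set of (subject, predicate, object) triples.
triples = {
    ("acme:ProductX",    "rdf:type",        "acme:Product"),
    ("acme:ProductX",    "acme:madeBy",     "acme:PlantBerlin"),
    ("acme:PlantBerlin", "rdf:type",        "acme:Factory"),
    ("acme:PlantBerlin", "acme:locatedIn",  "wd:Berlin"),
}

def match(s=None, p=None, o=None):
    """Return all triples matching a pattern; None acts as a wildcard."""
    return [(ts, tp, to) for ts, tp, to in triples
            if (s is None or ts == s)
            and (p is None or tp == p)
            and (o is None or to == o)]

# "Where is ProductX made?" -- follow madeBy, then locatedIn.
for _, _, plant in match("acme:ProductX", "acme:madeBy"):
    for _, _, city in match(plant, "acme:locatedIn"):
        print(plant, "is located in", city)
```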
This innovative and in-depth book integrates the well-developed theory and practical applications of one-dimensional and multidimensional multirate signal processing. Using a rigorous mathematical framework, it carefully examines the fundamentals of this rapidly growing field. Areas covered include: basic building blocks of multirate signal processing; fundamentals of multidimensional multirate signal processing; multirate filter banks; lossless lattice structures; introduction to wavelet signal processing.
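As a taste of the basic building blocks, the sketch below implements decimation by an integer factor, i.e. an anti-aliasing low-pass filter followed by downsampling, using only NumPy. The signal, filter length and rates are illustrative values and not taken from the book.

```python
import numpy as np

fs, M = 1000, 4                       # original rate (Hz) and decimation factor
t = np.arange(0, 1.0, 1 / fs)
x = np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 300 * t)

# Anti-aliasing low-pass FIR (windowed sinc), cutoff at the new Nyquist rate.
num_taps = 101
fc = 0.5 / M                          # cutoff in cycles/sample (= fs / (2*M) in Hz)
k = np.arange(num_taps) - (num_taps - 1) / 2
h = 2 * fc * np.sinc(2 * fc * k) * np.hamming(num_taps)
h /= h.sum()                          # unity gain at DC

# Decimation: filter, then keep every M-th sample.
y = np.convolve(x, h, mode="same")[::M]
print(len(x), "samples at", fs, "Hz ->", len(y), "samples at", fs // M, "Hz")
```

Interpolation is the dual building block: insert zeros between samples and then low-pass filter to remove the spectral images; cascades of such blocks form the multirate filter banks covered in the book.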
Wafer-scale integration has long been the dream of system designers. Instead of chopping a wafer into a few hundred or a few thousand chips, one would just connect the circuits on the entire wafer. What an enormous capability wafer-scale integration would offer: all those millions of circuits connected by high-speed on-chip wires. Unfortunately, the best known optical systems can provide suitably fine resolution only over an area much smaller than a whole wafer. There is no known way to pattern a whole wafer with transistors and wires small enough for modern circuits. Statistical defects present a firmer barrier to wafer-scale integration. Flaws appear regularly in integrated circuits; the larger the circuit area, the more likely it is to contain a flaw. If such flaws were the result only of dust one might reduce their numbers, but flaws are also the inevitable result of small scale. Each feature on a modern integrated circuit is carved out by only a small number of photons in the lithographic process. Each transistor gets its electrical properties from only a small number of impurity atoms in its tiny area. Inevitably, the quantized nature of light and the atomic nature of matter produce statistical variations in both the number of photons defining each tiny shape and the number of atoms providing the electrical behavior of tiny transistors. No known way exists to eliminate such statistical variation, nor may any be possible.
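The statistical-defect argument can be made concrete with a simple Poisson yield model; the model and the numbers below are assumptions for illustration, not figures from the text. If flaws occur randomly with average density D per unit area, the probability that an area A is flaw-free is exp(-D*A), which is tolerable for a single die but vanishingly small for a whole wafer.

```python
import math

# Illustrative numbers only (not from the text).
defect_density = 0.1     # average flaws per cm^2
die_area = 1.0           # cm^2 for a single chip
wafer_area = 700.0       # cm^2, roughly a 300 mm wafer

# Poisson yield model: P(no flaw in area A) = exp(-D * A)
die_yield = math.exp(-defect_density * die_area)
wafer_yield = math.exp(-defect_density * wafer_area)

print(f"single-die yield:  {die_yield:.3f}")     # about 0.9
print(f"whole-wafer yield: {wafer_yield:.1e}")   # effectively zero
```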
Making the most efficient use of computer systems has rapidly become a leading topic of interest for the computer industry and its customers alike. However, the focus of these discussions is often on single, isolated, and specific architectural and technological improvements for power reduction and conservation, while ignoring the fact that power efficiency as a ratio of performance to power consumption is equally influenced by performance improvements and architectural power reduction. Furthermore, efficiency can be influenced on all levels of today's system hierarchies from single cores all the way to distributed Grid environments. To improve execution and power efficiency requires progress in such diverse fields as program optimization, optimization of program scheduling, and power reduction of idling system components for all levels of the system hierarchy. Improving computer system efficiency requires improving system performance and reducing system power consumption. To research and reach reasonable conclusions about system performance we need to not only understand the architectures of our computer systems and the available array of code transformations for performance optimizations, but we also need to be able to express this understanding in performance models good enough to guide decisions about code optimizations for specific systems. This understanding is necessary on all levels of the system hierarchy from single cores to nodes to full high performance computing (HPC) systems, and eventually to Grid environments with multiple systems and resources.
Today's semiconductor memory market is divided between two types of memory: DRAM and Flash. Each has its own advantages and disadvantages. While DRAM is fast but volatile, Flash is non-volatile but slow. A memory system based on self-organized quantum dots (QDs) as storage node could combine the advantages of modern DRAM and Flash, thus merging the latter's non-volatility with very fast write times. This thesis investigates the electronic properties of and carrier dynamics in self-organized quantum dots by means of time-resolved capacitance spectroscopy and time-resolved current measurements. The first aim is to study the localization energy of various QD systems in order to assess the potential of increasing the storage time in QDs to non-volatility. Surprisingly, it is found that the carrier capture cross-sections of the QDs, in addition to the localization energy, strongly influence and at times counterbalance carrier storage. The second aim is to study the coupling between a layer of self-organized QDs and a two-dimensional hole gas (2DHG), which is relevant for the read-out process in memory systems. The investigation yields the discovery of the many-particle ground states in the QD ensemble. In addition to its technological relevance, the thesis also offers new insights into the fascinating field of nanostructure physics.
This book provides a comprehensive overview of key technologies being used to address challenges raised by continued device scaling and the widening gap between memory and central processing unit performance. The authors discuss in detail what are commonly known as "More than Moore" (MtM) technologies, which add value to devices by incorporating functionalities that do not necessarily scale according to "Moore's Law". Coverage focuses on three key technologies needed for efficient power management and cost per performance: novel memories, 3D integration and photonic on-chip interconnect.
This book serves as a practical guide for practicing engineers who need to design embedded systems for high-speed data acquisition and control systems. A minimum amount of theory is presented, along with a review of analog and digital electronics, followed by detailed explanations of essential topics in hardware design and software development. The discussion of hardware focuses on microcontroller design (ARM microcontrollers and FPGAs), techniques of embedded design, high-speed data acquisition (DAQ) and control systems. Coverage of software development includes the main programming techniques, culminating in the study of real-time operating systems. All concepts are introduced in a manner that is highly accessible to practicing engineers, and lead to the practical implementation of an embedded board that can be used in various industrial fields as a control system and high-speed data acquisition system.
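As a flavor of the kind of periodic sampling task such systems run, here is a small drift-free acquisition loop. It is only a sketch: read_sensor() is a hypothetical placeholder for a real ADC or driver call, and on actual hardware this loop would typically be driven by an RTOS timer or a hardware trigger rather than sleep().

```python
import time

SAMPLE_PERIOD = 0.01                 # 100 Hz loop rate (illustrative)

def read_sensor() -> float:
    """Hypothetical placeholder for a real ADC/driver read on the target board."""
    return 0.0

samples = []
next_deadline = time.monotonic()
while len(samples) < 100:
    samples.append(read_sensor())
    next_deadline += SAMPLE_PERIOD               # schedule against absolute deadlines
    delay = next_deadline - time.monotonic()     # so timing error does not accumulate
    if delay > 0:
        time.sleep(delay)

print(f"collected {len(samples)} samples in roughly 1 second")
```

Scheduling against absolute deadlines instead of sleeping for a fixed interval after each sample is the standard way to keep the long-run sampling rate accurate.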
Nowadays software engineers not only have to worry about the technical knowledge needed to do their job, but increasingly also about the legal, professional and commercial context in which they must work. With the explosion of the Internet and major changes to the field, including the introduction of the new Data Protection Act and the legal status of software engineers, it is now essential that they have an appreciation of a wide variety of issues outside the technical. Equally valuable to both students and practitioners, this book brings together the expertise and experience of leading academics in software engineering, law, industrial relations, and health and safety, explaining the central principles and issues in each field and showing how they apply to software engineering.
Longitudinal studies have traditionally been seen as too cumbersome and labor-intensive to be of much use in research on Human-Computer Interaction (HCI). However, recent trends in market, legislation, and the research questions we address, have highlighted the importance of studying prolonged use, while technology itself has made longitudinal research more accessible to researchers across different application domains. Aimed as an educational resource for graduate students and researchers in HCI, this book brings together a collection of chapters, addressing theoretical and methodological considerations, and presenting case studies of longitudinal HCI research. Among others, the authors: discuss the theoretical underpinnings of longitudinal HCI research, such as when a longitudinal study is appropriate, what research questions can be addressed and what challenges are entailed in different longitudinal research designs; reflect on methodological challenges in longitudinal data collection and analysis, such as how to maintain participant adherence and data reliability when employing the Experience Sampling Method in longitudinal settings, or how to cope with data collection fatigue and data safety in applications of autoethnography and autobiographical design, which may span from months to several years; and present a number of case studies covering different topics of longitudinal HCI research, from "slow technology", to self-tracking, to mid-air haptic feedback, and crowdsourcing.
This book provides the foundations for understanding hardware security and trust, which have become major concerns for national security over the past decade. Coverage includes security and trust issues in all types of electronic devices and systems such as ASICs, COTS, FPGAs, microprocessors/DSPs, and embedded systems. This serves as an invaluable reference to the state-of-the-art research that is of critical significance to the security of, and trust in, modern society's microelectronic-supported infrastructures.
This book defines and explores the problem of placing the instances of dynamic data types on the components of the heterogeneous memory organization of an embedded system, with the final goal of reducing energy consumption and improving performance. It is one of the first to cover the problem of placement for dynamic data objects on embedded systems with heterogeneous memory architectures, presenting a complete methodology that can be easily adapted to real cases and work flows. The authors discuss how to improve system performance and reduce energy consumption simultaneously. Discusses the problem of placement for dynamic data objects on embedded systems with heterogeneous memory architectures; Presents a complete methodology that can be adapted easily to real cases and work flows; Offers hints on how to improve system performance and reduce energy consumption simultaneously.
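A much-simplified flavor of the placement problem is sketched below: a greedy heuristic that ranks data objects by access density and fills a small fast scratchpad first, leaving the rest in slower DRAM. The object names, sizes, access counts and energy weights are invented for illustration, and the book's methodology for dynamic data types is considerably more elaborate than this static sketch.

```python
# Hypothetical objects: (name, size in KiB, accesses per frame) -- invented numbers.
objects = [("frame_buf", 64, 2000), ("nav_tree", 16, 1500),
           ("log_ring", 32, 100), ("lut", 8, 900), ("history", 48, 50)]

SCRATCHPAD_KIB = 40                    # small fast on-chip memory
ENERGY_FAST, ENERGY_SLOW = 1.0, 6.0    # relative energy per access (illustrative)

# Greedy heuristic: highest accesses-per-KiB objects go to the scratchpad first.
placement, used = {}, 0
for name, size, accesses in sorted(objects, key=lambda o: o[2] / o[1], reverse=True):
    if used + size <= SCRATCHPAD_KIB:
        placement[name] = "scratchpad"
        used += size
    else:
        placement[name] = "dram"

energy = sum(acc * (ENERGY_FAST if placement[n] == "scratchpad" else ENERGY_SLOW)
             for n, _, acc in objects)
print(placement)
print("relative access energy:", energy)
```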
This book is the fifth volume of the CoreGRID series. Organized jointly with the Euro-Par 2007 conference, the CoreGRID Symposium intends to become the premier European event on Grid computing. The aim of the symposium is to strengthen and advance scientific and technological excellence in the area of Grid and Peer-to-Peer computing. The book covers all aspects of Grid computing, including service infrastructure. It is designed for a professional audience composed of researchers and practitioners in industry. This volume is also suitable for advanced-level students in computer science.
Time-tested advice on Windows 10. Windows 10 For Dummies remains the #1 source for readers looking for advice on Windows 10. Expert author Andy Rathbone provides an easy-to-follow guidebook to understanding Windows 10 and getting things done based on his decades of experience as a Windows guru. Look inside to get a feel for the basics of the Windows interface, the Windows apps that help you get things done, ways to connect to the Internet at home or on the go, and steps for customizing your Windows 10 experience from the desktop wallpaper to how tightly you secure your computer. - Manage user accounts - Customize the start menu - Find and manage your files - Connect to a printer wirelessly. Revised to cover the latest round of Windows 10 updates, this trusted source for unleashing everything the operating system has to offer is your first and last stop for learning the basics of Windows!
Modern multimedia systems are becoming increasingly multiprocessor and heterogeneous to match the high performance and low power demands placed on them by the large number of applications. The concurrent execution of these applications causes interference and unpredictability in the performance of these systems. In Multimedia Multiprocessor Systems, an analysis mechanism is presented to accurately predict the performance of multiple applications executing concurrently. With high consumer demand, the time-to-market has become significantly shorter. To cope with the complexity of designing such systems, an automated design flow is needed that can generate systems from a high-level architectural description, so that the process is less error-prone and less time-consuming. Such a design methodology is presented for multiple use-cases -- combinations of active applications. A resource manager is also presented to manage the various resources in the system, and to achieve the goals of performance prediction, admission control and budget enforcement.
Prepare for Microsoft Exam 70-532, and help demonstrate your real-world mastery of the skills needed to develop Microsoft Azure solutions. Designed for experienced IT professionals ready to advance their status, Exam Ref focuses on the critical thinking and decision-making acumen needed for job success. Focus on the expertise measured by these objectives: Create and manage Azure Resource Manager Virtual Machines; Design and implement a storage and data strategy; Manage identity, application, and network services; Design and implement Azure PaaS compute, web, and mobile services. This Microsoft Exam Ref: Organizes its coverage by exam objectives; Features strategic, what-if scenarios to challenge you; Assumes you have experience designing, programming, implementing, automating, and monitoring Microsoft Azure solutions, and are proficient with tools, techniques, and approaches for building scalable, resilient solutions. About the Exam: Exam 70-532 focuses on skills and knowledge for building highly available solutions in the Microsoft Azure cloud. About Microsoft Certification: This exam is for candidates who are experienced in designing, programming, implementing, automating, and monitoring Microsoft Azure solutions. Candidates are also proficient with development tools, techniques, and approaches used to build scalable and resilient solutions. See full details at: microsoft.com/learning