This book examines the field of parallel database management systems and illustrates the great variety of solutions based on a shared-storage or a shared-nothing architecture. Constantly dropping memory prices and the desire to operate with low-latency responses on large sets of data paved the way for main memory-based parallel database management systems. However, this area is currently dominated by the shared-nothing approach, which preserves the in-memory performance advantage by processing data locally on each server. The main argument this book makes is that such a unilateral development will cease due to the combination of the following three trends: a) Today's network technology features remote direct memory access (RDMA) and narrows the performance gap between accessing main memory on a local server and on a remote server to a single order of magnitude, or even below. b) Modern storage systems scale gracefully, are elastic, and provide high availability. c) A modern storage system such as Stanford's RAMCloud even keeps all data resident in main memory. Exploiting these characteristics in the context of a main memory-based parallel database management system is desirable. The book demonstrates that the advent of RDMA-enabled network technology makes the creation of a parallel main memory DBMS based on a shared-storage approach feasible.
This brief describes how a non-volatile change of resistance, induced by the application of an electric voltage, allows for the fabrication of novel digital memory devices. The author explains the physics of the devices and provides a concrete description of the materials involved as well as the fundamental properties of the technology. He details how charge trapping, charge transfer, and conductive filament formation give rise to resistive switching in memory devices.
No standard work of reference dealing with the dc linear motor in all its aspects has ever been published. However, a considerable amount of literature in the form of published papers dealing with this subject, and also an amount of hitherto unpublished work, particularly of an industrial or applied nature, has been accumulated during the last 25 years. An attempt has been made to collate all this information and present it in a comprehensive and orderly manner in this unique volume. The book has been designed to be useful to two main categories of readers: electrical and mechanical engineers in the user industries, and postgraduates and students of mechanical and electrical engineering. It is intended for researchers and graduate students in electrical machinery, computer peripherals engineers, and engineers in industry.
Only recently have oversampling methods used for high-resolution A/D and D/A conversion become popular. This is the first book to address all aspects of the subject and to compare and evaluate various design approaches. It presents a theoretical analysis of converter performance, actual design methods for converters, and their simulation and circuit implementation. It also covers applications together with the design of decimation filters for A/D converters and the design of interpolators for D/A converters. It will be of particular interest to electrical engineers involved with designing and/or using circuits for signal processing in communications, audio applications, sonar, and instrumentation.
Hardware Based Packet Classification for High Speed Internet Routers presents the most recent developments in hardware-based packet classification algorithms and architectures. This book describes five methods which reduce the space that classifiers occupy within TCAMs: TCAM Razor, All-Match Redundancy Removal, Bit Weaving, Sequential Decomposition, and Topological Transformations. These methods demonstrate that in most cases a substantial reduction of space is achieved. Case studies and examples are provided throughout this book. About this book:
* The only book on the market that exclusively covers hardware-based packet classification algorithms and architectures.
* Describes five methods which reduce the space that classifiers occupy within TCAMs: TCAM Razor, All-Match Redundancy Removal, Bit Weaving, Sequential Decomposition, and Topological Transformations.
* Provides case studies and examples throughout.
Hardware Based Packet Classification for High Speed Internet Routers is designed for professionals and researchers who work within the related field of router design. Advanced-level students concentrating on computer science and electrical engineering will also find this book valuable as a text or reference book.
Variability is one of the most challenging obstacles for IC design in the nanometer regime. In nanometer technologies, SRAMs show an increased sensitivity to process variations due to low-voltage operation requirements, which are aggravated by the strong demand for lower power consumption and cost while achieving higher performance and density. With the drastic increase in memory densities, lower supply voltages, and higher variations, statistical simulation methodologies become imperative to estimate memory yield and optimize performance and power. This book is an invaluable reference on robust SRAM circuits and statistical design methodologies for researchers and practicing engineers in the field of memory design. It combines state-of-the-art circuit techniques and statistical methodologies to optimize SRAM performance and yield in nanometer technologies. It provides a comprehensive review of state-of-the-art, variation-tolerant SRAM circuit techniques; discusses the impact of device-related process variations and how they affect circuit and system performance from a design point of view; and helps designers optimize memory yield with practical statistical design methodologies and yield estimation techniques.
This book constitutes the refereed proceedings of the 17th National Conference on Computer Engineering and Technology, NCCET 2013, held in Xining, China, in July 2013. The 26 papers presented were carefully reviewed and selected from 234 submissions. They are organized in topical sections named: Application Specific Processors; Communication Architecture; Computer Application and Software Optimization; IC Design and Test; Processor Architecture; Technology on the Horizon.
Oracle Exadata Survival Guide is a hands-on guide for busy Oracle database administrators who are migrating their skill sets to Oracle's Exadata database appliance. The book covers the concepts behind Exadata, and the available configurations for features such as smart scans, storage indexes, Smart Flash Cache, hybrid columnar compression, and more. You'll learn about performance metrics and execution plans, and how to optimize SQL running in Oracle's powerful new environment. The authors also cover migration from other servers. Oracle Exadata is fast becoming the standard for large installations such as those running data warehouse, business intelligence, and large-scale OLTP systems. Exadata is like no other platform, and is new ground even for experienced Oracle database administrators. The Oracle Exadata Survival Guide helps you navigate the ins and outs of this new platform, demystifying this amazing appliance and its exceptional performance. The book takes a highly practical approach, not diving too deeply into the details, but giving you just the right depth of information to quickly transfer your skills to Oracle's important new platform.
* Helps transfer your skills to the platform of the future
* Covers the important ground without going too deep
* Takes a practical and hands-on approach to everyday tasks
What you'll learn:
* Learn the components and basic architecture of an Exadata machine
* Reduce data transfer overhead by processing queries in the storage layer
* Examine and take action on Exadata-specific performance metrics
* Deploy Hybrid Columnar Compression to reduce storage and I/O needs
* Create worry-free migrations from existing databases into Exadata
* Understand and address issues specific to ERP migrations
Who this book is for:
Oracle Exadata Survival Guide is for the busy enterprise Oracle DBA who has suddenly been thrust into the Exadata arena. Readers should have a sound grasp of traditional Oracle database administration, and be prepared to learn new aspects that are specific to the Exadata appliance.
Verification of real-time requirements in systems-on-chip becomes more complex as more applications are integrated. Predictable and composable systems can manage the increasing complexity using formal verification and simulation. This book explains the concepts of predictability and composability and shows how to apply them to the design and analysis of a memory controller, which is a key component in any real-time system.
This book constitutes the refereed proceedings of the 16th National Conference on Computer Engineering and Technology, NCCET 2012, held in Shanghai, China, in August 2012. The 27 papers presented were carefully reviewed and selected from 108 submissions. They are organized in topical sections named: microprocessor and implementation; design of integration circuit; I/O interconnect; and measurement, verification, and others.
With the semiconductor market growth, new Integrated Circuit designs are pushing the limit of the technology and in some cases require specific fine-tuning of certain process modules in manufacturing. Thus the communities of design and technology are increasingly intertwined. The issues that require close interactions and collaboration for trade-off and optimization across the design/device/process fields are addressed in this book. It contains a set of outstanding papers, keynotes and tutorials presented during 3 days at the International Conference on Integrated Circuit Design and Technology (ICICDT) held in June 2008 in Minatec, Grenoble. The selected papers are spread over five chapters covering various aspects of emerging technologies and devices, advanced circuit design, reliability, variability issues and solutions, advanced memories, and analog and mixed signals. All these papers focus on design and technology interactions and comply with the scope of the conference.
Universal access and management of information has been one of the driving forces in the evolution of computer technology. Central computing gave the ability to perform large and complex computations and advanced information manipulation. Advances in networking connected computers together and led to distributed computing. Web technology and the Internet went even further to provide hyper-linked information access and global computing. However, restricting access stations to physical locations limits the boundary of the vision. The real global network can be achieved only via the ability to compute and access information from anywhere and anytime. This is the fundamental wish that motivates mobile computing. This evolution is the cumulative result of both hardware and software advances at various levels motivated by tangible application needs. Infrastructure research on communications and networking is essential for realizing wireless systems. Equally important is the design and implementation of data management applications for these systems, a task directly affected by the characteristics of the wireless medium and the resulting mobility of data resources and computation. Although a relatively new area, mobile data management has provoked a proliferation of research efforts motivated both by a great market potential and by many challenging research problems. The focus of Data Management for Mobile Computing is on the impact of mobile computing on data management beyond the networking level. The purpose is to provide a thorough and cohesive overview of recent advances in wireless and mobile data management. The book is written with a critical attitude. This volume probes the new issues introduced by wireless and mobile access to data and their conceptual and practical consequences. Data Management for Mobile Computing provides a single source for researchers and practitioners who want to keep abreast of the latest innovations in the field. It can also serve as a textbook for an advanced course on mobile computing or as a companion text for a variety of courses including courses on distributed systems, database management, transaction management, operating or file systems, information retrieval or dissemination, and web computing.
Advanced Database Indexing begins by introducing basic material on storage media, including magnetic disks, RAID systems and tertiary storage such as optical disks and tapes. Typical access methods (e.g. B+ trees, dynamic hash files and secondary key retrieval) are also introduced. The remainder of the book discusses recent advances in indexing and access methods for particular database applications. More specifically, issues such as external sorting, file structures for intervals, temporal access methods, spatial and spatio-temporal indexing, image and multimedia indexing, perfect external hashing methods, parallel access methods, concurrency issues in indexing and parallel external sorting are presented for the first time in a single book. Advanced Database Indexing is an excellent reference for database professionals and may be used as a text for advanced courses on the topic.
Desktop or DIY 3D printers are devices you can either buy preassembled as a kit, or build from a collection of parts to design and print physical objects including replacement household parts, custom toys, and even art, science, or engineering projects. Maybe you have one, or maybe you're thinking about buying or building one. Practical 3D Printers takes you beyond how to build a 3D printer, to calibrating, customizing, and creating amazing models, including 3D printed text, a warship model, a robot platform, windup toys, and arcade-inspired alien invaders. You'll learn about the different types of personal 3D printers and how they work, from the MakerBot to the RepRap printers like the Huxley and Mendel, as well as the whiteAnt CNC featured in the Apress book Printing in Plastic. You'll discover how easy it is to find and design 3D models using web-based 3D modeling, and even how to create a 3D model from a 2D image. After learning the basics, this book will walk you through building multi-part models with a steampunk warship project, working with meshes to build your own action heroes, and creating an autonomous robot chassis. Finally, you'll find even more bonus projects to build, including wind-up walkers, faceted vases for the home, and a handful of useful upgrades to modify and improve your 3D printer.
Semiconductor Memories provides in-depth coverage in the areas of design for testing, fault tolerance, failure modes and mechanisms, and screening and qualification methods.
Enterprise modelling serves to represent the most important components of organizations and their relationships to one another. It is used for a wide variety of strategic and operational tasks. In this book the authors use the "Kochbuch" (cookbook) method to explain the foundations and uses of enterprise modelling; in particular, they present the different perspectives on an enterprise and the corresponding analysis techniques. Thanks to precise descriptions of the procedures, the concepts and methods can be applied directly.
This state-of-the-art survey features topics related to the impact of multicore, manycore, and coprocessor technologies in science and large-scale applications in an interdisciplinary environment. The papers included in this survey cover research in mathematical modeling, design of parallel algorithms, aspects of microprocessor architecture, parallel programming languages, hardware-aware computing, heterogeneous platforms, manycore technologies, performance tuning, and requirements for large-scale applications. The contributions presented in this volume are an outcome of an inspiring conference conceived and organized by the editors at the University of Applied Sciences (HfT) in Stuttgart, Germany, in September 2012. The 10 revised full papers selected from 21 submissions are presented together with the 12 poster abstracts and focus on the combination of new aspects of microprocessor technologies, parallel applications, numerical simulation, and software development; thus they clearly show the potential of emerging technologies in the area of multicore and manycore processors that are paving the way towards personal supercomputing and very likely towards exascale computing.
'Now . . . in the Analytical Engine I had devised mechanical means equivalent to memory.' For the past twenty-five years or so, scientists and engineers have been endeavouring to realize in new technologies the claim made by Charles Babbage in his memoirs over a century ago. The modern computer industry depends to a very large extent on the success of their efforts. In this book we discuss the wide variety of techniques which have been used and are being developed to meet the range of requirements for digital storage systems in computers and other applications. The book has been written as a guide for the designer of any system employing digital techniques, firstly to guide him in his choice of store for differing applications and, secondly, to give him an appreciation of the problems which confront the engineer designing storage systems. Technology never stands still and developments in recent years have, of necessity, greatly increased the amount of material included in this second edition. The opportunity has also been taken to reorganize the contents and more emphasis has been given to those developments which have had, or which are likely to have, the greatest effect on computer development. Brief descriptions of obsolete or obsolescent systems have been retained, both as a warning to designers of the problems likely to be encountered in development and to demonstrate how changes in technology can give a new impetus to old designs.
These are the proceedings of a NATO Advanced Study Institute (ASI) held in Cetraro, Italy during 6-17 June 1983. The title of the ASI was Computer Architectures for Spatially Distributed Data, and it brought together some 60 participants from Europe and America. Presented here are 21 of the lectures that were delivered. The articles cover a wide spectrum of topics related to computer architectures specially oriented toward the fast processing of spatial data, and represent an excellent review of the state-of-the-art of this topic. For more than 20 years now researchers in pattern recognition, image processing, meteorology, remote sensing, and computer engineering have been looking toward new forms of computer architectures to speed the processing of data from two- and three-dimensional processes. The work can be said to have commenced with the landmark article by Steve Unger in 1958, and it received a strong forward push with the development of the ILLIAC III and IV computers at the University of Illinois during the 1960's. One clear obstacle faced by the computer designers in those days was the limitation of the state-of-the-art of hardware, when the only switching devices available to them were discrete transistors. As a result parallel processing was generally considered to be impractical, and relatively little progress was made.
A major technological trend for large database systems has been the introduction of ever-larger mass storage systems. This allows computing centers and business data processing installations to maintain on line their program libraries, less frequently used data files, transaction logs and backup copies under unified system control. Tapes, disks and drums are classical examples of mass storage media. The more recent IBM 3851 Mass Storage Facility, part of the IBM 3850 Mass Storage System, represents a new direction in mass storage development, namely, it is two-dimensional. With the maturity of magnetic bubble technology, more sophisticated, massive, multi-trillion-bit storage systems are not far in the future. While large in capacity, mass storage systems have in general relatively long access times. Since record access probabilities are usually not uniform, various algorithms have been devised to position the records to decrease the average access time. The first two chapters of this book are devoted mainly to such algorithmic studies in linear and two-dimensional mass storage systems. In the third chapter, we view the bubble memory as more than a storage medium. In fact, we discuss different structures where routine operations, such as data rearrangement, sorting, searching, etc., can be done in the memory itself, freeing the CPU for more complicated tasks. The problems discussed in this book are combinatorial in nature.
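To make the record-positioning idea mentioned above concrete, here is a minimal, hypothetical sketch (not taken from the book; the record names and access probabilities are invented): on a linear store that is scanned from its start, the expected access cost is the probability-weighted sum of record positions, so a layout that places the most frequently accessed records first minimizes that cost.

```python
# Hypothetical illustration: positioning records by access probability
# on a linear store to reduce the average access time.

def expected_cost(layout, prob):
    """Expected access cost when records are stored in `layout` order;
    the cost of reaching a record is its 1-based position."""
    return sum(prob[rec] * (slot + 1) for slot, rec in enumerate(layout))

# Made-up access probabilities for four records.
prob = {"A": 0.5, "B": 0.3, "C": 0.15, "D": 0.05}

arbitrary = ["D", "C", "B", "A"]                   # least popular record first
greedy = sorted(prob, key=prob.get, reverse=True)  # most popular record first

print(expected_cost(arbitrary, prob))  # 3.25
print(expected_cost(greedy, prob))     # 1.75
```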
Current issues and approaches in the reliability and safety analysis of dynamic process systems are the subject of this book. The authors of the chapters are experts from nuclear, chemical, mechanical, aerospace and defense system industries, and from institutions including universities, national laboratories, private consulting companies, and regulatory bodies. Both the conventional approaches and the dynamic methodologies that explicitly account for the time element of system evolution in failure modeling are represented. The papers on conventional approaches concentrate on the modeling of dynamic effects and the need for improved methods. The dynamic methodologies covered include the DYLAM methodology, the theory of continuous event trees, several Markov model construction procedures, Monte Carlo simulation, and utilization of logic flowgraphs in conjunction with Petri nets. Special emphasis is placed on human factors such as procedures and training.
Technological progress in communications requires that advanced studies in circuit and software design be accompanied by recent results from technology research and physics, in order to push beyond current limitations. This book is a guide which treats many components used in mobile communications, and in particular focuses on non-volatile memories. It follows the thread of the non-volatile memory within the wireless system: on the one hand it develops the foundations of the interdisciplinary issues needed for design, analysis, and testing of the system; on the other hand it deals with many of the problems that appear when the systems are realized in industrial production. These range from difficulties in the mobile system itself to the different types of non-volatile memories. The book explores memory cards, multichip technologies, and algorithms of the software management as well as error handling. It also presents assurance techniques for the single components and a guide to reading datasheets.
The architectural concept of a memory hierarchy has been immensely successful, making possible today's spectacular pace of technology evolution in both the volume of data and the speed of data access. Its success is difficult to understand, however, when examined within the traditional "memoryless" framework of performance analysis. The "memoryless" framework cannot properly reflect a memory hierarchy's ability to take advantage of patterns of data use that are transient. The Fractal Structure of Data Reference: Applications to the Memory Hierarchy both introduces, and justifies empirically, an alternative modeling framework in which arrivals are driven by a statistically self-similar underlying process, and are transient in nature. The substance of this book comes from the ability of the model to impose a mathematically tractable structure on important problems involving the operation and performance of a memory hierarchy. It describes events as they play out at a wide range of time scales, from the operation of file buffers and storage control cache, to a statistical view of entire disk storage applications. Striking insights are obtained about how memory hierarchies work, and how to exploit them to best advantage. The emphasis is on the practical application of such results. The Fractal Structure of Data Reference: Applications to the Memory Hierarchy will be of interest to professionals working in the area of applied computer performance and capacity planning, particularly those with a focus on disk storage. The book is also an excellent reference for those interested in database and data structure research.
Kevin Zhang: Advancement of semiconductor technology has driven the rapid growth of very large scale integrated (VLSI) systems for increasingly broad applications, including high-end and mobile computing, consumer electronics such as 3D gaming, multi-function or smart phones, and various set-top players and ubiquitous sensor and medical devices. To meet the increasing demand for higher performance and lower power consumption in many different system applications, it is often required to have a large amount of on-die or embedded memory to support the need of data bandwidth in a system. The varieties of embedded memory in a given system have also become increasingly more complex, ranging from static to dynamic and volatile to nonvolatile. Among embedded memories, six-transistor (6T)-based static random access memory (SRAM) continues to play a pivotal role in nearly all VLSI systems due to its superior speed and full compatibility with logic process technology. But as technology scaling continues, SRAM design is facing severe challenges in maintaining sufficient cell stability margin under relentless area scaling. Meanwhile, rapid expansion in mobile applications, including new emerging applications in sensor and medical devices, requires far more aggressive voltage scaling to meet very stringent power constraints. Many innovative circuit topologies and techniques have been extensively explored in recent years to address these challenges.
Web caching and content delivery technologies provide the infrastructure on which systems are built for the scalable distribution of information. This proceedings of the eighth annual workshop captures a cross-section of the latest issues and techniques of interest to network architects and researchers in large-scale content delivery. Topics covered include the distribution of streaming multimedia, edge caching and computation, multicast, delivery of dynamic content, enterprise content delivery, streaming proxies and servers, content transcoding, replication and caching strategies, peer-to-peer content delivery, and Web prefetching.