Mission-Critical Microsoft Exchange 2000 is the definitive book on
how to design and maintain extremely reliable and adaptive Exchange
Server messaging systems that rarely crash and that preserve
valuable data and services in spite of technical disruptions.
E-mail systems are now a primary means of communication for
organizations, which can afford e-mail downtime no more than they
can afford to be without phones. Moreover, messaging systems
increasingly support vital applications beyond e-mail, such as
workflow and knowledge management, making the data they store both
voluminous and extremely valuable.
Operating system kernels are central to the functioning of computers. The security of the overall system, as well as its reliability and responsiveness, depends upon the correct functioning of the kernel. This unique approach - presenting a formal specification of a kernel - starts with basic constructs and develops a set of kernels; proofs are included as part of the text.
This book describes state-of-the-art approaches to fog computing, including the background of innovations achieved in recent years. Coverage includes various aspects of fog computing architectures for the Internet of Things, the driving reasons, variations, and case studies. The authors discuss in detail key topics such as meeting the low-latency and real-time requirements of applications, interoperability, federation and heterogeneous computing, energy efficiency and mobility, fog and cloud interplay, geo-distribution and location awareness, and case studies in healthcare and smart-space applications.
This excellent reference proposes and develops new strategies, methodologies and tools for designing low-power and low-area CMOS pipelined A/D converters. The task is tackled by following a scientifically consistent approach. The book may also be used as a text for advanced reading on the subject.
Offering thorough coverage of atomic layer deposition (ALD), this book moves from basic chemistry of ALD and modeling of processes to examine ALD in memory, logic devices and machines. Reviews history, operating principles and ALD processes for each device.
Process calculi are among the most successful models of concurrent systems. Various behavior equivalences between processes are central notions in CCS (the calculus of communicating systems) and other process calculi. In real applications, specification and implementation are described as two processes, and correctness of programs is treated as a certain behavior equivalence between them. The purpose of this book is to establish a theory of approximate correctness and infinite evolution of concurrent programs by employing some notions and tools from point-set topology. The book is restricted to CCS for simplicity, but the main idea also applies to some other process calculi. The concept of bisimulation limits, useful for the understanding and analysis of infinite evolution of processes, is introduced. In addition, the notions of near bisimulations and bisimulation indexes, suitable for describing approximate correctness of concurrent programs, are proposed. The book will be of particular interest to researchers in the fields of theoretical computer science, especially the theory of concurrency and hybrid systems, and to graduate students in related disciplines. It will also be valuable to practical system designers developing concurrent and/or real-time systems.
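The core idea is easy to demonstrate. Below is a minimal sketch of a strong-bisimulation check on finite labelled transition systems, the standard notion that near bisimulations and bisimulation indexes refine; the encoding, state names and example are invented for illustration, not taken from the book. It computes the largest bisimulation by starting from the full relation and repeatedly discarding pairs that fail the transfer property.

    # trans maps (state, action) -> set of successor states.
    def bisimulation(trans, states, actions):
        rel = {(p, q) for p in states for q in states}
        changed = True
        while changed:
            changed = False
            for (p, q) in set(rel):
                # (p, q) survives only if each move of p is matched by q
                # with bisimilar successors, and vice versa.
                ok = all(
                    any((p2, q2) in rel for q2 in trans.get((q, a), set()))
                    for a in actions for p2 in trans.get((p, a), set())
                ) and all(
                    any((p2, q2) in rel for p2 in trans.get((p, a), set()))
                    for a in actions for q2 in trans.get((q, a), set())
                )
                if not ok:
                    rel.discard((p, q))
                    changed = True
        return rel

    # Two processes that differ only in branching to identical
    # continuations: strongly bisimilar.
    trans = {("p0", "a"): {"p1"}, ("p1", "b"): {"p2"},
             ("q0", "a"): {"q1", "q2"},
             ("q1", "b"): {"q3"}, ("q2", "b"): {"q3"}}
    states = {"p0", "p1", "p2", "q0", "q1", "q2", "q3"}
    print(("p0", "q0") in bisimulation(trans, states, {"a", "b"}))  # True

In this setting, a specification and an implementation are checked by asking whether the pair of their initial states survives the refinement.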
As e-government applications come of age, security has gradually become a more demanding requirement for users, administrators, and service providers. The increasingly widespread use of Web services facilitates the exchange of data among various e-government applications and paves the way for enhanced service delivery. "Secure E-Government Web Services" addresses various aspects of building secure e-government architectures and services, and presents the views of experts from academia, policy-making, and industry to conclude that secure e-government Web services can be deployed in an application-centric, interoperable way. The book sheds new light on this promising and innovative area of applications, responding to the current and upcoming challenges of e-government security.
The development of nature-inspired computational techniques has enhanced problem solving in dynamic and uncertain environments. By implementing effective computing strategies, systems can achieve adaptable, self-organizing, and decentralized behavior. Recent Developments in Intelligent Nature-Inspired Computing is an authoritative reference source for the latest scholarly material on natural computation methods and applications in diverse fields. Highlighting multidisciplinary studies on swarm intelligence, global optimization, and group technology, this publication is an ideal reference source for professionals, researchers, scholars, and engineers interested in the latest developments in computer science methodologies.
This book explores the design implications of emerging non-volatile memory (NVM) technologies for future computer memory hierarchy designs. Since NVM technologies combine the speed of SRAM, the density of DRAM, and the non-volatility of Flash memory, they are very attractive as the basis for future universal memories. This book provides a holistic perspective on the topic, covering modeling, design, architecture and applications. The practical information included in this book will enable designers to exploit emerging memory technologies to significantly improve the performance, power, and reliability of future mainstream integrated circuits.
The purpose of the 4th International Conference on Enterprise
Information Systems (ICEIS) was to bring together researchers,
engineers and practitioners interested in the advances and business
applications of information systems. The research papers focused on
real world applications covering four main themes: Enterprise
Database Applications, Artificial Intelligence Applications and
Decision Support Systems, Systems Analysis and Specification, and
Internet and Electronic Commerce.
This book examines the design of fully integrated frequency synthesizers suitable for system-on-a-chip (SOC) processors. It takes a global design perspective, jointly examining the design space at the circuit level as well as at the architectural level. The coverage is comprehensive and includes summary chapters on circuit theory as well as the feedback control theory relevant to the operation of phase-locked loops (PLLs). On the circuit level, the discussion includes low-voltage analog design in deep-submicron digital CMOS processes and the effects of supply noise, substrate noise, and device noise. On the architectural level, the discussion includes PLL analysis using continuous-time as well as discrete-time models, linear and nonlinear effects on PLL performance, and detailed analysis of locking behavior. The material then develops into detailed circuit and architectural analysis of specific clock-generation blocks. This includes circuits and architectures of PLLs with high power-supply noise immunity and digital PLL architectures in which the loop filter is digitized. Methods of generating low-spurious sampling clocks for discrete-time analog blocks are then examined, including sigma-delta fractional-N PLLs, direct digital synthesis (DDS) techniques and non-conventional uses of PLLs. Design-for-test (DFT) issues as they arise in PLLs are then discussed, including methods of accurately measuring jitter and built-in self-test (BIST) techniques for PLLs. Finally, clocking issues commonly associated with system-on-a-chip (SOC) designs, such as multiple clock domain interfacing and partitioning, and accurate clock phase generation techniques using delay-locked loops (DLLs), are also addressed. The book provides numerous real-world applications, as well as practical rules of thumb for modern designers to use at the system, architectural, and circuit levels. It is well suited for practitioners as well as graduate-level students who wish to learn more about time-domain analysis and design of frequency synthesis techniques.
The Fibre Channel Association is an international organization
devoted to educating and promoting the Fibre Channel standard.
Distributed and Parallel Systems: From Instruction Parallelism to Cluster Computing is the proceedings of the third Austrian-Hungarian Workshop on Distributed and Parallel Systems organized jointly by the Austrian Computer Society and the MTA SZTAKI Computer and Automation Research Institute. This book contains 18 full papers and 12 short papers from 14 countries around the world, including Japan, Korea and Brazil. The paper sessions cover a broad range of research topics in the area of parallel and distributed systems, including software development environments, performance evaluation, architectures, languages, algorithms, web and cluster computing. This volume will be useful to researchers and scholars interested in all areas related to parallel and distributed computing systems.
Digital signal processing is an area of science and engineering that has developed rapidly in recent years. This rapid development is the result of significant advances in digital computer technology and integrated-circuit fabrication. Many of the signal processing tasks conventionally performed by analog means are realized today by less expensive and often more reliable digital hardware. Multirate Systems: Design and Applications addresses the rapid development of multirate digital signal processing and how it is complemented by the emergence of new applications.
We are extremely pleased to present a comprehensive book comprising a collection of research papers which is essentially an outcome of the Second IFIP TC 13.6 Working Group conference on Human Work Interaction Design, HWID2009. The conference was held in Pune, India during October 7-8, 2009. It was hosted by the Centre for Development of Advanced Computing, India, and jointly organized with Copenhagen Business School, Denmark; Aarhus University, Denmark; and Indian Institute of Technology, Guwahati, India. The theme of HWID2009 was Usability in Social, Cultural and Organizational Contexts. The conference was held under the auspices of IFIP TC13 on Human-Computer Interaction, the technical committee within which the work of this volume has been conducted. TC13 aims to encourage theoretical and empirical human science research to promote the design and evaluation of human-oriented ICT. Within TC13 there are different working groups concerned with different aspects of human-computer interaction. The flagship event of TC13 is the biennial international conference called INTERACT, at which both invited and contributed papers are presented. Contributed papers are rigorously refereed and the rejection rate is high.
The implementation of object-oriented languages has been an active topic of research since the 1960s, when the first Simula compiler was written. The topic received renewed interest in the early 1980s with the growing popularity of object-oriented programming languages such as C++ and Smalltalk, and got another boost with the advent of Java. Polymorphic calls are at the heart of object-oriented languages, and even the first implementation of Simula-67 contained their classic implementation via virtual function tables. In fact, virtual function tables predate even Simula; for example, Ivan Sutherland's Sketchpad drawing editor employed very similar structures in 1960. Similarly, during the 1970s and 1980s the implementers of Smalltalk systems spent considerable effort on implementing polymorphic calls for this dynamically typed language, where virtual function tables could not be used. Given this long history of research into the implementation of polymorphic calls, and the relatively mature standing it achieved over time, why, one might ask, should there be a new book in this field? The answer is simple. Both software and hardware have changed considerably in recent years, to the point where many assumptions underlying the original work in this field are no longer true. In particular, virtual function tables are no longer sufficient to implement polymorphic calls even for statically typed languages; for example, Java's interface calls cannot be implemented this way. Furthermore, today's processors are deeply pipelined and can execute instructions out of order, making it difficult to predict the execution time of even simple code sequences.
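For readers new to the mechanism, the classic scheme is easy to sketch. The following illustrative Python models a vtable as a table of function pointers indexed by a slot number fixed at compile time; the shape example, names and slot layout are invented for illustration, not drawn from the book.

    # Each "class" owns one table of function pointers (its vtable).
    def circle_area(obj):
        return 3.14159 * obj["r"] ** 2

    def square_area(obj):
        return obj["side"] ** 2

    AREA_SLOT = 0  # the slot index is fixed when the class is compiled

    CIRCLE_VTABLE = [circle_area]
    SQUARE_VTABLE = [square_area]

    def make_circle(r):
        return {"vtable": CIRCLE_VTABLE, "r": r}

    def make_square(side):
        return {"vtable": SQUARE_VTABLE, "side": side}

    def area(obj):
        # A polymorphic call site: one table lookup, one indirect call.
        return obj["vtable"][AREA_SLOT](obj)

    for shape in (make_circle(1.0), make_square(2.0)):
        print(area(shape))  # 3.14159, then 4.0

The scheme depends on every class agreeing on the slot number of each method, which single inheritance guarantees; a Java interface, implemented by otherwise unrelated classes, offers no such shared slot layout, which is why interface calls need a different mechanism.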
In three main divisions the book covers combinational circuits, latches, and asynchronous sequential circuits. Combinational circuits have no memorising ability, while sequential circuits have such an ability to varying degrees. Latches are the simplest sequential circuits, those with the shortest memory. The presentation is decidedly non-standard. The design of combinational circuits is discussed in an orthodox manner using normal forms and in an unorthodox manner using set-theoretical evaluation formulas relying heavily on Karnaugh maps. The latter approach allows for a new design technique called composition. Latches are covered very extensively. Their memory functions are expressed mathematically in a time-independent manner, allowing the use of (normal, non-temporal) Boolean logic in their calculation. The theory of latches is then used as the basis for calculating asynchronous circuits. Asynchronous circuits are specified in a tree representation, each internal node of the tree representing an internal latch of the circuit, with the latches themselves specified by the tree. The tree specification allows solutions to formidable problems such as algorithmic state assignment, finding equivalent states non-recursively, and verifying asynchronous circuits.
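The combinational/sequential distinction is easy to see in a small simulation. Below is an illustrative sketch (not the book's set-theoretical notation) of the classic cross-coupled NOR latch: a loop of two combinational gates that, iterated to a fixed point, holds one bit of state.

    def nor(a, b):
        return int(not (a or b))

    def sr_latch(s, r, q, q_bar):
        # Iterate the two cross-coupled NOR gates until the outputs settle.
        for _ in range(4):
            q_next, q_bar_next = nor(r, q_bar), nor(s, q)
            if (q_next, q_bar_next) == (q, q_bar):
                break
            q, q_bar = q_next, q_bar_next
        return q, q_bar

    q, qb = 0, 1                   # start in the reset state
    q, qb = sr_latch(1, 0, q, qb)  # set: Q becomes 1
    q, qb = sr_latch(0, 0, q, qb)  # hold: the latch remembers Q = 1
    print(q, qb)                   # 1 0

With S = R = 0 the gates merely reproduce the stored values, which is exactly the memory a purely combinational circuit lacks.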
Advances in optical technologies have made it possible to implement optical interconnections in future massively parallel processing systems. Photons are uncharged particles and do not naturally interact, so optical interconnects offer many desirable characteristics: high speed (the speed of light), increased fanout, high bandwidth, high reliability, longer interconnection lengths, low power requirements, and immunity to EMI with reduced crosstalk. Optics can utilize free-space interconnects as well as guided-wave technology, neither of which suffers from the limitations of electrical VLSI interconnects. Optical interconnections can be built at various levels, providing chip-to-chip, module-to-module, board-to-board, and node-to-node communications. Massively parallel processing using optical interconnections poses new challenges: new system configurations need to be designed, scheduling and data communication schemes based on new resource metrics need to be investigated, algorithms for a wide variety of applications need to be developed under the novel computation models that optical interconnections permit, and so on. Parallel Computing Using Optical Interconnections is a collection of survey articles written by leading and active scientists in the area of parallel computing using optical interconnections. It is the first book to provide current and comprehensive coverage of the field, reflecting the state of the art from high-level architecture design and algorithmic points of view and pointing out directions for further research and development.
Multicore Processors and Systems provides a comprehensive overview of emerging multicore processors and systems. It covers technology trends affecting multicores, multicore architecture innovations, multicore software innovations, and case studies of state-of-the-art commercial multicore systems. A cross-cutting theme of the book is the challenges associated with scaling up multicore systems to hundreds of cores. The book provides an overview of significant developments in the architectures for multicore processors and systems. It includes chapters on fundamental requirements for multicore systems, including processing, memory systems, and interconnect. It also includes several case studies on commercial multicore systems that have recently been developed and deployed across multiple application domains. The architecture chapters focus on innovative multicore execution models as well as infrastructure for multicores, including memory systems and on-chip interconnections. The case studies examine multicore implementations across different application domains, including general purpose, server, media/broadband, network processing, and signal processing. Multicore Processors and Systems is the first book that focuses solely on multicore processors and systems, and in particular on their unique technology implications, architectures, and implementations. The contributing authors are drawn from both the academic and industrial communities.
This book describes a methodology for dynamic power estimation using Transaction Level Modeling (TLM). The methodology exploits existing tools for RTL simulation, design synthesis and SystemC prototyping to provide fast and accurate power estimation using Transaction Level Power Modeling (TLPM). Readers will benefit from this innovative way of evaluating power at a high level of abstraction, at an early stage of the product life cycle, reducing the number of expensive design iterations.
Grids are a crucial enabling technology for scientific and industrial development. Grid and Services Evolution, the 11th edited volume of the CoreGRID series, is based on the CoreGRID Middleware Workshop, held in Barcelona, Spain, June 5-6, 2008. It provides a bridge between the application community and the developers of middleware services, especially in terms of parallel computing. This edited volume brings together a critical mass of well-established researchers from forty-two institutions worldwide, active in the fields of distributed systems and middleware, programming models, algorithms, tools and environments. It is designed for a professional audience of researchers and practitioners in the Grid community and industry, and is also suitable for advanced-level students in computer science.
In this volume organizational learning theory is used to analyse various practices of managing and facilitating knowledge sharing within companies. Experiences with three types of knowledge sharing, namely knowledge acquisition, knowledge reuse, and knowledge creation, at ten large companies are discussed and analyzed. This critical analysis leads to the identification of traps and obstacles when managing knowledge sharing, when supporting knowledge sharing with IT tools, and when organizations try to learn from knowledge sharing practices. The identification of these risks is followed by a discussion of how organizations can avoid them. This work will be of interest to researchers and practitioners working in organization science and business administration. Also, consultants and organizations at large will find the book useful as it will provide them with insights into how other organizations manage and facilitate knowledge sharing and how potential failures can be prevented.
This book discusses the trade-offs involved in designing direct RF
digitization receivers across the radio frequency and digital
signal processing domains. A system-level framework is developed
that quantifies the relevant impairments of the signal processing
chain through comprehensive analysis. Special focus is given to
noise analysis (thermal noise, quantization noise, saturation
noise, signal-dependent noise), broadband non-linear distortion
analysis, including the impact of the sampling strategy (low-pass,
band-pass), analysis of time-interleaved ADC channel mismatches,
sampling clock purity, and digital channel selection. The
system-level framework is applied to the design of a multi-channel
cable RF direct digitization receiver. Optimum RF signal
conditioning, together with algorithms such as an automatic gain
control loop and an RF front-end amplitude equalization control
loop, is used to relax the requirements of a 2.7 GHz, 11-bit ADC.
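To give a feel for the kind of budget such an analysis produces, the sketch below applies two generic textbook relations (not figures from this book): the ideal signal-to-quantization-noise ratio of an N-bit converter, 6.02N + 1.76 dB, and the processing gain obtained when digital channel selection discards the quantization noise outside one channel. The 8 MHz channel bandwidth is an assumed, illustrative value.

    import math

    def ideal_snr_db(bits):
        # Full-scale sine input, uniform quantization noise.
        return 6.02 * bits + 1.76

    def processing_gain_db(fs_hz, bw_hz):
        # Noise is spread over fs/2; keeping only bw recovers this gain.
        return 10 * math.log10(fs_hz / (2 * bw_hz))

    fs, bw = 2.7e9, 8e6   # sampling rate from the text; assumed bandwidth
    print(ideal_snr_db(11))            # ~68.0 dB for an 11-bit ADC
    print(processing_gain_db(fs, bw))  # ~22.3 dB after channel selection

Gains of this kind are what allow signal conditioning and digital channel selection to relax the raw resolution demanded of the converter.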
The most exciting development in parallel computer architecture
is the convergence of traditionally disparate approaches on a
common machine structure. This book explains the forces behind this
convergence of shared-memory, message-passing, data parallel, and
data-driven computing architectures. It then examines the design
issues that are critical to all parallel architecture across the
full range of modern design, covering data access, communication
performance, coordination of cooperative work, and correct
implementation of useful semantics. It not only describes the
hardware and software techniques for addressing each of these
issues but also explores how these techniques interact in the same
system. Examining architecture from an application-driven
perspective, it provides comprehensive discussions of parallel
programming for high performance and of workload-driven evaluation,
based on understanding hardware-software interactions.