An accessible theoretical analysis of the organizational impact of information technologies. This book examines the many ways in which actors, organizations and technologies are represented through these technologies, thus bridging the gap between the abstractions of current theories of organization and the somewhat excessively grounded material on information systems.
As the complexity of modern embedded systems increases, it becomes less practical to design monolithic processing platforms. As a result, reconfigurable computing is being widely adopted for more flexible design. Reconfigurable computers offer the spatial parallelism and fine-grained customizability of application-specific circuits together with the post-fabrication programmability of software. To make the most of this unique combination of performance and flexibility, designers need to be aware of both hardware and software issues. FPGA users must think not only about the gates needed to perform a computation but also about the software flow that supports the design process. The goal of this book is to help designers become comfortable with these issues, and thus be able to exploit the vast opportunities possible with reconfigurable logic.
This book describes a methodology for dynamic power estimation using Transaction Level Modeling (TLM). The methodology exploits existing tools for RTL simulation, design synthesis and SystemC prototyping to provide fast and accurate power estimation using Transaction Level Power Modeling (TLPM). Readers will benefit from this innovative way of evaluating power at a high level of abstraction, at an early stage of the product life cycle, decreasing the number of expensive design iterations.
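The core TLPM idea of attaching pre-characterized energy costs to transactions can be illustrated with a minimal Python sketch; this is a generic illustration with invented class and transaction names, not the book's actual tool flow:

```python
# Minimal sketch of transaction-level power estimation (hypothetical, not
# the book's tool flow): each transaction type carries an energy cost
# characterized once (e.g. from RTL power simulation), and the model
# accumulates energy as a transaction trace is replayed.

class TLPowerModel:
    def __init__(self, energy_per_transaction_nj):
        # dict mapping transaction type -> energy in nanojoules
        self.energy_nj = energy_per_transaction_nj
        self.total_nj = 0.0

    def record(self, transaction_type, count=1):
        self.total_nj += self.energy_nj[transaction_type] * count

    def average_power_mw(self, elapsed_us):
        # 1 nJ per 1 us equals 1 mW
        return self.total_nj / elapsed_us


model = TLPowerModel({"bus_read": 0.8, "bus_write": 1.1, "dma_burst": 12.5})
model.record("bus_read", count=1000)
model.record("dma_burst", count=20)
print(f"{model.average_power_mw(elapsed_us=500):.3f} mW")  # 2.100 mW
```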
As e-government applications are coming of age, security has been gradually becoming more demanding a requirement for users, administrators, and service providers. The increasingly widespread use of Web services facilitates the exchange of data among various e-government applications, and paves the way for enhanced service delivery. "Secure E-Government Web Services" addresses various aspects of building secure e-government architectures and services, and presents the views of experts from academia, policy, and the industry to conclude that secure e-government Web services can be deployed in an application-centric and interoperable way. "Secure E-Government Web Services" presents the promising area of Web services, shedding new light onto this innovative area of applications, and responding to the current and upcoming challenges of e-government security.
Operating system kernels are central to the functioning of computers. The security of the overall system, as well as its reliability and responsiveness, depends upon the correct functioning of the kernel. This unique approach - presenting a formal specification of a kernel - starts with basic constructs and develops a set of kernels; proofs are included as part of the text.
This book presents a detailed review of high-performance computing infrastructures for next-generation big data and fast data analytics. Features: includes case studies and learning activities throughout the book and self-study exercises in every chapter; presents detailed case studies on social media analytics for intelligent businesses and on big data analytics (BDA) in the healthcare sector; describes the network infrastructure requirements for effective transfer of big data, and the storage infrastructure requirements of applications which generate big data; examines real-time analytics solutions; introduces in-database processing and in-memory analytics techniques for data mining; discusses the use of mainframes for handling real-time big data and the latest types of data management systems for BDA; provides information on the use of cluster, grid and cloud computing systems for BDA; reviews the peer-to-peer techniques and tools and the common information visualization techniques, used in BDA.
Mission-Critical Microsoft Exchange 2000 is the definitive book on how to design and maintain extremely reliable and adaptive Exchange Server messaging systems that rarely crash and that preserve valuable data and services in spite of technical disruptions. E-mail systems are now a primary means of communication for organizations, which can afford e-mail downtime no more than they can afford to be without phones. Further, messaging systems increasingly support vital applications in addition to e-mail, such as workflow and knowledge management, making the data they store both voluminous and incredibly valuable.
This book examines the design of fully-integrated frequency synthesizers suitable for system-on-a-chip (SOC) processors. It takes a global design perspective, jointly examining the design space at the circuit level as well as at the architectural level. The coverage is comprehensive and includes summary chapters on circuit theory as well as the feedback control theory relevant to the operation of phase-locked loops (PLLs). On the circuit level, the discussion includes low-voltage analog design in deep submicron digital CMOS processes and the effects of supply noise, substrate noise, as well as device noise. On the architectural level, the discussion includes PLL analysis using continuous-time as well as discrete-time models, linear and nonlinear effects on PLL performance, and detailed analysis of locking behavior. The material then develops into detailed circuit and architectural analysis of specific clock generation blocks. This includes circuits and architectures of PLLs with high power supply noise immunity and digital PLL architectures in which the loop filter is digitized. Methods of generating low-spurious sampling clocks for discrete-time analog blocks are then examined, including sigma-delta fractional-N PLLs, Direct Digital Synthesis (DDS) techniques, and non-conventional uses of PLLs. Design-for-test (DFT) issues as they arise in PLLs are then discussed, including methods of accurately measuring jitter and built-in self-test (BIST) techniques for PLLs. Finally, clocking issues commonly associated with system-on-a-chip (SOC) designs, such as multiple clock domain interfacing and partitioning, and accurate clock phase generation techniques using delay-locked loops (DLLs), are also addressed. The book provides numerous real-world applications, as well as practical rules of thumb for modern designers to use at the system, architectural, and circuit levels. It is well suited for practitioners as well as graduate-level students who wish to learn more about time-domain analysis and design of frequency synthesis techniques.
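The discrete-time PLL models mentioned above can be illustrated with a hedged sketch; the loop gains and the type-II proportional-plus-integral structure below are generic textbook choices, not values from the book:

```python
# Hedged illustration (not from the book): a linear discrete-time model of a
# type-II PLL. The loop filter has a proportional path (kp) and an integral
# path (ki); the output phase tracks a frequency-stepped input.

def simulate_pll(kp=0.1, ki=0.01, freq_step=0.02, steps=200):
    phase_in = phase_out = integ = 0.0
    errors = []
    for _ in range(steps):
        phase_in += freq_step          # input phase advances at a fixed rate
        err = phase_in - phase_out     # phase detector output
        integ += ki * err              # integral path of the loop filter
        vco_freq = integ + kp * err    # control word sets the VCO frequency
        phase_out += vco_freq          # VCO integrates frequency into phase
        errors.append(err)
    return errors

errs = simulate_pll()
print(f"final phase error: {errs[-1]:+.5f} rad")  # near zero for a type-II loop
```

With these gains the phase error decays toward zero over a couple hundred samples, the steady-state behavior a type-II loop guarantees for a frequency step.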
Process calculi are among the most successful models of concurrent systems. Various behavior equivalences between processes are central notions in CCS (the calculus of communicating systems) and other process calculi. In real applications, specification and implementation are described as two processes, and the correctness of programs is treated as a certain behavior equivalence between them. The purpose of this book is to establish a theory of approximate correctness and infinite evolution of concurrent programs by employing notions and tools from point-set topology. The book is restricted to CCS for simplicity, but the main idea also applies to some other process calculi. The concept of bisimulation limits, useful for the understanding and analysis of infinite evolution of processes, is introduced. In addition, the notions of near bisimulations and bisimulation indexes, suitable for describing the approximate correctness of concurrent programs, are proposed. The book will be of particular interest to researchers in the fields of theoretical computer science, especially the theory of concurrency and hybrid systems, and graduate students in related disciplines. It will also be valuable to practical system designers developing concurrent and/or real-time systems.
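As a hedged illustration of the behavior equivalences involved (a generic strong-bisimulation check, not the book's near-bisimulation machinery), the greatest bisimulation on a finite labeled transition system can be computed by pruning a candidate relation to a fixpoint:

```python
# Hedged sketch: strong bisimilarity on a finite labeled transition system,
# computed as a greatest fixpoint. States p and q stay related only while
# each can match the other's transitions into the relation.

def half(p, q, rel, trans):
    # every move p -a-> p1 must be matched by some q -a-> q1 with (p1, q1) in rel
    for (s, a, p1) in trans:
        if s != p:
            continue
        if not any(s2 == q and a2 == a and (p1, q1) in rel
                   for (s2, a2, q1) in trans):
            return False
    return True

def bisimilar(states, trans):
    rel = {(p, q) for p in states for q in states}  # start with everything
    changed = True
    while changed:
        changed = False
        for (p, q) in list(rel):
            inv = {(b, a) for (a, b) in rel}
            if not (half(p, q, rel, trans) and half(q, p, inv, trans)):
                rel.discard((p, q))  # prune pairs that fail either direction
                changed = True
    return rel

# Two implementations of a one-place buffer: 'in' then 'out', looping.
states = {"s0", "s1", "t0", "t1"}
trans = {("s0", "in", "s1"), ("s1", "out", "s0"),
         ("t0", "in", "t1"), ("t1", "out", "t0")}
print(("s0", "t0") in bisimilar(states, trans))  # True
```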
The purpose of the 4th International Conference on Enterprise Information Systems (ICEIS) was to bring together researchers, engineers and practitioners interested in the advances and business applications of information systems. The research papers focused on real-world applications covering four main themes: Enterprise Database Applications, Artificial Intelligence Applications and Decision Support Systems, Systems Analysis and Specification, and Internet and Electronic Commerce.
In three main divisions the book covers combinational circuits, latches, and asynchronous sequential circuits. Combinational circuits have no memorising ability, while sequential circuits have such an ability to various degrees. Latches are the simplest sequential circuits, ones with the shortest memory. The presentation is decidedly non-standard. The design of combinational circuits is discussed in an orthodox manner using normal forms and in an unorthodox manner using set-theoretical evaluation formulas relying heavily on Karnaugh maps. The latter approach allows for a new design technique called composition. Latches are covered very extensively. Their memory functions are expressed mathematically in a time-independent manner, allowing the use of (normal, non-temporal) Boolean logic in their calculation. The theory of latches is then used as the basis for calculating asynchronous circuits. Asynchronous circuits are specified in a tree representation, each internal node of the tree representing an internal latch of the circuit, with the latches themselves specified by the tree. The tree specification allows solutions of formidable problems such as algorithmic state assignment, finding equivalent states non-recursively, and verifying asynchronous circuits.
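A minimal sketch of the time-independent latch equation idea (generic, not the book's notation): the memory function of an SR latch can be written as ordinary Boolean logic, Qnext = S or (not R and Q), and tabulated without any notion of time:

```python
# Hedged sketch (not the book's notation): the characteristic equation of an
# SR latch expressed as plain, non-temporal Boolean logic.

def sr_latch_next(s, r, q):
    assert not (s and r), "S = R = 1 is the forbidden input combination"
    return s or (not r and q)

# Enumerate the allowed input/state combinations as a truth table.
print(" S R Q | Q+")
for s in (0, 1):
    for r in (0, 1):
        if s and r:
            continue  # skip the forbidden combination
        for q in (0, 1):
            print(f" {s} {r} {q} |  {int(sr_latch_next(s, r, q))}")
```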
This book explores the design implications of emerging non-volatile memory (NVM) technologies for future computer memory hierarchy architectures. Since NVM technologies combine the speed of SRAM, the density of DRAM, and the non-volatility of Flash memory, they are very attractive as the basis for future universal memories. This book provides a holistic perspective on the topic, covering modeling, design, architecture and applications. The practical information included in this book will enable designers to exploit emerging memory technologies to significantly improve the performance, power, and reliability of future mainstream integrated circuits.
This excellent reference proposes and develops new strategies, methodologies and tools for designing low-power and low-area CMOS pipelined A/D converters. The task is tackled by following a scientifically consistent approach. The book may also be used as a text for advanced reading on the subject.
Distributed and Parallel Systems: From Instruction Parallelism to Cluster Computing is the proceedings of the third Austrian-Hungarian Workshop on Distributed and Parallel Systems organized jointly by the Austrian Computer Society and the MTA SZTAKI Computer and Automation Research Institute. This book contains 18 full papers and 12 short papers from 14 countries around the world, including Japan, Korea and Brazil. The paper sessions cover a broad range of research topics in the area of parallel and distributed systems, including software development environments, performance evaluation, architectures, languages, algorithms, web and cluster computing. This volume will be useful to researchers and scholars interested in all areas related to parallel and distributed computing systems.
This book discusses the trade-offs involved in designing direct RF digitization receivers for the radio frequency and digital signal processing domains. A system-level framework is developed, quantifying the relevant impairments of the signal processing chain through a comprehensive system-level analysis. Special focus is given to noise analysis (thermal noise, quantization noise, saturation noise, signal-dependent noise), broadband non-linear distortion analysis, including the impact of the sampling strategy (low-pass, band-pass), analysis of time-interleaved ADC channel mismatches, sampling clock purity, and digital channel selection. The system-level framework described is applied to the design of a cable multi-channel RF direct digitization receiver. Optimum RF signal conditioning and control algorithms (an automatic gain control loop and an RF front-end amplitude equalization control loop) are used to relax the requirements of a 2.7 GHz 11-bit ADC.
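As a worked illustration of the quantization-noise budgeting such an analysis involves (a hedged sketch; only the 11-bit figure comes from the text, and the oversampling ratio is a made-up example), an ideal N-bit ADC achieves roughly 6.02N + 1.76 dB of SNR for a full-scale sine input, and digital channel selection adds 10 log10(OSR) dB of processing gain:

```python
# Hedged sketch: ideal quantization-limited SNR of an N-bit ADC, plus the
# processing gain when a narrow channel is selected digitally out of a
# wideband digitized spectrum (oversampling ratio OSR).

import math

def ideal_adc_snr_db(bits, osr=1.0):
    return 6.02 * bits + 1.76 + 10 * math.log10(osr)

# An 11-bit converter over the full band, then one channel occupying a
# hypothetical 1/64 of the Nyquist bandwidth after channel selection:
print(f"{ideal_adc_snr_db(11):.1f} dB full band")           # ~68.0 dB
print(f"{ideal_adc_snr_db(11, osr=64):.1f} dB in-channel")  # ~86.0 dB
```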
The Fibre Channel Association is an international organization devoted to educating and promoting the Fibre Channel standard.
The development of nature-inspired computational techniques has enhanced problem solving in dynamic and uncertain environments. By implementing effective computing strategies, these techniques ensure adaptable, self-organizing, and decentralized behavior. Recent Developments in Intelligent Nature-Inspired Computing is an authoritative reference source for the latest scholarly material on natural computation methods and applications in diverse fields. Highlighting multidisciplinary studies on swarm intelligence, global optimization, and group technology, this publication is an ideal reference source for professionals, researchers, scholars, and engineers interested in the latest developments in computer science methodologies.
We are extremely pleased to present a comprehensive book comprising a collection of research papers which is basically an outcome of the Second IFIP TC 13.6 Working Group conference on Human Work Interaction Design, HWID2009. The conference was held in Pune, India during October 7-8, 2009. It was hosted by the Centre for Development of Advanced Computing, India, and jointly organized with Copenhagen Business School, Denmark; Aarhus University, Denmark; and Indian Institute of Technology, Guwahati, India. The theme of HWID2009 was Usability in Social, Cultural and Organizational Contexts. The conference was held under the auspices of IFIP TC 13 on Human-Computer Interaction. The committees under IFIP include the Technical Committee TC13 on Human-Computer Interaction, within which the work of this volume has been conducted. TC13 aims to encourage theoretical and empirical human science research to promote the design and evaluation of human-oriented ICT. Within TC13 there are different working groups concerned with different aspects of human-computer interaction. The flagship event of TC13 is the biennial international conference called INTERACT, at which both invited and contributed papers are presented. Contributed papers are rigorously refereed and the rejection rate is high.
Grids are a crucial enabling technology for scientific and industrial development. Grid and Services Evolution, the 11th edited volume of the CoreGRID series, is based on the CoreGRID Middleware Workshop, held in Barcelona, Spain, June 5-6, 2008. Grid and Services Evolution provides a bridge between the application community and the developers of middleware services, especially in terms of parallel computing. This edited volume brings together a critical mass of well-established researchers worldwide, from forty-two institutions active in the fields of distributed systems and middleware, programming models, algorithms, tools and environments. Grid and Services Evolution is designed for a professional audience of researchers and practitioners within the Grid community. This volume is also suitable for advanced-level students in computer science.
Advances in optical technologies have made it possible to implement optical interconnections in future massively parallel processing systems. Photons are uncharged particles and do not naturally interact. Consequently, optical interconnects have many desirable characteristics, e.g. high speed (the speed of light), increased fanout, high bandwidth, high reliability, longer interconnection lengths, low power requirements, and immunity to EMI with reduced crosstalk. Optics can utilize free-space interconnects as well as guided-wave technology, neither of which suffers from the physical limitations of electrical VLSI interconnects. Optical interconnections can be built at various levels, providing chip-to-chip, module-to-module, board-to-board, and node-to-node communications. Massively parallel processing using optical interconnections poses new challenges: new system configurations need to be designed, scheduling and data communication schemes based on new resource metrics need to be investigated, algorithms for a wide variety of applications need to be developed under the novel computation models that optical interconnections permit, and so on. Parallel Computing Using Optical Interconnections is a collection of survey articles written by leading and active scientists in the area of parallel computing using optical interconnections. It is the first book to provide current and comprehensive coverage of the field, reflect the state of the art from high-level architecture design and algorithmic points of view, and point out directions for further research and development.
Open Radio Access Network (O-RAN) Systems Architecture and Design gives a jump-start to engineers developing O-RAN hardware and software systems, providing a top-down approach to O-RAN systems design. It gives an introduction into why wireless systems look the way they do today before introducing relevant O-RAN and 3GPP standards. The remainder of the book discusses hardware and software aspects of O-RAN system design, including dimensioning and performance targets.
The implementation of object-oriented languages has been an active topic of research since the 1960s, when the first Simula compiler was written. The topic received renewed interest in the early 1980s with the growing popularity of object-oriented programming languages such as C++ and Smalltalk, and got another boost with the advent of Java. Polymorphic calls are at the heart of object-oriented languages, and even the first implementation of Simula-67 contained their classic implementation via virtual function tables. In fact, virtual function tables predate even Simula: for example, Ivan Sutherland's Sketchpad drawing editor employed very similar structures in 1960. Similarly, during the 1970s and 1980s the implementers of Smalltalk systems spent considerable effort on implementing polymorphic calls for this dynamically typed language, where virtual function tables could not be used. Given this long history of research into the implementation of polymorphic calls, and the relatively mature standing it achieved over time, why, one might ask, should there be a new book in this field? The answer is simple. Both software and hardware have changed considerably in recent years, to the point where many assumptions underlying the original work in this field are no longer true. In particular, virtual function tables are no longer sufficient to implement polymorphic calls even for statically typed languages; for example, Java's interface calls cannot be implemented this way. Furthermore, today's processors are deeply pipelined and can execute instructions out-of-order, making it difficult to predict the execution time of even simple code sequences.
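To make the classic vtable mechanism concrete, here is a hedged Python sketch that models the compiler-generated structures explicitly (the shapes, slot numbers, and function names are invented for illustration):

```python
# Hedged sketch: virtual function tables modeled explicitly. A real compiler
# emits these as arrays of code addresses; here each object carries a list of
# function pointers indexed by a slot number fixed per method name.

DRAW, AREA = 0, 1  # slot numbers assigned at "compile time"

def circle_draw(self): return f"circle r={self['r']}"
def circle_area(self): return 3.14159 * self["r"] ** 2
def square_draw(self): return f"square s={self['s']}"
def square_area(self): return self["s"] ** 2

VTABLE_CIRCLE = [circle_draw, circle_area]
VTABLE_SQUARE = [square_draw, square_area]

def vcall(obj, slot):
    # A polymorphic call site: load the vtable, index it, jump indirectly.
    # Constant time regardless of the receiver's class.
    return obj["vtable"][slot](obj)

shapes = [{"vtable": VTABLE_CIRCLE, "r": 2.0},
          {"vtable": VTABLE_SQUARE, "s": 3.0}]
for s in shapes:
    print(vcall(s, DRAW), "area:", vcall(s, AREA))
```

A statically typed, singly inherited language can fix each method's slot at compile time; Java interface calls break this scheme because a method's slot can differ between classes implementing the same interface.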
Digital signal processing is an area of science and engineering that has developed rapidly in recent years. This rapid development is the result of significant advances in digital computer technology and integrated circuit fabrication. Many of the signal processing tasks conventionally performed by analog means are realized today by less expensive and often more reliable digital hardware. Multirate Systems: Design and Applications addresses the rapid development of multirate digital signal processing and how it is complemented by the emergence of new applications.
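As a hedged illustration of a basic multirate operation (a generic textbook example, not code from the book), decimation by a factor M filters the signal to limit its bandwidth and then keeps every M-th sample:

```python
# Hedged sketch: decimation by M = anti-alias filter, then downsample.
# A short moving-average filter stands in for a proper anti-aliasing design.

def fir_filter(x, taps):
    # direct-form FIR: y[n] = sum over k of taps[k] * x[n-k]
    y = []
    for n in range(len(x)):
        acc = 0.0
        for k, h in enumerate(taps):
            if n - k >= 0:
                acc += h * x[n - k]
        y.append(acc)
    return y

def decimate(x, m, taps):
    return fir_filter(x, taps)[::m]  # filter first, then keep every m-th sample

x = [float(i % 8) for i in range(32)]  # a simple periodic test signal
taps = [0.25, 0.25, 0.25, 0.25]        # crude 4-tap low-pass
print(decimate(x, m=4, taps=taps))
```

Interpolation is the dual operation: insert M-1 zeros between samples, then low-pass filter to remove the spectral images.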