This book describes a methodology for dynamic power estimation using Transaction Level Modeling (TLM). The methodology exploits existing tools for RTL simulation, design synthesis and SystemC prototyping to provide fast and accurate power estimation through Transaction Level Power Modeling (TLPM). Readers will benefit from this innovative way of evaluating power at a high level of abstraction, at an early stage of the product life cycle, decreasing the number of expensive design iterations.
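To make the idea of transaction-level power modeling concrete, here is a minimal C++ sketch of per-transaction energy accounting: each transaction type carries a characterized energy cost, and a monitor accumulates energy as transactions fire. The class name and energy figures are hypothetical placeholders; in the flow the book describes, the numbers would be characterized from RTL simulation and synthesis.

```cpp
#include <cstdio>
#include <map>
#include <string>

// Minimal sketch of transaction-level power accounting (hypothetical
// names and energy figures; real values would be characterized from
// RTL simulation and synthesis, as the methodology describes).
class TlpmMonitor {
    std::map<std::string, double> energy_per_txn_pj_;  // pJ per transaction type
    double total_pj_ = 0.0;
public:
    void characterize(const std::string& txn, double pj) {
        energy_per_txn_pj_[txn] = pj;
    }
    void record(const std::string& txn) {   // called on each TLM transaction
        total_pj_ += energy_per_txn_pj_.at(txn);
    }
    double average_power_mw(double sim_time_ns) const {
        return total_pj_ / sim_time_ns;     // pJ/ns equals mW
    }
};

int main() {
    TlpmMonitor mon;
    mon.characterize("bus_read",  12.5);    // placeholder figures
    mon.characterize("bus_write", 17.0);
    for (int i = 0; i < 1000; ++i)
        mon.record(i % 3 ? "bus_read" : "bus_write");
    std::printf("avg power: %.2f mW over 10 us\n", mon.average_power_mw(10000.0));
}
```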
The development of nature-inspired computational techniques has enhanced problem solving in dynamic and uncertain environments. Effective computing strategies of this kind yield adaptable, self-organizing, and decentralized behavior. Recent Developments in Intelligent Nature-Inspired Computing is an authoritative reference source for the latest scholarly material on natural computation methods and applications in diverse fields. Highlighting multidisciplinary studies on swarm intelligence, global optimization, and group technology, this publication is an ideal reference source for professionals, researchers, scholars, and engineers interested in the latest developments in computer science methodologies.
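As a flavor of the swarm-intelligence methods such a volume surveys, the sketch below is a minimal particle swarm optimizer for a one-variable function. The inertia and acceleration coefficients are common textbook defaults, not values drawn from the book.

```cpp
#include <cstdio>
#include <random>
#include <vector>

// Minimal particle swarm optimization of f(x) = (x - 3)^2 in one
// dimension. Coefficients are standard textbook defaults.
int main() {
    auto f = [](double x) { return (x - 3.0) * (x - 3.0); };
    std::mt19937 rng(42);
    std::uniform_real_distribution<double> pos(-10.0, 10.0), u(0.0, 1.0);

    const int n = 20;
    std::vector<double> x(n), v(n, 0.0), pbest(n), pbest_val(n);
    double gbest = 0.0, gbest_val = 1e300;
    for (int i = 0; i < n; ++i) {
        x[i] = pbest[i] = pos(rng);
        pbest_val[i] = f(x[i]);
        if (pbest_val[i] < gbest_val) { gbest_val = pbest_val[i]; gbest = x[i]; }
    }
    for (int iter = 0; iter < 100; ++iter) {
        for (int i = 0; i < n; ++i) {
            v[i] = 0.7 * v[i]                        // inertia
                 + 1.5 * u(rng) * (pbest[i] - x[i])  // cognitive pull
                 + 1.5 * u(rng) * (gbest - x[i]);    // social pull
            x[i] += v[i];
            double val = f(x[i]);
            if (val < pbest_val[i]) { pbest_val[i] = val; pbest[i] = x[i]; }
            if (val < gbest_val)    { gbest_val = val; gbest = x[i]; }
        }
    }
    std::printf("best x = %.4f (f = %.6f)\n", gbest, gbest_val);
}
```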
Distributed and Parallel Systems: From Instruction Parallelism to Cluster Computing is the proceedings of the third Austrian-Hungarian Workshop on Distributed and Parallel Systems organized jointly by the Austrian Computer Society and the MTA SZTAKI Computer and Automation Research Institute. This book contains 18 full papers and 12 short papers from 14 countries around the world, including Japan, Korea and Brazil. The paper sessions cover a broad range of research topics in the area of parallel and distributed systems, including software development environments, performance evaluation, architectures, languages, algorithms, web and cluster computing. This volume will be useful to researchers and scholars interested in all areas related to parallel and distributed computing systems.
This book explores the design implications of emerging non-volatile memory (NVM) technologies on future computer memory hierarchy architecture designs. Since NVM technologies combine the speed of SRAM, the density of DRAM, and the non-volatility of Flash memory, they are very attractive as the basis for future universal memories. This book provides a holistic perspective on the topic, covering modeling, design, architecture and applications. The practical information included in this book will enable designers to exploit emerging memory technologies to significantly improve the performance, power, and reliability of future mainstream integrated circuits.
This book discusses the trade-offs involved in designing direct RF digitization receivers for the radio frequency and digital signal processing domains. A system-level framework is developed that quantifies the relevant impairments of the signal processing chain through a comprehensive system-level analysis. Special focus is given to noise analysis (thermal noise, quantization noise, saturation noise, signal-dependent noise), broadband non-linear distortion analysis, including the impact of the sampling strategy (low-pass, band-pass), analysis of time-interleaved ADC channel mismatches, sampling clock purity, and digital channel selection. The system-level framework is then applied to the design of a multi-channel direct RF digitization receiver for cable. Optimum RF signal conditioning, together with algorithms such as an automatic gain control loop and an RF front-end amplitude equalization control loop, is used to relax the requirements of a 2.7 GHz 11-bit ADC.
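One of the blocks named above, the automatic gain control loop, can be sketched in a few lines: measure the level of a block of samples, compare it with a target, and nudge the gain so the ADC input stays near full scale without clipping. The target level and loop constant below are illustrative assumptions, not the book's design values.

```cpp
#include <cmath>
#include <cstdio>
#include <vector>

// Illustrative digital AGC loop: drive the measured RMS level toward
// a target so the ADC input stays near full scale. Target level and
// loop gain are placeholders, not design values from the book.
int main() {
    const double target_rms = 0.25;   // desired level at ADC input (full scale = 1.0)
    const double mu = 0.05;           // loop gain: small = slow but stable
    double gain = 1.0;

    for (int block = 0; block < 40; ++block) {
        // Stand-in for a block of received samples (a weak tone scaled by gain).
        std::vector<double> samples(64);
        for (int n = 0; n < 64; ++n)
            samples[n] = 0.05 * std::sin(0.3 * n) * gain;

        double sum_sq = 0.0;
        for (double s : samples) sum_sq += s * s;
        double rms = std::sqrt(sum_sq / samples.size());

        // Log-domain update: raise gain if below target, lower it if above.
        gain *= std::exp(mu * std::log(target_rms / (rms + 1e-12)));
        if (block % 10 == 0)
            std::printf("block %2d: rms = %.4f, gain = %.3f\n", block, rms, gain);
    }
}
```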
Advances in optical technologies have made it possible to implement optical interconnections in future massively parallel processing systems. Photons are uncharged particles and do not naturally interact, so optical interconnects offer many desirable characteristics: high speed (the speed of light), increased fanout, high bandwidth, high reliability, longer interconnection lengths, low power requirements, and immunity to EMI with reduced crosstalk. Optics can utilize free-space interconnects as well as guided-wave technology, neither of which suffers from the limitations of conventional VLSI interconnects. Optical interconnections can be built at various levels, providing chip-to-chip, module-to-module, board-to-board, and node-to-node communications. Massively parallel processing using optical interconnections poses new challenges: new system configurations need to be designed, scheduling and data communication schemes based on new resource metrics need to be investigated, algorithms for a wide variety of applications need to be developed under the novel computation models that optical interconnections permit, and so on. Parallel Computing Using Optical Interconnections is a collection of survey articles written by leading and active scientists in the area of parallel computing using optical interconnections. It is the first book to provide current and comprehensive coverage of the field, reflect the state of the art from high-level architecture design and algorithmic points of view, and point out directions for further research and development.
Digital signal processing is an area of science and engineering that has developed rapidly in recent years, driven by significant advances in digital computer technology and integrated-circuit fabrication. Many signal processing tasks conventionally performed by analog means are realized today by less expensive and often more reliable digital hardware. Multirate Systems: Design and Applications addresses the rapid development of multirate digital signal processing and how it is complemented by the emergence of new applications.
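The core multirate operation, decimation, pairs an anti-aliasing low-pass filter with sample-rate reduction. A minimal sketch is shown below; the moving average stands in for a properly designed low-pass filter and is not an example from the book.

```cpp
#include <cstdio>
#include <vector>

// Minimal decimation-by-M sketch: low-pass filter, then keep one
// output sample per M input samples. A moving average stands in for
// a properly designed anti-aliasing filter.
std::vector<double> decimate(const std::vector<double>& x, int M) {
    std::vector<double> y;
    for (size_t n = 0; n + M <= x.size(); n += M) {
        double acc = 0.0;
        for (int k = 0; k < M; ++k) acc += x[n + k];  // crude low-pass
        y.push_back(acc / M);                         // rate reduced M:1
    }
    return y;
}

int main() {
    std::vector<double> x;
    for (int n = 0; n < 16; ++n) x.push_back(n % 4);  // toy input
    for (double v : decimate(x, 4)) std::printf("%.2f ", v);
    std::printf("\n");
}
```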
We are extremely pleased to present a comprehensive book comprising a collection of research papers which is basically an outcome of the Second IFIP TC 13.6 Working Group conference on Human Work Interaction Design, HWID2009. The conference was held in Pune, India during October 7-8, 2009. It was hosted by the Centre for Development of Advanced Computing, India, and jointly organized with Copenhagen Business School, Denmark; Aarhus University, Denmark; and Indian Institute of Technology, Guwahati, India. The theme of HWID2009 was Usability in Social, Cultural and Organizational Contexts. The conference was held under the auspices of IFIP TC 13 on Human-Computer Interaction. The committees under IFIP include the Technical Committee TC13 on Human-Computer Interaction, within which the work of this volume has been conducted. TC13 aims to encourage theoretical and empirical human science research to promote the design and evaluation of human-oriented ICT. Within TC13 there are different working groups concerned with different aspects of human-computer interaction. The flagship event of TC13 is the biennial international conference called INTERACT, at which both invited and contributed papers are presented. Contributed papers are rigorously refereed and the rejection rate is high.
Grids are a crucial enabling technology for scientific and industrial development. Grid and Services Evolution, the 11th edited volume of the CoreGRID series, is based on the CoreGRID Middleware Workshop, held in Barcelona, Spain, June 5-6, 2008. Grid and Services Evolution provides a bridge between the application community and the developers of middleware services, especially in terms of parallel computing. This edited volume brings together a critical mass of well-established researchers worldwide, from forty-two institutions active in the fields of distributed systems and middleware, programming models, algorithms, tools and environments. Grid and Services Evolution is designed for a professional audience of researchers and practitioners within the Grid community and industry. This volume is also suitable for advanced-level students in computer science.
The implementation of object-oriented languages has been an active topic of research since the 1960s, when the first Simula compiler was written. The topic received renewed interest in the early 1980s with the growing popularity of object-oriented programming languages such as C++ and Smalltalk, and got another boost with the advent of Java. Polymorphic calls are at the heart of object-oriented languages, and even the first implementation of Simula-67 contained their classic implementation via virtual function tables. In fact, virtual function tables predate even Simula: for example, Ivan Sutherland's Sketchpad drawing editor employed very similar structures in 1960. Similarly, during the 1970s and 1980s the implementers of Smalltalk systems spent considerable effort on implementing polymorphic calls for this dynamically typed language, where virtual function tables could not be used. Given this long history of research into the implementation of polymorphic calls, and the relatively mature standing the field achieved over time, why, one might ask, should there be a new book on the subject? The answer is simple. Both software and hardware have changed considerably in recent years, to the point where many assumptions underlying the original work in this field are no longer true. In particular, virtual function tables are no longer sufficient to implement polymorphic calls even for statically typed languages; for example, Java's interface calls cannot be implemented this way. Furthermore, today's processors are deeply pipelined and can execute instructions out of order, making it difficult to predict the execution time of even simple code sequences.
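To make the "classic implementation" concrete, the sketch below hand-rolls the structure a compiler generates for a virtual call: each class has one shared table of function pointers, and a call site reaches the right function by indexing that table through the object. This is a generic illustration of the technique, not code from the book.

```cpp
#include <cstdio>

// Hand-rolled virtual function table: the structure a compiler
// generates implicitly for a polymorphic call. Each class has one
// shared table of function pointers; every object carries a pointer
// to its class's table.
struct Shape;                                  // forward declaration
struct VTable { double (*area)(const Shape*); };

struct Shape {
    const VTable* vtbl;                        // the hidden vptr in real compilers
    double a, b;                               // dimensions (meaning varies per shape)
};

double circle_area(const Shape* s) { return 3.14159265 * s->a * s->a; }
double rect_area(const Shape* s)   { return s->a * s->b; }

const VTable circle_vtbl = { circle_area };
const VTable rect_vtbl   = { rect_area };

int main() {
    Shape shapes[] = { { &circle_vtbl, 2.0, 0.0 }, { &rect_vtbl, 3.0, 4.0 } };
    for (const Shape& s : shapes)
        // The polymorphic call: index the object's table, then call
        // through the pointer -- what a virtual call compiles to.
        std::printf("area = %f\n", s.vtbl->area(&s));
}
```

A single indirection like this assumes each method occupies a fixed slot in its class's table; as the text notes, Java's interface calls break that assumption, since a given interface method need not sit at the same slot in every implementing class.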
Multicore Processors and Systems provides a comprehensive overview of emerging multicore processors and systems. It covers technology trends affecting multicores, multicore architecture innovations, multicore software innovations, and case studies of state-of-the-art commercial multicore systems. A cross-cutting theme of the book is the challenges associated with scaling up multicore systems to hundreds of cores. The book includes chapters on the fundamental requirements for multicore systems, including processing, memory systems, and interconnect, as well as several case studies on commercial multicore systems that have recently been developed and deployed across multiple application domains. The architecture chapters focus on innovative multicore execution models as well as infrastructure for multicores, including memory systems and on-chip interconnections. The case studies examine multicore implementations across different application domains, including general purpose, server, media/broadband, network processing, and signal processing. This is the first book that focuses solely on multicore processors and systems, and in particular on their unique technology implications, architectures, and implementations. The contributing authors come from both the academic and industrial communities.
In this volume organizational learning theory is used to analyse various practices of managing and facilitating knowledge sharing within companies. Experiences with three types of knowledge sharing, namely knowledge acquisition, knowledge reuse, and knowledge creation, at ten large companies are discussed and analyzed. This critical analysis leads to the identification of traps and obstacles when managing knowledge sharing, when supporting knowledge sharing with IT tools, and when organizations try to learn from knowledge sharing practices. The identification of these risks is followed by a discussion of how organizations can avoid them. This work will be of interest to researchers and practitioners working in organization science and business administration. Also, consultants and organizations at large will find the book useful as it will provide them with insights into how other organizations manage and facilitate knowledge sharing and how potential failures can be prevented.
The core idea of this book is that object-oriented technology is a generic technology whose various technical aspects can be presented in a unified and consistent framework. This applies to both practical and formal aspects of object-oriented technology. The material has been course-tested in a variety of object-oriented courses, and numerous examples, figures and exercises are presented in each chapter. The approach in this book is based on typed technologies, and the core notions fit mainstream object-oriented languages such as Java and C#. The book promotes object-oriented constraints (assertions), their specification and verification. Object-oriented constraints apply to the specification and verification of object-oriented programs, specification of the object-oriented platform, more advanced concurrent models, database integrity constraints, and object-oriented transactions, including their specification and verification.
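As a minimal illustration of an object-oriented constraint, the C++ sketch below checks a class invariant after every mutating operation using assert. The book works with Java and C#, so this is a language-neutral rendering of the idea rather than the book's own notation.

```cpp
#include <cassert>

// Object-oriented constraint as an executable class invariant: every
// mutator re-establishes and checks the invariant before returning.
class BankAccount {
    long balance_cents_;
    bool invariant() const { return balance_cents_ >= 0; }  // the constraint
public:
    explicit BankAccount(long cents) : balance_cents_(cents) { assert(invariant()); }
    void deposit(long cents) {
        assert(cents > 0);                             // precondition
        balance_cents_ += cents;
        assert(invariant());                           // verified after mutation
    }
    void withdraw(long cents) {
        assert(cents > 0 && cents <= balance_cents_);  // precondition
        balance_cents_ -= cents;
        assert(invariant());
    }
    long balance() const { return balance_cents_; }
};

int main() {
    BankAccount acct(1000);
    acct.deposit(500);
    acct.withdraw(300);
    assert(acct.balance() == 1200);
}
```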
Cellular automata can be viewed both as computational models and as systems for modelling real processes. This volume emphasises the first aspect. In articles written by leading researchers, sophisticated massively parallel algorithms (firing squad, life, Fischer's primes recognition) are treated. Their computational power and the specific complexity classes they determine are surveyed, and some recent results relating to chaos, from a new dynamical-systems point of view, are also presented. Audience: This book will be of interest to specialists in theoretical computer science and in the challenge of parallelism.
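Among the algorithms mentioned, the Game of Life shows the flavor of a cellular automaton as a computational model: every cell updates simultaneously from its neighborhood by one fixed local rule. A minimal synchronous update step is sketched below; it is a generic illustration, not code from the volume.

```cpp
#include <cstdio>
#include <vector>

// One synchronous step of Conway's Game of Life on a small grid:
// every cell is updated simultaneously from its 8-cell neighborhood.
using Grid = std::vector<std::vector<int>>;

Grid step(const Grid& g) {
    int h = g.size(), w = g[0].size();
    Grid out(h, std::vector<int>(w, 0));
    for (int r = 0; r < h; ++r)
        for (int c = 0; c < w; ++c) {
            int live = 0;
            for (int dr = -1; dr <= 1; ++dr)
                for (int dc = -1; dc <= 1; ++dc)
                    if ((dr || dc) && r+dr >= 0 && r+dr < h && c+dc >= 0 && c+dc < w)
                        live += g[r+dr][c+dc];
            out[r][c] = (live == 3) || (g[r][c] && live == 2);  // the local rule
        }
    return out;
}

int main() {
    Grid g(5, std::vector<int>(5, 0));
    g[2][1] = g[2][2] = g[2][3] = 1;   // a "blinker" oscillator
    g = step(g);                       // rotates to vertical
    for (auto& row : g) {
        for (int cell : row) std::printf("%c", cell ? '#' : '.');
        std::printf("\n");
    }
}
```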
This book provides graduate students and practitioners with knowledge of the CORBA standard and practical experience of implementing distributed systems with CORBA's Java mapping, including tested code examples that run immediately.
In this international collection of papers there is a wealth of knowledge on artificial intelligence (AI) and cognitive science (CS) techniques applied to the problem of providing help systems mainly for the UNIX operating system. The research described here involves the representation of technical computer concepts, but also the representation of how users conceptualise such concepts. The collection looks at computational models and systems such as UC, Yucca, and OSCON programmed in languages such as Lisp, Prolog, OPS-5, and C which have been developed to provide UNIX help. These systems range from being menu-based to ones with natural language interfaces, some providing active help, intervening when they believe the user to have misconceptions, and some based on empirical studies of what users actually do while using UNIX. Further papers investigate planning and knowledge representation where the focus is on discovering what the user wants to do, and figuring out a way to do it, as well as representing the knowledge needed to do so. There is a significant focus on natural language dialogue where consultation systems can become active, metaphors, and users' mistaken beliefs. Much can be learned from seeing how AI and CS techniques can be investigated in depth while being applied to a real test-bed domain such as help on UNIX.
Ferroelectric memories have changed in 10 short years from academic curiosities of the university research labs to commercial devices in large-scale production. This is the first text on ferroelectric memories that is not just an edited collection of papers by different authors. Intended for applied physicists, electrical engineers, materials scientists and ceramists, it includes ferroelectric fundamentals, especially for thin films, circuit diagrams and processing chapters, but emphasizes device physics. Breakdown mechanisms, switching kinetics and leakage current mechanisms have lengthy chapters devoted to them. The book will be welcomed by research scientists in industry and government laboratories and in universities. It also contains 76 problems for students, making it particularly useful as a textbook for fourth-year undergraduate or first-year graduate students.
Praise for the First Edition: "This outstanding book ... gives the reader robust concepts and implementable knowledge of this environment. Graphical user interface (GUI)-based users and developers do not get short shrift, despite the command-line interface's (CLI) full-power treatment. ... Every programmer should read the introduction's Unix/Linux philosophy section. ... This authoritative and exceptionally well-constructed book has my highest recommendation. It will repay careful and recursive study." --Computing Reviews, August 2011 Mastering Modern Linux, Second Edition retains much of the good material from the previous edition, with extensive updates and new topics added. The book provides a comprehensive and up-to-date guide to Linux concepts, usage, and programming. The text helps the reader master Linux with a well-selected set of topics, and encourages hands-on practice. The first part of the textbook covers interactive use of Linux via the Graphical User Interface (GUI) and the Command-Line Interface (CLI), including comprehensive treatment of the Gnome desktop and the Bash Shell. Using different apps, commands and filters, building pipelines, and matching patterns with regular expressions are major focuses. Next comes Bash scripting, file system structure, organization, and usage. The following chapters present networking, the Internet and the Web, data encryption, basic system admin, as well as Web hosting. The Linux Apache MySQL/MariaDB PHP (LAMP) Web hosting combination is also presented in depth. In the last part of the book, attention is turned to C-level programming. Topics covered include the C compiler, preprocessor, debugger, I/O, file manipulation, process control, inter-process communication, and networking. The book includes many examples and complete programs ready to download and run. A summary and exercises of varying degrees of difficulty can be found at the end of each chapter. A companion website (http://mml.sofpower.com) provides appendices, information updates, an example code package, and other resources for instructors, as well as students.
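The C-level process-control and inter-process-communication material in the last part of such a book can be illustrated by building one stage of a shell-style pipeline by hand with pipe(), fork(), dup2(), and exec(). The sketch below is a generic POSIX example, not a program from the book's code package.

```cpp
#include <cstdio>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

// Hand-built equivalent of the shell pipeline `ls | wc -l` using the
// POSIX process-control calls covered in C-level Linux programming:
// pipe(), fork(), dup2(), and exec().
int main() {
    int fds[2];
    if (pipe(fds) < 0) { perror("pipe"); return 1; }

    pid_t writer = fork();
    if (writer == 0) {                 // child 1: ls, stdout -> pipe
        dup2(fds[1], STDOUT_FILENO);
        close(fds[0]); close(fds[1]);
        execlp("ls", "ls", (char*)nullptr);
        perror("execlp ls"); _exit(127);
    }
    pid_t reader = fork();
    if (reader == 0) {                 // child 2: wc -l, stdin <- pipe
        dup2(fds[0], STDIN_FILENO);
        close(fds[0]); close(fds[1]);
        execlp("wc", "wc", "-l", (char*)nullptr);
        perror("execlp wc"); _exit(127);
    }
    close(fds[0]); close(fds[1]);      // parent keeps no pipe ends open
    waitpid(writer, nullptr, 0);
    waitpid(reader, nullptr, 0);
}
```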
Proven best practices for success with every Azure networking service. For cloud environments to operate and scale optimally, their networking services must be designed, deployed, and managed well. Now, there's a complete, best-practice guide to doing just that. Writing for everyone involved in delivering Azure workloads and services, leading cloud consultant Avinash Valiramani provides a deep dive and practical field advice for Azure Virtual Networks, Azure VPN Gateways, Azure Load Balancing, Azure Traffic Manager, Azure Firewall, Azure DNS, Azure Bastion, Azure Front Door and more. Whatever your role in delivering efficient, scalable networking services, this guide will help you make the most of your Azure investment. Leading Azure consultant Avinash Valiramani shows how to: use Azure Virtual Networks to establish a backbone for hosting other Azure resources; provide HTTP/HTTPS load-balancing and routing for web servers and apps through Azure Application Gateway; connect on-premises and other public networks to Azure for secure communications using the Azure VPN Gateway service; provide secure load balancing to apps from internal and public networks using Azure Load Balancer services; integrate Azure Firewall to centrally protect Azure resources across multiple subscriptions; access globally scaled, fully managed DNS services with a 100% SLA from the closest Azure DNS servers; provide optimal network routing to the closest application endpoint for public-facing applications with Azure Traffic Manager; and use Microsoft's global edge network along with Azure Front Door to speed up access, harden security and enhance scalability for consumer-facing and internal web applications. Also look for these Definitive Guides to Azure success: Microsoft Azure Compute: The Definitive Guide; Microsoft Azure Monitoring and Management: The Definitive Guide; Microsoft Azure Storage: The Definitive Guide.
At the beginning of the 1990s, research started into how to combine soft computing with reconfigurable hardware in a quite unique way. One of the methods that was developed has been called evolvable hardware. Thanks to evolutionary algorithms, researchers have started to evolve electronic circuits routinely. A number of interesting circuits, with features unreachable by means of conventional techniques, have been developed. Evolvable hardware is quite popular right now; more than fifty research groups are spread out over the world. Evolvable hardware has become a part of the curriculum at some universities. Evolvable hardware is being commercialized, and there are specialized conferences devoted to it. On the other hand, surprisingly, the area lacks a theoretical background and a consistent design methodology. Furthermore, it is quite difficult to implement really innovative and practically successful evolvable systems using contemporary digital reconfigurable technology.
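The evolutionary loop behind evolvable hardware can be sketched generically: a device configuration is encoded as a bitstring, candidates are scored by a fitness function (in a real system, by evaluating the configured circuit), and mutation plus selection drive the search. The bit-counting fitness below is a toy placeholder, not an example from the book.

```cpp
#include <algorithm>
#include <cstdio>
#include <random>
#include <vector>

// Generic (1+lambda) evolutionary loop of the kind used in evolvable
// hardware: the bitstring stands in for a device configuration, and
// counting ones stands in for evaluating the configured circuit.
int main() {
    const int bits = 32, lambda = 4, generations = 200;
    std::mt19937 rng(1);
    std::uniform_int_distribution<int> bit(0, bits - 1), coin(0, 1);

    auto fitness = [](const std::vector<int>& g) {        // toy placeholder
        return std::count(g.begin(), g.end(), 1);
    };

    std::vector<int> parent(bits);
    for (int& b : parent) b = coin(rng);
    long best = fitness(parent);

    for (int gen = 0; gen < generations && best < bits; ++gen) {
        for (int k = 0; k < lambda; ++k) {
            std::vector<int> child = parent;
            child[bit(rng)] ^= 1;                         // one-bit mutation
            long f = fitness(child);
            if (f >= best) { best = f; parent = child; }  // keep if no worse
        }
    }
    std::printf("best fitness: %ld / %d\n", best, bits);
}
```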
Parallel Numerical Computations with Applications contains selected edited papers presented at the 1998 Frontiers of Parallel Numerical Computations and Applications Workshop, along with invited papers from leading researchers around the world. These papers cover a broad spectrum of topics on parallel numerical computation with applications, such as advanced parallel numerical and computational optimization methods, novel parallel computing techniques, numerical fluid mechanics, and other applications related to material sciences, signal and image processing, semiconductor technology, and electronic circuit and system design. This state-of-the-art volume will be an up-to-date resource for researchers in the areas of parallel and distributed computing.
The continuous development of computer technology, supported by the VLSI revolution, stimulated research in the field of multiprocessor systems. The main motivation for the migration of design efforts from conventional architectures towards multiprocessor ones is the possibility of obtaining significant processing power together with improved price/performance, reliability and flexibility figures. Currently, such systems are moving from research laboratories to real field applications. Future technological advances and new generations of components are likely to further enhance this trend. This book is intended to provide basic concepts and design methodologies for engineers and researchers involved in the development of multiprocessor systems and/or of applications based on multiprocessor architectures. In addition, the book can be a source of material for computer architecture courses at graduate level. A preliminary knowledge of computer architecture and logical design has been assumed in writing this book. Not all the problems related to the development of multiprocessor systems are addressed in this book. The covered range spans from electrical and logical design problems, to architectural issues, to design methodologies for system software. Subjects such as software development in a multiprocessor environment or loosely coupled multiprocessor systems are outside the scope of the book. Since the basic elements, processors and memories, are now available as standard integrated circuits, the key design problem is how to put them together in an efficient and reliable way.
Recent developments in computer science clearly show the need for a better theoretical foundation for some central issues. Methods and results from mathematical logic, in particular proof theory and model theory, are of great help here and will be used much more in future than previously. This book provides an excellent introduction to the interplay of mathematical logic and computer science. It contains extensively reworked versions of the lectures given at the 1997 Marktoberdorf Summer School by leading researchers in the field.