Books > Computing & IT > Computer hardware & operating systems > Computer architecture & logic design

Turbo Decoder Architecture for Beyond-4G Applications (Hardcover, 2014 ed.)
Cheng-Chi Wong, Hsie-Chia Chang
R2,932 Discovery Miles 29 320 Ships in 10 - 15 working days

This book describes the most recent techniques for turbo decoder implementation, especially for 4G and beyond-4G applications. The authors present techniques for designing high-throughput decoders for future telecommunication systems, enabling designers to reduce hardware cost and shorten processing time. Coverage includes the VLSI implementation of the turbo decoder, from basic functional units to advanced parallel architectures. The authors discuss both hardware architecture techniques and experimental results, showing how area, throughput and performance vary across the different techniques. The book also presents turbo decoders for the 3GPP-LTE/LTE-A and IEEE 802.16e/m standards, whose low-complexity yet highly flexible circuit structure supports these standards in multiple parallel modes. Moreover, it describes solutions that overcome the speedup limits of parallel architectures through modifications to the turbo codec. Compared with traditional designs, these methods yield up to a 33% gain in throughput at similar performance and cost.

Broadband Direct RF Digitization Receivers (Hardcover, 2014 ed.)
Olivier Jamin
R3,772 Discovery Miles 37 720 Ships in 10 - 15 working days

This book discusses the trade-offs involved in designing direct RF digitization receivers, spanning the radio-frequency and digital signal processing domains. A system-level framework is developed that quantifies the relevant impairments of the signal processing chain through a comprehensive system-level analysis. Special focus is given to noise analysis (thermal noise, quantization noise, saturation noise, signal-dependent noise), broadband non-linear distortion analysis including the impact of the sampling strategy (low-pass, band-pass), analysis of time-interleaved ADC channel mismatches, sampling clock purity, and digital channel selection. The framework is then applied to the design of a multi-channel direct RF digitization receiver for cable. Optimum RF signal conditioning and control algorithms (an automatic gain control loop and an RF front-end amplitude equalization control loop) are used to relax the requirements on a 2.7 GHz, 11-bit ADC. A two-chip implementation is presented, using BiCMOS and 65 nm CMOS processes, together with block- and system-level measurement results. Readers will benefit from the techniques presented, which are highly competitive in terms of both cost and RF performance while drastically reducing power consumption.
"

Computing with Memory for Energy-Efficient Robust Systems (Hardcover, 2014 ed.)
Somnath Paul, Swarup Bhunia
R4,181 Discovery Miles 41 810 Ships in 10 - 15 working days

This book analyzes energy and reliability as major challenges faced by designers of computing frameworks in the nanometer technology regime. The authors describe the existing solutions to address these challenges and then reveal a new reconfigurable computing platform, which leverages high-density nanoscale memory for both data storage and computation to maximize the energy-efficiency and reliability. The energy and reliability benefits of this new paradigm are illustrated and the design challenges are discussed. Various hardware and software aspects of this exciting computing paradigm are described, particularly with respect to hardware-software co-designed frameworks, where the hardware unit can be reconfigured to mimic diverse application behavior. Finally, the energy-efficiency of the paradigm described is compared with other, well-known reconfigurable computing platforms.

OpenMP in the Era of Low Power Devices and Accelerators - 9th International Workshop on OpenMP, IWOMP 2013, Canberra, Australia, September 16-18, 2013, Proceedings (Paperback, 2013 ed.)
Alistair P. Rendell, Barbara M. Chapman, Matthias S. Muller
R1,429 Discovery Miles 14 290 Ships in 10 - 15 working days

This book constitutes the refereed proceedings of the 9th International Workshop on OpenMP, held in Canberra, Australia, in September 2013. The 14 technical full papers presented were carefully reviewed and selected from various submissions. The papers are organized in topical sections on proposed extensions to OpenMP, applications, accelerators, scheduling, and tools.

Correct-by-Construction Approaches for SoC Design (Hardcover, 2014 ed.)
Roopak Sinha, Parthasarathi Roop, Samik Basu
R3,730 Discovery Miles 37 300 Ships in 10 - 15 working days

This book describes an approach for designing Systems-on-Chip such that the system meets precise mathematical requirements. The methodologies presented enable embedded systems designers to reuse intellectual property (IP) blocks from existing designs in an efficient, reliable manner, automatically generating correct SoCs from multiple, possibly mismatching, components.

Architectures for Baseband Signal Processing (Hardcover, 2014 ed.)
Frank Kienle
R4,016 Discovery Miles 40 160 Ships in 10 - 15 working days

This book addresses challenges faced by both the algorithm designer and the chip designer, who must cope with the ongoing increase in algorithmic complexity and required data throughput for today's mobile applications. The focus is on implementation aspects and implementation constraints of the individual components needed in transceivers for current standards such as UMTS, LTE, WiMAX and DVB-S2. The application domain is the so-called outer receiver, which comprises the channel coding, interleaving stages, modulator, and multiple-antenna transmission. Throughout the book, the focus is on advanced algorithms that are actually in use in modern communications systems. Their basic principles are always derived with a focus on the resulting communications and implementation performance. As a result, this book serves as a valuable reference for two typically disparate audiences: communication systems and hardware design.

Multicore Systems On-Chip: Practical Software/Hardware Design (Hardcover, 2nd Revised edition)
Abderazek Ben Abdallah
R2,678 Discovery Miles 26 780 Ships in 10 - 15 working days

System-on-chip designs have evolved from fairly simple single-core, single-memory designs to complex heterogeneous multicore SoC architectures consisting of a large number of IP blocks on the same silicon. To meet the high computational demands posed by the latest consumer electronic devices, most current systems are based on this paradigm, which represents a real revolution in many aspects of computing. The attraction of multicore processing for power reduction is compelling: by splitting a set of tasks among multiple processor cores, the operating frequency required of each core can be reduced, which in turn allows the supply voltage of each core to be lowered. Because dynamic power is proportional to the frequency and to the square of the voltage, the net saving is substantial even though more cores are running. As more and more cores are integrated into these designs to share the ever-increasing processing load, the main challenges lie in an efficient memory hierarchy, a scalable system interconnect, new programming paradigms, and an efficient integration methodology for connecting heterogeneous cores into a single system capable of leveraging their individual flexibility. Current design methods tend toward mixed HW/SW co-designs targeting multicore systems-on-chip for specific applications. To decide on the lowest-cost mix of cores, designers must iteratively map the device's functionality to a particular HW/SW partition and target architecture. In addition, to connect the heterogeneous cores, the architecture requires high-performance, complex communication structures and efficient communication protocols, such as hierarchical buses, point-to-point connections, or a Network-on-Chip. Software development also becomes far more complex, owing to the difficulty of breaking a single processing task into multiple parts that can be processed separately and then reassembled; certain jobs cannot easily be parallelized to run concurrently on multiple cores, and load balancing between cores, especially heterogeneous ones, is very difficult.
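To make the power argument above concrete, here is a tiny illustrative calculation assuming the standard dynamic-power model P = C·V²·f; the capacitance, frequency and voltage values are made-up numbers for the example, not figures from the book.

```python
# Illustrative only: dynamic power P = C * V^2 * f with assumed numbers.
# Splitting a workload across two cores lets each run at half the frequency,
# which in turn permits a lower supply voltage on each core.

C = 1e-9          # effective switched capacitance (F), assumed
f_single = 2.0e9  # one core at 2 GHz
V_single = 1.2    # volts

f_dual = 1.0e9    # each of two cores at 1 GHz
V_dual = 0.9      # assumed sufficient at the lower frequency

p_single = C * V_single**2 * f_single
p_dual = 2 * C * V_dual**2 * f_dual   # two cores, each at reduced V and f

print(f"single core: {p_single:.2f} W")
print(f"two cores:   {p_dual:.2f} W")
print(f"saving:      {100 * (1 - p_dual / p_single):.0f}%")
```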

SystemVerilog Assertions and Functional Coverage - Guide to Language, Methodology and Applications (Hardcover, 2014 ed.)
Ashok B. Mehta
R5,894 Discovery Miles 58 940 Ships in 10 - 15 working days

This book provides a hands-on, application-oriented guide to the language and methodology of both SystemVerilog Assertions and SystemVerilog Functional Coverage. Readers will benefit from the step-by-step approach to functional hardware verification, which will enable them to uncover hidden and hard-to-find bugs, point directly to the source of a bug, model complex timing checks in a clean and easy way, and objectively answer the question 'have we functionally verified everything?'. Written by a professional end-user of both SystemVerilog Assertions and SystemVerilog Functional Coverage, this book explains each concept with easy-to-understand examples, simulation logs and applications derived from real projects. Readers will be empowered to tackle the modeling of complex checkers for functional verification, thereby drastically reducing their design and debug time.

Domain Decomposition Methods in Science and Engineering XX (Hardcover, 2013 ed.)
Randolph Bank, Michael Holst, Olof Widlund, Jinchao Xu
R4,650 Discovery Miles 46 500 Ships in 10 - 15 working days

These are the proceedings of the 20th international conference on domain decomposition methods in science and engineering. Domain decomposition methods are iterative methods for solving the often very large linear or nonlinear systems of algebraic equations that arise when various problems in continuum mechanics are discretized using finite elements. They are designed for massively parallel computers and take the memory hierarchy of such systems into account, which is essential for approaching peak floating-point performance. There is an increasingly well-developed theory which is having a direct impact on the development and improvement of these algorithms.
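As a toy illustration of the iterative idea, not an example taken from the proceedings, the sketch below applies a two-subdomain multiplicative Schwarz iteration to a 1-D Poisson problem discretized with finite differences; the grid size, overlap and tolerance are arbitrary choices for the example.

```python
import numpy as np

# -u'' = 1 on (0,1), u(0) = u(1) = 0, second-order finite differences.
n = 99
h = 1.0 / (n + 1)
A = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h**2          # dense here for brevity
b = np.ones(n)

# Two overlapping subdomains; the overlap is what makes the iteration converge.
subdomains = [np.arange(0, 60), np.arange(40, n)]

u = np.zeros(n)
for it in range(200):
    for dom in subdomains:
        r = b - A @ u                                # current global residual
        # Solve the local problem on the subdomain and apply the correction.
        u[dom] += np.linalg.solve(A[np.ix_(dom, dom)], r[dom])
    if np.linalg.norm(b - A @ u) < 1e-8 * np.linalg.norm(b):
        break

print(f"converged in {it + 1} sweeps, residual {np.linalg.norm(b - A @ u):.2e}")
```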

Trust Networks for Recommender Systems (Paperback, 2011 ed.)
Patricia Victor, Chris Cornelis, Martine de Cock
R1,532 Discovery Miles 15 320 Ships in 10 - 15 working days

This book describes research on trust/distrust propagation and aggregation and their use in recommender systems, a hot research topic with important implications for various application areas. The main innovative contributions of the work are:
- a new bilattice-based model for trust and distrust, allowing for ignorance and inconsistency;
- proposals for various propagation and aggregation operators, including an analysis of their mathematical properties;
- evaluation of these operators on real data, including a discussion of the data sets and their characteristics;
- a novel approach for identifying controversial items in a recommender system;
- an analysis of the utility of including distrust in recommender systems;
- various approaches for trust-based recommendations (among others, based on collaborative filtering), an in-depth experimental analysis, and a proposal for a hybrid approach;
- an analysis of various user types in recommender systems to optimize bootstrapping of cold-start users.
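As a deliberately simplified illustration of what propagation and aggregation can look like, the sketch below propagates trust scores in [0, 1] along paths by multiplication and aggregates paths by averaging; the book's bilattice-based operators additionally model distrust, ignorance and inconsistency, which this toy version omits.

```python
# Simplified sketch: trust in [0, 1], propagated along a path by multiplication
# and aggregated over several paths by averaging.  Not the book's operators.

def propagate(path_scores):
    """Trust along one path a -> ... -> z: weaken with every hop."""
    result = 1.0
    for s in path_scores:
        result *= s
    return result

def aggregate(path_trusts):
    """Combine the trust estimates obtained via several independent paths."""
    return sum(path_trusts) / len(path_trusts)

# Alice trusts Bob 0.9, Bob trusts Carol 0.8; Alice also trusts Dave 0.6,
# Dave trusts Carol 0.9.  Estimated trust of Alice in Carol:
paths = [propagate([0.9, 0.8]), propagate([0.6, 0.9])]
print(f"Alice -> Carol trust estimate: {aggregate(paths):.2f}")  # (0.72 + 0.54) / 2
```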

Algorithms, Software and Hardware of Parallel Computers (Paperback, Softcover reprint of the original 1st ed. 1984)
J. Miklosko; Contributions by J Chudik; Edited by V J Kotov; Contributions by G. David, V E Kotov, …
R1,605 Discovery Miles 16 050 Ships in 10 - 15 working days

Both the algorithms and the software and hardware of automatic computers have gone through rapid development in the past 35 years. The dominant factor in this development was the advance in computer technology: computer parameters were systematically improved through electron tubes, transistors and integrated circuits of ever-increasing integration density, which also influenced the development of new algorithms and programming methods. Some years ago the situation was that no further enhancement of computer performance could be achieved by increasing the speed of the logical elements, owing to the physical barrier of the maximum transfer speed of electric signals. Further enhancement has instead been achieved through parallelism, which, by a suitable organization of n processors, makes a performance increase of up to n times possible. Research into parallel computation has been carried out for several years in many countries, and many results of fundamental importance have been obtained. Many parallel computers have been designed and their algorithmic and programming systems built. Such computers include ILLIAC IV, DAP, STARAN, OMEN, STAR-100, TEXAS INSTRUMENTS ASC, CRAY-1, C.mmp, Cm*, CLIP-3 and PEPE. This trend is supported by the fact that: (a) many algorithms and programs are highly parallel in their structure, (b) the new LSI and VLSI technologies have allowed processors to be combined into large parallel structures, and (c) ever greater demands for speed and reliability of computers are being made.

Design Technologies for Green and Sustainable Computing Systems (Hardcover, 2014 ed.)
Partha Pratim Pande, Amlan Ganguly, Krishnendu Chakrabarty
R2,976 Discovery Miles 29 760 Ships in 10 - 15 working days

This book provides a comprehensive guide to the design of sustainable and green computing systems (GSC). Coverage includes important breakthroughs in various aspects of GSC, including multi-core architectures, interconnection technology, data centers, high performance computing (HPC), and sensor networks. The authors address the challenges of power efficiency and sustainability in various contexts, including system design, computer architecture, programming languages, compilers and networking.

The Engineering of Complex Real-Time Computer Control Systems (Paperback, Softcover reprint of the original 1st ed. 1996)
George W. Irwin
R2,894 Discovery Miles 28 940 Ships in 10 - 15 working days

The Engineering of Complex Real-Time Computer Control Systems brings together in one place important contributions and up-to-date research results in this area, and serves as an excellent reference, providing insight into some of the most important research issues in the field.

Formal Techniques for Networked and Distributed Systems - FORTE 2001 (Paperback, Softcover reprint of the original 1st ed. 2002)
Myungchul Kim, Byoungmoon Chin, Sungwon Kang, Danhyung Lee
R5,815 Discovery Miles 58 150 Ships in 10 - 15 working days

FORTE 2001, formerly the FORTE/PSTV conference, combines the FORTE (Formal Description Techniques for Distributed Systems and Communication Protocols) and PSTV (Protocol Specification, Testing and Verification) conferences. This year the conference takes the new name FORTE (Formal Techniques for Networked and Distributed Systems). The original FORTE began in 1989 and the PSTV conference in 1981, so the new FORTE conference actually has a long history of 21 years. The purpose of the conference is to introduce theories and formal techniques applicable to the various engineering stages of networked and distributed systems, and to share applications of and experiences with them. The FORTE 2001 proceedings contain 24 refereed papers and 4 invited papers on these subjects. We regret that many good submissions could not be published in this volume for lack of space. FORTE 2001 was organized under the auspices of IFIP WG 6.1 by Information and Communications University of Korea, and was financially supported by the Ministry of Information and Communication of Korea. We would like to thank every author who submitted a paper to FORTE 2001, as well as the reviewers who generously spent their time on reviewing. Special thanks are due to the reviewers who kindly conducted additional reviews to keep the review process rigorous within a very short time frame. We would also like to thank Prof. Guy Leduc, the chairman of IFIP WG 6.1, who made valuable suggestions and shared his experience of conference organization.

Shared-Memory Synchronization (Paperback)
Michael L. Scott
R1,424 Discovery Miles 14 240 Ships in 10 - 15 working days

From driving, flying, and swimming, to digging for unknown objects in space exploration, autonomous robots take on varied shapes and sizes. In part, autonomous robots are designed to perform tasks that are too dirty, dull, or dangerous for humans. With nontrivial autonomy and volition, they may soon claim their own place in human society. These robots will be our allies as we strive for understanding our natural and man-made environments and build positive synergies around us. Although we may never perfect replication of biological capabilities in robots, we must harness the inevitable emergence of robots that synchronizes with our own capacities to live, learn, and grow. This book is a snapshot of motivations and methodologies for our collective attempts to transform our lives and enable us to cohabit with robots that work with and for us. It reviews and guides the reader to seminal and continual developments that are the foundations for successful paradigms. It attempts to demystify the abilities and limitations of robots. It is a progress report on the continuing work that will fuel future endeavors. Table of Contents: Part I: Preliminaries/Agency, Motion, and Anatomy/Behaviors / Architectures / Affect/Sensors / Manipulators/Part II: Mobility/Potential Fields/Roadmaps / Reactive Navigation / Multi-Robot Mapping: Brick and Mortar Strategy / Part III: State of the Art / Multi-Robotics Phenomena / Human-Robot Interaction / Fuzzy Control / Decision Theory and Game Theory / Part IV: On the Horizon / Applications: Macro and Micro Robots / References / Author Biography / Discussion

Dependence Analysis for Supercomputing (Paperback, Softcover reprint of the original 1st ed. 1988)
Utpal Banerjee
R1,517 Discovery Miles 15 170 Ships in 10 - 15 working days

This book is on dependence concepts and general methods for dependence testing. Here, dependence means data dependence and the tests are compile-time tests. We felt the time was ripe to create a solid theory of the subject, to provide the research community with a uniform conceptual framework in which things fit together nicely. How successful we have been in meeting these goals, of course, remains to be seen. We do not try to include all the minute details that are known, nor do we deal with clever tricks that all good programmers would want to use. We do try to convince the reader that there is a mathematical basis consisting of theories of bounds of linear functions and linear diophantine equations, that levels and direction vectors are concepts that arise rather naturally, that different dependence tests are really special cases of some general tests, and so on. Some mathematical maturity is needed for a good understanding of the book: mainly calculus and linear algebra. We have covered diophantine equations rather thoroughly and given a description of some matrix theory ideas that are not very widely known. A reader familiar with linear programming would quickly recognize several concepts. We have learned a great deal from the works of M. Wolfe, and of K. Kennedy and R. Allen. Wolfe's Ph.D. thesis at the University of Illinois and Kennedy & Allen's paper on vectorization of Fortran programs are still very useful sources on this subject.
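As a concrete example of the kind of compile-time test built on linear diophantine equations, here is a minimal sketch of the classical GCD dependence test for two affine references A[a*i + b] and A[c*j + d]; the function name and interface are ours, chosen for the illustration.

```python
from math import gcd

def gcd_test_may_depend(a: int, b: int, c: int, d: int) -> bool:
    """Classical GCD test: the references A[a*i + b] and A[c*j + d] can touch
    the same element only if a*i - c*j = d - b has an integer solution, i.e.
    only if gcd(a, c) divides (d - b).  The test is conservative: True means a
    dependence *may* exist, False rules it out."""
    g = gcd(a, c)
    if g == 0:                      # both coefficients zero: compare constants
        return b == d
    return (d - b) % g == 0

# A[2*i] written, A[2*j + 1] read: different parity, no dependence possible.
print(gcd_test_may_depend(2, 0, 2, 1))   # False
# A[4*i + 1] written, A[2*j + 3] read: gcd(4, 2) = 2 divides 2, may depend.
print(gcd_test_may_depend(4, 1, 2, 3))   # True
```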

Advances in Network and Distributed Systems Security - IFIP TC11 WG11.4 First Annual Working Conference on Network Security November 26-27, 2001, Leuven, Belgium (Paperback, Softcover reprint of the original 1st ed. 2002)
Bart De Decker, Frank Piessens, Jan Smits, Els Van Herreweghen
R4,461 Discovery Miles 44 610 Ships in 10 - 15 working days

The first Annual Working Conference of WG11.4 of the International Federation for Information Processing (IFIP) focuses on various state-of-the-art concepts in the field of Network and Distributed Systems Security. Our society is rapidly evolving and irreversibly set on a course governed by electronic interactions. We have seen the birth of e-mail in the early seventies, and are now facing new challenging applications such as e-commerce, e-government, ... The more our society relies on electronic forms of communication, the more the security of these communication networks is essential for its well-functioning. As a consequence, research on methods and techniques to improve network security is of paramount importance. This Working Conference brings together researchers and practitioners of various disciplines, organisations and countries, to discuss the latest developments in security protocols, secure software engineering, mobile agent security, e-commerce security and security for distributed computing. We are also pleased to have attracted two international speakers to present two case studies, one dealing with Belgium's intention to replace the identity card of its citizens by an electronic version, and the other discussing the implications of security certification in a multinational corporation. This Working Conference should also be considered as the kick-off activity of WG11.4, the aims of which can be summarized as follows: to promote research on technical measures for securing computer networks, including both hardware- and software-based techniques; to promote dissemination of research results in the field of network security in real-life networks in industry, academia and administrative institutions; and to promote education in the application of security techniques, and general awareness about security problems in the broad field of information technology. Researchers and practitioners who want to get involved in this Working Group are kindly requested to contact the chairman. More information on the workings of WG11.4 is available from the official IFIP website: http://www.ifip.at.org/. Finally, we wish to express our gratitude to all those who have contributed to this conference in one way or another. We are grateful to the international referee board who reviewed all the papers and to the authors and invited speakers, whose contributions were essential to the success of the conference. We would also like to thank the participants, whose presence and interest, together with the changing imperatives of society, will prove a driving force for future conferences to come.

Reversible Computation - 5th International Conference, RC 2013, Victoria, BC, Canada, July 4-5, 2013. Proceedings (Paperback, 2013 ed.)
Gerhard W. Dueck, D.Michael Miller
R1,990 Discovery Miles 19 900 Ships in 10 - 15 working days

This book constitutes the refereed proceedings of the 5th International Conference on Reversible Computation, RC 2013, held in Victoria, BC, Canada, in July 2013. The 19 contributions presented together with one invited paper were carefully reviewed and selected from 37 submissions. The papers are organized in topical sections on physical implementation; arithmetic; programming and data structures; modelling; synthesis and optimization; and alternative technologies.

High Performance Computing for Computational Science - VECPAR 2012 - 10th International Conference, Kobe, Japan, July 17-20, 2012, Revised Selected Papers (Paperback, 2013 ed.)
Michel Dayde, Osni Marques, Kengo Nakajima
R1,619 Discovery Miles 16 190 Ships in 10 - 15 working days

This book constitutes the thoroughly refereed post-conference proceedings of the 10th International Conference on High Performance Computing for Computational Science, VECPAR 2012, held in Kobe, Japan, in July 2012. The 28 papers presented together with 7 invited talks were carefully selected during two rounds of reviewing and revision. The papers are organized in topical sections on CPU computing, applications, the finite element method from various viewpoints, cloud and visualization performance, methods and tools for advanced scientific computing, algorithms and data analysis, and parallel iterative solvers on multicore architectures.

Scientific Computing on Supercomputers III (Paperback, Softcover reprint of the original 1st ed. 1992)
J. T Devreese, P. E Van Camp
R4,464 Discovery Miles 44 640 Ships in 10 - 15 working days

The International Workshop on "The Use of Supercomputers in Theoretical Science" took place on January 24 and 25, 1991, at the University of Antwerp (UIA), Antwerpen, Belgium. It was the sixth in a series of workshops, the first of which took place in 1984. The principal aim of these workshops is to present the state of the art in scientific large-scale and high-speed computation. Computational science has developed into a third methodology, now equally important as its theoretical and experimental companions. Gradually academic researchers acquired access to a variety of supercomputers, and as a consequence computational science has become a major tool for their work. It is a pleasure to thank the Belgian National Science Foundation (NFWO-FNRS) and the Ministry of Scientific Affairs for sponsoring the workshop. It was organized both in the framework of the Third Cycle "Vectorization, Parallel Processing and Supercomputers" and the "Governmental Program in Information Technology." We also very much would like to thank the University of Antwerp (Universitaire Instelling Antwerpen - UIA) for financial and material support. Special thanks are due to Mrs. H. Evans for the typing and editing of the manuscripts and for the preparation of the author and subject indexes. J.T. Devreese, P.E. Van Camp, University of Antwerp, July 1991

New Developments in Distributed Applications and Interoperable Systems - IFIP TC6 / WG6.1 Third International Working Conference on Distributed Applications and Interoperable Systems September 17-19, 2001, Krakow, Poland (Paperback, Softcover reprint of the original 1st ed. 2002)
Zielinski, Kurt Geihs, Aleksander Laurentowski
R5,774 Discovery Miles 57 740 Ships in 10 - 15 working days

Distributed applications are a necessity in most central application sectors of the contemporary information society, including e-commerce, e-banking, e-learning, e-health, telecommunication and transportation. This results from a tremendous growth of the role that the Internet plays in business, administration and our everyday activities, a trend that will expand even further with advances in broadband wireless communication. New Developments in Distributed Applications and Interoperable Systems focuses on the techniques available or under development with the goal of easing the burden of constructing reliable and maintainable interoperable information systems that provide services in the global communicating environment. The topics covered in this book include:
* context-aware applications;
* integration and interoperability of distributed systems;
* software architectures and services for open distributed systems;
* management, security and quality-of-service issues in distributed systems;
* software agents and mobility;
* the Internet and other related problem areas.
The book contains the proceedings of the Third International Working Conference on Distributed Applications and Interoperable Systems (DAIS'2001), which was held in September 2001 in Krakow, Poland, and sponsored by the International Federation for Information Processing (IFIP). The conference program presents the state of the art in research concerning distributed and interoperable systems, a topical research area where much activity is currently in progress. Interesting new aspects and innovative contributions are still arising regularly. The DAIS series of conferences is one of the main international forums where these important findings are reported.

Solutions on Embedded Systems (Paperback, 2011 ed.)
Massimo Conti, Simone Orcioni, Natividad Martinez Madrid, Ralf E. D. Seepold
R2,967 Discovery Miles 29 670 Ships in 10 - 15 working days

Embedded systems have an increasing importance in our everyday lives. The growing complexity of embedded systems and the emerging trend to interconnections between them lead to new challenges. Intelligent solutions are necessary to overcome these challenges and to provide reliable and secure systems to the customer under a strict time and financial budget. Solutions on Embedded Systems documents results of several innovative approaches that provide intelligent solutions in embedded systems. The objective is to present mature approaches, to provide detailed information on the implementation and to discuss the results obtained.

Resilient Architecture Design for Voltage Variation (Paperback)
Vijay Janapa Reddi, Meeta Sharma Gupta
R1,087 Discovery Miles 10 870 Ships in 10 - 15 working days

Shrinking feature size and diminishing supply voltage are making circuits sensitive to supply voltage fluctuations within the microprocessor, caused by normal workload activity changes. If left unattended, voltage fluctuations can lead to timing violations or even transistor lifetime issues that degrade processor robustness. Mechanisms that learn to tolerate, avoid, and eliminate voltage fluctuations based on program and microarchitectural events can help steer the processor clear of danger, thus enabling tighter voltage margins that improve performance or lower power consumption. We describe the problem of voltage variation and the factors that influence this variation during processor design and operation. We also describe a variety of runtime hardware and software mitigation techniques that tolerate, avoid, and/or eliminate voltage violations. We hope processor architects will find the information useful, since tolerance, avoidance, and elimination are generalizable constructs that can serve as a basis for addressing other reliability challenges as well. Table of Contents: Introduction / Modeling Voltage Variation / Understanding the Characteristics of Voltage Variation / Traditional Solutions and Emerging Solution Forecast / Allowing and Tolerating Voltage Emergencies / Predicting and Avoiding Voltage Emergencies / Eliminating Recurring Voltage Emergencies / Future Directions on Resiliency
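Purely as an illustration of the "tolerate and avoid" idea, here is a hypothetical reactive control loop sketched in Python; the sensor and actuator hooks (sense_margin, throttle, restore_checkpoint, run_at_full_speed) and the threshold values are invented for the example and do not correspond to any mechanism described in the book.

```python
EMERGENCY_MV = 0    # margin exhausted: timing may already have been violated
WARNING_MV = 30     # soft threshold at which activity is throttled

def control_step(sense_margin, throttle, restore_checkpoint, run_at_full_speed):
    """One step of a hypothetical reactive voltage-emergency policy."""
    margin_mv = sense_margin()
    if margin_mv <= EMERGENCY_MV:
        # Too late to avoid the droop: roll back to a safe checkpoint, then
        # re-execute at a reduced issue rate so current demand stays low.
        restore_checkpoint()
        throttle()
    elif margin_mv <= WARNING_MV:
        # Droop imminent: slow the core down before the margin is violated.
        throttle()
    else:
        run_at_full_speed()

# Example with stubbed hooks:
control_step(lambda: 25,
             lambda: print("throttling"),
             lambda: print("rolling back"),
             lambda: print("full speed"))
```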

Foundations of Dependable Computing - Paradigms for Dependable Applications (Paperback, Softcover reprint of the original 1st ed. 1994)
Gary M. Koob, Clifford G. Lau
R4,465 Discovery Miles 44 650 Ships in 10 - 15 working days

Foundations of Dependable Computing: Paradigms for Dependable Applications presents a variety of specific approaches to achieving dependability at the application level. Driven by the higher-level fault models of Models and Frameworks for Dependable Systems, and built on the lower-level abstractions implemented in a third companion book subtitled System Implementation, these approaches demonstrate how dependability may be tuned to the requirements of an application, the fault environment, and the characteristics of the target platform. Three classes of paradigms are considered: protocol-based paradigms for distributed applications, algorithm-based paradigms for parallel applications, and approaches to exploiting application semantics in embedded real-time control systems. The companion volume subtitled Models and Frameworks for Dependable Systems presents two comprehensive frameworks for reasoning about system dependability, thereby establishing a context for understanding the roles played by the specific approaches presented in this book's two companion volumes. It then explores the range of models and analysis methods necessary to design, validate and analyze dependable systems. Another companion book (published by Kluwer), subtitled System Implementation, explores the system infrastructure needed to support the various paradigms of Paradigms for Dependable Applications. Approaches to implementing support mechanisms and to incorporating additional appropriate levels of fault detection and fault tolerance at the processor, network, and operating system level are presented. A primary concern at these levels is balancing cost and performance against coverage and overall dependability. As these chapters demonstrate, low-overhead, practical solutions are attainable and not necessarily incompatible with performance considerations. The section on innovative compiler support, in particular, demonstrates how the benefits of application specificity may be obtained while reducing hardware cost and run-time overhead.

Infrastructure for Electronic Business on the Internet (Paperback, Softcover reprint of the original 1st ed. 2001)
Veljko Milutinovic
R4,537 Discovery Miles 45 370 Ships in 10 - 15 working days

Design is an art form in which the designer selects from a myriad of alternatives to bring an "optimum" choice to a user. In many complex systems the notion of "optimum" is difficult to define. Indeed, the users themselves will not agree, so the "best" system is simply the one in which the designer and the user have a congruent viewpoint. Compounding the design problem are tradeoffs that span a variety of technologies and user requirements. The electronic business system is a classically complex system whose tradeoff criteria and user views are constantly changing with rapidly developing underlying technology. Professor Milutinovic has chosen this area for his capstone contribution to computer systems design. This book completes his trilogy on design issues in computer systems. His first work, "Surviving the Design of a 200 MHz RISC Microprocessor" (1997), focused on the tradeoffs and design issues within a processor. His second work, "Surviving the Design of Microprocessor and Multiprocessor Systems" (2000), considers the design issues involved in assembling a number of processors into a coherent system. Finally, this book generalizes the system design problem to electronic commerce on the Internet, a global system of immense consequence.
