This state-of-the-art survey features topics related to the impact of multicore, manycore, and coprocessor technologies in science and large-scale applications in an interdisciplinary environment. The papers included in this survey cover research in mathematical modeling, design of parallel algorithms, aspects of microprocessor architecture, parallel programming languages, hardware-aware computing, heterogeneous platforms, manycore technologies, performance tuning, and requirements for large-scale applications. The contributions presented in this volume are an outcome of an inspiring conference conceived and organized by the editors at the University of Applied Sciences (HfT) in Stuttgart, Germany, in September 2012. The 10 revised full papers selected from 21 submissions are presented together with 12 poster abstracts and focus on the combination of new aspects of microprocessor technologies, parallel applications, numerical simulation, and software development; thus they clearly show the potential of emerging technologies in the area of multicore and manycore processors that are paving the way towards personal supercomputing and very likely towards exascale computing.
Formal Methods for Open Object-Based Distributed Systems IV presents the leading edge in the fields of object-oriented programming, open distributed systems, and formal methods for object-oriented systems. With increased support within industry regarding these areas, this book captures the most up-to-date information on the subject. Papers in this volume focus on the following specific technologies: * components; * mobile code; * Java(R); * The Unified Modeling Language (UML); * refinement of specifications; * types and subtyping; * temporal and probabilistic systems. This volume comprises the proceedings of the Fourth International Workshop on Formal Methods for Open Object-Based Distributed Systems (FMOODS 2000), which was sponsored by the International Federation for Information Processing (IFIP) and held in Stanford, California, USA, in September 2000.
FGCS'88 was held with the objective of reporting on the current status and results of the research activities conducted by the Institute for New Generation Computer Technology (ICOT), Japan, in the intermediate stage of its FGCS projects, as well as encouraging researchers and representatives from business and the government to present papers, report research results and exchange opinions. The proceedings are published in three volumes. Volume 1 includes the plenary sessions and reports on ICOT research topics. Volumes 2 and 3 include a collection of invited and submitted technical papers in the areas of foundations, software, architecture and applications.
This book constitutes the thoroughly refereed proceedings of the 2011 ICSOC Workshops consisting of 5 scientific satellite events, organized in 4 tracks: workshop track (WESOA 2011; NFPSLAM-SOC 2011), PhD symposium track, demonstration track, and industry track; held in conjunction with the 2011 International Conference on Service-Oriented Computing (ICSOC), in Paphos, Greece, December 2011. The 39 revised papers presented together with 2 introductory descriptions address topics such as software engineering services; the management of service level agreements; Web services and service composition; general or domain-specific challenges of service-oriented computing and its transition towards cloud computing; architecture and modeling of services; workflow management; performance analysis as well as crowdsourcing for improving service processes and for knowledge discovery.
Foundations of Dependable Computing: System Implementation explores the system infrastructure needed to support the various paradigms of Paradigms for Dependable Applications. Approaches to implementing support mechanisms and to incorporating additional appropriate levels of fault detection and fault tolerance at the processor, network, and operating system level are presented. A primary concern at these levels is balancing cost and performance against coverage and overall dependability. As these chapters demonstrate, low overhead, practical solutions are attainable and not necessarily incompatible with performance considerations. The section on innovative compiler support, in particular, demonstrates how the benefits of application specificity may be obtained while reducing hardware cost and run-time overhead. A companion to this volume (published by Kluwer) subtitled Models and Frameworks for Dependable Systems presents two comprehensive frameworks for reasoning about system dependability, thereby establishing a context for understanding the roles played by specific approaches presented in this book's two companion volumes. It then explores the range of models and analysis methods necessary to design, validate and analyze dependable systems. Another companion to this book (published by Kluwer), subtitled Paradigms for Dependable Applications, presents a variety of specific approaches to achieving dependability at the application level. Driven by the higher level fault models of Models and Frameworks for Dependable Systems, and built on the lower level abstractions implemented in this book, these approaches demonstrate how dependability may be tuned to the requirements of an application, the fault environment, and the characteristics of the target platform. Three classes of paradigms are considered: protocol-based paradigms for distributed applications, algorithm-based paradigms for parallel applications, and approaches to exploiting application semantics in embedded real-time control systems.
The role of arithmetic in datapath design in VLSI has been increasing in importance over the last several years due to the demand for processors that are smaller, faster, and dissipate less power. Unfortunately, this means that many of these datapaths will be complex both algorithmically and circuit-wise. As the complexity of the chips increases, less importance will be placed on understanding how a particular arithmetic datapath design is implemented and more importance will be given to when a product will be placed on the market. This is because many tools that are available today are automated to help the digital system designer maximize their efficiency. Unfortunately, this may lead to problems when implementing particular datapaths. The design of high-performance architectures is becoming more complicated because the level of integration achievable on many of these chips is in the billions of transistors. Many engineers rely heavily on software tools to optimize their work; therefore, as designs grow more complex, less understanding goes into a particular implementation because it can be generated automatically. Although software tools are a highly valuable asset to the designer, their value does not diminish the importance of understanding datapath elements. Therefore, a digital system designer should be aware of how algorithms can be implemented for datapath elements. Unfortunately, due to the complexity of some of these algorithms, it is sometimes difficult to understand how a particular algorithm is implemented without seeing the actual code.
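By way of illustration only (this sketch is not taken from the book, and the function name and 8-bit width are arbitrary choices), the code below writes out one of the simplest arithmetic datapath elements, a ripple-carry adder, as an explicit algorithm:

```python
# Hypothetical illustration: a bit-level ripple-carry adder, one of the simplest
# datapath elements a designer might study before trusting a tool to generate it.

def ripple_carry_add(a: int, b: int, width: int = 8) -> tuple[int, int]:
    """Add two unsigned `width`-bit integers bit by bit, returning (sum, carry_out)."""
    carry = 0
    result = 0
    for i in range(width):
        ai = (a >> i) & 1
        bi = (b >> i) & 1
        s = ai ^ bi ^ carry                       # sum bit of a full adder
        carry = (ai & bi) | (carry & (ai ^ bi))   # carry-out of a full adder
        result |= s << i
    return result, carry

total, carry_out = ripple_carry_add(200, 100, width=8)
print(total, carry_out)   # 44 1 -> (200 + 100) mod 256, with the carry-out set
```

Seeing the carry chain spelled out this way also makes it clear why ripple-carry adders are compact but slow compared with carry-lookahead designs, which is exactly the kind of trade-off the book argues designers should still understand.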
This book introduces the fundamental concepts and practical simulation techniques for modeling different aspects of operating systems to study their general behavior and their performance. The approaches applied are object-oriented modeling and the process-interaction approach to discrete-event simulation. The book depends on the basic modeling concepts and is more specialized than my previous book, Practical Process Simulation with Object-Oriented Techniques and C++, published by Artech House, Boston, 1999. For a more detailed description see the Web location: http://science.kennesaw.edu/jgarrido/mybook.html. Most other books on performance modeling use only analytical approaches, and very few apply these concepts to the study of operating systems. Thus, the unique feature of the book is that it concentrates on design aspects of operating systems using practical simulation techniques. In addition, the book illustrates the dynamic behavior of different aspects of operating systems using the various simulation models, with a general hands-on approach.
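As a hedged illustration of the discrete-event style the book works in (this sketch is mine, not the book's, and the exponential arrival and service parameters are invented), a minimal simulation of jobs queueing for a single CPU might look like this:

```python
# A toy discrete-event simulation: jobs arrive at a single CPU and wait if it is
# busy; we estimate the average waiting time by processing events in time order.

import heapq
import random

def simulate_cpu(n_jobs=1000, mean_interarrival=5.0, mean_service=4.0, seed=1):
    """Average time jobs spend waiting for a single simulated CPU."""
    random.seed(seed)
    events, t = [], 0.0
    for job in range(n_jobs):                      # pre-generate arrival events
        t += random.expovariate(1.0 / mean_interarrival)
        heapq.heappush(events, (t, job))
    cpu_free_at, total_wait = 0.0, 0.0
    while events:                                  # handle events in time order
        arrival, job = heapq.heappop(events)
        start = max(arrival, cpu_free_at)          # wait while the CPU is busy
        total_wait += start - arrival
        cpu_free_at = start + random.expovariate(1.0 / mean_service)
    return total_wait / n_jobs

print(f"average wait: {simulate_cpu():.2f} time units")
```

The book's models are richer (processes, schedulers, devices), but the same event-list mechanism underlies them.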
In Symbolic Analysis for Parallelizing Compilers the author presents an excellent demonstration of the effectiveness of symbolic analysis in tackling important optimization problems, some of which inhibit loop parallelization. The framework that Haghighat presents has proved extremely successful in induction and wraparound variable analysis, strength reduction, dead code elimination and symbolic constant propagation. The approach can be applied to any program transformation or optimization problem that uses properties and value ranges of program names. Symbolic analysis can be used on any transformational system or optimization problem that relies on compile-time information about program variables. This covers the majority of, if not all, optimization and parallelization techniques. The book makes a compelling case for the potential of symbolic analysis, applying it for the first time - and with remarkable results - to a number of classical optimization problems: loop scheduling, static timing or size analysis, and dependence analysis. It demonstrates how symbolic analysis can solve these problems faster and more accurately than existing hybrid techniques.
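For readers unfamiliar with the terminology, the toy example below (mine, not Haghighat's) shows the kind of rewrite that induction-variable analysis and strength reduction enable once a compiler can reason symbolically about a variable's value progression:

```python
# Illustrative only: if analysis proves i is a basic induction variable
# (i = i0 + k*step), a derived value like addr = base + 4*i can be
# strength-reduced from a multiply per iteration to an add per iteration.

def before(base, n):
    out = []
    for i in range(n):
        addr = base + 4 * i          # multiply on every iteration
        out.append(addr)
    return out

def after(base, n):
    out = []
    addr = base                      # addr becomes a derived induction variable
    for _ in range(n):
        out.append(addr)
        addr += 4                    # strength-reduced: add replaces multiply
    return out

assert before(1000, 8) == after(1000, 8)   # same addresses, cheaper loop body
```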
Replication Techniques in Distributed Systems organizes and surveys the spectrum of replication protocols and systems that achieve high availability by replicating entities in failure-prone distributed computing environments. The entities discussed in this book vary from passive untyped data objects, to typed and complex objects, to processes and messages. Replication Techniques in Distributed Systems contains definitions and introductory material suitable for a beginner, theoretical foundations and algorithms, an annotated bibliography of commercial and experimental prototype systems, as well as short guides to recommended further readings in specialized subtopics. This book can be used as recommended or required reading in graduate courses in academia, as well as a handbook for designers and implementors of systems that must deal with replication issues in distributed systems.
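As one concrete flavour of the protocols such a survey covers, here is a hedged sketch of quorum-consensus replication; the class and parameter names are invented for the example, and a real protocol would run over a network with failures rather than in-process lists:

```python
# Toy quorum-consensus replication: with N replicas, choosing quorum sizes such
# that R + W > N guarantees every read quorum overlaps the latest write quorum.

import random

class QuorumStore:
    def __init__(self, n=5, w=3, r=3):
        assert r + w > n, "read and write quorums must intersect"
        self.n, self.w, self.r = n, w, r
        self.replicas = [{"version": 0, "value": None} for _ in range(n)]

    def write(self, value):
        # Learn the latest version from a read quorum, then install the new
        # version on any W replicas.
        latest = max(rep["version"] for rep in random.sample(self.replicas, self.r))
        for rep in random.sample(self.replicas, self.w):
            rep["version"], rep["value"] = latest + 1, value

    def read(self):
        # Any R replicas are guaranteed to include one holding the latest write.
        newest = max(random.sample(self.replicas, self.r), key=lambda rep: rep["version"])
        return newest["value"]

store = QuorumStore()
store.write("x = 1")
store.write("x = 2")
print(store.read())   # always "x = 2", because read and write quorums overlap
```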
Foundations of Dependable Computing: Models and Frameworks for Dependable Systems presents two comprehensive frameworks for reasoning about system dependability, thereby establishing a context for understanding the roles played by specific approaches presented in this book's two companion volumes. It then explores the range of models and analysis methods necessary to design, validate and analyze dependable systems. A companion to this book (published by Kluwer), subtitled Paradigms for Dependable Applications, presents a variety of specific approaches to achieving dependability at the application level. Driven by the higher level fault models of Models and Frameworks for Dependable Systems, and built on the lower level abstractions implemented in a third companion book subtitled System Implementation, these approaches demonstrate how dependability may be tuned to the requirements of an application, the fault environment, and the characteristics of the target platform. Three classes of paradigms are considered: protocol-based paradigms for distributed applications, algorithm-based paradigms for parallel applications, and approaches to exploiting application semantics in embedded real-time control systems. Another companion book (published by Kluwer), subtitled System Implementation, explores the system infrastructure needed to support the various paradigms of Paradigms for Dependable Applications. Approaches to implementing support mechanisms and to incorporating additional appropriate levels of fault detection and fault tolerance at the processor, network, and operating system level are presented. A primary concern at these levels is balancing cost and performance against coverage and overall dependability. As these chapters demonstrate, low overhead, practical solutions are attainable and not necessarily incompatible with performance considerations. The section on innovative compiler support, in particular, demonstrates how the benefits of application specificity may be obtained while reducing hardware cost and run-time overhead.

Formal Description Techniques and Protocol Specification, Testing and Verification addresses formal description techniques (FDTs) applicable to distributed systems and communication protocols. It aims to present the state of the art in theory, application, tools and industrialization of FDTs. Among the important features presented are: FDT-based system and protocol engineering; FDT application to distributed systems; protocol engineering; practical experience and case studies. Formal Description Techniques and Protocol Specification, Testing and Verification comprises the proceedings of the Joint International Conference on Formal Description Techniques for Distributed Systems and Communication Protocols and Protocol Specification, Testing and Verification, sponsored by the International Federation for Information Processing, held in Paris, France, in November 1998. Formal Description Techniques and Protocol Specification, Testing and Verification is suitable as a secondary text for a graduate-level course on Distributed Systems or Communications, and as a reference for researchers and practitioners in industry.
High Performance Computing Systems and Applications contains the fully refereed papers from the 13th Annual Symposium on High Performance Computing, held in Kingston, Canada, in June 1999. This book presents the latest research in HPC architectures, distributed and shared memory performance, algorithms and solvers, with special sessions on atmospheric science, computational chemistry and physics. High Performance Computing Systems and Applications is suitable as a secondary text for graduate level courses, and as a reference for researchers and practitioners in industry.
Supercomputers are the largest and fastest computers available at any point in time. The term was used for the first time in the New York World, March 1920, to describe "new statistical machines with the mental power of 100 skilled mathematicians in solving even highly complex algebraic problems." Invented by Mendenhall and Warren, these machines were used at Columbia University's Statistical Bureau. Recently, supercomputers have been used primarily to solve large-scale problems in science and engineering. Solutions of systems of partial differential equations, such as those found in nuclear physics, meteorology, and computational fluid dynamics, account for the majority of supercomputer use today. The early computers, such as EDVAC, SSEC, 701, and UNIVAC, demonstrated the feasibility of building fast electronic computing machines which could become commercial products. The next generation of computers focused on attaining the highest possible computational speeds. This book discusses the architectural approaches used to yield significantly higher computing speeds while preserving the conventional, von Neumann, machine organization (Chapters 2-4). Subsequent improvements depended on developing a new generation of computers employing a new model of computation: single-instruction, multiple-data (SIMD) processors (Chapters 5-7). Later machines refined SIMD architecture and technology (Chapters 8-9). Supercomputers -- the largest and fastest computers available at any point in time -- have been the products of complex interplay among technological, architectural, and algorithmic developments.
Although integrating security into the design of applications has proven to deliver resilient products, there are few books available that provide guidance on how to incorporate security into the design of an application. Filling this need, Security for Service Oriented Architectures examines both application and security architectures and illustrates the relationship between the two. Supplying authoritative guidance on how to design distributed and resilient applications, the book provides an overview of the various standards that service-oriented and distributed applications leverage, including SOAP, HTML 5, SAML, XML Encryption, XML Signature, WS-Security, and WS-SecureConversation. It examines emerging issues of privacy and discusses how to design applications within a secure context to facilitate the understanding of these technologies that you need in order to make intelligent decisions regarding their design. This complete guide to security for web services and SOA considers the malicious user story of the abuses and attacks against applications as examples of how design flaws and oversights have subverted the goals of providing resilient business functionality. It reviews recent research on access control for simple and conversation-based web services, advanced digital identity management techniques, and access control for web-based workflows. Filled with illustrative examples and analyses of critical issues, this book provides both security and software architects with a bridge between software and service-oriented architectures and security architectures, with the goal of providing a means to develop software architectures that leverage security architectures. It is also a reliable source of reference on Web services standards. Coverage includes the four types of architectures, implementing and securing SOA, Web 2.0, other SOA platforms, auditing SOAs, and defending and detecting attacks.
This is the sixth conference in the series, which started in 1981 in Paris, followed by conferences held in Zurich (1984), Rio de Janeiro (1987), Barcelona (1991), and Raleigh (1993). The main objective of this IFIP conference series is to provide a platform for the exchange of recent and original contributions in communications systems in the areas of performance analysis, architectures, and applications. There are many exciting trends and developments in the communications industry, several of which are related to advances in Asynchronous Transfer Mode (ATM), multimedia services, and high speed protocols. It is commonly believed in the communications industry that ATM represents the next generation of networking. Yet, there are a number of issues that have been worked on in various standards bodies, government and industry research and development labs, and universities towards enabling high speed networks in general and ATM networks in particular. Reflecting these trends, the technical program of the Sixth IFIP W.G. 6.3 Conference on Performance of Computer Networks consists of papers addressing a wide range of technical challenges and proposing various state-of-the-art solutions to a subset of them. The program includes 25 papers selected by the program committee out of 57 papers submitted.
In the past five years, the field of electrostatic discharge (ESD) control has undergone some notable changes. Industry standards have multiplied, though not all of these, in our view, are realistic and meaningful. Increasing importance has been ascribed to the Charged Device Model (CDM) versus the Human Body Model (HBM) as a cause of device damage and, presumably, premature (latent) failure. Packaging materials have significantly evolved. Air ionization techniques have improved, and usage has grown. Finally, and importantly, the government has ceased imposing MIL-STD-1686 on all new contracts, leaving companies on their own to formulate an ESD-control policy and write implementing documents. All these changes are dealt with in five new chapters and ten new reprinted papers added to this revised edition of ESD from A to Z. Also, the original chapters have been augmented with new material such as more troubleshooting examples in Chapter 8 and a 20-question multiple-choice test for certifying operators in Chapter 9. More than ever, the book seeks to provide advice, guidance, and practical examples, not just a jumble of facts and generalizations. For instance, the added tailored versions of the model specifications for ESD-safe handling and packaging are actually in use at medium-sized corporations and could serve as patterns for many readers.
The primary audience for this book is advanced undergraduate students and graduate students. Computer architecture, as happened in other fields such as electronics, evolved from the small to the large; that is, it left the realm of low-level hardware constructs and gained new dimensions as distributed systems became the keyword for system implementation. As such, the system architect today assembles pieces of hardware that are at least as large as a computer, a network router or a LAN hub, and assigns pieces of software that are self-contained, such as client or server programs, Java applets or protocol modules, to those hardware components. The freedom she/he now has is tremendously challenging. The problems, alas, have increased too. What was before mastered and tested carefully before a fully-fledged mainframe or a closely-coupled computer cluster came out on the market is today left to the responsibility of computer engineers and scientists invested in the role of system architects, who fulfil this role on behalf of software vendors and integrators, value-added system developers, R&D institutes, and final users. As system complexity, size and diversity grow, so increases the probability of inconsistency, unreliability, non-responsiveness and insecurity, not to mention the management overhead. What system architects need to know: the insight such an architect must have includes, but goes well beyond, the functional properties of distributed systems.
Most of the papers in this volume were presented at the NATO Advanced Research Workshop High Performance Computing: Technology and Application, held in Cetraro, Italy, from 24 to 26 June 1996. The main purpose of the Workshop was to discuss some key scientific and technological developments in high performance computing, identify significant trends and define desirable research objectives. The volume structure corresponds, in general, to the outline of the workshop technical agenda: general concepts and emerging systems, software technology, algorithms and applications. One of the Workshop innovations was an effort to extend slightly the scope of the meeting from scientific/engineering computing to enterprise-wide computing. The papers on the performance and scalability of database servers and the Oracle DBMS reflect this attempt. We hope that after reading this collection of papers the readers will have a good idea about some important research and technological issues in high performance computing. We wish to give our thanks to the NATO Scientific and Environmental Affairs Division for being the principal sponsor for the Workshop. Also we are pleased to acknowledge other institutions and companies that supported the Workshop: European Union: European Commission DGIII-Industry, CNR: National Research Council of Italy, University of Calabria, Alenia Spazio, Centro Italiano Ricerche Aerospaziali, ENEA: Italian National Agency for New Technology, Energy and the Environment, Fujitsu, Hewlett Packard-Convex, Hitachi, NEC, Oracle, and Silicon Graphics-Cray Research. The Editors, January 1997
IC designers currently weigh MOS transistor geometries and currents to trade off objectives such as gain-bandwidth, slew rate, dynamic range, noise, non-linear distortion, etc. Making optimal choices is a difficult task. How, for instance, to minimize the power consumption of an operational amplifier without too much area penalty while keeping the gain-bandwidth unaffected at the same time? Moderate inversion yields high gains, but the concomitant area increase adds parasitics that restrict bandwidth. Which methodology should be used to find the best compromise(s)? Is synthesis a matter of design experience combined with cut-and-try, a constrained multivariate optimization problem, or a mixture of both? Optimization algorithms are attractive from a system perspective, of course, but what about low-voltage, low-power circuits, which require a more physical approach? The connections between transistor physics and circuits are intricate, and their interactions are not always easy to describe in terms of existing software packages. The gm/ID synthesis methodology is well suited to CMOS analog circuits, for the transconductance-over-drain-current ratio combines most of the ingredients needed to determine transistor sizes and DC currents.
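To make the flow concrete, here is a hedged numerical sketch of gm/ID sizing; the lookup values below are invented placeholders rather than data for any real process, since in practice the ID/(W/L) characteristic is measured or simulated once per technology:

```python
# Sketch of the gm/ID sizing flow: derive gm from the gain-bandwidth requirement,
# choose a gm/ID operating point, then obtain the drain current and device width
# from a normalized-current characteristic ID/(W/L) of the target technology.

import math

# Illustrative gm/ID -> ID/(W/L) table (A per square), strong to weak inversion.
NORMALIZED_CURRENT = {5: 40e-6, 10: 12e-6, 15: 4e-6, 20: 1.2e-6, 25: 0.35e-6}

def size_input_pair(gbw_hz, c_load_f, gm_over_id, length_um=0.5):
    gm = 2 * math.pi * gbw_hz * c_load_f            # required transconductance
    i_d = gm / gm_over_id                           # drain current per device
    w_over_l = i_d / NORMALIZED_CURRENT[gm_over_id]
    return gm, i_d, w_over_l * length_um            # width in um for the chosen L

gm, i_d, width = size_input_pair(gbw_hz=10e6, c_load_f=2e-12, gm_over_id=15)
print(f"gm = {gm * 1e6:.1f} uS, ID = {i_d * 1e6:.1f} uA, W = {width:.1f} um")
```

Moving the gm/ID operating point between strong and weak inversion in such a table is exactly how the methodology trades bandwidth against area and power.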
Instruction-Level Parallelism presents a collection of papers that attempts to capture the most significant work that took place during the 1980s in the area of instruction-level (ILP) parallel processing. The papers in this book discuss both compiler techniques and actual implementation experience on very long instruction word (VLIW) and superscalar architectures.
Computer systems research is heavily influenced by changes in computer technology. As technology changes alter the characteristics of the underlying hardware components of the system, the algorithms used to manage the system need to be re-examined and new techniques need to be developed. Technological influences are particularly evident in the design of storage management systems such as disk storage managers and file systems. The influences have been so pronounced that techniques developed as recently as ten years ago are being made obsolete. The basic problem for disk storage managers is the unbalanced scaling of hardware component technologies. Disk storage manager design depends on the technology for processors, main memory, and magnetic disks. During the 1980s, processors and main memories benefited from the rapid improvements in semiconductor technology and improved by several orders of magnitude in performance and capacity. This improvement has not been matched by disk technology, which is bounded by the mechanics of rotating magnetic media. Magnetic disks of the 1980s have improved by a factor of 10 in capacity but only a factor of 2 in performance. This unbalanced scaling of the hardware components challenges the disk storage manager to compensate for the slower disks and allow performance to scale with the processor and main memory technology. Unless the performance of file systems can be improved over that of the disks, I/O-bound applications will be unable to use the rapid improvements in processor speeds to improve performance for computer users. Disk storage managers must break this bottleneck and decouple application performance from the disk.
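A quick back-of-the-envelope calculation (not from the book; the 100x and 2x factors are illustrative, loosely following the scaling described above) shows why this imbalance matters so much for I/O-bound applications:

```python
# Amdahl-style estimate: if only the CPU portion of the runtime gets the large
# speedup while the disk portion improves by 2x, overall speedup is capped.

def overall_speedup(io_fraction, cpu_speedup=100.0, disk_speedup=2.0):
    cpu_fraction = 1.0 - io_fraction
    new_time = cpu_fraction / cpu_speedup + io_fraction / disk_speedup
    return 1.0 / new_time

for io in (0.1, 0.3, 0.5):
    print(f"{int(io * 100)}% of time on disk I/O -> "
          f"overall speedup {overall_speedup(io):.1f}x")
# 10% -> ~16.9x, 30% -> ~6.4x, 50% -> ~3.9x, despite a 100x faster processor
```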
This book constitutes the refereed proceedings of the 4th International Workshop on Reversible Computation, RC 2012, held in Copenhagen, Denmark, in July 2012. The 19 contributions presented in this volume were carefully reviewed and selected from 46 submissions. The papers cover theoretical considerations, reversible software and reversible hardware, and physical realizations and applications in quantum computing.
Software architectures have gained wide popularity in the last decade. They generally play a fundamental role in coping with the inherent difficulties of the development of large-scale and complex software systems. Component-oriented and aspect-oriented programming enables software engineers to implement complex applications from a set of pre-defined components. Software Architectures and Component Technology collects excellent chapters on software architectures and component technologies from well-known authors, who not only explain the advantages, but also present the shortcomings of the current approaches while introducing novel solutions to overcome the shortcomings. The unique features of this book are: * evaluates the current architecture design methods and component composition techniques and explains their shortcomings; * presents three practical architecture design methods in detail; * gives four industrial architecture design examples; * presents conceptual models for distributed message-based architectures; * explains techniques for refining architectures into components; * presents the recent developments in component and aspect-oriented techniques; * explains the status of research on Piccola, Hyper/J(R), Pluggable Composite Adapters and Composition Filters. Software Architectures and Component Technology is a suitable text for graduate level students in computer science and engineering, and as a reference for researchers and practitioners in industry.
In 1968 the Advanced Research Projects Agency (ARPA) of the U.S. Department of Defense began implementation of a computer communication network which permits the interconnection of heterogeneous computers at geographically distributed centres throughout the United States. This network has come to be known as the ARPANET and has grown from the initial four node configuration in 1969 to almost forty nodes (including satellite nodes in Hawaii, Norway, and London) in late 1973. The major goal of ARPANET is to achieve resource sharing among the network users. The resources to be shared include not only programs, but also unique facilities such as the powerful ILLIAC IV computer and large global weather data bases that are economically feasible when widely shared. The ARPANET employs a distributed store-and-forward packet switching approach that is much better suited for computer communications networks than the more conventional circuit-switching approach. Reasons favouring packet switching include lower cost, higher capacity, greater reliability and minimal delay. All of these factors are discussed in these Proceedings.
Multicore Processors and Systems provides a comprehensive overview of emerging multicore processors and systems. It covers technology trends affecting multicores, multicore architecture innovations, multicore software innovations, and case studies of state-of-the-art commercial multicore systems. A cross-cutting theme of the book is the challenges associated with scaling up multicore systems to hundreds of cores. The book provides an overview of significant developments in the architectures for multicore processors and systems. It includes chapters on fundamental requirements for multicore systems, including processing, memory systems, and interconnect. It also includes several case studies on commercial multicore systems that have recently been developed and deployed across multiple application domains. The architecture chapters focus on innovative multicore execution models as well as infrastructure for multicores, including memory systems and on-chip interconnections. The case studies examine multicore implementations across different application domains, including general purpose, server, media/broadband, network processing, and signal processing. Multicore Processors and Systems is the first book that focuses solely on multicore processors and systems, and in particular on the unique technology implications, architectures, and implementations. The book has contributing authors that are from both the academic and industrial communities.