In brief summary, the following results were presented in this work:
* A linear-time approach was developed to find register requirements for any specified CS schedule or filled MRT.
* An algorithm was developed for finding register requirements for any kernel whose dependence graph is acyclic and has no data reuse, on machines with depth-independent instruction templates.
* We presented an efficient method of estimating register requirements as a function of pipeline depth.
* We developed a technique for efficiently finding bounds on register requirements as a function of pipeline depth.
* We presented experimental data to verify these new techniques.
* We discussed some interesting design points for register file size on a number of different architectures.
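The book's linear-time algorithm itself is not reproduced in this summary. As a rough, hedged illustration of the quantity it computes, the sketch below finds MaxLive, the peak number of simultaneously live values in a modulo-scheduled kernel, from value lifetimes and the initiation interval; the lifetime representation and the parameter `ii` are illustrative assumptions, not the book's notation.

```python
def max_live(lifetimes, ii):
    """Peak number of simultaneously live values in a software-pipelined kernel.

    lifetimes: iterable of (def_slot, last_use_slot) pairs, inclusive.
    ii: initiation interval (kernel length in slots) of the CS schedule.
    This is a toy estimate of register pressure, not the book's algorithm.
    """
    live = [0] * ii
    for start, end in lifetimes:
        for t in range(start, end + 1):   # the value is live in every slot of [start, end]
            live[t % ii] += 1             # fold the lifetime onto the ii-slot kernel
    return max(live)

# Example: three values in a kernel with initiation interval 4 -> MaxLive of 3
print(max_live([(0, 5), (1, 3), (2, 2)], ii=4))
```

A schedule needs at least this many registers to hold its values without spilling, which is why MaxLive-style measures drive the register-file sizing discussion above.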
New to the Second Edition: offers the latest developments in standards activities (JPEG-LS, MPEG-4, MPEG-7, and H.263); provides a comprehensive review of recent activities on multimedia-enhanced processors, multimedia coprocessors, and dedicated processors, including examples from industry. Image and Video Compression Standards: Algorithms and Architectures, Second Edition presents an introduction to the algorithms and architectures that form the underpinnings of the image and video compression standards, including JPEG (compression of still images), H.261 and H.263 (video teleconferencing), and MPEG-1 and MPEG-2 (video storage and broadcasting). The next generation of audiovisual coding standards, such as MPEG-4 and MPEG-7, are also briefly described. In addition, the book covers the MPEG and Dolby AC-3 audio coding standards and emerging techniques for image and video compression, such as those based on wavelets and vector quantization. Image and Video Compression Standards: Algorithms and Architectures, Second Edition emphasizes the foundations of these standards; namely, techniques such as predictive coding, transform-based coding such as the discrete cosine transform (DCT), motion estimation, motion compensation, and entropy coding, as well as how they are applied in the standards. The implementation details of each standard are avoided; however, the book provides all the material necessary to understand the workings of each of the compression standards, including information that can be used by the reader to evaluate the efficiency of various software and hardware implementations conforming to these standards. Particular emphasis is placed on those algorithms and architectures that have been found to be useful in practical software or hardware implementations. Image and Video Compression Standards: Algorithms and Architectures, Second Edition uniquely covers all major standards (JPEG, MPEG-1, MPEG-2, MPEG-4, H.261, H.263) in a simple and tutorial manner, while fully addressing the architectural considerations involved when implementing these standards. As such, it serves as a valuable reference for the graduate student, researcher or engineer. The book is also used frequently as a text for courses on the subject, in both academic and professional settings.
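All of the standards listed above build on the block DCT mentioned in the description. As a self-contained, hedged illustration (not material from the book), the sketch below computes the orthonormal 2D DCT-II of an 8x8 block directly from its definition using NumPy; in the standards this transform is followed by quantization and entropy coding.

```python
import numpy as np

def dct_2d(block):
    """Orthonormal 2D DCT-II of an N x N block, computed from the definition.
    Illustrative only; real codecs use fast, fixed-point approximations."""
    n = block.shape[0]
    scale = np.full(n, np.sqrt(2.0 / n))
    scale[0] = np.sqrt(1.0 / n)
    # basis[u, x] = cos((2x + 1) * u * pi / (2N)); the 2D transform is separable.
    basis = np.cos(np.pi * np.outer(np.arange(n), 2 * np.arange(n) + 1) / (2 * n))
    m = scale[:, None] * basis
    return m @ block @ m.T

block = np.arange(64, dtype=float).reshape(8, 8)   # toy 8x8 "pixel" block
coeffs = dct_2d(block)
print(np.round(coeffs[:2, :2], 2))                 # DC coefficient and low-frequency terms
```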
Application-Driven Architecture Synthesis describes the state of the art of architectural synthesis for complex real-time processing. In order to deal with the stringent timing requirements and the intricacies of complex real-time signal and data processing, target architecture styles and target application domains have been adopted to make the synthesis approach feasible. These approaches are also heavily application-driven, which is illustrated by many realistic demonstrations, used as examples in the book. The focus is on domains where application-specific solutions are attractive, such as significant parts of audio, telecom, instrumentation, speech, robotics, medical and automotive processing, image and video processing, TV, multimedia, radar, and sonar. Application-Driven Architecture Synthesis is of interest both to academics and to senior design engineers and CAD managers in industry. It provides an excellent overview of what capabilities to expect from future practical design tools, and includes an extensive bibliography.
Advances in microelectronic technology have made massively parallel computing a reality and triggered an outburst of research activity in parallel processing architectures and algorithms. Distributed memory multiprocessors - parallel computers that consist of microprocessors connected in a regular topology - are increasingly being used to solve large problems in many application areas. In order to use these computers for a specific application, existing algorithms need to be restructured for the architecture and new algorithms developed. The performance of a computation on a distributed memory multiprocessor is affected by the node and communication architecture, the interconnection network topology, the I/O subsystem, and the parallel algorithm and communication protocols. Each of these parameters is a complex problem, and solutions require an understanding of the interactions among them. This book is based on the papers presented at the NATO Advanced Study Institute held at Bilkent University, Turkey, in July 1991. The book is organized in five parts: Parallel computing structures and communication, Parallel numerical algorithms, Parallel programming, Fault tolerance, and Applications and algorithms.
FORTE 2001, formerly the FORTE/PSTV conference, is a combination of the FORTE (Formal Description Techniques for Distributed Systems and Communication Protocols) and PSTV (Protocol Specification, Testing and Verification) conferences. This year the conference has a new name, FORTE (Formal Techniques for Networked and Distributed Systems). The previous FORTE began in 1989 and the PSTV conference in 1981, so the new FORTE conference actually has a long history of 21 years. The purpose of this conference is to introduce theories and formal techniques applicable to various engineering stages of networked and distributed systems and to share applications of and experience with them. These FORTE 2001 conference proceedings contain 24 refereed papers and 4 invited papers on these subjects. We regret that many good papers submitted could not be published in this volume due to lack of space. FORTE 2001 was organized under the auspices of IFIP WG 6.1 by the Information and Communications University of Korea. It was financially supported by the Ministry of Information and Communication of Korea. We would like to thank every author who submitted a paper to FORTE 2001 and to thank the reviewers, who generously spent their time on reviewing. Special thanks are due to the reviewers who kindly conducted additional reviews within a very short time frame to ensure a rigorous review process. We would like to thank Prof. Guy Leduc, the chairman of IFIP WG 6.1, who made valuable suggestions and shared his experience of conference organization.
This book constitutes the refereed proceedings of the 17th National Conference on Computer Engineering and Technology, NCCET 2013, held in Xining, China, in July 2013. The 26 papers presented were carefully reviewed and selected from 234 submissions. They are organized in topical sections named: Application Specific Processors; Communication Architecture; Computer Application and Software Optimization; IC Design and Test; Processor Architecture; Technology on the Horizon.
This volume contains the papers presented at the NATO Advanced Study Institute on the Interlinking of Computer Networks, held between August 28th and September 8th 1978 at Bonas, France. The development of computer networks has proceeded over the last few decades to the point where a number of scientific and commercial networks are firmly established - albeit using different philosophies of design and operation. Many of these networks are serving similar communities having the same basic computer needs and those communities where the computer resources are complementary. Consequently there is now considerable interest in the possibility of linking computer networks to provide resource sharing over quite wide geographical distances. The purpose of the Institute organisers was to consider the problems that arise when this form of interlinking is attempted. The problems fall into three categories, namely technical problems, compatibility and management. Only within the last few years have the technical problems been understood sufficiently well to enable interlinking to take place. Consequently, considerable attention was given during the meeting to discussing the compatibility and management problems that require solution before global interlinking becomes an accepted and cost-effective operation. Existing computer networks were examined in depth and case histories of their operations were presented by delegates drawn from the international community. The scope and detail of the papers presented should provide a valuable contribution to this emerging field and be useful to communications specialists and managers as well as those concerned with computer operations and development.
Adiabatic logic is a potential successor to static CMOS circuit design when it comes to ultra-low-power energy consumption. Future developments, such as the evolutionary shrinking of the minimum feature size as well as revolutionary novel transistor concepts, will change the gate-level savings gained by adiabatic logic. In addition, the impact of worsening degradation effects has to be considered in the design of adiabatic circuits. The impact of these technology trends on the figures of merit of adiabatic logic, energy saving potential and optimum operating frequency, is investigated, as are degradation-related issues. Adiabatic logic benefits from future devices, is not susceptible to Hot Carrier Injection, and shows less impact of Bias Temperature Instability than static CMOS circuits. Major interest also lies in the efficient generation of the applied power-clock signal. This oscillating power supply can be used to save energy in short idle times by disconnecting circuits. An efficient way to generate the power-clock is by means of the synchronous 2N2P LC oscillator, which is also robust with respect to pattern-induced capacitive variations. An easy-to-implement but powerful power-clock gating supplement is proposed by gating the synchronization signals. Diverse implementations to shut down the system are presented and rated for their applicability and other aspects like energy reduction capability and data retention. Advantageous usage of adiabatic logic requires compact and efficient arithmetic structures. A broad variety of adder structures and a Coordinate Rotation Digital Computer are compared and rated according to energy consumption and area usage, and the resulting energy saving potential against static CMOS proves the ultra-low-power capability of adiabatic logic. In the end, a new circuit topology also has to compete with static CMOS in productivity. On a 130 nm test chip, a large-scale test vehicle containing an FIR filter was implemented in adiabatic logic using a standard, library-based design flow, fabricated, measured, and compared to simulations of a static CMOS counterpart, with measured saving factors consistent with the values gained by simulation. This leads to the conclusion that adiabatic logic is ready for productive design due to its compatibility not only with CMOS technology, but also with electronic design automation (EDA) tools developed for static CMOS system design.
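For context on the saving factors mentioned above, a common first-order model (a textbook approximation, not the book's analysis) contrasts the roughly half CV^2 dissipated per switching event in static CMOS with the roughly (RC/T) CV^2 dissipated by an adiabatic gate whose power-clock ramps over a time T much longer than the RC constant of the charging path. The sketch below evaluates this model for illustrative, made-up parameter values.

```python
# First-order energy model (textbook approximation, not the book's analysis):
#   static CMOS dissipates E_cmos = 0.5 * C * Vdd^2 per output transition,
#   adiabatic logic dissipates E_adb ~= (R * C / T) * C * Vdd^2
#   when the power-clock ramp time T is much longer than the RC constant.

def cmos_energy(c, vdd):
    return 0.5 * c * vdd ** 2

def adiabatic_energy(c, vdd, r, t_ramp):
    return (r * c / t_ramp) * c * vdd ** 2

# Illustrative (made-up) values: 10 fF load, 1.2 V supply, 10 kOhm path,
# 10 ns power-clock ramp time.
C, VDD, R, T = 10e-15, 1.2, 10e3, 10e-9
e_cmos = cmos_energy(C, VDD)
e_adb = adiabatic_energy(C, VDD, R, T)
print(f"CMOS: {e_cmos:.2e} J, adiabatic: {e_adb:.2e} J, "
      f"saving factor ~ {e_cmos / e_adb:.1f}x")
```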
Network-based computing unifies the best research efforts, from single computer systems to networked systems, to render overwhelming computational power for several modern-day applications. Although this power is expected to grow with time due to technological advancements, application requirements impose a continuous thrust on network utilization and on the resources to deliver supreme quality of service. Strictly speaking, the network-based computing domain has no confined scope, and each element offers considerable challenges. Any modern-day networked application strongly thrives on an efficient data storage and management system, which is essentially a database system. There have been a number of books to date in this domain that discuss the fundamental principles of designing a database system. Research in this domain is now far matured, and many researchers continue to venture into it due to the wide variety of challenges posed. In this book, our domain of interest lies in exposing the underlying key challenges in designing algorithms to handle unpredictable requests that arrive at a Distributed Database System (DDBS) and evaluating their performance. These requests, which arrive at a system to be processed, are otherwise called on-line requests. Transactions in an on-line banking service, an airline reservation system, a video-on-demand system, etc., are a few examples of on-line requests.
This book constitutes the refereed proceedings of the 28th International Supercomputing Conference, ISC 2013, held in Leipzig, Germany, in June 2013. The 35 revised full papers presented were carefully reviewed and selected from 89 submissions. The papers cover the following topics: scalable applications with 50K+ cores; performance improvements in algorithms; accelerators; performance analysis and optimization; library development; administration and management of supercomputers; energy efficiency; parallel I/O; grid and cloud.
This state-of-the-art survey features topics related to the impact of multicore, manycore, and coprocessor technologies in science and large-scale applications in an interdisciplinary environment. The papers included in this survey cover research in mathematical modeling, design of parallel algorithms, aspects of microprocessor architecture, parallel programming languages, hardware-aware computing, heterogeneous platforms, manycore technologies, performance tuning, and requirements for large-scale applications. The contributions presented in this volume are an outcome of an inspiring conference conceived and organized by the editors at the University of Applied Sciences (HfT) in Stuttgart, Germany, in September 2012. The 10 revised full papers selected from 21 submissions are presented together with the twelve poster abstracts and focus on the combination of new aspects of microprocessor technologies, parallel applications, numerical simulation, and software development; thus they clearly show the potential of emerging technologies in the area of multicore and manycore processors that are paving the way towards personal supercomputing and very likely towards exascale computing.
Formal Methods for Open Object-Based Distributed Systems IV presents the leading edge in the fields of object-oriented programming, open distributed systems, and formal methods for object-oriented systems. With increased support within industry regarding these areas, this book captures the most up-to-date information on the subject. Papers in this volume focus on the following specific technologies: * components; * mobile code; * Java(R); * The Unified Modeling Language (UML); * refinement of specifications; * types and subtyping; * temporal and probabilistic systems. This volume comprises the proceedings of the Fourth International Workshop on Formal Methods for Open Object-Based Distributed Systems (FMOODS 2000), which was sponsored by the International Federation for Information Processing (IFIP) and held in Stanford, California, USA, in September 2000.
FGCS'88 was held with the objective of reporting on the current status and results of the research activities conducted by the Institute for New Generation Computer Technology (ICOT), Japan, in the intermediate stage of its FGCS projects, as well as encouraging researchers and representatives from business and the government to present papers, report research results and exchange opinions. The proceedings are published in three volumes. Volume 1 includes the plenary sessions and reports on ICOT research topics. Volumes 2 and 3 include a collection of invited and submitted technical papers in the areas of foundations, software, architecture and applications.
This book constitutes the thoroughly refereed proceedings of the 2011 ICSOC Workshops consisting of 5 scientific satellite events, organized in 4 tracks: workshop track (WESOA 2011; NFPSLAM-SOC 2011), PhD symposium track, demonstration track, and industry track; held in conjunction with the 2011 International Conference on Service-Oriented Computing (ICSOC), in Paphos, Greece, December 2011. The 39 revised papers presented together with 2 introductory descriptions address topics such as software engineering services; the management of service level agreements; Web services and service composition; general or domain-specific challenges of service-oriented computing and its transition towards cloud computing; architecture and modeling of services; workflow management; performance analysis as well as crowdsourcing for improving service processes and for knowledge discovery.
Foundations of Dependable Computing: System Implementation, explores the system infrastructure needed to support the various paradigms of Paradigms for Dependable Applications. Approaches to implementing support mechanisms and to incorporating additional appropriate levels of fault detection and fault tolerance at the processor, network, and operating system level are presented. A primary concern at these levels is balancing cost and performance against coverage and overall dependability. As these chapters demonstrate, low overhead, practical solutions are attainable and not necessarily incompatible with performance considerations. The section on innovative compiler support, in particular, demonstrates how the benefits of application specificity may be obtained while reducing hardware cost and run-time overhead. A companion to this volume (published by Kluwer) subtitled Models and Frameworks for Dependable Systems presents two comprehensive frameworks for reasoning about system dependability, thereby establishing a context for understanding the roles played by specific approaches presented in this book's two companion volumes. It then explores the range of models and analysis methods necessary to design, validate and analyze dependable systems. Another companion to this book (published by Kluwer), subtitled Paradigms for Dependable Applications, presents a variety of specific approaches to achieving dependability at the application level. Driven by the higher level fault models of Models and Frameworks for Dependable Systems, and built on the lower level abstractions implemented in a third companion book subtitled System Implementation, these approaches demonstrate how dependability may be tuned to the requirements of an application, the fault environment, and the characteristics of the target platform. Three classes of paradigms are considered: protocol-based paradigms for distributed applications, algorithm-based paradigms for parallel applications, and approaches to exploiting application semantics in embedded real-time control systems.
The role of arithmetic in datapath design in VLSI has been increasing in importance over the last several years due to the demand for processors that are smaller, faster, and dissipate less power. Unfortunately, this means that many of these datapaths will be complex both algorithmically and circuit-wise. As the complexity of chips increases, less importance will be placed on understanding how a particular arithmetic datapath design is implemented and more importance will be given to when a product will be placed on the market. This is because many tools that are available today are automated to help the digital system designer maximize their efficiency. Unfortunately, this may lead to problems when implementing particular datapaths. The design of high-performance architectures is becoming more complicated because the level of integration that is possible for many of these chips is in the billions. Many engineers rely heavily on software tools to optimize their work; therefore, as designs get more complex, less understanding goes into a particular implementation because it can be generated automatically. Although software tools are a highly valuable asset to the designer, the value of these tools does not diminish the importance of understanding datapath elements. Therefore, a digital system designer should be aware of how algorithms can be implemented for datapath elements. Unfortunately, due to the complexity of some of these algorithms, it is sometimes difficult to understand how a particular algorithm is implemented without seeing the actual code.
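As a minimal, hedged illustration of the point that datapath algorithms repay being understood at the bit level (the example is mine, not the book's), the sketch below models a ripple-carry adder, one of the simplest datapath elements, making explicit the serial carry chain that faster adder structures are designed to remove.

```python
def full_adder(a, b, cin):
    """One-bit full adder: returns (sum_bit, carry_out)."""
    s = a ^ b ^ cin
    cout = (a & b) | (cin & (a ^ b))
    return s, cout

def ripple_carry_add(x, y, width=8):
    """Add two unsigned integers bit by bit, propagating the carry serially.
    The serial carry chain is what carry-lookahead or prefix adders remove."""
    carry, result = 0, 0
    for i in range(width):
        s, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        result |= s << i
    return result, carry          # carry is the final carry-out (overflow bit)

print(ripple_carry_add(0b1011, 0b0110))   # 11 + 6 -> (17, 0)
```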
This book introduces the fundamental concepts and practical simulation techniques for modeling different aspects of operating systems to study their general behavior and their performance. The approaches applied are object-oriented modeling and the process interaction approach to discrete-event simulation. The book builds on basic modeling concepts and is more specialized than my previous book, Practical Process Simulation with Object-Oriented Techniques and C++, published by Artech House, Boston, 1999. For a more detailed description see the Web location: http://science.kennesaw.edu/jgarrido/mybook.html. Most other books on performance modeling use only analytical approaches, and very few apply these concepts to the study of operating systems. Thus, the unique feature of the book is that it concentrates on design aspects of operating systems using practical simulation techniques. In addition, the book illustrates the dynamic behavior of different aspects of operating systems using the various simulation models, with a general hands-on approach.
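The book's process-interaction approach is not reproduced here; as a rough event-list sketch of the general discrete-event idea (the scenario, names, and numbers are illustrative assumptions), the code below simulates jobs arriving at a single CPU served in FIFO order and reports their average waiting time.

```python
import heapq

def simulate_cpu(arrivals, service_times):
    """Tiny discrete-event simulation of a single-CPU FIFO queue.
    arrivals: sorted arrival times; service_times: matching CPU bursts.
    Returns the average time jobs wait before starting service."""
    events = [(t, i, "arrive") for i, t in enumerate(arrivals)]
    heapq.heapify(events)                      # future-event list ordered by time
    busy_until, waits = 0.0, []
    while events:
        time, i, kind = heapq.heappop(events)
        if kind == "arrive":
            start = max(time, busy_until)      # wait if the CPU is still busy
            waits.append(start - time)
            busy_until = start + service_times[i]
            heapq.heappush(events, (busy_until, i, "depart"))
        # "depart" events need no extra action in this minimal model
    return sum(waits) / len(waits)

print(simulate_cpu([0.0, 1.0, 2.0, 8.0], [3.0, 2.0, 1.0, 2.0]))   # -> 1.25
```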
In Symbolic Analysis for Parallelizing Compilers the author presents an excellent demonstration of the effectiveness of symbolic analysis in tackling important optimization problems, some of which inhibit loop parallelization. The framework that Haghighat presents has proved extremely successful in induction and wraparound variable analysis, strength reduction, dead code elimination and symbolic constant propagation. The approach can be applied to any program transformation or optimization problem that uses properties and value ranges of program names; that is, to any transformational system or optimization problem that relies on compile-time information about program variables. This covers the majority of, if not all, optimization and parallelization techniques. The book makes a compelling case for the potential of symbolic analysis, applying it for the first time - and with remarkable results - to a number of classical optimization problems: loop scheduling, static timing or size analysis, and dependence analysis. It demonstrates how symbolic analysis can solve these problems faster and more accurately than existing hybrid techniques.
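As a toy, hedged illustration of two of the problems named above, induction-variable analysis and strength reduction (this is not Haghighat's framework), the sketch below shows a loop whose address computation follows the closed form base + i*stride, and the strength-reduced version a compiler can derive once that symbolic fact is known.

```python
# Hedged toy illustration of induction-variable analysis and strength
# reduction: the analysis recognizes that addr follows the closed form
# base + i * stride, so the per-iteration multiply can become an addition.

def before(base, stride, n):
    out = []
    for i in range(n):
        addr = base + i * stride      # multiply re-done on every iteration
        out.append(addr)
    return out

def after(base, stride, n):
    out = []
    addr = base                       # closed form evaluated incrementally
    for _ in range(n):
        out.append(addr)
        addr += stride                # strength-reduced: add replaces multiply
    return out

assert before(1000, 8, 5) == after(1000, 8, 5) == [1000, 1008, 1016, 1024, 1032]
```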
Replication Techniques in Distributed Systems organizes and surveys the spectrum of replication protocols and systems that achieve high availability by replicating entities in failure-prone distributed computing environments. The entities discussed in this book vary from passive untyped data objects, to typed and complex objects, to processes and messages. Replication Techniques in Distributed Systems contains definitions and introductory material suitable for a beginner, theoretical foundations and algorithms, an annotated bibliography of commercial and experimental prototype systems, as well as short guides to recommended further readings in specialized subtopics. This book can be used as recommended or required reading in graduate courses in academia, as well as a handbook for designers and implementors of systems that must deal with replication issues in distributed systems.
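The survey's protocols are not detailed in this description; as one small, hedged example of the kind of mechanism such protocols rely on (a generic quorum rule, not necessarily presented this way in the book), the sketch below checks that read and write quorums are large enough that any read overlaps the latest write even when some replicas are unavailable.

```python
def quorums_are_consistent(n, read_quorum, write_quorum):
    """Classic quorum-intersection conditions for n replicas:
    R + W > N ensures every read quorum overlaps every write quorum;
    W + W > N ensures two writes cannot commit in disjoint replica sets."""
    return (read_quorum + write_quorum > n) and (2 * write_quorum > n)

# With 5 replicas, majority read and write quorums (3 each) are consistent
# and still work with up to 2 unavailable replicas.
print(quorums_are_consistent(5, 3, 3))   # True
print(quorums_are_consistent(5, 1, 3))   # False: a read might miss the last write
```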
Foundations of Dependable Computing: Models and Frameworks for Dependable Systems presents two comprehensive frameworks for reasoning about system dependability, thereby establishing a context for understanding the roles played by specific approaches presented in this book's two companion volumes. It then explores the range of models and analysis methods necessary to design, validate and analyze dependable systems. A companion to this book (published by Kluwer), subtitled Paradigms for Dependable Applications, presents a variety of specific approaches to achieving dependability at the application level. Driven by the higher level fault models of Models and Frameworks for Dependable Systems, and built on the lower level abstractions implemented in a third companion book subtitled System Implementation, these approaches demonstrate how dependability may be tuned to the requirements of an application, the fault environment, and the characteristics of the target platform. Three classes of paradigms are considered: protocol-based paradigms for distributed applications, algorithm-based paradigms for parallel applications, and approaches to exploiting application semantics in embedded real-time control systems. Another companion book (published by Kluwer), subtitled System Implementation, explores the system infrastructure needed to support the various paradigms of Paradigms for Dependable Applications. Approaches to implementing support mechanisms and to incorporating additional appropriate levels of fault detection and fault tolerance at the processor, network, and operating system level are presented. A primary concern at these levels is balancing cost and performance against coverage and overall dependability. As these chapters demonstrate, low overhead, practical solutions are attainable and not necessarily incompatible with performance considerations. The section on innovative compiler support, in particular, demonstrates how the benefits of application specificity may be obtained while reducing hardware cost and run-time overhead.

Formal Description Techniques and Protocol Specification, Testing and Verification addresses formal description techniques (FDTs) applicable to distributed systems and communication protocols. It aims to present the state of the art in theory, application, tools and industrialization of FDTs. Among the important features presented are: FDT-based system and protocol engineering; FDT application to distributed systems; protocol engineering; practical experience and case studies. Formal Description Techniques and Protocol Specification, Testing and Verification comprises the proceedings of the Joint International Conference on Formal Description Techniques for Distributed Systems and Communication Protocols and Protocol Specification, Testing and Verification, sponsored by the International Federation for Information Processing, held in November 1998 in Paris, France. Formal Description Techniques and Protocol Specification, Testing and Verification is suitable as a secondary text for a graduate-level course on Distributed Systems or Communications, and as a reference for researchers and practitioners in industry.
High Performance Computing Systems and Applications contains the fully refereed papers from the 13th Annual Symposium on High Performance Computing, held in Kingston, Canada, in June 1999. This book presents the latest research in HPC architectures, distributed and shared memory performance, algorithms and solvers, with special sessions on atmospheric science, computational chemistry and physics. High Performance Computing Systems and Applications is suitable as a secondary text for graduate level courses, and as a reference for researchers and practitioners in industry.
This book constitutes the thoroughly refereed post-conference proceedings of the workshops of the 17th International Conference on Parallel Computing, Euro-Par 2011, held in Bordeaux, France, in August 2011. The papers of these 12 workshops (CCPI, CGWS, HeteroPar, HiBB, HPCVirt, HPPC, HPSS, HPCF, PROPER, and VHPC) focus on the promotion and advancement of all aspects of parallel and distributed computing.
Supercomputers are the largest and fastest computers available at any point in time. The term was used for the first time in the New York World, March 1920, to describe "new statistical machines with the mental power of 100 skilled mathematicians in solving even highly complex algebraic problems." Invented by Mendenhall and Warren, these machines were used at Columbia University's Statistical Bureau. Recently, supercomputers have been used primarily to solve large-scale problems in science and engineering. Solutions of systems of partial differential equations, such as those found in nuclear physics, meteorology, and computational fluid dynamics, account for the majority of supercomputer use today. The early computers, such as EDVAC, SSEC, 701, and UNIVAC, demonstrated the feasibility of building fast electronic computing machines which could become commercial products. The next generation of computers focused on attaining the highest possible computational speeds. This book discusses the architectural approaches used to yield significantly higher computing speeds while preserving the conventional von Neumann machine organization (Chapters 2-4). Subsequent improvements depended on developing a new generation of computers employing a new model of computation: single-instruction multiple-data (SIMD) processors (Chapters 5-7). Later machines refined SIMD architecture and technology (Chapters 8-9). Supercomputers -- the largest and fastest computers available at any point in time -- have been the products of complex interplay among technological, architectural, and algorithmic developments.
You may like...
Creativity in Computing and DataFlow…
Suyel Namasudra, Veljko Milutinovic
Hardcover
R4,204
Discovery Miles 42 040
Novel Approaches to Information Systems…
Naveen Prakash, Deepika Prakash
Hardcover
R5,924
Discovery Miles 59 240
Routing Algorithms in Networks-on-Chip
Maurizio Palesi, Masoud Daneshtalab
Hardcover
R4,882
Discovery Miles 48 820
Constraint Decision-Making Systems in…
Santosh Kumar Das, Nilanjan Dey
Hardcover
R6,687
Discovery Miles 66 870
The System Designer's Guide to VHDL-AMS…
Peter J Ashenden, Gregory D. Peterson, …
Paperback
R2,281
Discovery Miles 22 810