The Engineering of Complex Real-Time Computer Control Systems brings together in one place important contributions and up-to-date research results in this rapidly developing area. It serves as an excellent reference, providing insight into some of the most important research issues in the field.
This book constitutes the thoroughly refereed post-conference proceedings of the 10th International Conference on High Performance Computing for Computational Science, VECPAR 2012, held in Kobe, Japan, in July 2012. The 28 papers presented together with 7 invited talks were carefully selected during two rounds of reviewing and revision. The papers are organized in topical sections on CPU computing, applications, finite element method from various viewpoints, cloud and visualization performance, methods and tools for advanced scientific computing, algorithms and data analysis, and parallel iterative solvers on multicore architectures.
This book describes research on trust and distrust propagation and aggregation, and their use in recommender systems. This is a hot research topic with important implications for various application areas. The main innovative contributions of the work are:
- a new bilattice-based model for trust and distrust, allowing for ignorance and inconsistency;
- proposals for various propagation and aggregation operators, including an analysis of their mathematical properties;
- evaluation of these operators on real data, including a discussion of the data sets and their characteristics;
- a novel approach for identifying controversial items in a recommender system;
- an analysis of the utility of including distrust in recommender systems;
- various approaches for trust-based recommendations (among others, based on collaborative filtering), an in-depth experimental analysis, and a proposal for a hybrid approach;
- an analysis of various user types in recommender systems to optimize bootstrapping of cold-start users.
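As a rough illustration of what trust propagation and aggregation can mean in practice, the sketch below uses multiplicative propagation along paths and simple averaging for aggregation; these are common illustrative operator choices, not the bilattice-based operators developed in the book.

```python
# Illustrative trust propagation/aggregation sketch; the multiplicative and
# averaging operators here are assumptions, not the book's bilattice model.

def propagate(t_ab: float, t_bc: float) -> float:
    """Propagate trust along a path a -> b -> c by multiplication:
    a trusts c at most as much as the weakest link allows."""
    return t_ab * t_bc

def aggregate(scores: list) -> float:
    """Aggregate several independent trust estimates by averaging."""
    return sum(scores) / len(scores) if scores else 0.0

# Two paths a -> b -> d and a -> c -> d give two estimates of a's trust in d.
trust = {("a", "b"): 0.9, ("b", "d"): 0.8, ("a", "c"): 0.6, ("c", "d"): 0.7}
estimates = [propagate(trust[("a", "b")], trust[("b", "d")]),
             propagate(trust[("a", "c")], trust[("c", "d")])]
print(aggregate(estimates))  # ~0.57
```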
The International Workshop on "The Use of Supercomputers in Theoretical Science" took place on January 24 and 25, 1991, at the University of Antwerp (UIA), Antwerpen, Belgium. It was the sixth in a series of workshops, the first of which took place in 1984. The principal aim of these workshops is to present the state of the art in scientific large-scale and high-speed computation. Computational science has developed into a third methodology, now equally as important as its theoretical and experimental companions. Gradually academic researchers acquired access to a variety of supercomputers, and as a consequence computational science has become a major tool for their work. It is a pleasure to thank the Belgian National Science Foundation (NFWO-FNRS) and the Ministry of Scientific Affairs for sponsoring the workshop. It was organized both in the framework of the Third Cycle "Vectorization, Parallel Processing and Supercomputers" and the "Governmental Program in Information Technology." We also very much would like to thank the University of Antwerp (Universitaire Instelling Antwerpen - UIA) for financial and material support. Special thanks are due to Mrs. H. Evans for the typing and editing of the manuscripts and for the preparation of the author and subject indexes. J.T. Devreese, P.E. Van Camp, University of Antwerp, July 1991
Distributed applications are a necessity in most central application sectors of the contemporary information society, including e-commerce, e-banking, e-learning, e-health, telecommunication and transportation. This results from the tremendous growth of the role that the Internet plays in business, administration and our everyday activities, a trend that will expand even further with advances in broadband wireless communication. New Developments in Distributed Applications and Interoperable Systems focuses on the techniques available or under development for easing the burden of constructing reliable and maintainable interoperable information systems that provide services in the global communicating environment. The topics covered in this book include:
* Context-aware applications;
* Integration and interoperability of distributed systems;
* Software architectures and services for open distributed systems;
* Management, security and quality of service issues in distributed systems;
* Software agents and mobility;
* Internet and other related problem areas.
The book contains the proceedings of the Third International Working Conference on Distributed Applications and Interoperable Systems (DAIS'2001), which was held in September 2001 in Krakow, Poland, and sponsored by the International Federation for Information Processing (IFIP). The conference program presents the state of the art in research concerning distributed and interoperable systems. This is a topical research area where much activity is currently in progress, and interesting new aspects and innovative contributions are still arising regularly. The DAIS series of conferences is one of the main international forums where these important findings are reported.
Both algorithms and the software and hardware of automatic computers have gone through rapid development in the past 35 years. The dominant factor in this development was the advance in computer technology. Computer parameters were systematically improved through electron tubes, transistors and integrated circuits of ever-increasing integration density, which also influenced the development of new algorithms and programming methods. Some years ago the situation in computer development was that no additional enhancement of performance could be achieved by increasing the speed of logical elements, due to the physical barrier of the maximum transfer speed of electric signals. Further enhancement of computer performance has instead been achieved through parallelism, which makes it possible, by a suitable organization of n processors, to obtain a performance increase of up to n times. Research into parallel computation has been carried out for several years in many countries and many results of fundamental importance have been obtained. Many parallel computers have been designed and their algorithmic and programming systems built. Such computers include ILLIAC IV, DAP, STARAN, OMEN, STAR-100, TEXAS INSTRUMENTS ASC, CRAY-1, C.mmp, Cm*, CLIP-3 and PEPE. This trend is supported by the fact that: a) many algorithms and programs are highly parallel in their structure, b) the new LSI and VLSI technologies have allowed processors to be combined into large parallel structures, c) greater and greater demands for speed and reliability of computers are being made.
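The "up to n times" figure is the ideal linear-speedup bound. A standard refinement, Amdahl's law (brought in here purely as a hedged illustration, not as material from the book), shows how the serial fraction of a program limits the achievable speedup:

```python
# Amdahl's law: speedup on n processors when a fraction p of the work
# is parallelizable (illustrative; the figures below are hypothetical).

def amdahl_speedup(p: float, n: int) -> float:
    return 1.0 / ((1.0 - p) + p / n)

for n in (2, 8, 64, 1024):
    print(n, round(amdahl_speedup(0.95, n), 2))
# Even with 95% parallel work, the speedup saturates near 1/(1 - 0.95) = 20.
```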
This book is on dependence concepts and general methods for dependence testing. Here, dependence means data dependence and the tests are compile-time tests. We felt the time was ripe to create a solid theory of the subject, to provide the research community with a uniform conceptual framework in which things fit together nicely. How successful we have been in meeting these goals, of course, remains to be seen. We do not try to include all the minute details that are known, nor do we deal with clever tricks that all good programmers would want to use. We do try to convince the reader that there is a mathematical basis consisting of theories of bounds of linear functions and linear diophantine equations, that levels and direction vectors are concepts that arise rather naturally, that different dependence tests are really special cases of some general tests, and so on. Some mathematical maturity is needed for a good understanding of the book: mainly calculus and linear algebra. We have covered diophantine equations rather thoroughly and given a description of some matrix theory ideas that are not very widely known. A reader familiar with linear programming would quickly recognize several concepts. We have learned a great deal from the works of M. Wolfe, and K. Kennedy and R. Allen. Wolfe's Ph.D. thesis at the University of Illinois and Kennedy & Allen's paper on vectorization of Fortran programs are still very useful sources on this subject.
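To make "different dependence tests are really special cases of some general tests" concrete, here is a minimal sketch of the classical GCD test, one of the simplest compile-time dependence tests built on linear diophantine equations (this is the standard textbook form of the test, not necessarily the authors' exact formulation):

```python
from math import gcd

def gcd_test(a: int, b: int, c: int) -> bool:
    """GCD test for the dependence equation a*i - b*j = c.
    A dependence between references A[a*i + k] and A[b*j + m] (with
    c = m - k) is possible only if gcd(a, b) divides c."""
    return c % gcd(a, b) == 0

# A[2*i] written, A[2*j + 1] read: 2*i - 2*j = 1, and gcd(2, 2) = 2 does not
# divide 1, so the references never touch the same element: no dependence.
print(gcd_test(2, 2, 1))  # False
# A[3*i] vs A[6*j + 9]: gcd(3, 6) = 3 divides 9, so dependence is possible.
print(gcd_test(3, 6, 9))  # True
```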
The first Annual Working Conference of WG 11.4 of the International Federation for Information Processing (IFIP) focuses on various state-of-the-art concepts in the field of Network and Distributed Systems Security. Our society is rapidly evolving and irreversibly set on a course governed by electronic interactions. We have seen the birth of e-mail in the early seventies, and are now facing new challenging applications such as e-commerce, e-government, ... The more our society relies on electronic forms of communication, the more the security of these communication networks is essential for its well-functioning. As a consequence, research on methods and techniques to improve network security is of paramount importance. This Working Conference brings together researchers and practitioners of various disciplines, organisations and countries, to discuss the latest developments in security protocols, secure software engineering, mobile agent security, e-commerce security and security for distributed computing. We are also pleased to have attracted two international speakers to present two case studies, one dealing with Belgium's intention to replace the identity card of its citizens by an electronic version, and the other discussing the implications of security certification in a multinational corporation. This Working Conference should also be considered as the kick-off activity of WG 11.4, the aims of which can be summarized as follows: to promote research on technical measures for securing computer networks, including both hardware- and software-based techniques; to promote dissemination of research results in the field of network security in real-life networks in industry, academia and administrative institutions; and to promote education in the application of security techniques, and to promote general awareness about security problems in the broad field of information technology. Researchers and practitioners who want to get involved in this Working Group are kindly requested to contact the chairman. More information on the workings of WG 11.4 is available from the official IFIP website: http://www.ifip.at.org/. Finally, we wish to express our gratitude to all those who have contributed to this conference in one way or another. We are grateful to the international referee board who reviewed all the papers, and to the authors and invited speakers, whose contributions were essential to the success of the conference. We would also like to thank the participants, whose presence and interest, together with the changing imperatives of society, will prove a driving force for future conferences to come.
Dynamic Reconfiguration: Architectures and Algorithms offers a comprehensive treatment of dynamically reconfigurable computer architectures and algorithms for them. The coverage is broad, starting from fundamental algorithmic techniques, ranging across algorithms for a wide array of problems and applications, and extending to simulations between models. The presentation employs a single reconfigurable model (the reconfigurable mesh) for most algorithms, to enable the reader to distill key ideas without the cumbersome details of a myriad of models. In addition to algorithms, the book discusses topics that provide a better understanding of dynamic reconfiguration, such as scalability and computational power, and more recent advances such as optical models, run-time reconfiguration (on FPGAs and related platforms), and implementing dynamic reconfiguration. The book, featuring many examples and a large set of exercises, is an excellent textbook or reference for a graduate course. It is also a useful reference for researchers and system developers in the area.
This book constitutes the refereed proceedings of the 5th International Conference on Reversible Computation, RC 2013, held in Victoria, BC, Canada, in July 2013. The 19 contributions presented together with one invited paper were carefully reviewed and selected from 37 submissions. The papers are organized in topical sections on physical implementation; arithmetic; programming and data structures; modelling; synthesis and optimization; and alternative technologies.
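The defining property of reversible computation is that every gate computes a bijection, so inputs can always be recovered from outputs. As a generic illustration (not drawn from these proceedings), the Toffoli gate, the standard universal reversible gate, can be checked to be its own inverse on all inputs:

```python
# The Toffoli (controlled-controlled-NOT) gate is reversible: applying it
# twice returns the original bits, so it is its own inverse.
from itertools import product

def toffoli(a: int, b: int, c: int):
    return a, b, c ^ (a & b)   # flip c only when both controls are 1

assert all(toffoli(*toffoli(a, b, c)) == (a, b, c)
           for a, b, c in product((0, 1), repeat=3))
print("Toffoli is its own inverse on all 8 inputs")
```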
Synthesis and Optimization of DSP Algorithms describes approaches to synthesising structural hardware descriptions of digital circuits from high-level descriptions of Digital Signal Processing (DSP) algorithms. The book contains:
- a tutorial on the subjects of digital design and architectural synthesis, intended for DSP engineers;
- a tutorial on the subject of DSP, intended for digital designers;
- a discussion of techniques for estimating the peak values likely to occur in a DSP system, thus enabling appropriate signal scaling. Analytic techniques, simulation techniques, and hybrids are discussed, and the applicability of different analytic approaches to different types of DSP design is covered;
- the development of techniques to optimise the precision requirements of a DSP algorithm, aiming for efficient implementation in a custom parallel processor. The idea is to trade off numerical accuracy for area or power-consumption advantages. Again, both analytic and simulation techniques for estimating numerical accuracy are described and contrasted, and optimum and heuristic approaches to precision optimisation are discussed;
- a discussion of the importance of the scheduling, allocation, and binding problems, and the development of techniques to automate these processes with reference to a precision-optimised algorithm;
- future perspectives for synthesis and optimization of DSP algorithms.
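A simulation-based peak estimate of the kind contrasted with analytic bounds above might look like the following minimal sketch; the FIR filter coefficients and the 16-bit word length are hypothetical assumptions, not examples from the book:

```python
# Simulation-based peak estimation for fixed-point signal scaling
# (illustrative sketch; filter and word length are assumptions).
import math, random

coeffs = [0.2, 0.5, 0.2, -0.1]   # hypothetical FIR filter

def fir(x, h):
    return [sum(h[k] * x[n - k] for k in range(len(h)) if n - k >= 0)
            for n in range(len(x))]

random.seed(0)
x = [random.uniform(-1.0, 1.0) for _ in range(10000)]
peak = max(abs(v) for v in fir(x, coeffs))

# Integer bits needed so the simulated peak fits; the remaining bits of a
# 16-bit word can then be spent on the fraction.
int_bits = max(0, math.ceil(math.log2(peak))) + 1   # +1 for the sign
print(f"simulated peak {peak:.3f} -> {int_bits} integer bit(s) of a 16-bit word")
```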
A new advanced textbook/reference providing a comprehensive survey of hardware and software architectural principles and methods of computer systems organization and design. The book is suitable for a first course in computer organization. The style is similar to that of the author's book on assembly language in that it strongly supports self-study by students, and this organization facilitates a compressed presentation of the material. Emphasis is also placed on relating concepts to practical designs and chips. Topics: presentation of material suitable for self-study; concepts related to practical designs and implementations; extensive examples and figures; details of several digital logic simulation packages; free MASM download instructions; and end-of-chapter exercises.
Regular Nanofabrics in Emerging Technologies gives a deep insight into both the fabrication and design aspects of emerging semiconductor technologies that represent potential candidates for the post-CMOS era. Its approach is unique, cutting across different fields, and it offers a synergetic view for an audience of different communities ranging from technologists to circuit designers and computer scientists. The book presents two technologies as potential candidates for future semiconductor devices and systems and shows how fabrication issues can be addressed at the design level and vice versa. Readers, whether approaching the book for academic or research purposes, will find novel material that is explained carefully for both experts and non-specialists. Regular Nanofabrics in Emerging Technologies is a survey of post-CMOS technologies. It explains processing, circuit- and system-level design for people with various backgrounds.
This book brings together a selection of the best papers from the thirteenth edition of the Forum on specification and Design Languages Conference (FDL), which was held in Southampton, UK, in September 2010. FDL is a well-established international forum devoted to the dissemination of research results, practical experiences and new ideas in the application of specification, design and verification languages to the design, modelling and verification of integrated circuits, complex hardware/software embedded systems, and mixed-technology systems.
New collaboration mechanisms and organizational forms supported by networking tools not only induce new business domains but also invade all traditional sectors and organizations, which requires thinking of each business as part of a wider economic ecosystem and environment. The virtual enterprise / virtual organization developments, although initially technology-driven, are gathering more and more contributions of a multi-disciplinary nature, namely from the socio-economic and organizational areas. New behavioral forms, new cooperation agreements and social contracts, new liability agreements and risk negotiation practices, new ways of generating value for common developments, and correspondingly new challenges on intellectual property and ownership identification are among the major trends. This book contains selected articles from PRO-VE'02, the third working conference on Infrastructures for Virtual Enterprises, which was sponsored by the International Federation for Information Processing (IFIP) and held in Sesimbra, Portugal in May 2002. The included articles represent relevant examples of the current state of the art in virtual enterprises and other collaborative and networked organizations, and provide valuable insights on future trends and challenges. The book's contents clearly reflect a growing maturation and diversification of the area. Important development directions are well represented, such as: modeling and reference architectures, formation of virtual organizations, including contract management and negotiation, operation support functionalities, infrastructures and interoperability, virtual communities and new collaboration forms, best practices and strategic planning, economic aspects and performance metrics, training and new ways of working. Collaborative Business Ecosystems and Virtual Enterprises is essential reading for researchers, engineers, managers, practitioners, sociologists, and students in production engineering, computer science, electrical engineering, mechanical engineering, organizational science, and industrial sociology.
Embedded systems play an increasingly important role in our everyday lives. The growing complexity of embedded systems and the emerging trend toward interconnections between them lead to new challenges. Intelligent solutions are necessary to overcome these challenges and to provide reliable and secure systems to the customer under strict time and financial budgets. Solutions on Embedded Systems documents the results of several innovative approaches that provide intelligent solutions in embedded systems. The objective is to present mature approaches, to provide detailed information on their implementation and to discuss the results obtained.
Foundations of Dependable Computing: Paradigms for Dependable Applications presents a variety of specific approaches to achieving dependability at the application level. Driven by the higher-level fault models of Models and Frameworks for Dependable Systems, and built on the lower-level abstractions implemented in a third companion book subtitled System Implementation, these approaches demonstrate how dependability may be tuned to the requirements of an application, the fault environment, and the characteristics of the target platform. Three classes of paradigms are considered: protocol-based paradigms for distributed applications, algorithm-based paradigms for parallel applications, and approaches to exploiting application semantics in embedded real-time control systems. The companion volume subtitled Models and Frameworks for Dependable Systems presents two comprehensive frameworks for reasoning about system dependability, thereby establishing a context for understanding the roles played by the specific approaches presented in its two companion volumes. It then explores the range of models and analysis methods necessary to design, validate and analyze dependable systems. Another companion book (published by Kluwer), subtitled System Implementation, explores the system infrastructure needed to support the various paradigms of Paradigms for Dependable Applications. Approaches to implementing support mechanisms and to incorporating additional appropriate levels of fault detection and fault tolerance at the processor, network, and operating system level are presented. A primary concern at these levels is balancing cost and performance against coverage and overall dependability. As these chapters demonstrate, low-overhead, practical solutions are attainable and not necessarily incompatible with performance considerations. The section on innovative compiler support, in particular, demonstrates how the benefits of application specificity may be obtained while reducing hardware cost and run-time overhead.
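One classic instance of an algorithm-based paradigm for parallel applications is checksum-based fault tolerance for matrix computations (algorithm-based fault tolerance, ABFT). The sketch below shows the core idea in its simplest form; it is a generic illustration, not a technique taken from the book:

```python
# ABFT sketch: a row-checksum column appended to a matrix lets a later
# consistency check detect a corrupted element (generic illustration).

def with_row_checksums(m):
    return [row + [sum(row)] for row in m]

def check(m):
    return all(abs(row[-1] - sum(row[:-1])) < 1e-9 for row in m)

a = with_row_checksums([[1.0, 2.0], [3.0, 4.0]])
print(check(a))       # True: checksums consistent
a[0][1] += 0.5        # inject a fault into one element
print(check(a))       # False: the row checksum no longer matches
```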
Formal Methods for Protocol Engineering and Distributed Systems addresses formal description techniques (FDTs) applicable to distributed systems and communication protocols. It aims to present the state of the art in the theory, application, tools and industrialization of FDTs. Among the important features presented are: FDT-based system and protocol engineering; FDT application to distributed systems; protocol engineering; and practical experience and case studies. Formal Methods for Protocol Engineering and Distributed Systems contains the proceedings of the Joint International Conference on Formal Description Techniques for Distributed Systems and Communication Protocols and Protocol Specification, Testing, and Verification, which was sponsored by the International Federation for Information Processing (IFIP) and was held in Beijing, China, in October 1999. This volume is suitable as a secondary text for a graduate-level course on distributed systems or communications, and as a reference for researchers and industry practitioners.
Design is an art form in which the designer selects from a myriad of alternatives to bring an "optimum" choice to a user. In many complex systems the notion of "optimum" is difficult to define. Indeed, the users themselves will not agree, so the "best" system is simply the one in which the designer and the user have a congruent viewpoint. Compounding the design problem are tradeoffs that span a variety of technologies and user requirements. The electronic business system is a classically complex system whose tradeoff criteria and user views are constantly changing with rapidly developing underlying technology. Professor Milutinovic has chosen this area for his capstone contribution to computer systems design. This book completes his trilogy on design issues in computer systems. His first work, "Surviving the Design of a 200 MHz RISC Microprocessor" (1997), focused on the tradeoffs and design issues within a processor. His second work, "Surviving the Design of Microprocessor and Multiprocessor Systems" (2000), considers the design issues involved in assembling a number of processors into a coherent system. Finally, this book generalizes the system design problem to electronic commerce on the Internet, a global system of immense consequence.
Transformational programming and parallel computation are two emerging fields that may ultimately depend on each other for success. Perhaps because ad hoc programming on sequential machines is so straightforward, sequential programming methodology has had little impact outside the academic community, and transformational methodology has had little impact at all. However, because ad hoc programming for parallel machines is so hard, and because progress in software construction has lagged behind architectural advances for such machines, there is a much greater need to develop parallel programming and transformational methodologies. Parallel Algorithm Derivation and Program Transformation stimulates the investigation of formal ways to overcome problems of parallel computation, with respect to both software development and algorithm design. It brings together perspectives from two different communities, transformational programming and parallel algorithm design, to discuss programming, transformational, and compiler methodologies for parallel architectures, and algorithmic paradigms, techniques, and tools for parallel machine models. Parallel Algorithm Derivation and Program Transformation is an excellent reference for graduate students and researchers in parallel programming and transformational methodology. Each chapter contains a few initial sections in the style of a first-year graduate textbook, with many illustrative examples. The book may also be used as the text for a graduate seminar course or as a reference book for courses in software engineering, parallel programming or formal methods in program development.
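As a small example of the kind of derivation the book investigates, a left-to-right sequential reduction can be transformed into a balanced tree form whose halves are independent and could run on separate processors. The transformation below relies only on associativity of the operator and is an illustrative sketch, not a method from the book:

```python
from operator import add

# Sequential form: ((((x0 + x1) + x2) + x3) ...) is inherently ordered.
def reduce_seq(op, xs):
    acc = xs[0]
    for x in xs[1:]:
        acc = op(acc, x)
    return acc

# Transformed form: for an associative op, split and combine; the two
# halves are independent and could be evaluated on different processors.
def reduce_tree(op, xs):
    if len(xs) == 1:
        return xs[0]
    mid = len(xs) // 2
    return op(reduce_tree(op, xs[:mid]), reduce_tree(op, xs[mid:]))

xs = list(range(1, 9))
assert reduce_seq(add, xs) == reduce_tree(add, xs) == 36
```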
The last decade has seen tremendous growth in the usage of the World Wide Web. Web caching is a technology aimed at reducing the transmission of redundant network traffic and improving access to the Web. The key idea in Web caching is to cache frequently accessed content so that it may be used profitably later. This leads to cost savings, reduction in network traffic, improved access and better content availability. Web Caching and Its Applications gives the reader an understanding of the latest developments in Web caching research. Topics covered include architectural aspects, aspects requiring coordination among caches, aspects related to network traffic, techniques that complement caching, practical aspects, and aspects related to performance. While Web Caching and Its Applications is designed for a professional audience, students will appreciate the exercises for applying the knowledge to solving practical problems related to Web caching and Internet performance. The book includes an exhaustive list of references for further study.
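The idea of caching frequently accessed content reduces, in its simplest form, to an eviction policy. The sketch below implements LRU (least recently used) eviction as a generic illustration; it is not a scheme described in the book:

```python
from collections import OrderedDict

class LRUCache:
    """Minimal LRU cache: evicts the least recently used entry when full."""
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.store = OrderedDict()

    def get(self, url: str):
        if url not in self.store:
            return None                      # cache miss: fetch from origin
        self.store.move_to_end(url)          # mark as most recently used
        return self.store[url]

    def put(self, url: str, body: bytes):
        self.store[url] = body
        self.store.move_to_end(url)
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)   # evict least recently used

cache = LRUCache(2)
cache.put("/a", b"A"); cache.put("/b", b"B"); cache.get("/a")
cache.put("/c", b"C")                        # "/b" is evicted, not "/a"
print(cache.get("/b"), cache.get("/a"))      # None b'A'
```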
Load Balancing in Parallel Computers: Theory and Practice is about the essential software technique of load balancing in distributed-memory message-passing parallel computers, also called multicomputers. Each processor has its own address space and has to communicate with other processors by message passing. In general, a direct, point-to-point interconnection network is used for the communications. Many commercial parallel computers are of this class, including the Intel Paragon, the Thinking Machines CM-5, and the IBM SP2. Load Balancing in Parallel Computers: Theory and Practice presents a comprehensive treatment of the subject using rigorous mathematical analyses and practical implementations. The focus is on nearest-neighbor load balancing methods, in which every processor at every step is restricted to balancing its workload with its direct neighbours only. Nearest-neighbor methods are iterative in nature, because a global balanced state is reached through processors' successive local operations. Since nearest-neighbor methods have a relatively relaxed requirement for the spread of local load information across the system, they are flexible in terms of allowing one to control the balancing quality, effective for preserving communication locality, and easily scaled in parallel computers with a direct communication network. Load Balancing in Parallel Computers: Theory and Practice serves as an excellent reference source and may be used as a text for advanced courses on the subject.
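A nearest-neighbor iteration of the kind the book analyzes can be sketched in a few lines. In this illustrative example (the ring topology and diffusion parameter are assumptions, not the book's specific methods), each processor repeatedly exchanges a fraction of its load imbalance with its two direct neighbors, and the loads converge to the global average:

```python
# Nearest-neighbor diffusion load balancing on a ring (illustrative sketch).
# Each step, every processor adjusts its load by a fraction of the imbalance
# with each direct neighbor; total load is conserved and the loads converge
# toward the global average.

def diffuse(load, alpha=0.25):
    n = len(load)
    new = load[:]
    for i in range(n):
        for j in ((i - 1) % n, (i + 1) % n):   # direct neighbors only
            new[i] += alpha * (load[j] - load[i])
    return new

load = [16.0, 0.0, 0.0, 0.0]
for step in range(20):
    load = diffuse(load)
print([round(x, 2) for x in load])   # all entries approach the average 4.0
```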
JR is an extension of the Java programming language with additional concurrency mechanisms based on those in the SR (Synchronizing Resources) programming language. The JR implementation executes on UNIX-based systems (Linux, Mac OS X, and Solaris) and Windows-based systems. It is available free from the JR webpage. This book describes the JR programming language and illustrates how it can be used to write concurrent programs for a variety of applications. This text presents numerous small and large example programs. The source code for all programming examples and the given parts of all programming exercises are available on the JR webpage. Dr. Ronald A. Olsson and Dr. Aaron W. Keen, the authors of this text, are the designers and implementors of JR.
Anyone who can interpret decision diagrams using the spectral approach can advance both the utility and understanding of classical DD techniques. This approach also provides a framework for developing advanced solutions for digital design and a host of other applications. Scientists, computer science and engineering professionals, and researchers with an interest in the spectral methods of representing discrete functions, as well as the foundations of logic design, will find the book a clearly explained, well-organized, and essential resource.
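To make the spectral approach concrete: the Walsh spectrum of a Boolean function can be computed from its truth table with a fast butterfly transform. This generic sketch illustrates the technique itself rather than any particular construction from the book:

```python
# Fast Walsh-Hadamard transform of a Boolean function's truth table
# (generic spectral-methods illustration).

def walsh_spectrum(truth_table):
    """In-place butterfly; input length must be a power of two.
    Uses the +/-1 encoding of outputs: 0 -> +1, 1 -> -1."""
    s = [1 - 2 * v for v in truth_table]
    h = 1
    while h < len(s):
        for i in range(0, len(s), 2 * h):
            for j in range(i, i + h):
                s[j], s[j + h] = s[j] + s[j + h], s[j] - s[j + h]
        h *= 2
    return s

# XOR of two variables: truth table for (x1, x0) = 00, 01, 10, 11.
print(walsh_spectrum([0, 1, 1, 0]))   # [0, 0, 0, 4]: all weight on x1 XOR x0
```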
Formal Methods for Open Object-Based Distributed Systems V brings together research in three important and related fields: formal methods; distributed systems; and object-based technology. Such a convergence is representative of recent advances in the field of distributed systems, and provides links between several scientific and technological communities. The topics covered in this volume range in subject from UML to object-based languages, calculi and security, and in approach from specification to case studies and verification. This volume comprises the proceedings of the Fifth International Conference on Formal Methods for Open Object-Based Distributed Systems (FMOODS 2002), which was sponsored by the International Federation for Information Processing (IFIP) and held in Enschede, The Netherlands, in March 2002.
You may like...
* Data Science and Big Data: An… (Witold Pedrycz, Shyi-Ming Chen), Hardcover, R5,000 (Discovery Miles 50 000)
* Clinical Videoconferencing in Telehealth… (Peter W. Tuerk, Peter Shore), Hardcover, R4,968 (Discovery Miles 49 680)
* Translational Recurrences - From… (Norbert Marwan, Michael Riley, …), Hardcover, R3,545 (Discovery Miles 35 450)
* Research Anthology on Implementing… (Information R Management Association), Hardcover, R17,073 (Discovery Miles 170 730)
* Bayesian Nonparametric Data Analysis (Peter Muller, Fernando Andres Quintana, …), Hardcover, R3,211 (Discovery Miles 32 110)
* Temporal Information Systems in Medicine (Carlo Combi, Elpida Keravnou-Papailiou, …), Hardcover, R1,752 (Discovery Miles 17 520)