This book describes research performed in the context of trust/distrust propagation and aggregation, and their use in recommender systems. This is a hot research topic with important implications for various application areas. The main innovative contributions of the work are: a new bilattice-based model for trust and distrust, allowing for ignorance and inconsistency; proposals for various propagation and aggregation operators, including an analysis of their mathematical properties; an evaluation of these operators on real data, including a discussion of the data sets and their characteristics; a novel approach for identifying controversial items in a recommender system; an analysis of the utility of including distrust in recommender systems; various approaches for trust-based recommendations (among others, based on collaborative filtering), an in-depth experimental analysis, and a proposal for a hybrid approach; and an analysis of various user types in recommender systems to optimize bootstrapping of cold-start users.
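As a rough, hedged illustration of what propagation and aggregation of trust scores can look like, the Python sketch below multiplies trust along chains of users and averages over paths; the operators, names and values are hypothetical and are not the bilattice-based model developed in the book.

```python
# Illustrative sketch of trust propagation and aggregation in a tiny trust
# network. This is NOT the bilattice-based model from the book; it only shows
# the generic idea of propagating trust along paths and aggregating the
# results. All names and numeric choices here are hypothetical.

# Direct trust scores in [0, 1]: trust[a][b] is how much a trusts b.
trust = {
    "alice": {"bob": 0.9, "carol": 0.6},
    "bob":   {"dave": 0.8},
    "carol": {"dave": 0.4},
}

def propagate(path):
    """Propagate trust along a chain of users by multiplying edge scores."""
    score = 1.0
    for src, dst in zip(path, path[1:]):
        score *= trust[src][dst]
    return score

def aggregate(scores):
    """Aggregate several propagated scores by simple averaging."""
    return sum(scores) / len(scores) if scores else 0.0

# Alice has no direct opinion of Dave; estimate it from the two indirect paths.
paths = [["alice", "bob", "dave"], ["alice", "carol", "dave"]]
estimate = aggregate([propagate(p) for p in paths])
print(f"alice -> dave estimated trust: {estimate:.2f}")  # (0.72 + 0.24) / 2 = 0.48
```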
This book will enable the reader to very quickly begin programming in assembly language. Through this hands-on programming, readers will also learn more about the computer architecture of the Intel 32-bit processor, as well as the relationship between high-level and low-level languages. Topics: presents an overview of assembly language, and an introduction to general purpose registers; illustrates the key concepts of each chapter with complete programs, chapter summaries, and exercises; covers input/output, basic arithmetic instructions, selection structures, and iteration structures; introduces logic, shift, arithmetic shift, rotate, and stack instructions; discusses procedures and macros, and examines arrays and strings; investigates machine language from a discovery perspective. This textbook is an ideal introduction to programming in assembly language for undergraduate students, and a concise guide for professionals wishing to learn how to write logically correct programs in a minimal amount of time.
This is the first book to focus on designing run-time reconfigurable systems on FPGAs, in order to gain resource and power efficiency, as well as to improve speed. Case studies in partial reconfiguration guide readers through the FPGA jungle, straight toward a working system. The discussion of partial reconfiguration is comprehensive and practical, with models introduced together with methods to implement efficiently the corresponding systems. Coverage includes concepts for partial module integration and corresponding communication architectures, floorplanning of the on-FPGA resources, physical implementation aspects starting from constraining primitive placement and routing all the way down to the bitstream required to configure the FPGA, and verification of reconfigurable systems.
The International Workshop on "The Use of Supercomputers in Theoretical Science" took place on January 24 and 25, 1991, at the University of Antwerp (UIA), Antwerpen, Belgium. It was the sixth in a series of workshops, the first of which took place in 1984. The principal aim of these workshops is to present the state of the art in scientific large-scale and high-speed computation. Computational science has developed into a third methodology, now equally important as its theoretical and experimental companions. Gradually, academic researchers acquired access to a variety of supercomputers, and as a consequence computational science has become a major tool for their work. It is a pleasure to thank the Belgian National Science Foundation (NFWO-FNRS) and the Ministry of Scientific Affairs for sponsoring the workshop. It was organized both in the framework of the Third Cycle "Vectorization, Parallel Processing and Supercomputers" and the "Governmental Program in Information Technology." We also very much would like to thank the University of Antwerp (Universitaire Instelling Antwerpen - UIA) for financial and material support. Special thanks are due to Mrs. H. Evans for the typing and editing of the manuscripts and for the preparation of the author and subject indexes. J.T. Devreese and P.E. Van Camp, University of Antwerp, July 1991.
Distributed applications are a necessity in most central application sectors of the contemporary information society, including e-commerce, e-banking, e-learning, e-health, telecommunication and transportation. This results from the tremendous growth of the role that the Internet plays in business, administration and our everyday activities. This trend is going to be expanded even further in the context of advances in broadband wireless communication. New Developments in Distributed Applications and Interoperable Systems focuses on the techniques available or under development with the goal to ease the burden of constructing reliable and maintainable interoperable information systems providing services in the global communicating environment. The topics covered in this book include: * Context-aware applications; * Integration and interoperability of distributed systems; * Software architectures and services for open distributed systems; * Management, security and quality of service issues in distributed systems; * Software agents and mobility; * Internet and other related problem areas. The book contains the proceedings of the Third International Working Conference on Distributed Applications and Interoperable Systems (DAIS'2001), which was held in September 2001 in Krakow, Poland, and sponsored by the International Federation for Information Processing (IFIP). The conference program presents the state of the art in research concerning distributed and interoperable systems. This is a topical research area where much activity is currently in progress. Interesting new aspects and innovative contributions are still arising regularly. The DAIS series of conferences is one of the main international forums where these important findings are reported.
Both the algorithms and the software and hardware of automatic computers have gone through a rapid development in the past 35 years. The dominant factor in this development was the advance in computer technology. Computer parameters were systematically improved through electron tubes, transistors and integrated circuits of ever-increasing integration density, which also influenced the development of new algorithms and programming methods. Some years ago the situation in computer development was that no additional enhancement of performance could be achieved by increasing the speed of the logical elements, due to the physical barrier of the maximum transfer speed of electric signals. A further enhancement of computer performance has been achieved by parallelism, which makes it possible, by a suitable organization of n processors, to obtain a performance increase of up to n times. Research into parallel computations has been carried out for several years in many countries and many results of fundamental importance have been obtained. Many parallel computers have been designed and their algorithmic and programming systems built. Such computers include ILLIAC IV, DAP, STARAN, OMEN, STAR-100, TEXAS INSTRUMENTS ASC, CRAY-1, C.mmp, Cm*, CLIP-3, and PEPE. This trend is supported by the fact that: a) many algorithms and programs are highly parallel in their structure, b) the new LSI and VLSI technologies have allowed processors to be combined into large parallel structures, c) greater and greater demands for speed and reliability of computers are made.
The first Annual Working Conference of WG 11.4 of the International Federation for Information Processing (IFIP) focuses on various state-of-the-art concepts in the field of Network and Distributed Systems Security. Our society is rapidly evolving and irreversibly set on a course governed by electronic interactions. We have seen the birth of e-mail in the early seventies, and are now facing new challenging applications such as e-commerce, e-government, and so on. The more our society relies on electronic forms of communication, the more the security of these communication networks is essential for its well-functioning. As a consequence, research on methods and techniques to improve network security is of paramount importance. This Working Conference brings together researchers and practitioners of various disciplines, organisations and countries, to discuss the latest developments in security protocols, secure software engineering, mobile agent security, e-commerce security and security for distributed computing. We are also pleased to have attracted two international speakers to present two case studies, one dealing with Belgium's intention to replace the identity card of its citizens by an electronic version, and the other discussing the implications of security certification in a multinational corporation. This Working Conference should also be considered as the kick-off activity of WG 11.4, the aims of which can be summarized as follows: to promote research on technical measures for securing computer networks, including both hardware- and software-based techniques; to promote dissemination of research results in the field of network security in real-life networks in industry, academia and administrative institutions; and to promote education in the application of security techniques, and general awareness about security problems in the broad field of information technology. Researchers and practitioners who want to get involved in this Working Group are kindly requested to contact the chairman. More information on the workings of WG 11.4 is available from the official IFIP website: http://www.ifip.at.org/. Finally, we wish to express our gratitude to all those who have contributed to this conference in one way or another. We are grateful to the international referee board who reviewed all the papers, and to the authors and invited speakers, whose contributions were essential to the success of the conference. We would also like to thank the participants, whose presence and interest, together with the changing imperatives of society, will prove a driving force for future conferences to come.
Dynamic Reconfiguration: Architectures and Algorithms offers a comprehensive treatment of dynamically reconfigurable computer architectures and algorithms for them. The coverage is broad starting from fundamental algorithmic techniques, ranging across algorithms for a wide array of problems and applications, to simulations between models. The presentation employs a single reconfigurable model (the reconfigurable mesh) for most algorithms, to enable the reader to distill key ideas without the cumbersome details of a myriad of models. In addition to algorithms, the book discusses topics that provide a better understanding of dynamic reconfiguration such as scalability and computational power, and more recent advances such as optical models, run-time reconfiguration (on FPGA and related platforms), and implementing dynamic reconfiguration. The book, featuring many examples and a large set of exercises, is an excellent textbook or reference for a graduate course. It is also a useful reference to researchers and system developers in the area.
Synthesis and Optimization of DSP Algorithms describes approaches taken to synthesising structural hardware descriptions of digital circuits from high-level descriptions of Digital Signal Processing (DSP) algorithms. The book contains: -A tutorial on the subjects of digital design and architectural synthesis, intended for DSP engineers, -A tutorial on the subject of DSP, intended for digital designers, -A discussion of techniques for estimating the peak values likely to occur in a DSP system, thus enabling an appropriate signal scaling. Analytic techniques, simulation techniques, and hybrids are discussed. The applicability of different analytic approaches to different types of DSP design is covered, -The development of techniques to optimise the precision requirements of a DSP algorithm, aiming for efficient implementation in a custom parallel processor. The idea is to trade-off numerical accuracy for area or power-consumption advantages. Again, both analytic and simulation techniques for estimating numerical accuracy are described and contrasted. Optimum and heuristic approaches to precision optimisation are discussed, -A discussion of the importance of the scheduling, allocation, and binding problems, and development of techniques to automate these processes with reference to a precision-optimized algorithm, -Future perspectives for synthesis and optimization of DSP algorithms.
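As a hedged illustration of the accuracy-versus-cost trade-off mentioned above, the following Python sketch estimates quantisation error by simulation for several fixed-point word lengths; it is a hypothetical example, not the analytic estimation techniques developed in the book.

```python
import math

# Simulation-based sketch of the precision/accuracy trade-off: quantize a
# test signal to various fractional word lengths and measure the error.
# Fewer bits mean cheaper hardware but larger error. This is a hypothetical
# illustration, not the book's analytic estimators.

def quantize(x, frac_bits):
    """Round x to a fixed-point value with the given number of fractional bits."""
    scale = 1 << frac_bits
    return round(x * scale) / scale

signal = [math.sin(2 * math.pi * n / 64) for n in range(256)]

for frac_bits in (4, 8, 12, 16):
    err = max(abs(s - quantize(s, frac_bits)) for s in signal)
    print(f"{frac_bits:2d} fractional bits -> max quantisation error {err:.2e}")
```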
A new advanced textbook/reference providing a comprehensive survey of hardware and software architectural principles and methods of computer systems organization and design. The book is suitable for a first course in computer organization. The style is similar to that of the author's book on assembly language in that it strongly supports self-study by students. This organization facilitates compressed presentation of material. Emphasis is also placed on relating concepts to practical designs and chips. Topics: material presentation suitable for self-study; concepts related to practical designs and implementations; extensive examples and figures; details provided on several digital logic simulation packages; free MASM download instructions provided; and end-of-chapter exercises.
Regular Nanofabrics in Emerging Technologies gives a deep insight into both fabrication and design aspects of emerging semiconductor technologies that represent potential candidates for the post-CMOS era. Its approach is unique in cutting across different fields, offering a synergetic view to communities ranging from technologists to circuit designers and computer scientists. The book presents two technologies as potential candidates for future semiconductor devices and systems, and it shows how fabrication issues can be addressed at the design level and vice versa. Readers, whether approaching the book for academic or research purposes, will find novel material that is explained carefully for both experts and non-initiated readers. Regular Nanofabrics in Emerging Technologies is a survey of post-CMOS technologies. It explains processing, circuit and system level design for people with various backgrounds.
This book brings together a selection of the best papers from the thirteenth edition of the Forum on specification and Design Languages Conference (FDL), which was held in Southampton, UK in September 2010. FDL is a well established international forum devoted to dissemination of research results, practical experiences and new ideas in the application of specification, design and verification languages to the design, modelling and verification of integrated circuits, complex hardware/software embedded systems, and mixed-technology systems.
Embedded systems have an increasing importance in our everyday lives. The growing complexity of embedded systems and the emerging trend to interconnections between them lead to new challenges. Intelligent solutions are necessary to overcome these challenges and to provide reliable and secure systems to the customer under a strict time and financial budget. Solutions on Embedded Systems documents results of several innovative approaches that provide intelligent solutions in embedded systems. The objective is to present mature approaches, to provide detailed information on the implementation and to discuss the results obtained.
Foundations of Dependable Computing: Paradigms for Dependable Applications presents a variety of specific approaches to achieving dependability at the application level. Driven by the higher-level fault models of Models and Frameworks for Dependable Systems, and built on the lower-level abstractions implemented in a third companion book subtitled System Implementation, these approaches demonstrate how dependability may be tuned to the requirements of an application, the fault environment, and the characteristics of the target platform. Three classes of paradigms are considered: protocol-based paradigms for distributed applications, algorithm-based paradigms for parallel applications, and approaches to exploiting application semantics in embedded real-time control systems. The companion volume subtitled Models and Frameworks for Dependable Systems presents two comprehensive frameworks for reasoning about system dependability, thereby establishing a context for understanding the roles played by specific approaches presented in this book's two companion volumes. It then explores the range of models and analysis methods necessary to design, validate and analyze dependable systems. Another companion book (published by Kluwer), subtitled System Implementation, explores the system infrastructure needed to support the various paradigms of Paradigms for Dependable Applications. Approaches to implementing support mechanisms and to incorporating additional appropriate levels of fault detection and fault tolerance at the processor, network, and operating system level are presented. A primary concern at these levels is balancing cost and performance against coverage and overall dependability. As these chapters demonstrate, low-overhead, practical solutions are attainable and not necessarily incompatible with performance considerations. The section on innovative compiler support, in particular, demonstrates how the benefits of application specificity may be obtained while reducing hardware cost and run-time overhead.
Formal Methods for Protocol Engineering and Distributed Systems addresses formal description techniques (FDTs) applicable to distributed systems and communication protocols. It aims to present the state of the art in theory, application, tools and industrialization of FDTs. Among the important features presented are: FDT-based system and protocol engineering; FDT application to distributed systems; protocol engineering; practical experience and case studies. Formal Methods for Protocol Engineering and Distributed Systems contains the proceedings of the Joint International Conference on Formal Description Techniques for Distributed Systems and Communication Protocols and Protocol Specification, Testing, and Verification, which was sponsored by the International Federation for Information Processing (IFIP) and was held in Beijing, China, in October 1999. This volume is suitable as a secondary text for a graduate-level course on Distributed Systems or Communications, and as a reference for researchers and industry practitioners.
Transformational programming and parallel computation are two emerging fields that may ultimately depend on each other for success. Perhaps because ad hoc programming on sequential machines is so straightforward, sequential programming methodology has had little impact outside the academic community, and transformational methodology has had little impact at all. However, because ad hoc programming for parallel machines is so hard, and because progress in software construction has lagged behind architectural advances for such machines, there is a much greater need to develop parallel programming and transformational methodologies. Parallel Algorithm Derivation and Program Transformation stimulates the investigation of formal ways to overcome problems of parallel computation, with respect to both software development and algorithm design. It represents perspectives from two different communities: transformational programming and parallel algorithm design, to discuss programming, transformational, and compiler methodologies for parallel architectures, and algorithmic paradigms, techniques, and tools for parallel machine models. Parallel Algorithm Derivation and Program Transformation is an excellent reference for graduate students and researchers in parallel programming and transformational methodology. Each chapter contains a few initial sections in the style of a first-year, graduate textbook with many illustrative examples. The book may also be used as the text for a graduate seminar course or as a reference book for courses in software engineering, parallel programming or formal methods in program development.
Given the widespread use of real-time multitasking systems, there are tremendous optimization opportunities if reconfigurable computing can be effectively incorporated while maintaining the performance and other design constraints of typical applications. The focus of this book is to describe the dynamic reconfiguration techniques that can be safely used in real-time systems. The book provides comprehensive approaches that consider the synergistic effects of computation, communication and storage together to significantly improve overall performance, power, energy and temperature.
The last decade has seen tremendous growth in usage of the World Wide Web. Web caching is a technology aimed at reducing the transmission of redundant network traffic and improving access to the Web. The key idea in Web caching is to cache frequently-accessed content so that it may be used profitably later. This leads to cost savings, reduction in network traffic, improved access and better content availability. Web Caching and Its Applications gives the reader an understanding of the latest developments in Web caching research. Topics covered include architectural aspects, aspects requiring coordination among caches, aspects related to network traffic, techniques that complement caching, practical aspects, and aspects related to performance. While Web Caching and Its Applications is designed for a professional audience, students will appreciate the exercises for applying the knowledge to solving practical problems related to Web caching and Internet performance. The book includes an exhaustive list of references for further study.
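A minimal sketch of the core caching idea, assuming a simple least-recently-used eviction policy (one common choice, not necessarily the one the book recommends), might look like this:

```python
from collections import OrderedDict

# Minimal sketch of the core Web-caching idea: keep frequently accessed
# content nearby and serve repeat requests from the cache. The eviction
# policy here (least recently used) is just one common choice; the book
# surveys many architectures and policies beyond this toy example.

class LRUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = OrderedDict()             # url -> cached response body

    def get(self, url, fetch):
        """Return the cached body for url, fetching (and caching) on a miss."""
        if url in self.entries:
            self.entries.move_to_end(url)        # cache hit: mark as recently used
            return self.entries[url]
        body = fetch(url)                        # cache miss: fetch from origin
        self.entries[url] = body
        if len(self.entries) > self.capacity:
            self.entries.popitem(last=False)     # evict least recently used entry
        return body

cache = LRUCache(capacity=2)
fetch = lambda url: f"<contents of {url}>"       # stand-in for an origin fetch
for url in ["/index.html", "/news", "/index.html", "/about", "/news"]:
    cache.get(url, fetch)
print(list(cache.entries))                       # ['/about', '/news']
```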
Load Balancing in Parallel Computers: Theory and Practice is about the essential software technique of load balancing in distributed-memory message-passing parallel computers, also called multicomputers. Each processor has its own address space and has to communicate with other processors by message passing. In general, a direct, point-to-point interconnection network is used for the communications. Many commercial parallel computers are of this class, including the Intel Paragon, the Thinking Machines CM-5, and the IBM SP2. Load Balancing in Parallel Computers: Theory and Practice presents a comprehensive treatment of the subject using rigorous mathematical analyses and practical implementations. The focus is on nearest-neighbor load balancing methods, in which every processor at every step is restricted to balancing its workload with its direct neighbours only. Nearest-neighbor methods are iterative in nature, because a global balanced state can be reached through processors' successive local operations. Since nearest-neighbor methods have a relatively relaxed requirement for the spread of local load information across the system, they are flexible in allowing one to control the balancing quality, effective for preserving communication locality, and easily scaled in parallel computers with a direct communication network. Load Balancing in Parallel Computers: Theory and Practice serves as an excellent reference source and may be used as a text for advanced courses on the subject.
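As a hedged illustration of the nearest-neighbor idea, the toy Python sketch below runs a diffusion-style iteration on a ring of processors, where each processor repeatedly exchanges a fraction of its load imbalance with its two direct neighbours; the topology and parameters are hypothetical, and the code is not an implementation of the specific methods analysed in the book.

```python
# Toy sketch of nearest-neighbour (diffusion) load balancing on a ring of
# processors. At each step every processor adjusts its load only against its
# two direct neighbours; repeated local steps drive the system toward a
# globally balanced state. The ring topology, diffusion factor and step count
# are hypothetical choices made for illustration only.

def diffuse(load, alpha=0.25, steps=50):
    n = len(load)
    for _ in range(steps):
        new = load[:]
        for i in range(n):
            left, right = load[(i - 1) % n], load[(i + 1) % n]
            # Move a fraction alpha of each pairwise imbalance toward processor i.
            new[i] = load[i] + alpha * (left - load[i]) + alpha * (right - load[i])
        load = new
    return load

initial = [40.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]   # all work starts on one processor
balanced = diffuse(initial)
print([round(x, 2) for x in balanced])                # approaches 5.0 everywhere
```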
JR is an extension of the Java programming language with additional concurrency mechanisms based on those in the SR (Synchronizing Resources) programming language. The JR implementation executes on UNIX-based systems (Linux, Mac OS X, and Solaris) and Windows-based systems. It is available free from the JR webpage. This book describes the JR programming language and illustrates how it can be used to write concurrent programs for a variety of applications. This text presents numerous small and large example programs. The source code for all programming examples and the given parts of all programming exercises are available on the JR webpage. Dr. Ronald A. Olsson and Dr. Aaron W. Keen, the authors of this text, are the designers and implementors of JR.
Formal Methods for Open Object-Based Distributed Systems V brings together research in three important and related fields: * Formal methods; * Distributed systems; * Object-based technology. Such a convergence is representative of recent advances in the field of distributed systems, and provides links between several scientific and technological communities. The wide scope of topics covered in this volume range in subject from UML to object-based languages and calculi and security, and in approach from specification to case studies and verification. This volume comprises the proceedings of the Fifth International Conference on Formal Methods for Open Object-Based Distributed Systems (FMOODS 2002), which was sponsored by the International Federation for Information Processing (IFIP) and held in Enschede, The Netherlands in March 2002.
Active networking is an exciting new paradigm in digital networking that has the potential to revolutionize the manner in which communication takes place. It is an emerging technology, one in which new ideas are constantly being formulated and new topics of research are springing up even as this book is being written. This technology is very likely to appeal to a broad spectrum of users from academia and industry. Therefore, this book was written in a way that enables all these groups to understand the impact of active networking in their sphere of interest. Information services managers, network administrators, and e-commerce developers would like to know the potential benefits of the new technology to their businesses, networks, and applications. The book introduces the basic active networking paradigm and its potential impacts on the future of information handling in general and on communications in particular. This is useful for forward-looking businesses that wish to actively participate in the development of active networks and ensure a head start in the integration of the technology in their future products, be they applications or networks. Areas in which active networking is likely to make significant impact are identified, and the reader is pointed to any related ongoing research efforts in the area. The book also provides a deeper insight into the active networking model for students and researchers, who seek challenging topics that define or extend frontiers of the technology. It describes basic components of the model, explains some of the terms used by the active networking community, and provides the reader with taxonomy of the research being conducted at the time this book was written. Current efforts are classified based on typical research areas such as mobility, security, and management. The intent is to introduce the serious reader to the background regarding some of the models adopted by the community, to outline outstanding issues concerning active networking, and to provide a snapshot of the fast-changing landscape in active networking research. Management is a very important issue in active networks because of its open nature. The latter half of the book explains the architectural concepts of a model for managing active networks and the motivation for a reference model that addresses limitations of the current network management framework by leveraging the powerful features of active networking to develop an integrated framework. It also describes a novel application enabled by active network technology called the Active Virtual Network Management Prediction (AVNMP) algorithm. AVNMP is a pro-active management system; in other words, it provides the ability to solve a potential problem before it impacts the system by modeling network devices within the network itself and running that model ahead of real time.
Dependence Analysis may be considered to be the second edition of the author's 1988 book, Dependence Analysis for Supercomputing. It is, however, a completely new work that subsumes the material of the 1988 publication. This book is the third volume in the series Loop Transformations for Restructuring Compilers. This series has been designed to provide a complete mathematical theory of transformations that can be used to automatically change a sequential program containing FORTRAN-like do loops into an equivalent parallel form. In Dependence Analysis, the author extends the model to a program consisting of do loops and assignment statements, where the loops need not be sequentially nested and are allowed to have arbitrary strides. In the context of such a program, the author studies, in detail, dependence between statements of the program caused by program variables that are elements of arrays. Dependence Analysis is directed toward graduate and undergraduate students, and professional writers of restructuring compilers. The prerequisite for the book consists of some knowledge of programming languages, and familiarity with calculus and graph theory. No knowledge of linear programming is required.
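To give a concrete flavour of the kind of question dependence analysis answers, the sketch below applies the classical GCD test to two array references inside a single do loop; this standard coarse test is shown only as an illustration and is not the general theory developed in the book.

```python
from math import gcd

# Classical GCD dependence test (a standard coarse test, shown here only to
# illustrate the kind of question dependence analysis answers). In a loop
#     do i = 1, N
#         A[a*i + b] = ...
#         ...        = A[c*i + d]
# a dependence between the two references requires integer iterations i1, i2
# with a*i1 + b = c*i2 + d, i.e. a*i1 - c*i2 = d - b, which can have integer
# solutions only if gcd(a, c) divides (d - b).

def gcd_test(a, b, c, d):
    """Return False if the GCD test proves the two references independent."""
    g = gcd(a, c)
    return (d - b) % g == 0   # True means a dependence is still possible

print(gcd_test(2, 0, 2, 1))   # A[2i] vs A[2i+1]: gcd 2 does not divide 1 -> False
print(gcd_test(1, 0, 1, 3))   # A[i]  vs A[i+3]:  dependence possible     -> True
```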
Automatic transformation of a sequential program into a parallel form is a subject that presents a great intellectual challenge and promises great practical rewards. There is a tremendous investment in existing sequential programs, and scientists and engineers continue to write their application programs in sequential languages (primarily in Fortran), but the demand for increasing speed is constant. The job of a restructuring compiler is to discover the dependence structure of a given program and transform the program in a way that is consistent with both that dependence structure and the characteristics of the given machine. Much attention in this field of research has been focused on the Fortran do loop. This is where one expects to find major chunks of computation that need to be performed repeatedly for different values of the index variable. Many loop transformations have been designed over the years, and several of them can be found in any parallelizing compiler currently in use in industry or at a university research facility. Loop Transformations for Restructuring Compilers: The Foundations provides a rigorous theory of loop transformations. The transformations are developed in a consistent mathematical framework using objects like directed graphs, matrices and linear equations. The algorithms that implement the transformations can then be precisely described in terms of certain abstract mathematical algorithms. The book provides the general mathematical background needed for loop transformations (including those basic mathematical algorithms), discusses data dependence, and introduces the major transformations. The next volume will build a detailed theory of loop transformations based on the material developed here. Loop Transformations for Restructuring Compilers: The Foundations presents a theory of loop transformations that is rigorous and yet reader-friendly.
In Symbolic Analysis for Parallelizing Compilers the author presents an excellent demonstration of the effectiveness of symbolic analysis in tackling important optimization problems, some of which inhibit loop parallelization. The framework that Haghighat presents has proved extremely successful in induction and wraparound variable analysis, strength reduction, dead code elimination and symbolic constant propagation. The approach can be applied to any program transformation or optimization problem that uses properties and value ranges of program names. Symbolic analysis can be used on any transformational system or optimization problem that relies on compile-time information about program variables. This covers the majority of, if not all, optimization and parallelization techniques. The book makes a compelling case for the potential of symbolic analysis, applying it for the first time - and with remarkable results - to a number of classical optimization problems: loop scheduling, static timing or size analysis, and dependence analysis. It demonstrates how symbolic analysis can solve these problems faster and more accurately than existing hybrid techniques.
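As a small, hedged illustration of one optimization mentioned above, the sketch below shows strength reduction by hand: an induction expression computed with a multiplication on every iteration is replaced by an additive update. It is a hand-written example, not output of Haghighat's framework.

```python
# Tiny before/after sketch of strength reduction. A compiler that recognises
# `i * 4` as an induction expression can replace the repeated multiplication
# by an additive update carried across iterations. Function names are
# hypothetical and chosen only for illustration.

def address_calc_before(n):
    offsets = []
    for i in range(n):
        offsets.append(i * 4)          # multiplication on every iteration
    return offsets

def address_calc_after(n):
    offsets = []
    addr = 0
    for _ in range(n):
        offsets.append(addr)           # strength-reduced: addition only
        addr += 4
    return offsets

assert address_calc_before(10) == address_calc_after(10)
```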
You may like...
The System Designer's Guide to VHDL-AMS… by Peter J Ashenden, Gregory D. Peterson, … (Paperback, R2,281, Discovery Miles 22 810)
Advances in Delay-Tolerant Networks… by Joel J. P. C. Rodrigues (Paperback, R4,669, Discovery Miles 46 690)
Grammatical and Syntactical Approaches… by Juhyun Lee, Michael J. Ostwald (Hardcover, R5,315, Discovery Miles 53 150)
Best Practices and New Perspectives in… by Patricia Ordonez De Pablos, Robert Tennyson (Hardcover, R4,729, Discovery Miles 47 290)