Advances in microelectronic technology have made massively parallel computing a reality and triggered an outburst of research activity in parallel processing architectures and algorithms. Distributed memory multiprocessors - parallel computers that consist of microprocessors connected in a regular topology - are increasingly being used to solve large problems in many application areas. In order to use these computers for a specific application, existing algorithms need to be restructured for the architecture and new algorithms developed. The performance of a computation on a distributed memory multiprocessor is affected by the node and communication architecture, the interconnection network topology, the I/O subsystem, and the parallel algorithm and communication protocols. Each of these parameters is a complex problem in itself, and solutions require an understanding of the interactions among them. This book is based on the papers presented at the NATO Advanced Study Institute held at Bilkent University, Turkey, in July 1991. The book is organized in five parts: Parallel computing structures and communication, Parallel numerical algorithms, Parallel programming, Fault tolerance, and Applications and algorithms.
This is an introductory book on supercomputer applications written by a researcher who works on solving scientific and engineering application problems on parallel computers. The book is intended to quickly bring researchers and graduate students working on numerical solutions of partial differential equations with various applications into the area of parallel processing. The book starts from the basic concepts of parallel processing, like speedup, efficiency and different parallel architectures, then introduces the most frequently used algorithms for solving PDEs on parallel computers, with practical examples. Finally, it discusses more advanced topics, including different scalability metrics, parallel time stepping algorithms, and the new architectures and heterogeneous computing networks that have emerged in the last few years of high performance computing. Hundreds of references are included to direct interested readers to more detailed and in-depth discussions of specific topics.
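To make the two opening metrics concrete, here is a minimal Python sketch of speedup and efficiency; the timing numbers are hypothetical placeholders, not figures from the book.

```python
# Minimal sketch of the basic parallel-performance metrics named above.
# The timings below are hypothetical, chosen only to illustrate the formulas.

def speedup(t_serial: float, t_parallel: float) -> float:
    """Speedup S = T1 / Tp: serial runtime divided by parallel runtime."""
    return t_serial / t_parallel

def efficiency(t_serial: float, t_parallel: float, p: int) -> float:
    """Efficiency E = S / p: fraction of the p processors usefully employed."""
    return speedup(t_serial, t_parallel) / p

t1, tp, p = 100.0, 14.0, 8   # hypothetical: 100 s serial, 14 s on 8 processors
print(f"speedup    = {speedup(t1, tp):.2f}")        # ~7.14
print(f"efficiency = {efficiency(t1, tp, p):.2f}")  # ~0.89
```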
This book is for researchers in computer science, mathematical logic, and philosophical logic. It shows the state of the art in current investigations of process calculi, with two major paradigms at work: linear logic and modal logic. The combination of approaches and pointers for further integration also suggests a grander vision for the field.
Computer Networks, Architecture and Applications covers many aspects of research in modern communications networks for computing purposes.
Advanced Fiber Access Networks takes a holistic view of broadband access networks, from architecture to network technologies and network economies. The book reviews the pain points and challenges that broadband service providers face (such as network construction, fiber cable efficiency, transmission challenges, and network scalability) and how these challenges are tackled by new fiber access transmission technologies, protocols and architecture innovations. Chapters cover fiber-to-the-home (FTTH) applications as well as fiber backhauls in other access networks such as 5G wireless and hybrid-fiber-coax (HFC) networks. In addition, it covers the network economy, challenges in fiber network construction and deployment, and more. Finally, the book examines scaling issues and bottlenecks in an end-to-end broadband network, from Internet backbones to inside customer homes, something rarely covered in books.
Multiprocessor Execution of Logic Programs addresses the problem of efficient implementation of logic programming languages, specifically Prolog, on multiprocessor architectures. The approaches and implementations developed attempt to take full advantage of sequential implementation technology developed for Prolog (such as the WAM) while exploiting all forms of control parallelism present in logic programs, namely, or-parallelism, independent and-parallelism and dependent and-parallelism. Coverage includes a thorough survey of parallel implementation techniques and parallel systems developed for Prolog. Multiprocessor Execution of Logic Programs is recommended for people implementing parallel logic programming systems, parallel symbolic systems, parallel AI systems, and parallel theorem proving systems. It will also be useful to people who wish to learn about the implementation of parallel logic programming systems.
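As a rough illustration of the or-parallelism mentioned above, the sketch below explores two alternative clauses of a hypothetical predicate concurrently instead of by sequential backtracking; a real system such as a WAM-based engine must also manage bindings, trailing, and scheduling, none of which is modeled here.

```python
# Hedged sketch of or-parallelism: the alternative clauses that could resolve a
# goal are explored in parallel and their solution streams merged. The clauses
# and their solutions are hypothetical; this models only the control idea.
from concurrent.futures import ThreadPoolExecutor

def clause_1(x):                 # hypothetical first clause of a predicate p/1
    return [x * 2]

def clause_2(x):                 # hypothetical second clause of p/1
    return [x + 10, x + 20]

def solve_or_parallel(goal_arg):
    with ThreadPoolExecutor() as pool:
        branches = [pool.submit(c, goal_arg) for c in (clause_1, clause_2)]
        solutions = []
        for branch in branches:  # merge solutions from every or-branch
            solutions.extend(branch.result())
    return solutions

print(solve_or_parallel(5))      # [10, 15, 25]
```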
Real-time systems are important to a large number of university laboratories and research institutes worldwide, and without the proper integration of real-time capabilities into distributed computing, these institutions simply could not function. Achieving Real-Time in Distributed Computing: From Grids to Clouds offers over 400 accounts from a wide range of specific research efforts. Major focus is given to the need for methodologies, tools, and architectures for complex distributed systems that address the practical issues of performance guarantees, timed execution, real-time management of resources, synchronized communication under various load conditions, satisfaction of QoS constraints, and the trade-offs between these aspects.
If you're grounded in the basics of Swift, Xcode, and the Cocoa framework, this book provides a structured explanation of all essential real-world iOS app components. Through deep exploration and copious code examples, you'll learn how to create views, manipulate view controllers, and add features from iOS frameworks:
- Create, arrange, draw, layer, and animate views that respond to touch
- Use view controllers to manage multiple screens of interface
- Master interface classes for scroll views, table views, text, popovers, split views, web views, and controls
- Dive into frameworks for sound, video, maps, and sensors
- Access user libraries: music, photos, contacts, and calendar
- Explore additional topics, including files, networking, and threads
- Stay up-to-date on iOS 11 innovations, such as drag and drop, autolayout changes (including the new safe area), stretchable navigation bars, table cell swipe buttons, dynamic type improvements, offline sound file rendering, image picker controller changes, new map annotation types, and more
A compact guide to knowledge management, this book makes the subject accessible without oversimplifying it. Organizational issues like strategy and culture are discussed in the context of typical knowledge management processes. The focus is always on pointing out all the issues that need to be taken into account to make knowledge management a success. The book then explores the role of information technology as an enabler of knowledge management, relating various technologies to the knowledge management processes and showing the reader what can, and what cannot, be achieved through technology. Throughout the book, references to lessons learned from past projects underline the arguments. Managers will find this book a valuable guide for implementing their own initiatives, while researchers and system designers will find plenty of ideas for future work.
This book describes scalable and near-optimal, processor-level design space exploration (DSE) methodologies. The authors present design methodologies for data storage and processing in real-time, cost-sensitive, data-dominated embedded systems. These methodologies enable readers to reduce time-to-market while satisfying system requirements for performance, area, and energy consumption, thereby minimizing the overall cost of the final design.
This IMA Volume in Mathematics and its Applications, ALGORITHMS FOR PARALLEL PROCESSING, is based on the proceedings of a workshop that was an integral part of the 1996-97 IMA program on "MATHEMATICS IN HIGH-PERFORMANCE COMPUTING." The workshop brought together algorithm developers from theory, combinatorics, and scientific computing. The topics ranged over models, linear algebra, sorting, randomization, and graph algorithms and their analysis. We thank Michael T. Heath of the University of Illinois at Urbana (Computer Science), Abhiram Ranade of the Indian Institute of Technology (Computer Science and Engineering), and Robert S. Schreiber of Hewlett Packard Laboratories for their excellent work in organizing the workshop and editing the proceedings. We also take this opportunity to thank the National Science Foundation (NSF) and the Army Research Office (ARO), whose financial support made the workshop possible. Avner Friedman, Robert Gulliver. The Workshop on Algorithms for Parallel Processing was held at the IMA September 16-20, 1996; it was the first workshop of the IMA year dedicated to the mathematics of high performance computing. The workshop organizers were Abhiram Ranade of the Indian Institute of Technology, Bombay, Michael Heath of the University of Illinois, and Robert Schreiber of Hewlett Packard Laboratories. Our idea was to bring together researchers who do innovative, exciting, parallel algorithms research on a wide range of topics, and by sharing insights, problems, tools, and methods to learn something of value from one another.
Pipelined ADCs have seen phenomenal improvements in performance over the last few years. As such, when designing a pipelined ADC, a clear understanding of the design tradeoffs and state-of-the-art techniques is required to implement today's high-performance, low-power ADCs.
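For readers new to the topic, the behavioral sketch below models an ideal 1-bit-per-stage pipeline to show the residue-amplification principle those tradeoffs revolve around; it assumes an input range of [-1, 1) and ignores the redundancy (e.g., 1.5-bit stages) and digital error correction that practical designs rely on.

```python
# Hedged behavioral model of an ideal 1-bit-per-stage pipelined ADC.
# Each stage makes a coarse decision, subtracts the corresponding DAC level,
# and amplifies the residue by 2 for the next stage.

def pipeline_adc(vin: float, n_stages: int) -> list[int]:
    bits, v = [], vin
    for _ in range(n_stages):
        bit = 1 if v >= 0.0 else 0            # sub-ADC decision
        v = 2.0 * v - (1.0 if bit else -1.0)  # residue amplification
        bits.append(bit)
    return bits

def reconstruct(bits: list[int]) -> float:
    # Each bit contributes +/- 2^-(i+1) of the full-scale range.
    return sum((1 if b else -1) * 2.0 ** -(i + 1) for i, b in enumerate(bits))

code = pipeline_adc(0.3, 8)
print(code, reconstruct(code))  # approximates 0.3 to within 2^-8
```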
During the last three decades, breakthroughs in computer technology have made a tremendous impact on optimization. In particular, parallel computing has made it possible to solve larger and computationally more difficult problems. The book covers recent developments in novel programming and algorithmic aspects of parallel computing as well as technical advances in parallel optimization. Each contribution is essentially expository in nature but scholarly in treatment. In addition, each chapter includes a collection of carefully selected problems. The first two chapters discuss theoretical models for parallel algorithm design and their complexity. The next chapter gives the perspective of the programmer practicing parallel algorithm development on real world platforms. Solving systems of linear equations efficiently is of great importance, not only because such systems arise in many scientific and engineering applications but also because algorithms for many optimization problems need to call system solvers as subroutines (chapters four and five). Chapters six through thirteen are dedicated to optimization problems and methods. They include parallel algorithms for network problems, parallel branch and bound techniques, parallel heuristics for discrete and continuous problems, decomposition methods, parallel algorithms for variational inequality problems, parallel algorithms for stochastic programming, and neural networks. Audience: Parallel Computing in Optimization is addressed not only to researchers in mathematical programming, but to all scientists in various disciplines who use optimization methods in parallel and multiprocessing environments to model and solve problems.
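As one concrete instance of the naturally parallel system solvers such chapters treat, here is a hedged NumPy sketch of Jacobi iteration, in which every component update is independent and can be distributed across processors; the matrix and right-hand side are made-up examples, and the loop is written serially for clarity.

```python
# Hedged sketch of Jacobi iteration, a classically parallel linear solver:
# every component of x can be updated independently in each sweep.
import numpy as np

A = np.array([[4.0, 1.0], [2.0, 5.0]])  # diagonally dominant, so Jacobi converges
b = np.array([1.0, 2.0])

D = np.diag(A)              # diagonal entries of A
R = A - np.diagflat(D)      # off-diagonal remainder
x = np.zeros_like(b)
for _ in range(50):
    x = (b - R @ x) / D     # all components updated independently
print(x)                    # close to np.linalg.solve(A, b) = [1/6, 1/3]
```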
This volume reports new developments in the quantum flux parametron (QFP) project. It completes a series on Josephson supercomputers that includes four earlier volumes, also published by World Scientific. QFP technology has great potential, especially in the design of computer architecture. It is regarded as being able to go beyond the horizon of current technology, and is a leading direction for the advancement of computer technology in the next decade.
Information security concerns the confidentiality, integrity, and availability of information processed by a computer system. With an emphasis on prevention, traditional information security research has focused little on the ability to survive successful attacks, which can seriously impair the integrity and availability of a system. Trusted Recovery And Defensive Information Warfare uses database trusted recovery as an example to illustrate the principles of trusted recovery in defensive information warfare. Traditional database recovery mechanisms do not address trusted recovery, except for complete rollbacks, which undo the work of benign transactions as well as malicious ones, and compensating transactions, whose utility depends on application semantics. Database trusted recovery faces a set of unique challenges. In particular, it is complicated mainly by (a) the presence of benign transactions that depend, directly or indirectly, on malicious transactions; and (b) the requirement by many mission-critical database applications that trusted recovery be done on the fly without blocking the execution of new user transactions. Trusted Recovery And Defensive Information Warfare proposes a new model and a set of innovative algorithms for database trusted recovery, including both read-write dependency based and semantics based algorithms, in both static and dynamic variants. These algorithms can typically save a great deal of work by innocent users and can satisfy a variety of attack recovery requirements of real world database applications. The book is suitable as a secondary text for a graduate level course in computer science, and as a reference for researchers and practitioners in information security.
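The toy sketch below illustrates the dependency-based idea in its simplest form: starting from the known malicious transactions, take the transitive closure over reads-from edges and undo exactly the tainted set. The log and transaction identifiers are hypothetical, and on-the-fly recovery concurrent with new transactions, which the book treats, is not modeled.

```python
# Hedged sketch of dependency-based trusted recovery: undo the malicious
# transactions plus every benign transaction that transitively read from them,
# preserving all unaffected work.

# reads_from[t] = transactions whose writes t read (hypothetical log)
reads_from = {
    "T1": set(),      # malicious
    "T2": {"T1"},     # benign but read T1's writes -> tainted
    "T3": set(),      # benign and independent -> preserved
    "T4": {"T2"},     # transitively tainted through T2
}

def affected(malicious: set[str]) -> set[str]:
    tainted = set(malicious)
    changed = True
    while changed:    # transitive closure over reads-from edges
        changed = False
        for t, deps in reads_from.items():
            if t not in tainted and deps & tainted:
                tainted.add(t)
                changed = True
    return tainted

print(sorted(affected({"T1"})))  # ['T1', 'T2', 'T4'] -- T3's work is saved
```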
Grid Computing: Achievements and Prospects, the 9th edited volume of the CoreGRID series, includes selected papers from the CoreGRID Integration Workshop, held in April 2008 in Heraklion-Crete, Greece. This event brought together representatives of the academic and industrial communities performing Grid research in Europe. The workshop was organized in the context of the CoreGRID Network of Excellence in order to provide a forum for the presentation and exchange of views on the latest developments in grid technology research. Grid Computing: Achievements and Prospects is designed for a professional audience of researchers and practitioners in industry. This volume is also suitable for graduate-level students in computer science.
Modern enterprises rely on database management systems (DBMS) to collect, store and manage corporate data, which is considered a strategic corporate resource. Recently, with the proliferation of personal computers and departmental computing, the trend has been towards the decentralization and distribution of the computing infrastructure, with autonomy and responsibility for data now residing at the departmental and workgroup level of the organization. Users want their data delivered to their desktops, allowing them to incorporate data into their personal databases, spreadsheets, word processing documents, and most importantly, into their daily tasks and activities. They want to be able to share their information while retaining control over its access and distribution. There are also pressures from corporate leaders who wish to use information technology as a strategic resource in offering specialized value-added services to customers. Database technology is being used to manage the data associated with corporate processes and activities. Increasingly, the data being managed are not simply formatted tables in relational databases, but all types of objects, including unstructured text, images, audio, and video. Thus, database management providers are being asked to extend the capabilities of DBMS to include object-relational models as well as full object-oriented database management systems.
This book is the product of Research Study Group (RSG) 13 on "Human Engineering Evaluation on the Use of Colour in Electronic Displays," of Panel 8, "Defence Applications of Human and Biomedical Sciences," of the NATO Defence Research Group. RSG 13 was chaired by Heino Widdel (Germany) and consisted of Jeffrey Grossman (United States), Jean-Pierre Menu (France), Giampaolo Noja (Italy, point of contact), David Post (United States), and Jan Walraven (Netherlands). Initially, Christopher Gibson (United Kingdom) and Sharon McFaddon (Canada) also participated. Most of these representatives served previously on the NATO program committee that produced Proceedings of a Workshop on Colour Coded vs. Monochrome Displays (edited by Christopher Gibson and published by the Royal Aircraft Establishment, Farnborough, England) in 1984. RSG 13 can be regarded as a descendant of that program committee. RSG 13 was formed in 1987 for the purpose of developing and distributing guidance regarding the use of color on electronic displays. During our first meeting, we discussed the fact that, although there is a tremendous amount of information available concerning color vision, color perception, colorimetry, and color displays, much of it relevant to display design, it is scattered across numerous texts, journals, conference proceedings, and technical reports. We decided that we could best fulfill the RSG's purpose by producing a book that consolidates and summarizes this information, emphasizing those aspects that are most applicable to display design.
This book describes hybrid fault tolerance techniques that combine software and hardware approaches. These techniques reduce overall performance degradation and increase error detection when associated with applications implemented in embedded processors. Coverage begins with an extensive discussion of the current state of the art in fault tolerance techniques. The authors then discuss the best trade-off between software-based and hardware-based techniques and introduce novel hybrid techniques. The proposed techniques increase existing fault detection rates up to 100%, while maintaining low overheads in area and application execution time.
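As a flavor of the software-based side of that trade-off, here is a hedged sketch of one classic detection technique, duplication with comparison: the computation runs twice and a mismatch signals a transient fault. This shows only the detection idea; the hybrid techniques the book proposes pair such software checks with hardware support.

```python
# Hedged sketch of software-based fault detection by duplication with
# comparison: execute the operation twice and flag any disagreement.

class FaultDetected(Exception):
    pass

def duplicated(op, *args):
    r1 = op(*args)  # primary execution
    r2 = op(*args)  # redundant execution
    if r1 != r2:    # comparison: a mismatch means an error was detected
        raise FaultDetected(f"mismatch: {r1!r} != {r2!r}")
    return r1

print(duplicated(lambda a, b: a + b, 2, 3))  # 5 (no fault injected here)
```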
This state-of-the-art survey gives a systematic presentation of recent advances in the design and validation of computer architectures. Based on advanced research ideas and approaches, and written by eminent researchers in the field, seven chapters cover the whole range from computer aided high-level design of VLSI circuits and systems to layout and testable design, including modeling and synthesis of behavior, of control, and of dataflow, cell based logic optimization, machine assisted verification, and virtual machine design. The chapters presuppose only basic familiarity with computer architecture. They are self-contained and lead the reader gently and informatively to the forefront of current research. A special feature of the book is the comprehensive range of architecture design and validation topics covered, giving the reader a clear view of the problems and of advanced techniques for their solution.
This textbook serves as an introduction to the subject of embedded systems design, using microcontrollers as core components. It develops concepts from the ground up, covering the development of embedded systems technology, architectural and organizational aspects of controllers and systems, processor models, and peripheral devices. Since microprocessor-based embedded systems tightly blend hardware and software components in a single application, the book also introduces the subjects of data representation formats, data operations, and programming styles. The practical component of the book is tailored around the architecture of a widely used
This book presents the theory behind software-implemented hardware fault tolerance, as well as the practical aspects needed to put it to work on real examples. By accurately evaluating the advantages and disadvantages of the already available approaches, the book provides a guide for developers willing to adopt software-implemented hardware fault tolerance in their applications. Moreover, the book identifies open issues for researchers seeking to improve the already available techniques.
Computer interfaces and documentation are notoriously difficult for any user, regardless of his or her level of experience. Advances in technology are not making applications more friendly. Introducing concepts from linguistics and language teaching, Language and Communication proposes a new approach to computer interface design. The book explains for the first time why the much hyped user-friendly interface is treated with such derision by the user community. The author argues that software and hardware designers should consider such fundamental language concepts as meaning, context, function, variety, and equivalence. She goes on to show how imagining an interface as a new language can be an invaluable design exercise, calling into question deeply held beliefs and assumptions about what users will or will not understand. Written for a wide range of computer scientists and professionals, and presuming no prior knowledge of language-related terminology, this volume is a key step in the on-going information revolution.
High Performance Computing Systems and Applications contains the fully refereed papers from the 13th Annual Symposium on High Performance Computing, held in Kingston, Canada, in June 1999. This book presents the latest research in HPC architectures, distributed and shared memory performance, algorithms and solvers, with special sessions on atmospheric science, computational chemistry and physics. High Performance Computing Systems and Applications is suitable as a secondary text for graduate level courses, and as a reference for researchers and practitioners in industry.