Replication Techniques in Distributed Systems organizes and surveys the spectrum of replication protocols and systems that achieve high availability by replicating entities in failure-prone distributed computing environments. The entities discussed in this book vary from passive untyped data objects, to typed and complex objects, to processes and messages. Replication Techniques in Distributed Systems contains definitions and introductory material suitable for a beginner, theoretical foundations and algorithms, an annotated bibliography of commercial and experimental prototype systems, as well as short guides to recommended further readings in specialized subtopics. This book can be used as recommended or required reading in graduate courses in academia, as well as a handbook for designers and implementors of systems that must deal with replication issues in distributed systems.
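For readers new to the area, the quorum-intersection idea behind many of the surveyed replication protocols can be sketched in a few lines. The function and numbers below are illustrative only, not taken from the book:

```python
# Read/write quorum sketch: with N replicas, choosing quorum sizes R and W
# such that R + W > N guarantees that every read quorum intersects every
# write quorum, so a read always sees at least one up-to-date replica.
# (A minimal illustration of one classic protocol family; real protocols
# also handle versioning, failures, and recovery.)

def quorums_overlap(n: int, r: int, w: int) -> bool:
    """True if any R-sized read set must intersect any W-sized write set."""
    return r + w > n

print(quorums_overlap(n=5, r=3, w=3))  # True: majority quorums always overlap
print(quorums_overlap(n=5, r=2, w=3))  # False: a read may miss the last write
```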
This book constitutes the refereed proceedings of the International Conference on Multicore Software Engineering, Performance, and Tools, MSEPT 2012, held in Prague in May/June 2012. The 9 revised papers, 4 of which are short papers, were carefully reviewed and selected from 24 submissions. The papers address new work on optimization of multicore software, program analysis, and automatic parallelization. They also provide new perspectives on programming models as well as on applications of multicore systems.
The object of this book is to cover most of the currently relevant areas of data communications and networks. These include: communications protocols (especially TCP/IP); networking (especially Ethernet, Fast Ethernet, FDDI and ATM); network operating systems (especially Windows NT, Novell NetWare and UNIX); communications programs (especially serial communications, parallel communications and TCP/IP); and computer hardware (especially PC hardware, serial communications and parallel communications). The book thus splits into 15 different areas: general data compression (Chapters 2 and 3); video, images and sound (Chapters 4-11); error coding and encryption (Chapters 12-17); TCP/IP, WWW, internets and intranets (Chapters 18-20 and 23); electronic mail (Chapter 21); HTML (Chapters 25 and 26); Java (Chapters 27-29); communication programs (Chapters 20, 29 and 49); network operating systems (Chapters 31-34); LANs/WANs (Chapters 35, 38-46); serial communications (Chapters 47 and 48); parallel communications (Chapters 50-52); local communications (Chapters 53-57); routing and protocols (Chapters 36 and 37); and cables and connectors (Chapters 58-60). Many handbooks and reference guides on the market contain endless tables and mathematics, or are dry to read and offer very little insight into their subject area. I have tried to make this book readable while still containing key information that can be used by professionals.
This book constitutes the proceedings of the 10th IFIP International Conference on Network and Parallel Computing, NPC 2013, held in Guiyang, China, in September 2013. The 34 papers presented in this volume were carefully reviewed and selected from 109 submissions. They are organized in topical sections named: parallel programming and algorithms; cloud resource management; parallel architectures; multi-core computing and GPU; and miscellaneous.
This text comprises the edited collection of papers presented at the NATO Advanced Study Institute which took place at Altmyunus.
This book is a result of the Tenth International Conference on Information Systems Development (ISD2001) held at Royal Holloway, University of London, United Kingdom, during September 5-7, 2001. ISD2001 carries on the fine tradition established by the first Polish-Scandinavian Seminar on Current Trends in Information Systems Development Methodologies, held in Gdansk, Poland in 1988. Through the years, this seminar evolved into an International Conference on Information Systems Development. The conference gives participants an opportunity to express ideas on the current state of the art in information systems development, and to discuss and exchange views on new methods, tools, applications as well as theory. In all, 55 papers were presented at ISD2001, organised into twelve tracks covering the following themes: Systems Analysis and Development, Modelling, Methodology, Database Systems, Collaborative Systems, Theory, Knowledge Management, Project Management, IS Education, Management Issues, E-Commerce, and Technical Issues. We would like to thank all the contributing authors for making this book possible and for their participation in ISD2001. We are grateful to our panel of paper reviewers for their help and support. We would also like to express our sincere thanks to Ceri Bowyer and Steve Brown for their unfailing support with organising ISD2001.
This book constitutes the refereed proceedings of the 14th International Conference on Passive and Active Measurement, PAM 2013, held in Hong Kong, China, in March 2013. The 24 revised full papers presented were carefully reviewed and selected from 74 submissions. The papers have been organized in the following topical sections: measurement design, experience and analysis; Internet wireless and mobility; performance measurement; protocol and application behavior; characterization of network usage; and network security and privacy. In addition, 9 poster abstracts have been included.
This book constitutes the refereed proceedings of the International Conference on Multicore Software Engineering, Performance, and Tools, MUSEPAT 2013, held in Saint Petersburg, Russia, in August 2013. The 9 revised papers were carefully reviewed and selected from 25 submissions. The accepted papers are organized into three main sessions and cover topics such as software engineering for multicore systems; specification, modeling and design; programming models, languages, compiler techniques and development tools; verification, testing, analysis, debugging and performance tuning, security testing; software maintenance and evolution; multicore software issues in scientific computing, embedded and mobile systems; energy-efficient computing as well as experience reports.
I love virtual machines (VMs) and I have done for a long time. If that makes me "sad" or an "anorak," so be it. I love them because they are so much fun, as well as being so useful. They have an element of original sin (writing assembly programs and being in control of an entire machine), while still being able to claim that one is being a respectable member of the community (being structured, modular, high-level, object-oriented, and so on). They also allow one to design machines of one's own, unencumbered by the restrictions of a particular physical processor (at least, until one starts optimising it for some processor or other). I have been building virtual machines, on and off, since 1980 or thereabouts. It has always been something of a hobby for me; it has also turned out to be a technique of great power and applicability. I hope to continue working on them, perhaps on some of the ideas outlined in the last chapter (I certainly want to do some more work with register-based VMs and concurrency). I originally wanted to write the book from a purely semantic viewpoint.
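As a taste of the subject, a stack-based VM of one's own design can be surprisingly small. The sketch below interprets an invented toy instruction set; it is not code from the book, just an illustration of the technique:

```python
# A minimal stack-based virtual machine sketch. Each instruction is a
# (opcode, operand) pair; arithmetic operates on an operand stack.
# The instruction set here is made up for demonstration purposes.

def run(program):
    """Interpret a list of (opcode, operand) pairs; return top of stack."""
    stack = []
    pc = 0
    while pc < len(program):
        op, arg = program[pc]
        pc += 1
        if op == "PUSH":
            stack.append(arg)
        elif op == "ADD":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "MUL":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
        elif op == "JZ":           # jump to arg if top of stack is zero
            if stack.pop() == 0:
                pc = arg
        elif op == "HALT":
            break
    return stack[-1] if stack else None

# Compute (2 + 3) * 4 on the toy machine.
program = [("PUSH", 2), ("PUSH", 3), ("ADD", None),
           ("PUSH", 4), ("MUL", None), ("HALT", None)]
print(run(program))  # 20
```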
This volume contains a selection of papers that focus on the state of the art in real-time scheduling and resource management. Preliminary versions of these papers were presented at a workshop on the foundations of real-time computing sponsored by the Office of Naval Research in October 1990 in Washington, D.C. A companion volume, Foundations of Real-Time Computing: Formal Specifications and Methods, complements this book by addressing many of the most advanced approaches currently being investigated in the arena of formal specification and verification of real-time systems. Together, these two texts provide a comprehensive snapshot of current insights into the process of designing and building real-time computing systems on a scientific basis. Many of the papers in this book take care to define the notion of real-time system precisely, because it is often easy to misunderstand what is meant by that term. Different communities of researchers variously use the term real-time to refer to either very fast computing, or immediate on-line data acquisition, or deadline-driven computing. This text is concerned with the very difficult problems of scheduling tasks and resource management in computer systems whose performance is inextricably fused with the achievement of deadlines. Such systems have been enabled for a rapidly increasing set of diverse end-uses by the unremitting advances in computing power per constant-dollar cost and per constant-unit-volume of space. End-use applications of deadline-driven real-time computers span a spectrum that includes transportation systems, robotics and manufacturing, aerospace and defense, industrial process control, and telecommunications.
Compilers and Operating Systems for Low Power focuses on both application-level compiler-directed energy optimization and low-power operating systems. Chapters have been written exclusively for this volume by several of the leading researchers and application developers active in the field. The first six chapters focus on low-energy operating systems, or more generally, energy-aware middleware services. The next five chapters are centered on compilation and code optimization. Finally, the last chapter takes a more general viewpoint on mobile computing. The material demonstrates the state-of-the-art work and proves that to obtain the best energy/performance characteristics, compilers, system software, and architecture must work together. The relationships between energy-aware middleware and wireless microsensors, mobile computing and other wireless applications are covered. This work will be of interest to researchers in the areas of low-power computing, embedded systems, compiler optimizations, and operating systems.
As we continue to build faster and faster computers, their performance is becoming increasingly dependent on the memory hierarchy. Both the clock speed of the machine and its throughput per clock depend heavily on the memory hierarchy. The time to complete a cache access is often the factor that determines the cycle time. The effectiveness of the hierarchy in keeping the average cost of a reference down has a major impact on how close the sustained performance is to the peak performance. Small changes in the performance of the memory hierarchy cause large changes in overall system performance. The strong growth of RISC machines, whose performance is more tightly coupled to the memory hierarchy, has created increasing demand for high performance memory systems. This trend is likely to accelerate: the improvements in main memory performance will be small compared to the improvements in processor performance. This difference will lead to an increasing gap between processor cycle time and main memory access time. This gap must be closed by improving the memory hierarchy. Computer architects have attacked this gap by designing machines with cache sizes an order of magnitude larger than those appearing five years ago. Microprocessor-based RISC systems now have caches that rival the size of those in mainframes and supercomputers.
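The sensitivity described above follows directly from the standard average memory access time (AMAT) formula. The sketch below uses illustrative parameter values, not figures from the book:

```python
# Average memory access time sketch: AMAT = hit time + miss rate x miss
# penalty, all in cycles. A small change in miss rate produces a large
# change in the average cost of a reference. (Numbers are illustrative.)

def amat(hit_time: float, miss_rate: float, miss_penalty: float) -> float:
    """Average cycles per memory reference for a single-level cache."""
    return hit_time + miss_rate * miss_penalty

# Shaving the miss rate from 5% to 2% halves the average reference cost:
good_cache = amat(hit_time=1, miss_rate=0.02, miss_penalty=100)  # 3.0 cycles
poor_cache = amat(hit_time=1, miss_rate=0.05, miss_penalty=100)  # 6.0 cycles
print(good_cache, poor_cache)
```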
The engineering, deployment and security of the future smart grid will be an enormous project requiring the consensus of many stakeholders with different views on the security and privacy requirements, not to mention methods and solutions. The fragmentation of research agendas and proposed approaches or solutions for securing the future smart grid becomes apparent when observing the results from different projects, standards, committees, etc., in different countries. The different approaches and views of the papers in this collection also attest to this fragmentation. This book contains three full-length invited papers and seven corrected and extended papers from the First International Workshop on Smart Grid Security, SmartGridSec 2012, which brought together researchers from academia and industry working on securing the future smart grid and was held in Berlin, Germany, on December 3, 2012.
This book is on dependence concepts and general methods for dependence testing. Here, dependence means data dependence and the tests are compile-time tests. We felt the time was ripe to create a solid theory of the subject, to provide the research community with a uniform conceptual framework in which things fit together nicely. How successful we have been in meeting these goals, of course, remains to be seen. We do not try to include all the minute details that are known, nor do we deal with clever tricks that all good programmers would want to use. We do try to convince the reader that there is a mathematical basis consisting of theories of bounds of linear functions and linear diophantine equations, that levels and direction vectors are concepts that arise rather naturally, that different dependence tests are really special cases of some general tests, and so on. Some mathematical maturity is needed for a good understanding of the book: mainly calculus and linear algebra. We have covered diophantine equations rather thoroughly and given a description of some matrix theory ideas that are not very widely known. A reader familiar with linear programming would quickly recognize several concepts. We have learned a great deal from the works of M. Wolfe, and K. Kennedy and R. Allen. Wolfe's Ph.D. thesis at the University of Illinois and Kennedy & Allen's paper on vectorization of Fortran programs are still very useful sources on this subject.
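As a flavor of how diophantine reasoning yields a dependence test, the classic GCD test can be sketched in a few lines. This is a simplification that ignores loop bounds and direction vectors, which such a theory treats in full:

```python
from math import gcd

# GCD dependence test sketch: for two array references A[a*i + b] and
# A[c*j + d] in a loop nest, a dependence requires an integer solution
# of the linear diophantine equation a*i - c*j = d - b, which exists
# iff gcd(a, c) divides d - b. The test is conservative: "True" means
# a dependence is possible, not certain. (Coefficients are illustrative.)

def may_depend(a: int, b: int, c: int, d: int) -> bool:
    """Conservative GCD test for A[a*i + b] vs A[c*j + d]."""
    g = gcd(a, c)
    return (d - b) % g == 0

# A[2*i] vs A[2*j + 1]: gcd(2, 2) = 2 does not divide 1 -> independent.
print(may_depend(2, 0, 2, 1))   # False
# A[4*i] vs A[2*j]: gcd(4, 2) = 2 divides 0 -> dependence possible.
print(may_depend(4, 0, 2, 0))   # True
```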
This book is the result of the 11th International Conference on Information Systems Development - Methods and Tools, Theory and Practice, held in Riga, Latvia, September 12-14, 2002. The purpose of this conference was to address issues facing academia and industry when specifying, developing, managing, reengineering and improving information systems. Recently many new concepts and approaches have emerged in the Information Systems Development (ISD) field. Various theories, methodologies, methods and tools available to system developers also created new problems, such as choosing the most effective approach for a specific task, or solving problems of advanced technology integration into information systems. This conference provides a meeting place for ISD researchers and practitioners from Eastern and Western Europe as well as from other parts of the world. Main objectives of this conference are to share scientific knowledge and interests and to establish strong professional ties among the participants. The 11th International Conference on Information Systems Development (ISD'02) continues the tradition started with the first Polish-Scandinavian Seminar on Current Trends in Information Systems Development Methodologies, held in Gdansk, Poland in 1988. Through the years this Seminar has evolved into the International Conference on Information Systems Development. ISD'02 is the first ISD conference held in Eastern Europe, namely, in Latvia, one of the three Baltic countries.
Concurrent systems abound in human experience but their fully adequate conceptualization as yet eludes our most able thinkers. The COSY (Concurrent System) notation and theory was developed in the last decade as one of a number of mathematical approaches for conceptualizing and analyzing concurrent and reactive systems. The COSY approach extends the conventional notions of grammar and automaton from formal language and automata theory to collections of "synchronized" grammars and automata, permitting system specification and analysis of "true" concurrency without reduction to non-determinism. COSY theory is developed to a great level of detail and constitutes the first uniform and self-contained presentation of all results about COSY published in the past, as well as including many new results. COSY theory is used to analyze a sufficient number of typical problems involving concurrency, synchronization and scheduling, to allow the reader to apply the techniques presented to similar problems. The COSY model is also related to many alternative models of concurrency, particularly Petri Nets, Communicating Sequential Processes and the Calculus of Communicating Systems.
Parallel Language and Compiler Research in Japan offers the international community an opportunity to learn in-depth about key Japanese research efforts in the particular software domains of parallel programming and parallelizing compilers. These are important topics that strongly bear on the effectiveness and affordability of high performance computing systems. The chapters of this book convey a comprehensive and current depiction of leading edge research efforts in Japan that focus on parallel software design, development, and optimization that could be obtained only through direct and personal interaction with the researchers themselves.
This book constitutes the refereed proceedings of the 26th International Conference on Architecture of Computing Systems, ARCS 2013, held in Prague, Czech Republic, in February 2013. The 29 papers presented were carefully reviewed and selected from 73 submissions. The topics covered are: computer architecture topics such as multi-cores, memory systems, and parallel computing; adaptive system architectures such as reconfigurable systems in hardware and software; customization and application-specific accelerators in heterogeneous architectures; organic and autonomic computing, including both theoretical and practical results on self-organization, self-configuration, self-optimization, self-healing, and self-protection techniques; and operating systems, including but not limited to scheduling, memory management, power management, RTOS, energy-awareness, and green computing.
This book constitutes the thoroughly refereed post-conference proceedings of the 6th International Workshop on Security and Trust Management, STM 2010, held in Athens, Greece, in September 2010.
PC viruses are not necessarily a major disaster, despite what is sometimes written about them. But a virus infection is at the very least a nuisance, and potentially can lead to loss of data. Quite often it is the user's panic reaction to discovering a virus infection that does more damage than the virus itself. This book demystifies PC viruses, providing clear, accurate information about this relatively new PC problem. It enables managers and PC users to formulate an appropriate response; adequate for prevention and cure, but not `over the top'. Over 100 PC viruses and variants are documented in detail. You are told how to recognise each one, what it does, how it copies itself, and how to get rid of it. Other useful and relevant technical information is also provided. Strategies for dealing with potential and actual virus outbreaks are described for business, academic and other environments, with the emphasis on sensible but not unreasonable precautions. All users of IBM PC or compatible computers - from single machines to major LANs - will find this book invaluable. All that is required is a working knowledge of DOS. Dr. Alan Solomon has been conducting primary research into PC viruses since they first appeared, and has developed the best-selling virus protection software Dr. Solomon's Anti-Virus Toolkit.
This book constitutes the refereed proceedings of the 9th International Symposium on Advanced Parallel Processing Technologies, APPT 2011, held in Shanghai, China, in September 2011. The 13 revised full papers presented were carefully reviewed and selected from 40 submissions. The papers are organized in topical sections on parallel distributed system architectures, architecture, parallel application and software, distributed and cloud computing.
This book constitutes the thoroughly refereed proceedings of the 16th International Workshop on Job Scheduling Strategies for Parallel Processing, JSSPP 2012, which was held in Shanghai, China, in May 2012. The 14 revised papers presented were carefully reviewed and selected from 24 submissions. The papers cover the following topics: parallel batch scheduling; workload analysis and modeling; resource management system software studies; and Web scheduling.
The two-volume set LNCS 6852/6853 constitutes the refereed proceedings of the 17th International Euro-Par Conference held in Bordeaux, France, in August/September 2011. The 81 revised full papers presented were carefully reviewed and selected from 271 submissions. The papers are organized in topical sections on support tools and environments; performance prediction and evaluation; scheduling and load-balancing; high-performance architectures and compilers; parallel and distributed data management; grid, cluster and cloud computing; peer-to-peer computing; distributed systems and algorithms; parallel and distributed programming; parallel numerical algorithms; multicore and manycore programming; theory and algorithms for parallel computation; high performance networks and mobile ubiquitous computing.
Real-time computing systems are vital to a wide range of applications. For example, they are used in the control of nuclear reactors and automated manufacturing facilities, in controlling and tracking air traffic, and in communication systems. In recent years, real-time systems have also grown larger and become more critical. For instance, advanced aircraft such as the space shuttle must depend heavily on computer systems [Carlow 84]. The centralized control of manufacturing facilities and assembly plants operated by robots are other examples at the heart of which lie embedded real-time systems. Military defense systems deployed in the air, on the ocean surface, land and underwater, have also been increasingly relying upon real-time systems for monitoring and operational safety purposes, and for retaliatory and containment measures. In telecommunications and in multi-media applications, real-time characteristics are essential to maintain the integrity of transmitted data, audio and video signals. Many of these systems control, monitor or perform critical operations, and must respond quickly to emergency events in a wide range of embedded applications. They are therefore required to process tasks with stringent timing requirements and must perform these tasks in a way that these timing requirements are guaranteed to be met. Real-time scheduling algorithms attempt to ensure that system timing behavior meets its specifications, but typically assume that tasks do not share logical or physical resources. Since resource-sharing cannot be eliminated, synchronization primitives must be used to ensure that resource consistency constraints are not violated.
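The utilization-based feasibility test for earliest-deadline-first (EDF) scheduling illustrates the kind of guarantee such algorithms give when tasks are independent. The task sets below are made up for illustration:

```python
# EDF schedulability sketch: a set of independent periodic tasks, each
# with worst-case execution time C and period T (deadline = period), is
# schedulable by earliest-deadline-first on one processor iff total
# utilization sum(C/T) <= 1. Note the independence assumption: this test
# says nothing about tasks that share resources, which is exactly the
# case requiring the synchronization protocols discussed above.

def edf_feasible(tasks):
    """tasks: iterable of (C, T) pairs; True if sum(C/T) <= 1."""
    return sum(c / t for c, t in tasks) <= 1.0

print(edf_feasible([(1, 4), (2, 5), (1, 10)]))  # True: utilization 0.75
print(edf_feasible([(2, 4), (3, 5), (2, 10)]))  # False: utilization 1.3
```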
Intelligent Integration of Information presents a collection of chapters bringing the science of intelligent integration forward. The focus on integration defines tasks that increase the value of information when information from multiple sources is accessed, related, and combined. This contributed volume has also been published as a special double issue of the Journal of Intelligent Information Systems (JIIS), Volume 6:2/3.