This book presents innovative ideas for scheduling in distributed computing systems. Although its models were designed for distributed systems, the same concepts apply to other types of system. The book will markedly improve the design and management of scheduling processes for industry professionals. It deals exclusively with scheduling, an aspect that finds little space in other books on distributed operating systems. Structured for a professional audience of researchers and practitioners in industry, the book is also suitable as a reference for graduate-level students.
This book studies algorithmic issues in the cooperative execution of multiple independent tasks by distributed computing agents, including in partitionable networks. It presents the most significant algorithmic solutions available today for do-all computing in distributed systems (including partitionable networks), and it is the first monograph devoted to do-all computing for distributed systems. The book is structured to meet the needs of a professional audience of researchers and practitioners in industry, and it is also suitable for graduate-level students in computer science.
Software-Implemented Hardware Fault Tolerance addresses the innovative topic of software-implemented hardware fault tolerance (SIHFT), i.e., how to deal with faults affecting the hardware by acting only (or mainly) on the software. The first SIHFT techniques were proposed and adopted several decades ago, but they have attracted renewed interest in recent years, mainly due to the need to develop low-cost safety-critical computer-based applications in fields such as automotive, biomedical, and telecommunication systems. Several new approaches for detecting, and where possible correcting, transient and permanent hardware faults have therefore been proposed recently. These approaches are innovative with respect to earlier ones in their broader applicability (often starting from the source-level code of an application) and generality, being capable of coping with many different fault types. The book presents the theory behind software-implemented hardware fault tolerance as well as the practical aspects of putting it to work on real examples. By accurately evaluating the advantages and disadvantages of the approaches already available, the book provides a guide for developers wishing to adopt software-implemented hardware fault tolerance in their applications, and it identifies open issues for researchers seeking to improve the available techniques.
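To make the SIHFT idea concrete, here is a minimal sketch of one classic software-level mechanism, time-redundant execution with majority voting, which can mask a transient hardware fault in a single run of a pure computation. The class and method names are hypothetical illustrations, not the book's actual techniques (which include, e.g., source-level code transformations).

```java
import java.util.function.IntUnaryOperator;

/**
 * Minimal sketch of a SIHFT-style mechanism: run the same pure
 * computation three times and vote on the results, masking a single
 * transient hardware fault affecting any one of the runs.
 * Hypothetical names; illustrative only.
 */
public class TimeRedundantVoter {
    static int votedApply(IntUnaryOperator f, int input) {
        int a = f.applyAsInt(input);
        int b = f.applyAsInt(input);
        int c = f.applyAsInt(input);
        if (a == b || a == c) return a;   // a agrees with a majority
        if (b == c) return b;             // the first run was the faulty one
        // No two runs agree: transient masking failed, possibly a permanent fault.
        throw new IllegalStateException("No majority among redundant runs");
    }

    public static void main(String[] args) {
        int result = votedApply(x -> x * x + 1, 7);
        System.out.println("Voted result: " + result); // prints 50
    }
}
```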
This book presents the most recent concerns and research results in industrial fault diagnosis using intelligent techniques. It focuses on computational intelligence applications to fault diagnosis, with real-world applications used in different chapters to validate the diagnosis methods. The book includes one chapter on a novel coherent distributed methodology for fault diagnosis of complex systems.
"The Testing Network" presents an integrated approach to testing based on cutting-edge methodologies, processes and tools in today's IT context. It means complex network-centric applications to be tested in heterogeneous IT infrastructures and in multiple test environments (also geographically distributed). The added-value of this book is the in-depth explanation of all processes and relevant methodologies and tools to address this complexity. Main aspects of testing are explained using TD/QC - the world-leader test platform. This up-to-date know-how is based on real-life IT experiences gained in large-scale projects of companies operating worldwide. The book is abundantly illustrated to better show all technical aspects of modern testing in a national and international context. The author has a deep expertise by designing and giving testing training in large companies using the above-mentioned tools and processes. "The Testing Network" is a unique synthesis of core test topics applied in real-life.
Stochastic discrete-event systems (SDES) capture the randomness in choices due to activity delays and the probabilities of decisions. This book delivers a comprehensive overview of modeling and quantitative evaluation of SDES. It presents an abstract model class for SDES as a pivotal unifying result and details important model classes. The book also includes nontrivial examples to explain real-world applications of SDES.
Fundamental Problems in Computing honors Professor Daniel J. Rosenkrantz, a distinguished researcher in computer science. Professor Rosenkrantz has made seminal contributions to many subareas of computer science, including formal languages and compilers, automata theory, algorithms, database systems, very large scale integrated systems, fault-tolerant computing, and discrete dynamical systems. For many years, Professor Rosenkrantz served as Editor-in-Chief of the Journal of the Association for Computing Machinery (JACM), a very prestigious archival journal in computer science. His contributions have earned him many awards, including ACM Fellowship and the ACM SIGMOD Contributions Award.
This book proposes novel memory hierarchies and software optimization techniques for the optimal utilization of memory hierarchies. It presents a wide range of optimizations, progressively increasing in the complexity of analysis and of memory hierarchies. The final chapter covers optimization techniques for applications consisting of multiple processes found in most modern embedded devices.
Protection of enterprise networks from malicious intrusions is critical to the economy and security of our nation. This article gives an overview of the techniques and challenges of security risk analysis for enterprise networks. A standard model for security analysis would enable us to answer questions such as "are we more secure than yesterday?" or "how does the security of one network configuration compare with another?". In this article, we present a methodology for quantitative security risk analysis based on the model of attack graphs and the Common Vulnerability Scoring System (CVSS). Our techniques analyze all attack paths through a network that an attacker could follow to reach given goals.
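As a toy illustration of the path-enumeration idea, the sketch below walks a small hand-made attack graph and scores each attacker-to-goal path by multiplying per-edge CVSS base scores scaled to [0,1]. The host names, scores, and the multiplicative scoring rule are invented for illustration; they are not the article's exact metric.

```java
import java.util.*;

/**
 * Toy attack-path enumeration over a hand-made attack graph with
 * CVSS-weighted edges. Illustrative assumption: score/10 is treated
 * as an exploit success probability and multiplied along each path.
 */
public class AttackPaths {
    record Edge(String to, double cvss) {}

    static final Map<String, List<Edge>> graph = Map.of(
        "attacker",  List.of(new Edge("webServer", 7.5)),
        "webServer", List.of(new Edge("appServer", 6.8), new Edge("database", 9.0)),
        "appServer", List.of(new Edge("database", 8.2)),
        "database",  List.of()
    );

    // Depth-first enumeration of all simple paths from node to goal.
    static void dfs(String node, String goal, Deque<String> path,
                    double prob, Set<String> seen) {
        if (node.equals(goal)) {
            System.out.printf("%s  p=%.3f%n", String.join(" -> ", path), prob);
            return;
        }
        for (Edge e : graph.getOrDefault(node, List.of())) {
            if (seen.add(e.to())) {                 // avoid revisiting hosts
                path.addLast(e.to());
                dfs(e.to(), goal, path, prob * e.cvss() / 10.0, seen);
                path.removeLast();
                seen.remove(e.to());
            }
        }
    }

    public static void main(String[] args) {
        dfs("attacker", "database",
            new ArrayDeque<>(List.of("attacker")), 1.0,
            new HashSet<>(Set.of("attacker")));
    }
}
```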
This book constitutes the thoroughly refereed post-conference proceedings of the workshops of the 16th International Conference on Parallel Computing, Euro-Par 2010, held in Ischia, Italy, in August/September 2010. The papers of the 9 workshops HeteroPar, HPCC, HiBB, CoreGrid, UCHPC, HPCF, PROPER, CCPI, and VHPC focus on the promotion and advancement of all aspects of parallel and distributed computing.
Network calculus is a theory dealing with queuing systems found in computer networks. Its focus is on performance guarantees. Central to the theory is the use of alternate algebras such as the min-plus algebra to transform complex network systems into analytically tractable systems. To simplify the analysis, another idea is to characterize traffic and service processes using various bounds. Since its introduction in the early 1990s, network calculus has developed along two tracks: deterministic and stochastic. This book is devoted to summarizing results for stochastic network calculus that can be employed in the design of computer networks to provide stochastic service guarantees. Overview and Goal: Like conventional queuing theory, stochastic network calculus is based on properly defined traffic models and service models. However, while in conventional queuing theory an arrival process is typically characterized by the inter-arrival times of customers and a service process by the service times of customers, in network calculus the arrival process and the service process are modeled respectively by some arrival curve that (maybe probabilistically) upper-bounds the cumulative arrival and by some service curve that (maybe probabilistically) lower-bounds the cumulative service. The idea of using bounds to characterize traffic and service was initially introduced for deterministic network calculus. It has also been extended to stochastic network calculus by exploiting the stochastic nature of arrival and service processes.
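For a feel of how such curves yield guarantees, here is a minimal numeric sketch of the classic deterministic bounds for a leaky-bucket arrival curve alpha(t) = sigma + rho*t and a rate-latency service curve beta(t) = R*max(t - T, 0): the delay bound is the maximum horizontal distance between the curves and the backlog bound the maximum vertical distance. The parameter values are invented; the book's subject is the stochastic generalization of these results.

```java
/**
 * Deterministic network-calculus bounds for a leaky-bucket arrival
 * curve and a rate-latency service curve. Example values only.
 */
public class CalculusBounds {
    public static void main(String[] args) {
        double sigma = 4.0;  // burst allowance (Mb)
        double rho   = 1.5;  // sustained arrival rate (Mb/s)
        double R     = 2.0;  // guaranteed service rate (Mb/s), needs R >= rho
        double T     = 0.5;  // service latency (s)

        // Classic results: delay <= T + sigma/R, backlog <= sigma + rho*T.
        double delayBound   = T + sigma / R;
        double backlogBound = sigma + rho * T;

        System.out.printf("delay   <= %.2f s%n", delayBound);   // 2.50 s
        System.out.printf("backlog <= %.2f Mb%n", backlogBound); // 4.75 Mb
    }
}
```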
The book provides complete coverage of fundamental IP networking in Java. It introduces the concepts behind TCP/IP and UDP and their intended use and purpose; gives complete coverage of the Java networking APIs; includes an extended discussion of advanced server design, discussing the various design principles and tradeoffs involved and equipping the reader with analytic queuing-theory tools to evaluate design alternatives; covers UDP multicasting; and covers multi-homed hosts, leading the reader to understand the extra programming steps and design considerations required in such environments. After reading this book the reader will have an advanced knowledge of fundamental network design and programming concepts in the Java language, enabling them to design and implement distributed applications with advanced features and to predict their performance. Special emphasis is given to the scalable I/O facilities of Java 1.4, as well as complete treatments of multi-homing and of UDP, both unicast and multicast.
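A bare-bones example of the UDP multicast facilities the book covers is sketched below. The group address 230.0.0.1 and port 4446 are arbitrary example values; joinGroup(InetAddress) matches the Java 1.4-era API the book targets, though modern JDKs deprecate it in favor of the overload taking a SocketAddress and a NetworkInterface.

```java
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.net.MulticastSocket;

/** One-shot UDP multicast sender and receiver. Example values only. */
public class MulticastDemo {
    static final int PORT = 4446;

    public static void main(String[] args) throws Exception {
        InetAddress group = InetAddress.getByName("230.0.0.1");

        // Receiver: join the group and wait for a single datagram.
        Thread receiver = new Thread(() -> {
            try (MulticastSocket socket = new MulticastSocket(PORT)) {
                socket.joinGroup(group);
                DatagramPacket packet = new DatagramPacket(new byte[1024], 1024);
                socket.receive(packet);
                System.out.println("received: "
                        + new String(packet.getData(), 0, packet.getLength(), "UTF-8"));
                socket.leaveGroup(group);
            } catch (Exception e) {
                e.printStackTrace();
            }
        });
        receiver.start();
        Thread.sleep(300); // crude startup delay so the receiver joins first

        // Sender: a plain DatagramSocket can send to a multicast address.
        try (DatagramSocket socket = new DatagramSocket()) {
            byte[] data = "hello, group".getBytes("UTF-8");
            socket.send(new DatagramPacket(data, data.length, group, PORT));
        }
        receiver.join();
    }
}
```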
Service provisioning in ad hoc networks is challenging given the difficulties of communicating over a wireless channel and the potential heterogeneity and mobility of the devices that form the network. Service placement is the process of selecting an optimal set of nodes to host the implementation of a service in light of a given service demand and network topology. The key advantage of active service placement in ad hoc networks is that it allows for the service configuration to be adapted continuously at run time. "Service Placement in Ad Hoc Networks" proposes the SPi service placement framework as a novel approach to service placement in ad hoc networks. The SPi framework takes advantage of the interdependencies between service placement, service discovery and the routing of service requests to minimize signaling overhead. The work also proposes the Graph Cost / Single Instance and the Graph Cost / Multiple Instances placement algorithms.
This book constitutes the refereed proceedings of the 15th International Conference on Principles of Distributed Systems, OPODIS 2011, held in Toulouse, France, in December 2011. The 26 revised papers presented in this volume were carefully reviewed and selected from 96 submissions. They represent the current state of the art of the research in the field of the design, analysis and development of distributed and real-time systems.
This book constitutes the refereed proceedings of the 14th International Conference on Model Driven Engineering Languages and Systems, MODELS 2011, held in Wellington, New Zealand, in October 2011. The papers address a wide range of topics in research (foundations track) and practice (applications track). For the first time, a new category of research papers, vision papers, is included, presenting "outside the box" thinking. The foundations track received 167 full paper submissions, of which 34 were selected for presentation; 3 of these were vision papers. The applications track received 27 submissions, of which 13 papers were selected for presentation. The papers are organized in topical sections on model transformation, model complexity, aspect-oriented modeling, analysis and comprehension of models, domain-specific modeling, models for embedded systems, model synchronization, model-based resource management, analysis of class diagrams, verification and validation, refactoring models, modeling visions, logics and modeling, development methods, and model integration and collaboration.
This book constitutes the refereed proceedings of the 11th International Conference on Next Generation Teletraffic and Wired/Wireless Advanced Networking, NEW2AN 2011, and the 4th Conference on Smart Spaces, ruSMART 2011, jointly held in St. Petersburg, Russia, in August 2011.
Supervisory Control Theory (SCT) provides a tool to model and control human-engineered complex systems such as computer networks, the World Wide Web, identification and spread of malicious executables, and command, control, communication, and information systems. Although there are excellent monographs and books on SCT for controlling and diagnosing discrete-event systems, there is a need for a research monograph that provides a coherent quantitative treatment of SCT for decision and control of complex systems. This new monograph assimilates many new concepts that have recently been reported, or are in the process of being reported, in the open literature. Its major objectives are to present (a) a quantitative approach, supported by a formal theory, for discrete-event decision and control of human-engineered complex systems, and (b) a set of applications to emerging technological areas such as control of software systems, malicious executables, and complex engineering systems. The monograph provides the necessary background material in automata theory and languages for supervisory control, and it introduces a new paradigm of language measure to quantitatively compare the performance of different automata models of a physical system. A novel feature of this approach is the generation of discrete-event robust optimal decision and control algorithms for both military and commercial systems.
This report describes the partially completed correctness proof of the Viper 'block model'. Viper [7,8,9,11,23] is a microprocessor designed by W. J. Cullyer, C. Pygott and J. Kershaw at the Royal Signals and Radar Establishment in Malvern, England (henceforth 'RSRE'), for use in safety-critical applications such as civil aviation and nuclear power plant control. It is currently finding uses in areas such as the deployment of weapons from tactical aircraft. To support safety-critical applications, Viper has a particularly simple design about which it is relatively easy to reason using current techniques and models. The designers, who deserve much credit for the promotion of formal methods, intended from the start that Viper be formally verified. Their idea was to model Viper in a sequence of decreasingly abstract levels, each of which concentrated on some aspect of the design, such as the flow of control, the processing of instructions, and so on. That is, each model would be a specification of the next (less abstract) model, and an implementation of the previous model (if any). The verification effort would then be simplified by being structured according to the sequence of abstraction levels. These models (or levels) of description were characterized by the design team. The first two levels, and part of the third, were written by them in a logical language amenable to reasoning and proof.
This year, the IFIP Working Conference on Distributed and Parallel Embedded Systems (DIPES 2008) is held as part of the IFIP World Computer Congress, held in Milan on September 7-10, 2008. The embedded systems world has a great deal of experience with parallel and distributed computing. Many embedded computing systems require the high performance that can be delivered by parallel computing. Parallel and distributed computing are often the only ways to deliver adequate real-time performance at low power levels. This year's conference attracted 30 submissions, of which 21 were accepted. Prof. Jörg Henkel of the University of Karlsruhe graciously contributed a keynote address on embedded computing and reliability. We would like to thank all of the program committee members for their diligence. Wayne Wolf, Bernd Kleinjohann, and Lisa Kleinjohann. Acknowledgements: We would like to thank all the people involved in the organization of the IFIP World Computer Congress 2008, especially the IPC Co-Chairs Judith Bishop and Ivo De Lotto, the Organization Chair Giulio Occhini, and the Publications Chair John Impagliazzo. Further thanks go to the authors for their valuable contributions to DIPES 2008. Last but not least, we would like to acknowledge the considerable amount of work and enthusiasm spent by our colleague Claudius Stern in preparing the proceedings of DIPES 2008. He made it possible to produce them in their current professional and homogeneous style.
This book constitutes the refereed proceedings of the 8th International Workshop on Energy Minimization Methods in Computer Vision and Pattern Recognition, EMMCVPR 2011, held in St. Petersburg, Russia, in July 2011. The book presents 30 revised full papers selected from a total of 52 submissions. It is divided into sections on discrete and continuous optimization, segmentation, motion and video, and learning and shape analysis.
The Second International Workshop on Traffic Monitoring and Analysis (TMA 2010) was an initiative of the COST Action IC0703 "Data Traffic Monitoring and Analysis: Theory, Techniques, Tools and Applications for the Future Networks" (http://www.tma-portal.eu/cost-tma-action). The COST program is an intergovernmental framework for European cooperation in science and technology, promoting the coordination of nationally funded research on a European level. Each COST Action aims at reducing the fragmentation in research and opening the European research area to cooperation worldwide. Traffic monitoring and analysis (TMA) is nowadays an important research topic within the field of computer networks. It involves many research groups worldwide that are collectively advancing our understanding of the Internet. The importance of TMA research is motivated by the fact that modern packet networks are highly complex and ever-evolving objects. Understanding, developing and managing such environments is difficult and expensive in practice. Traffic monitoring is a key methodology for understanding telecommunication technology and improving its operation, and the recent advances in this field suggest that evolved TMA-based techniques can play a key role in the operation of real networks.
This book constitutes the refereed proceedings of the 11th IFIP WG 6.1 International Conference on Distributed Applications and Interoperable Systems, DAIS 2011, held in Reykjavik, Iceland, in June 2011 as one of the DisCoTec 2011 events.
Collections of digital documents can nowadays be found everywhere, in institutions, universities, and companies; examples are Web sites and intranets. But searching them for information can still be painful: searches often return either large numbers of matches or no suitable matches at all. Such document collections vary a lot in size and in how much structure they carry. What they have in common is that they typically do have some structure and that they cover a limited range of topics; the second point is significantly different from the Web in general. The type of search system that we propose in this book can suggest ways of refining or relaxing the query to assist a user in the search process. In order to suggest sensible query modifications, we need to know what the documents are about, that is, explicit knowledge about the document collection encoded in some electronic form. However, such knowledge is typically not available, so we construct it automatically.
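A toy sketch of the refine/relax behavior over a tiny term-to-documents index follows. The heuristics used here (suggest a co-occurring term when there are too many hits; drop the rarest term when there are none) and all index contents are generic illustrations, not the book's knowledge-based method.

```java
import java.util.*;

/** Toy query refinement/relaxation over a tiny inverted index. */
public class QueryAssistant {
    static final Map<String, Set<Integer>> INDEX = Map.of(
        "distributed", Set.of(1, 2, 3, 4, 5),
        "scheduling",  Set.of(2, 3),
        "fault",       Set.of(4),
        "quantum",     Set.of()
    );

    // Conjunctive retrieval: documents matching every query term.
    static Set<Integer> run(List<String> query) {
        Set<Integer> hits = new HashSet<>(INDEX.getOrDefault(query.get(0), Set.of()));
        for (String t : query.subList(1, query.size()))
            hits.retainAll(INDEX.getOrDefault(t, Set.of()));
        return hits;
    }

    static void assist(List<String> query) {
        Set<Integer> hits = run(query);
        if (hits.isEmpty()) {
            // Relax: drop the term that matches the fewest documents.
            String rarest = query.stream()
                .min(Comparator.comparingInt(t -> INDEX.getOrDefault(t, Set.of()).size()))
                .orElseThrow();
            List<String> relaxed = new ArrayList<>(query);
            relaxed.remove(rarest);
            System.out.println(query + ": no matches; try without '" + rarest
                    + "' -> " + run(relaxed));
        } else if (hits.size() > 3) {
            // Refine: suggest a term that narrows the current result set.
            INDEX.keySet().stream()
                .filter(t -> !query.contains(t) && !INDEX.get(t).isEmpty()
                             && hits.containsAll(INDEX.get(t)))
                .findFirst()
                .ifPresent(t -> System.out.println(query + ": " + hits.size()
                        + " matches; adding '" + t + "' would narrow them"));
        } else {
            System.out.println(query + " -> " + hits);
        }
    }

    public static void main(String[] args) {
        assist(new ArrayList<>(List.of("distributed")));            // too many: refine
        assist(new ArrayList<>(List.of("distributed", "quantum"))); // none: relax
    }
}
```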
From the Foreword: "...the presentation of real-time scheduling is probably the best in terms of clarity I have ever read in the professional literature. Easy to understand, which is important for busy professionals keen to acquire (or refresh) new knowledge without being bogged down in a convoluted narrative and an excessive detail overload. The authors managed to largely avoid theoretical-only presentation of the subject, which frequently affects books on operating systems. ... [an indispensable resource] to gain a thorough understanding of the real-time systems from the operating systems perspective, and to stay up to date with the recent trends and actual developments of the open-source real-time operating systems." (Richard Zurawski, ISA Group, San Francisco, California, USA)
Real-time embedded systems are integral to the global technological and social space, but references still rarely offer professionals a sufficient mix of theory and practical examples to meet intensive economic, safety, and other demands on system development. Similarly, instructors have lacked a resource to help students fully understand the field. The information was out there, though often at an abstract level, fragmented and scattered throughout the literature of different engineering disciplines and computing sciences. Accounting for readers' varying practical needs and experience levels, Real-Time Embedded Systems: Open-Source Operating Systems Perspective offers a holistic overview from the operating-systems perspective. It provides a long-awaited reference on real-time operating systems and their almost boundless application potential in the embedded-system domain. Balancing the already abundant coverage of operating systems with the largely ignored real-time aspects, or "physicality," the authors analyze several realistic case studies to introduce vital theoretical material. They also discuss popular open-source operating systems, Linux and FreeRTOS in particular, to help embedded-system designers identify the benefits and weaknesses in deciding whether or not to adopt more traditional, less powerful techniques for a project.
You may like...
Linear and Integer Programming Made Easy, T.C. Hu, Andrew B. Kahng (Hardcover), R2,118 / Discovery Miles 21 180
Topology and Geometric Group Theory…, Michael W. Davis, James Fowler, … (Hardcover)
Sustainable Fuel Technologies Handbook, Suman Dutta, Chaudhery Mustansar Hussain (Paperback), R3,331 / Discovery Miles 33 310
Applications of Geometric Algebra in…, Leo Dorst, Chris Doran, … (Hardcover), R2,924 / Discovery Miles 29 240
Advances in the Complex Variable…, Theodore V Hromadka, Robert J Whitley (Hardcover), R4,228 / Discovery Miles 42 280
The Theory of Sprays and Finsler Spaces…, P.L. Antonelli, Roman S. Ingarden, … (Hardcover), R5,318 / Discovery Miles 53 180
One-dimensional Linear Singular Integral…, Israel Gohberg, Naum IA. Krupnick, … (Hardcover), R2,394 / Discovery Miles 23 940
Handbook of Microalgae-Based Processes…, Eduardo Jacob-Lopes, Mariana Manzoni Maroneze, … (Paperback), R5,031 / Discovery Miles 50 310