The book provides complete coverage of fundamental IP networking in Java. It introduces the concepts behind TCP/IP and UDP and their intended uses; gives complete coverage of the Java networking APIs; includes an extended discussion of advanced server design, examining the relevant design principles and tradeoffs and equipping the reader with analytic queuing-theory tools to evaluate design alternatives; and covers UDP multicasting and multi-homed hosts, leading the reader to understand the extra programming steps and design considerations such environments require. After reading this book the reader will have an advanced knowledge of fundamental network design and programming concepts in the Java language, enabling them to design and implement distributed applications with advanced features and to predict their performance. Special emphasis is given to the scalable I/O facilities of Java 1.4, as well as complete treatments of multi-homing and of UDP, both unicast and multicast.
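The datagram API the blurb refers to can be illustrated with a minimal sketch, not taken from the book: one UDP packet sent and received over the loopback interface using Java's standard `DatagramSocket` and `DatagramPacket` classes (the class name, message, and buffer size are illustrative assumptions).

```java
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.nio.charset.StandardCharsets;

public class UdpSketch {

    // Sends msg to a locally bound receiver socket and returns what arrives.
    static String loopbackEcho(String msg) throws Exception {
        try (DatagramSocket receiver = new DatagramSocket(0, InetAddress.getLoopbackAddress());
             DatagramSocket sender = new DatagramSocket()) {
            byte[] payload = msg.getBytes(StandardCharsets.UTF_8);
            // UDP is connectionless: each send() names its destination explicitly.
            sender.send(new DatagramPacket(payload, payload.length,
                    InetAddress.getLoopbackAddress(), receiver.getLocalPort()));
            byte[] buf = new byte[1500];                 // room for one typical datagram
            DatagramPacket in = new DatagramPacket(buf, buf.length);
            receiver.receive(in);                        // blocks until a datagram arrives
            return new String(in.getData(), 0, in.getLength(), StandardCharsets.UTF_8);
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(loopbackEcho("ping"));        // prints: ping
    }
}
```

Note the absence of any connection setup or delivery guarantee, which is the tradeoff against TCP that the book's TCP/UDP comparison turns on.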
Service provisioning in ad hoc networks is challenging given the difficulties of communicating over a wireless channel and the potential heterogeneity and mobility of the devices that form the network. Service placement is the process of selecting an optimal set of nodes to host the implementation of a service in light of a given service demand and network topology. The key advantage of active service placement in ad hoc networks is that it allows for the service configuration to be adapted continuously at run time. "Service Placement in Ad Hoc Networks" proposes the SPi service placement framework as a novel approach to service placement in ad hoc networks. The SPi framework takes advantage of the interdependencies between service placement, service discovery and the routing of service requests to minimize signaling overhead. The work also proposes the Graph Cost / Single Instance and the Graph Cost / Multiple Instances placement algorithms.
This book constitutes the refereed proceedings of the 15th International Conference on Principles of Distributed Systems, OPODIS 2011, held in Toulouse, France, in December 2011. The 26 revised papers presented in this volume were carefully reviewed and selected from 96 submissions. They represent the current state of the art of the research in the field of the design, analysis and development of distributed and real-time systems.
This book constitutes the refereed proceedings of the 14th International Conference on Model Driven Engineering Languages and Systems, MODELS 2011, held in Wellington, New Zealand, in October 2011. The papers address a wide range of topics in research (foundations track) and practice (applications track). For the first time, a new category of research papers, vision papers, is included, presenting "outside the box" thinking. The foundations track received 167 full paper submissions, of which 34 were selected for presentation; 3 of these were vision papers. The applications track received 27 submissions, of which 13 papers were selected for presentation. The papers are organized in topical sections on model transformation, model complexity, aspect oriented modeling, analysis and comprehension of models, domain specific modeling, models for embedded systems, model synchronization, model based resource management, analysis of class diagrams, verification and validation, refactoring models, modeling visions, logics and modeling, development methods, and model integration and collaboration.
This book constitutes the refereed proceedings of the 11th International Conference on Next Generation Teletraffic and Wired/Wireless Advanced Networking, NEW2AN 2011, and the 4th Conference on Smart Spaces, ruSMART 2011, jointly held in St. Petersburg, Russia, in August 2011.
Supervisory Control Theory (SCT) provides a tool to model and control human-engineered complex systems, such as computer networks, the World Wide Web, the identification and spread of malicious executables, and command, control, communication, and information systems. Although there are some excellent monographs and books on SCT for controlling and diagnosing discrete-event systems, there is a need for a research monograph that provides a coherent quantitative treatment of SCT for decision and control of complex systems. This new monograph assimilates many concepts that have recently been reported, or are in the process of being reported, in the open literature. The major objectives here are to present: a) a quantitative approach, supported by a formal theory, for discrete-event decision and control of human-engineered complex systems; and b) a set of applications to emerging technological areas such as control of software systems, malicious executables, and complex engineering systems. The monograph provides the necessary background in automata theory and languages for supervisory control. It introduces a new paradigm of language measure to quantitatively compare the performance of different automata models of a physical system. A novel feature of this approach is the generation of discrete-event robust optimal decision and control algorithms for both military and commercial systems.
This report describes the partially completed correctness proof of the Viper 'block model'. Viper [7,8,9,11,23] is a microprocessor designed by W. J. Cullyer, C. Pygott and J. Kershaw at the Royal Signals and Radar Establishment in Malvern, England (henceforth 'RSRE'), for use in safety-critical applications such as civil aviation and nuclear power plant control. It is currently finding uses in areas such as the deployment of weapons from tactical aircraft. To support safety-critical applications, Viper has a particularly simple design about which it is relatively easy to reason using current techniques and models. The designers, who deserve much credit for the promotion of formal methods, intended from the start that Viper be formally verified. Their idea was to model Viper in a sequence of decreasingly abstract levels, each of which concentrated on some aspect of the design, such as the flow of control, the processing of instructions, and so on. That is, each model would be a specification of the next (less abstract) model, and an implementation of the previous model (if any). The verification effort would then be simplified by being structured according to the sequence of abstraction levels. These models (or levels) of description were characterized by the design team. The first two levels, and part of the third, were written by them in a logical language amenable to reasoning and proof.
This year, the IFIP Working Conference on Distributed and Parallel Embedded Systems (DIPES 2008) is held as part of the IFIP World Computer Congress, held in Milan on September 7-10, 2008. The embedded systems world has a great deal of experience with parallel and distributed computing. Many embedded computing systems require the high performance that can be delivered by parallel computing. Parallel and distributed computing are often the only ways to deliver adequate real-time performance at low power levels. This year's conference attracted 30 submissions, of which 21 were accepted. Prof. Jörg Henkel of the University of Karlsruhe graciously contributed a keynote address on embedded computing and reliability. We would like to thank all of the program committee members for their diligence. Wayne Wolf, Bernd Kleinjohann, and Lisa Kleinjohann. Acknowledgements: We would like to thank all people involved in the organization of the IFIP World Computer Congress 2008, especially the IPC Co-Chairs Judith Bishop and Ivo De Lotto, the Organization Chair Giulio Occhini, as well as the Publications Chair John Impagliazzo. Further thanks go to the authors for their valuable contributions to DIPES 2008. Last but not least we would like to acknowledge the considerable amount of work and enthusiasm spent by our colleague Claudius Stern in preparing the proceedings of DIPES 2008. He made it possible to produce them in their current professional and homogeneous style.
This book constitutes the refereed proceedings of the 8th International Workshop on Energy Minimization Methods in Computer Vision and Pattern Recognition, EMMCVPR 2011, held in St. Petersburg, Russia, in July 2011. The book presents 30 revised full papers selected from a total of 52 submissions. The book is divided into sections on discrete and continuous optimization, segmentation, motion and video, and learning and shape analysis.
The Second International Workshop on Traffic Monitoring and Analysis (TMA 2010) was an initiative of the COST Action IC0703 "Data Traffic Monitoring and Analysis: Theory, Techniques, Tools and Applications for the Future Networks" (http://www.tma-portal.eu/cost-tma-action). The COST program is an intergovernmental framework for European cooperation in science and technology, promoting the coordination of nationally funded research on a European level. Each COST Action aims at reducing the fragmentation in research and opening the European research area to cooperation worldwide. Traffic monitoring and analysis (TMA) is nowadays an important research topic within the field of computer networks. It involves many research groups worldwide that are collectively advancing our understanding of the Internet. The importance of TMA research is motivated by the fact that modern packet networks are highly complex and ever-evolving objects. Understanding, developing and managing such environments is difficult and expensive in practice. Traffic monitoring is a key methodology for understanding telecommunication technology and improving its operation, and the recent advances in this field suggest that evolved TMA-based techniques can play a key role in the operation of real networks.
This book constitutes the refereed proceedings of the 11th IFIP WG 6.1 International Conference on Distributed Applications and Interoperable Systems, DAIS 2011, held in Reykjavik, Iceland, in June 2011 as one of the DisCoTec 2011 events.
Collections of digital documents, such as Web sites or intranets, can nowadays be found everywhere in institutions, universities and companies. But searching them for information can still be painful: searches often return either large numbers of matches or no suitable matches at all. Such document collections vary greatly in size and in how much structure they carry. What they have in common is that they typically do have some structure and that they cover a limited range of topics; the second point makes them significantly different from the Web in general. The type of search system that we propose in this book can suggest ways of refining or relaxing the query to assist a user in the search process. In order to suggest sensible query modifications, we need to know what the documents are about: explicit knowledge about the document collection, encoded in some electronic form. Typically, however, such knowledge is not available, so we construct it automatically.
Recent accidents in a range of industries have increased concern over the design, development, management and control of safety-critical systems. Attention has now focused upon the role of human error both in the development and in the operation of complex processes. This volume contains 20 original and significant contributions addressing these critical questions. The papers were presented at the 7th IFIP Working Group 13.5 Working Conference on Human Error, Safety and Systems Development, which was held in August 2004 in conjunction with the 18th IFIP World Computer Congress in Toulouse, France, and sponsored by the International Federation for Information Processing (IFIP).
Parallel and distributed computation has been gaining a great deal of attention in the last decades. During this period, the advances attained in computing and communication technologies, and the reduction in the costs of those technologies, played a central role in the rapid growth of interest in the use of parallel and distributed computation in a number of areas of engineering and the sciences. Many actual applications have been successfully implemented on various platforms, ranging from pure shared-memory to totally distributed models, passing through hybrid approaches such as distributed shared-memory architectures. Parallel and distributed computation differs from classical sequential computation in some of the following major aspects: the number of processing units, an independent local clock for each unit, the number of memory units, and the programming model. To represent this diversity, and depending on the level at which we look at the problem, researchers have proposed models that abstract the main characteristics or parameters (physical components or logical mechanisms) of parallel computers. The problem of establishing a suitable model is to find a reasonable trade-off among simplicity, power of expression and universality, and then to be able to study and analyze more precisely the behavior of parallel applications.
Welcome to the proceedings of the 19th International Workshop on Power and Timing Modeling, Optimization and Simulation, PATMOS 2009. Over the years, PATMOS has evolved into an important European event, where researchers from both industry and academia discuss and investigate the emerging challenges in future and contemporary applications, design methodologies, and tools required for the development of the upcoming generations of integrated circuits and systems. PATMOS 2009 was organized by TU Delft, The Netherlands, with sponsorship by the NIRICT Design Lab and Cadence Design Systems, and technical co-sponsorship by the IEEE. Further information about the workshop is available at http://ens.ewi.tudelft.nl/patmos09. The technical program of PATMOS 2009 contained state-of-the-art technical contributions, three invited keynotes, and a special session on SystemC-AMS Extensions. The technical program focused on timing, performance, and power consumption, as well as architectural aspects with particular emphasis on modeling, design, characterization, analysis, and optimization in the nanometer era. The Technical Program Committee, with the assistance of additional expert reviewers, selected the 36 papers presented at PATMOS. The papers were organized into 7 oral sessions (with a total of 26 papers) and 2 poster sessions (with a total of 10 papers). As is customary for the PATMOS workshops, full papers were required for review, and a minimum of three reviews were received per manuscript.
Since its original inception back in 1989, the Web has changed into an environment where Web applications range from small-scale information dissemination applications, often developed by non-IT professionals, to large-scale, commercial, enterprise-planning and scheduling applications, developed by multidisciplinary teams of people with diverse skills and backgrounds and using cutting-edge, diverse technologies. As an engineering discipline, Web engineering must provide principles, methodologies and frameworks to help Web professionals and researchers develop applications and manage projects effectively. Mendes and Mosley have selected experts from numerous areas in Web engineering, who contribute chapters where important concepts are presented and then detailed using real industrial case studies. After an introduction into the discipline itself and its intricacies, the contributions range from Web effort estimation, productivity benchmarking and conceptual and model-based application development methodologies, to other important principles such as usability, reliability, testing, process improvement and quality measurement. This is the first book that looks at Web engineering from a measurement perspective. The result is a self-contained, comprehensive overview detailing the role of measurement and metrics within the context of Web engineering. This book is ideal for professionals and researchers who want to know how to use sound principles for the effective management of Web projects, as well as for courses at an advanced undergraduate or graduate level.
Networks on Chip presents a variety of topics, problems and approaches with the common theme of systematically organizing on-chip communication in the form of a regular, shared communication network on chip, an NoC for short. As the number of processor cores and IP blocks integrated on a single chip is steadily growing, a systematic approach to designing the communication infrastructure becomes necessary. Different variants of packet-switched on-chip networks have been proposed by several groups during the past two years. This book summarizes the state of the art of these efforts and discusses the major issues, from physical integration to architecture to operating systems and application interfaces. It also provides a guideline and vision about the direction in which this field is moving. Moreover, the book outlines the consequences of adopting design platforms based on packet-switched networks. The consequences may in fact be far reaching, because many of the topics of distributed systems, distributed real-time systems, fault-tolerant systems, parallel computer architecture, and parallel programming, as well as traditional system-on-chip issues, will appear relevant but within the constraints of a single-chip VLSI implementation. The book is organized into three parts. The first deals with system design and methodology issues. The second presents problems and solutions concerning the hardware and the basic communication infrastructure. Finally, the third part covers operating systems, embedded software and applications. However, communication from the physical to the application level is a central theme throughout the book. The book serves as an excellent reference source and may be used as a text for advanced courses on the subject.
Memory Architecture Exploration for Programmable Embedded Systems addresses efficient exploration of alternative memory architectures, assisted by a "compiler-in-the-loop" that allows effective matching of the target application to the processor-memory architecture. This new approach for memory architecture exploration replaces the traditional black-box view of the memory system and allows for aggressive co-optimization of the programmable processor together with a customized memory system.
Innovation in Manufacturing Networks. A fundamental concept of the emergent business, scientific and technological paradigms, innovation, the ability to apply new ideas to products, processes, organizational practices and business models, is crucial for the future competitiveness of organizations in an increasingly globalised, knowledge-intensive marketplace. The responsiveness, agility and high performance of manufacturing systems drive the recent changes, along with the call for new approaches to achieve cost-effective responsiveness at all levels of an enterprise. Moreover, creating appropriate frameworks for exploring the most effective synergies between human potential and automated systems represents an enormous challenge in terms of process characterization, modelling, and the development of adequate support tools. The implementation and use of automation systems requires an ever-increasing knowledge of enabling technologies and business practices. Moreover, the digital and networked world will surely trigger new business practices. In this context, and in order to achieve the desired levels of effectiveness and efficiency, it is crucial to maintain a balance between the technical aspects and the human and social aspects when developing and applying new innovations and innovative enabling technologies. The BASYS conferences have been developed and organized so as to promote the development of balanced automation systems in an attempt to address the majority of the current open issues.
This book synthesizes the results of the seventh in a successful series of workshops that were established by Shanghai Jiao Tong University and Technische Universität Berlin, bringing together researchers from both universities in order to present research results to an international community. Aspects covered here include, among others, models and specification; simulation of different properties; middleware for distributed real-time systems; signal analysis; control methods; and applications in airborne and medical systems.
This book is a practical guide to IPv6, addressed to Unix and network administrators with experience in TCP/IP(v4) but not necessarily any IPv6 knowledge. It focuses on the reliable and efficient operation of the IPv6 implementations available today rather than on protocol specifications. Consequently, it covers the essential concepts, using instructive and thoroughly tested examples, of how to configure, administer, and debug IPv6 setups. These foundations are complemented by discussions of best practices and strategic considerations aimed at overall efficiency, reliability, maintainability, and interoperation.
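Two of the IPv6 concepts such a guide rests on, address literals with their zero-compressed notation and link-local scope, can be sketched in Java (the language used elsewhere on this page; the example is not from the book, and the addresses are drawn from the RFC 3849 documentation prefix):

```java
import java.net.Inet6Address;
import java.net.InetAddress;

public class V6Sketch {
    public static void main(String[] args) throws Exception {
        // A literal address is parsed directly; no DNS lookup is performed.
        InetAddress addr = InetAddress.getByName("2001:db8::1");
        System.out.println(addr instanceof Inet6Address);  // true
        // Java reports the fully expanded form of the "::"-compressed literal.
        System.out.println(addr.getHostAddress());         // 2001:db8:0:0:0:0:0:1

        // fe80::/10 addresses are link-local: valid only on one network segment.
        InetAddress linkLocal = InetAddress.getByName("fe80::1");
        System.out.println(linkLocal.isLinkLocalAddress()); // true
    }
}
```

The scope distinction matters in practice: a link-local address is only meaningful together with an interface identifier, which is one of the extra administrative considerations IPv6 introduces over IPv4.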
Making Systems Safer contains the papers presented at the eighteenth annual Safety-critical Systems Symposium, held at Bristol, UK, in February 2010. The Symposium is for engineers, managers and academics in the field of system safety, across all industry sectors, so the papers making up this volume offer a wide-ranging coverage of current safety topics and a blend of academic research and industrial experience. They include both recent developments in the field and discussion of open issues that will shape future progress. The first paper reflects a tutorial, on Formalization in Safety Cases, held on the first day of the Symposium. The subsequent 15 papers are presented under the headings of the Symposium's sessions: Perspectives on Systems Safety, Managing Safety-Related Projects, Transport Safety, Safety Standards, Safety Competencies and Safety Methods. The book will be of interest to both academics and practitioners working in the safety-critical systems arena.
The two internationally renowned authors elucidate the structure of "fast" parallel computation. Its complexity is emphasised through a variety of techniques ranging from finite combinatorics, probability theory and finite group theory to finite model theory and proof theory. Non-uniform computation models are studied in the form of Boolean circuits; uniform ones in a variety of forms. Steps in the investigation of non-deterministic polynomial time are surveyed as is the complexity of various proof systems. Providing a survey of research in the field, the book will benefit advanced undergraduates and graduate students as well as researchers.
Tsutomu Sasao, Kyushu Institute of Technology, Japan: The material covered in this book is quite unique, especially for people who are reading English, since such material is quite hard to find in the U.S. literature. German and Russian people have independently developed their theories, but such work is not well known in the U.S. societies. On the other hand, the theories developed in the U.S. are not conveyed to the other places. Thus, the same theory is re-invented or re-discovered in various places. For example, the switching theory was developed independently in the U.S., Europe, and Japan, almost at the same time [4, 18, 19]. Thus, the same notions are represented by different terminologies. For example, the Shegalkin polynomial is often called complement-free ring-sum, Reed-Muller expression [10], or positive-polarity Reed-Muller expression [19]. Anyway, it is quite desirable that such a unique book like this is written in English, and many people can read it without any difficulties. The authors have developed a logic system called XBOOLE. It performs logical operations on the given functions. With XBOOLE, the readers can solve the problems given in the book. Many examples and complete solutions to the problems are shown, so the readers can study at home. I believe that the book containing many exercises and their solutions [9] is quite useful not only for the students, but also the professors.
System-Level Design Techniques for Energy-Efficient Embedded Systems addresses the development and validation of co-synthesis techniques that allow an effective design of embedded systems with low energy dissipation. The book provides an overview of a system-level co-design flow, illustrating through examples how system performance is influenced at various steps of the flow, including allocation, mapping, and scheduling. The book places special emphasis upon system-level co-synthesis techniques for architectures that contain voltage-scalable processors, which can dynamically trade off between computational performance and power consumption. Throughout the book, the introduced co-synthesis techniques, which target both single-mode systems and emerging multi-mode applications, are applied to numerous benchmarks and real-life examples, including a realistic smart phone.