This book constitutes the refereed proceedings of the 20th International Conference on Analytical and Stochastic Modelling and Applications, ASMTA 2013, held in Ghent, Belgium, in July 2013. The 32 papers presented were carefully reviewed and selected from numerous submissions. The focus of the papers is on the following application topics: complex systems; computer and information systems; communication systems and networks; wireless and mobile systems and networks; peer-to-peer application and services; embedded systems and sensor networks; workload modelling and characterization; road traffic and transportation; social networks; measurements and hybrid techniques; modeling of virtualization; energy-aware optimization; stochastic modeling for systems biology; biologically inspired network design.
This book constitutes the refereed proceedings of the 10th International Conference on Detection of Intrusions and Malware, and Vulnerability Assessment, DIMVA 2013, held in Berlin, Germany, in July 2013. The 9 revised full papers presented together with 3 short papers were carefully reviewed and selected from 38 submissions. The papers are organized in topical sections on malware; network security; Web security; attacks and defenses; and host security.
This book constitutes the refereed post-proceedings of the 10th European Performance Engineering Workshop, EPEW 2013, held in Venice, Italy, in September 2013. The 16 regular papers presented together with 8 short papers and 2 invited talks were carefully reviewed and selected from 33 submissions. The Workshop aims to gather academic and industrial researchers working on all aspects of performance engineering. Original papers related to theoretical and methodological issues as well as case studies and automated tool support are solicited in the following areas: performance modeling and evaluation, system and network performance engineering, and software performance engineering.
This book constitutes the refereed proceedings of the 9th International Workshop on OpenMP, held in Canberra, Australia, in September 2013. The 14 technical full papers presented were carefully reviewed and selected from various submissions. The papers are organized in topical sections on proposed extensions to OpenMP, applications, accelerators, scheduling, and tools.
Queueing theory applications can be found in many walks of life, including transportation, manufacturing, telecommunications, computer systems and more. However, the most prevalent applications of queueing theory are in the telecommunications field. Queueing Theory for Telecommunications: Discrete Time Modelling of a Single Node System focuses on discrete time modeling and illustrates that most queueing systems encountered in real life can be set up as Markov chains. A distinctive feature of the book is that the models are set up in such a way that matrix-analytic methods can be used to analyze them. It is the most relevant book available on queueing models designed for applications to telecommunications. The book presents clear, concise theories behind how to model and analyze key single node queues in discrete time, using the special tools presented in its second chapter. The text also delves into the types of single node queues that are very frequently encountered in telecommunication systems modeling, and provides simple methods for analyzing them. Where appropriate, alternative analysis methods are also presented. This book is intended for advanced-level students and researchers concentrating on engineering, computer science and mathematics as a secondary text or reference book. Professionals who work in the related industries of telecommunications, industrial engineering and communications engineering will find this book useful as well.
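The book's central claim - that a discrete-time queue can be set up directly as a Markov chain on the queue length - can be illustrated with a minimal sketch. The following is a hypothetical Geo/Geo/1 example with made-up parameters, not code from the book: Bernoulli arrivals with probability p, Bernoulli service completions with probability s, state space truncated at N customers, and the stationary distribution obtained by power iteration.

```python
# Hypothetical Geo/Geo/1 discrete-time queue sketch (illustrative parameters,
# not taken from the book). Queue length forms a birth-death Markov chain.
p, s = 0.3, 0.5   # per-slot arrival / service-completion probabilities (stable: p < s)
N = 50            # truncate the infinite state space at N customers

# Transition matrix of the queue-length chain.
P = [[0.0] * (N + 1) for _ in range(N + 1)]
P[0][1] = p                           # arrival into an empty queue
P[0][0] = 1 - p
for n in range(1, N):
    P[n][n + 1] = p * (1 - s)         # arrival, no departure
    P[n][n - 1] = s * (1 - p)         # departure, no arrival
    P[n][n] = 1 - P[n][n + 1] - P[n][n - 1]
P[N][N - 1] = s * (1 - p)             # boundary: no further growth at N
P[N][N] = 1 - P[N][N - 1]

# Stationary distribution by power iteration: pi <- pi P until convergence.
pi = [1.0 / (N + 1)] * (N + 1)
for _ in range(2000):
    pi = [sum(pi[i] * P[i][j] for i in range(N + 1)) for j in range(N + 1)]

mean_queue = sum(n * pi[n] for n in range(N + 1))
print(f"mean queue length = {mean_queue:.3f}")
```

The birth-death structure here is the simplest case; the matrix-analytic methods the book describes generalize this idea to chains whose states carry additional phase information, replacing scalar transition probabilities with blocks.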
This book constitutes revised selected papers from the Conference on Energy Efficiency in Large Scale Distributed Systems, EE-LSDS, held in Vienna, Austria, in April 2013. It served as the final event of the COST Action IC0804 which started in May 2009. The 15 full papers presented in this volume were carefully reviewed and selected from 31 contributions. In addition, 7 short papers and 3 demo papers are included in this book. The papers are organized in sections named: modeling and monitoring of power consumption; distributed, mobile and cloud computing; HPC computing; wired and wireless networking; and standardization issues.
Firefighters and other emergency first responders use a huge variety of highly specialized and critical technologies for personal protection. These technologies, ranging from GPS to environmental sensing to communication devices, often run on different systems with separate power supplies and operating platforms. How these technological components function in a single synergistic system is of critical interest to firefighter end-users seeking efficient tools. Interoperable ESE states that a standardized platform for electronic safety equipment (ESE) is both logical and essential. This book develops an inventory of existing and emerging electronic equipment categorized by key areas of interest to the fire service, documents equipment performance requirements relevant to interoperability, including communications and power requirements, and develops an action plan toward the development of requirements to meet the needs of emergency responders. This book is intended for practitioners as a tool for understanding interoperability concepts and the requirements of the fire service landscape. It offers clear recommendations for the future to help ensure efficiency and safety with fire protection equipment. Researchers working in a related field will also find the book valuable.
The proceedings of the 5th International Workshop on Parallel Tools for High Performance Computing provide an overview on supportive software tools and environments in the fields of System Management, Parallel Debugging and Performance Analysis. In the pursuit to maintain exponential growth for the performance of high performance computers the HPC community is currently targeting Exascale Systems. The initial planning for Exascale already started when the first Petaflop system was delivered. Many challenges need to be addressed to reach the necessary performance. Scalability, energy efficiency and fault-tolerance need to be increased by orders of magnitude. The goal can only be achieved when advanced hardware is combined with a suitable software stack. In fact, the importance of software is rapidly growing. As a result, many international projects focus on the necessary software.
It is in the area of Systems Diagnosis, Supervision and Control that Knowledge-Based Techniques have had their most significant impact in recent years. In this volume, Spyros Tzafestas has ably put together the current state of the art of the application of Artificial Intelligence concepts to problems of Systems Diagnosis. All the authors in this edited work are distinguished, internationally recognized experts on various aspects of Artificial Intelligence and its applications, and the coverage of the field that they provide is both readable and authoritative. The sixteen chapters break down in a natural way into three broad categories: (a) introduction to the applications of Expert Systems in Engineering; (b) knowledge-based systems architectures, models and techniques for fault diagnosis, supervision and real-time control; and finally (c) applications and case studies in three specific areas, namely Manufacturing, Chemical Processes and Communications Networks. The final chapter provides a comprehensive survey of the field with an extensive bibliography. The mix of original scientific articles, tutorial and survey papers makes this collection a very timely and valuable addition to the literature in this important field. MADAN G. SINGH, Professor of Information Engineering at U.M.I.S.T.
The Data Management Body of Knowledge (DAMA-DMBOK2) presents a comprehensive view of the challenges, complexities, and value of effective data management. Today's organizations recognize that managing data is central to their success. They recognize data has value and they want to leverage that value. As our ability and desire to create and exploit data has increased, so too has the need for reliable data management practices. The second edition of DAMA International's Guide to the Data Management Body of Knowledge (DAMA-DMBOK2) updates and augments the highly successful DMBOK1. An accessible, authoritative reference book written by leading thinkers in the field and extensively reviewed by DAMA members, DMBOK2 brings together materials that comprehensively describe the challenges of data management and how to meet them by:
Asynchronous System-on-Chip Interconnect describes the use of an entirely asynchronous system-bus for the modular construction of integrated circuits. Industry is just awakening to the benefits of asynchronous design in avoiding the problems of clock-skew and multiple clock-domains, and in parallel with this it is coming to grips with Intellectual Property (IP) based design flows which emphasise the need for a flexible interconnect strategy. In this book, John Bainbridge investigates the design of an asynchronous on-chip interconnect, looking at all the stages of the design from the choice of wiring layout, through asynchronous signalling protocols to the higher level problems involved in supporting split transactions. The MARBLE bus (the first asynchronous SoC bus), used in a commercial demonstrator chip containing a mixture of asynchronous and synchronous macrocells, is used as a concrete example throughout the book.
Distributed applications are a necessity in most central application sectors of the contemporary information society, including e-commerce, e-banking, e-learning, e-health, telecommunication and transportation. This results from a tremendous growth of the role that the Internet plays in business, administration and our everyday activities. This trend is going to be even further expanded in the context of advances in broadband wireless communication. New Developments in Distributed Applications and Interoperable Systems focuses on the techniques available or under development with the goal to ease the burden of constructing reliable and maintainable interoperable information systems providing services in the global communicating environment. The topics covered in this book include: * Context-aware applications; * Integration and interoperability of distributed systems; * Software architectures and services for open distributed systems; * Management, security and quality of service issues in distributed systems; * Software agents and mobility; * Internet and other related problem areas.The book contains the proceedings of the Third International Working Conference on Distributed Applications and Interoperable Systems (DAIS'2001), which was held in September 2001 in Krakow, Poland, and sponsored by the International Federation on Information Processing (IFIP). The conference program presents the state of the art in research concerning distributed and interoperable systems. This is a topical research area where much activity is currently in progress. Interesting new aspects and innovative contributions are still arising regularly. The DAIS series of conferences is one of the main international forums where these important findings are reported.
Performance of Web Services provides innovative techniques to improve the performance of Web Services and to meet QoS (Quality of Service) requirements, including performance, reliability and security. The author presents two levels of Web Services: the "kernel" (the SOAP engine, which delivers messages from one point to another through various networks), and the "server side" (which processes heavy loads/requests). The primary objective of this book is the timely execution of delivered applications. Case studies and examples are provided throughout this book.
The general trend of modern network devices towards greater intelligence and programmability is accelerating the development of systems that are increasingly autonomous and to a certain degree self-managing. Examples range from router scripting environments to fully programmable server blades. This has opened up a new field of computer science research, reflected in this new volume. This selection of contributions to the first ever international workshop on network-embedded management applications (NEMA) features six papers selected from submissions to the workshop, held in October 2010 at Niagara Falls, Canada. They represent a wide cross-section of the current work in this vital field of inquiry. Covering a diversity of perspectives, the volume's dual structure first of all examines the 'enablers' for NEMAs-the platforms, frameworks, and development environments which facilitate the evolution of network-embedded management and applications. The second section of the book covers network-embedded applications that might both empower and benefit from such enabling platforms. These papers cover topics ranging from deciding where to best place management control functions inside a network to a discussion of how multi-core hardware processors can be leveraged for traffic filtering applications. The section concludes with an analysis of a delay-tolerant network application in the context of the 'One Laptop per Child' program. There is a growing recognition that it is vital to make network operation and administration as easy as possible to contain operational expenses and cope with ever shorter control cycles. This volume provides researchers in the field with the very latest in current thinking.
This book constitutes the refereed proceedings of the 17th International Conference on Distributed Computer and Communication Networks, DCCN 2013, held in Moscow, Russia, in October 2013.
Computer Networks, Architecture and Applications covers many aspects of research in modern communications networks for computing purposes.
This book describes the key concepts, principles and implementation options for creating high-assurance cloud computing solutions. The guide starts with a broad technical overview and basic introduction to cloud computing, looking at the overall architecture of the cloud, client systems, the modern Internet and cloud computing data centers. It then delves into the core challenges of showing how reliability and fault-tolerance can be abstracted, how the resulting questions can be solved, and how the solutions can be leveraged to create a wide range of practical cloud applications. The author's style is practical, and the guide should be readily understandable without any special background. Concrete examples are often drawn from real-world settings to illustrate key insights. Appendices show how the most important reliability models can be formalized, describe the API of the Isis2 platform, and offer more than 80 problems at varying levels of difficulty.
Welcome to ANALYZE, designed to provide computer assistance for analyzing linear programs and their solutions. Chapter 1 gives an overview of ANALYZE and how to install it. It also describes how to get started and how to obtain further documentation and help on-line. Chapter 2 reviews the forms of linear programming models and describes the syntax of a model. One of the routine, but important, functions of ANALYZE is to enable convenient access to rows and columns in the matrix by conditional delineation. Chapter 3 illustrates simple queries, like DISPLAY, LIST, and PICTURE. This chapter also introduces the SUBMAT command level to define any submatrix by an arbitrary sequence of additions, deletions and reversals. Syntactic explanations and a schema view are also illustrated. Chapter 4 goes through some elementary exercises to demonstrate computer assisted analysis and introduce additional conventions of the ANALYZE language. Besides simple queries, it demonstrates the INTERPRT command, which automates the analysis process and gives English explanations of results. The last 2 exercises are diagnoses of elementary infeasible instances of a particular model. Chapter 5 progresses to some advanced uses of ANALYZE. The first is blocking to obtain macro views of the model and for finding embedded substructures, like a netform. The second is showing rates of substitution described by the basic equations. Then, the use of the REDUCE and BASIS commands are illustrated for a variety of applications, including solution analysis, infeasibility diagnosis, and redundancy detection.
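As a rough, hypothetical analogue of one of the simple queries described above - ANALYZE's PICTURE, which displays the structure of a chosen submatrix - the sketch below prints the sign pattern of an LP constraint matrix. It does not reproduce ANALYZE's actual syntax or output format; the function name and matrix are illustrative only.

```python
# Hypothetical analogue of a PICTURE-style query: show where an LP
# constraint matrix has positive ('+'), negative ('-'), and structurally
# zero ('.') coefficients. Not ANALYZE's real syntax or output format.
def picture(matrix):
    rows = []
    for row in matrix:
        rows.append("".join("+" if a > 0 else "-" if a < 0 else "." for a in row))
    return rows

# A tiny illustrative constraint matrix (3 rows, 3 columns).
A = [
    [1.0, -2.0, 0.0],
    [0.0,  1.0, 1.0],
    [-1.0, 0.0, 3.0],
]
for line in picture(A):
    print(line)
```

Sign-pattern views like this are useful for spotting embedded substructures (such as network rows) at a glance, which is the kind of macro view the chapter on blocking pursues in more depth.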
Adequate verification is the key issue not only in today's arms control, arms limitation, and disarmament regimes, but also in less spectacular areas like auditing in economics or control of environmental pollution. Statistical methodologies and system analytical approaches are the tools developed over the past decades for quantifying those components of adequate verification which are quantifiable, i.e., numbers, inventories, mass transfers, etc., together with their uncertainties. In his book Safeguards Systems Analysis, Professor Rudolf Avenhaus condenses the experience and expertise he has gained over the past 20 years, when his work was mainly related to the development of the IAEA's system for safeguarding nuclear materials, to system analytical studies at IIASA in the field of future energy requirements and their risks, and to the application of statistical techniques to arms control. The result is a unified and up-to-date presentation and analysis of the quantitative aspects of safeguards systems, and the application of the more important findings to practical problems. International Nuclear Material Safeguards, by far the most advanced verification system in the field of arms limitation, is used as the main field of application for the game theoretical analysis, material accountancy theory, and the theory on verification of material accounting data developed in the first four chapters.
It is man's ongoing hope that a machine could somehow adapt to its environment by reorganizing itself. This is what the notion of self-organizing robots is based on. The theme of this book is to examine the feasibility of creating such robots within the limitations of current mechanical engineering. The topics comprise the following aspects of such a pursuit: the philosophy of design of self-organizing mechanical systems; self-organization in biological systems; the history of self-organizing mechanical systems; a case study of a self-assembling/self-repairing system as an autonomous distributed system; a self-organizing robot that can create its own shape and robotic motion; implementation and instrumentation of self-organizing robots; and the future of self-organizing robots. All topics are illustrated with many up-to-date examples, including those from the authors' own work. The book does not require advanced knowledge of mathematics to be understood, and will be of great benefit to students in the robotics discipline, including in the areas of mechanics, control, electronics, and computer science. It is also an important source for researchers who wish to investigate the field of robotics or who have an interest in the application of self-organizing phenomena.
This book constitutes the thoroughly refereed proceedings of the 21st International Conference on Computer Networks, CN 2014, held in Brunow, Poland, in June 2014. The 34 revised full papers presented were carefully reviewed and selected for inclusion in the book. The papers in these proceedings cover the following topics: computer networks, teleinformatics and communications, new technologies, queueing theory, innovative applications and networked and IT-related aspects of e-business.
Control system design is a challenging task for practicing engineers. It requires knowledge of different engineering fields, a good understanding of technical specifications and good communication skills. The current book introduces the reader to practical control system design, bridging the gap between theory and practice. The control design techniques presented in the book are all model based, considering the needs and possibilities of practicing engineers. Classical control design techniques are reviewed, and methods are presented for verifying the robustness of a design. It is shown how the designed control algorithm can be implemented in real-time and tested, fulfilling different safety requirements. Good design practices and the systematic software development process are emphasized in the book according to the generic standard IEC 61508. The book is mainly addressed to practicing control and embedded software engineers - working in research and development - as well as graduate students who are faced with the challenge to design control systems and implement them in real-time.
This book constitutes the refereed proceedings of the 29th International Supercomputing Conference, ISC 2014, held in Leipzig, Germany, in June 2014. The 34 revised full papers presented together were carefully reviewed and selected from 79 submissions. The papers cover the following topics: scalable applications with 50K+ cores; advances in algorithms; scientific libraries; programming models; architectures; performance models and analysis; automatic performance optimization; parallel I/O and energy efficiency.
This book constitutes the thoroughly refereed post-conference proceedings of the Second International Workshop on Energy Efficient Data Centers, E(2)DC 2013, held in Berkeley, CA, USA, in May 2013; co-located with SIGCOMM e-Energy 2013. The 8 revised full papers presented were carefully reviewed and selected from numerous submissions. The papers are organized in topical sections on energy and workload measurement; energy management; simulators and control.
Learn how to write R code with fewer bugs. The problem with programming is that you are always one typo away from writing something silly. Likewise with data analysis, a small mistake in your model can lead to a big mistake in your results. Combining the two disciplines means that it is all too easy for a missed minus sign to generate a false prediction that you don't spot until it's too late. Testing is the only way to be sure that your code, and your results, are correct. Testing R Code teaches you how to perform development-time testing using the testthat package, allowing you to ensure that your code works as intended. The book also teaches run-time testing using the assertive package, enabling your users to correctly run your code. After beginning with an introduction to testing in R, the book explores more advanced cases such as integrating tests into R packages; testing code that accesses databases; testing C++ code with Rcpp; and testing graphics. Each topic is explained with real-world examples, and has accompanying exercises for readers to practise their skills - only a small amount of experience with R is needed to get started!
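The book's two testing levels - development-time tests of the author's logic (testthat) and run-time checks of the user's inputs (assertive) - are R-specific, but the underlying split is language-neutral. The sketch below illustrates the same two ideas in Python; the function and its tests are hypothetical examples, not taken from the book.

```python
# Hypothetical illustration of the two testing levels the book describes,
# sketched in Python rather than R: run-time checks guard the user's
# inputs; development-time tests guard the author's logic.

def relative_change(old, new):
    # Run-time testing (the assertive-style idea): validate inputs
    # before computing, so users get a clear error, not a wrong answer.
    if old == 0:
        raise ValueError("old must be nonzero")
    return (new - old) / old

def test_relative_change():
    # Development-time testing (the testthat-style idea): pin down the
    # expected behaviour, including the failure case.
    assert relative_change(100, 110) == 0.1
    assert relative_change(100, 90) == -0.1
    try:
        relative_change(0, 5)
    except ValueError:
        pass
    else:
        raise AssertionError("expected ValueError for old == 0")

test_relative_change()
print("all tests passed")
```

The same division of labour holds in R: testthat tests live alongside the package and run during development, while assertive checks ship inside the functions and run every time a user calls them.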