This book constitutes the refereed proceedings of the 13th IFIP WG 6.1 International Conference on Distributed Applications and Interoperable Systems, DAIS 2013, held in Florence, Italy, in June 2013, as part of the 8th International Federated Conference on Distributed Computing Techniques, DisCoTec 2013. The 12 revised full papers and 9 short papers presented were carefully reviewed and selected from 42 submissions. The papers present state-of-the-art research results and case studies in the area of distributed applications and interoperable systems focussing on cloud computing, replicated storage, and peer-to-peer computing.
This book constitutes the thoroughly refereed post-conference proceedings of the Third International ICST Conference on Sensor Systems and Software, S-Cube 2012, held in Lisbon, Portugal, in June 2012. The 12 revised full papers presented, together with four invited talks, were carefully reviewed and selected from over 18 submissions and cover a wide range of topics including middleware, frameworks, learning from sensor data streams, stock management, e-health, and the Web of Things.
Control of Discrete-event Systems provides a survey of the most important topics in discrete-event systems theory with particular focus on finite-state automata, Petri nets and max-plus algebra. Coverage ranges from introductory material on the basic notions and definitions of discrete-event systems to more recent results. Special attention is given to results on supervisory control, state estimation and fault diagnosis of both centralized and distributed/decentralized systems developed in the framework of the Distributed Supervisory Control of Large Plants (DISC) project. Later parts of the text are devoted to the study of congested systems through fluidization, an over-approximation allowing a much more efficient study of observation and control problems of timed Petri nets. Finally, the max-plus algebraic approach to the analysis and control of choice-free systems is also considered. Control of Discrete-event Systems provides an introduction to discrete-event systems for readers who are not familiar with this class of systems, but also provides an introduction to research problems and open issues of current interest to readers already familiar with them. Most of the material in this book has been presented during a Ph.D. school held in Cagliari, Italy, in June 2011.
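As standard background for the max-plus material mentioned above (an illustrative sketch, not taken from the book itself): in the max-plus semiring, addition is the maximum and multiplication is ordinary addition, so the firing times of a timed event graph satisfy a linear-looking recursion.

```latex
% Illustrative max-plus background (standard definitions, not drawn from the book):
% \oplus denotes maximization, \otimes denotes ordinary addition.
\[
  a \oplus b = \max(a, b), \qquad a \otimes b = a + b
\]
% State recursion over the firing times x_i(k) of a timed event graph:
\[
  x(k) = A \otimes x(k-1), \qquad
  x_i(k) = \max_j \bigl( a_{ij} + x_j(k-1) \bigr)
\]
```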
This book constitutes the refereed proceedings of the 12th IFIP WG 6.1 International Conference on Distributed Applications and Interoperable Systems, DAIS 2012, held in Stockholm, Sweden, in June 2012 as one of the DisCoTec 2012 events. The 12 revised full papers and 9 short papers presented were carefully reviewed and selected from 58 submissions. The papers are organized in topical sections on peer-to-peer and large scale systems; security and reliability in web, cloud, p2p, and mobile systems; wireless, mobile, and pervasive systems; multidisciplinary approaches and case studies, ranging from Grid and parallel computing to multimedia and socio-technical systems; and service-oriented computing and e-commerce.

This book constitutes the thoroughly refereed post-conference proceedings of the 7th International Conference on Heterogeneous Networking for Quality, Reliability, Security and Robustness, QShine 2010. The 37 revised full papers presented, along with 7 papers from the co-located Dedicated Short Range Communications Workshop, DSRC 2010, were carefully selected from numerous submissions. Conference papers are organized into 9 technical sessions, covering the topics of cognitive radio networks, security, resource allocation, wireless protocols and algorithms, advanced networking systems, sensor networks, scheduling and optimization, routing protocols, and multimedia and stream processing. Workshop papers are organized into two sessions: DSRC networks and DSRC security.
This book constitutes the thoroughly refereed post-conference proceedings of the workshops of the 18th International Conference on Parallel Computing, Euro-Par 2012, held on Rhodes Island, Greece, in August 2012. The papers of these 10 workshops (BDMC, CGWS, HeteroPar, HiBB, OMHI, Paraphrase, PROPER, UCHPC, VHPC) focus on the promotion and advancement of all aspects of parallel and distributed computing.
This book constitutes the proceedings of the 7th International ICST Conference, TridentCom 2011, held in Shanghai, China, in April 2011. Out of numerous submissions the Program Committee finally selected 26 full papers and 2 invited papers. They focus on topics such as future Internet testbeds, future wireless testbeds, federated and large-scale testbeds, network and resource virtualization, overlay network testbeds, management, provisioning and tools for networking research, and experimentally driven research and user experience evaluation.
This book constitutes the refereed post-proceedings of the 9th European Performance Engineering Workshop, EPEW 2012, held in Munich, Germany, and the 28th UK Performance Engineering Workshop, UKPEW 2012, held in Edinburgh, UK, in July 2012. The 15 regular papers and one poster presentation paper, presented together with 2 invited talks, were carefully reviewed and selected from numerous submissions. The papers cover a wide range of topics, from classical performance modeling areas such as wireless network protocols and parallel execution of scientific codes, to hot topics such as energy-aware computing, and to unexpected ventures into ranking professional tennis players. In addition to new case studies, the papers also present new techniques for dealing with the modeling challenges brought about by the increasing complexity and scale of systems today.
This book constitutes the refereed proceedings of the 14th International Conference on Passive and Active Measurement, PAM 2013, held in Hong Kong, China, in March 2013. The 24 revised full papers presented were carefully reviewed and selected from 74 submissions. The papers have been organized in the following topical sections: measurement design, experience and analysis; Internet wireless and mobility; performance measurement; protocol and application behavior; characterization of network usage; and network security and privacy. In addition, 9 poster abstracts have been included.
This preface tells the story of how Multimodal Usability responds to a special challenge. Chapter 1 describes the goals and structure of this book. The idea of describing how to make multimodal computer systems usable arose in the European Network of Excellence SIMILAR - "Taskforce for creating human-machine interfaces SIMILAR to human-human communication," 2003-2007, www.similar.cc. SIMILAR brought together people from multimodal signal processing and usability with the aim of creating enabling technologies for new kinds of multimodal systems and demonstrating results in research prototypes. Most of our colleagues in the network were, in fact, busy extracting features and figuring out how to demonstrate progress in working interactive systems, while claiming not to have too much of a notion of usability in system development and evaluation. It was proposed that the authors support the usability of the many multimodal prototypes underway by researching and presenting a methodology for building usable multimodal systems. We accepted the challenge, first and foremost, no doubt, because the formidable team spirit in SIMILAR could make people accept outrageous things. Second, having worked for nearly two decades on making multimodal systems usable, we were curious - curious at the opportunity to try to understand what happens to traditional usability work, that is, work in human-computer interaction centred around traditional graphical user interfaces (GUIs), when systems become as multimodal and as advanced in other ways as those we build in research today.
This book constitutes the proceedings of the 4th International Workshop on Traffic Monitoring and Analysis, TMA 2012, held in Vienna, Austria, in March 2012. The thoroughly refereed 10 full papers and 8 short papers presented in this volume were carefully reviewed and selected from 31 submissions. The contributions are organized in topical sections on traffic analysis and characterization: new results and improved measurement techniques; measurement for QoS, security and service level agreements; and tools for network measurement and experimentation.
Database and Application Security XV provides a forum for original research results, practical experiences, and innovative ideas in database and application security. With the rapid growth of large databases and the application systems that manage them, security issues have become a primary concern in business, industry, government and society. These concerns are compounded by the expanding use of the Internet and wireless communication technologies. This volume covers a wide variety of topics related to security and privacy of information in systems and applications, including:
* Access control models;
* Role and constraint-based access control;
* Distributed systems;
* Information warfare and intrusion detection;
* Relational databases;
* Implementation issues;
* Multilevel systems;
* New application areas, including XML.
Database and Application Security XV contains papers, keynote addresses, and panel discussions from the Fifteenth Annual Working Conference on Database and Application Security, organized by the International Federation for Information Processing (IFIP) Working Group 11.3 and held July 15-18, 2001 in Niagara-on-the-Lake, Ontario, Canada.
This book constitutes the refereed proceedings of the 25th International Conference on Architecture of Computing Systems, ARCS 2012, held in Munich, Germany, in February/March 2012. The 20 revised full papers presented in 7 technical sessions were carefully reviewed and selected from 65 submissions. The papers are organized in topical sections on robustness and fault tolerance, power-aware processing, parallel processing, processor cores, optimization, and communication and memory.
Evolution through natural selection has been going on for a very long time. Evolution through artificial selection has been practiced by humans for a large part of our history, in the breeding of plants and livestock. Artificial evolution, where we evolve an artifact through artificial selection, has been around since electronic computers became common: about 30 years. Right from the beginning, people have suggested using artificial evolution to design electronics automatically. Only recently, though, have suitable reconfigurable silicon chips become available that make it easy for artificial evolution to work with a real, physical, electronic medium: before them, experiments had to be done entirely in software simulations. Early research concentrated on the potential applications opened up by the raw speed advantage of dedicated digital hardware over software simulation on a general-purpose computer. This book is an attempt to show that there is more to it than that. In fact, a radically new viewpoint is possible, with fascinating consequences. This book was written as a doctoral thesis, submitted in September 1996. As such, it was a rather daring exercise in ruthless brevity. Believing that the contribution I had to make was essentially a simple one, I resisted being drawn into peripheral discussions. In the places where I deliberately drop a subject, this implies neither that it's not interesting, nor that it's not relevant: just that it's not a crucial part of the tale I want to tell here.
Traffic Measurement on the Internet presents several novel online measurement methods that are compact and fast. Traffic measurement provides critical real-world data for service providers and network administrators to perform capacity planning, accounting and billing, anomaly detection, and service provision. Statistical methods play important roles in many measurement functions, including system design, model building, formula derivation, and error analysis. One of the greatest challenges in designing an online measurement function is to minimize the per-packet processing time in order to keep up with the line speed of modern routers. This book also introduces a challenging problem - the measurement of per-flow information in high-speed networks - as well as its solution. The last chapter discusses origin-destination flow measurement.
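Purely as an illustration of the kind of compact, constant-work-per-packet structure that such online measurement relies on (a generic sketch, not an algorithm taken from this book; the class and flow identifiers below are made up), a Count-Min sketch for approximate per-flow packet counts could look like this:

```python
# Generic illustration of a compact per-flow counter (Count-Min sketch);
# standard background, not a method from the book.
import hashlib

class CountMinSketch:
    def __init__(self, width=1024, depth=4):
        self.width, self.depth = width, depth
        self.table = [[0] * width for _ in range(depth)]

    def _index(self, flow_id, row):
        # Hash the flow identifier independently for each row.
        h = hashlib.blake2b(f"{row}:{flow_id}".encode(), digest_size=8).digest()
        return int.from_bytes(h, "big") % self.width

    def add(self, flow_id, count=1):
        # Constant work per packet: one increment per row.
        for row in range(self.depth):
            self.table[row][self._index(flow_id, row)] += count

    def estimate(self, flow_id):
        # Collisions only inflate counts, so the row minimum is an upper bound.
        return min(self.table[row][self._index(flow_id, row)]
                   for row in range(self.depth))

sketch = CountMinSketch()
for flow in ["10.0.0.1->10.0.0.2", "10.0.0.1->10.0.0.2", "10.0.0.3->10.0.0.4"]:
    sketch.add(flow)
print(sketch.estimate("10.0.0.1->10.0.0.2"))  # approximately 2
```

Each packet costs only `depth` hash-and-increment operations and the memory footprint is fixed, which is what keeps per-packet processing time bounded regardless of how many flows are active.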
Software is continuously increasing in complexity. Paradigmatic shifts and new development frameworks make it easier to implement software - but not to test it. Software testing remains a topic with many open questions with regard to both low-level technical aspects and the organizational embedding of testing. However, a desired level of software quality cannot be achieved either by choosing a technical procedure or by optimizing testing processes alone. In fact, it requires a holistic approach. This Brief summarizes the current knowledge of software testing and introduces three current research approaches. The base of knowledge is presented comprehensively in scope but concisely in length; thereby the volume can be used as a reference. Research is highlighted from different points of view. Firstly, progress on developing a tool for automated test case generation (TCG) based on a program's structure is introduced. Secondly, results from a project with industry partners on testing best practices are highlighted. Thirdly, embedding testing into the e-assessment of programming exercises is described.
This book presents the technical program of the International Embedded Systems Symposium (IESS) 2009. Timely topics, techniques and trends in embedded system design are covered by the chapters in this volume, including modelling, simulation, verification, test, scheduling, platforms and processors. Particular emphasis is placed on automotive systems and wireless sensor networks. Sets of actual case studies in the area of embedded system design are also included. Over recent years, embedded systems have gained an enormous amount of processing power and functionality and now enter numerous application areas, due to the fact that many of the formerly external components can now be integrated into a single System-on-Chip. This tendency has resulted in a dramatic reduction in the size and cost of embedded systems. As a unique technology, the design of embedded systems is an essential element of many innovations. Embedded systems meet their performance goals, including real-time constraints, through a combination of special-purpose hardware and software components tailored to the system requirements. Both the development of new features and the reuse of existing intellectual property components are essential to keeping up with ever more demanding customer requirements. Furthermore, design complexities are steadily growing with an increasing number of components that have to cooperate properly. Embedded system designers have to cope with multiple goals and constraints simultaneously, including timing, power, reliability, dependability, maintenance, packaging and, last but not least, price.
This text has been produced for the benefit of students in computer and information science and for experts involved in the design of microprocessors. It deals with the design of complex VLSI chips, specifically of microprocessor chip sets. The aim is on the one hand to provide an overview of the state of the art, and on the other hand to describe specific design know-how. The depth of detail presented goes considerably beyond the level of information usually found in computer science text books. The rapidly developing discipline of designing complex VLSI chips, especially microprocessors, requires a significant extension of the state of the art. We are observing the genesis of a new engineering discipline, the design and realization of very complex logical structures, and we are obviously only at the beginning. This discipline is still young and immature, alternate concepts are still evolving, and "the best way to do it" is still being explored. Therefore it is not yet possible to describe the different methods in use and to evaluate them. However, the economic impact is significant today, and the heavy investment that companies in the USA, the Far East, and in Europe are making in generating VLSI design competence is a testimony to the importance this field is expected to have in the future. Staying competitive requires mastering and extending this competence.
Wireless technology and handheld devices are dramatically changing the degrees of interaction throughout the world, further creating a ubiquitous network society. The emergence of advanced wireless telecommunication technologies and devices in today's society has increased accuracy and access rate, all of which are increasingly essential as the volume of information handled by users expands at an accelerated pace. The requirement for mobility leads to increasing pressure for applications and wireless systems to revolve around the concept of continuous communication with anyone, anywhere, and anytime. With the wireless technology and devices come flexibility in network design and quicker deployment time. Over the past decades, numerous wireless telecommunication topics have received increasing attention from industry professionals, academics, and government agencies. Among these topics are the wireless Internet; multimedia; 3G/4G wireless networks and systems; mobile and wireless network security; wireless network modeling, algorithms, and simulation; satellite-based systems; 802.11x; RFID; and broadband wireless access.
Model based testing is the most powerful technique for testing hardware and software systems. Models in Hardware Testing describes the use of models at all the levels of hardware testing. The relevant fault models for nanoscaled CMOS technology are introduced, and their implications on fault simulation, automatic test pattern generation, fault diagnosis, memory testing and power aware testing are discussed. Models and the corresponding algorithms are considered with respect to the most recent state of the art, and they are put into a historical context by a concluding chapter on the use of physical fault models in fault tolerance.
Service provisioning in ad hoc networks is challenging given the difficulties of communicating over a wireless channel and the potential heterogeneity and mobility of the devices that form the network. Service placement is the process of selecting an optimal set of nodes to host the implementation of a service in light of a given service demand and network topology. The key advantage of active service placement in ad hoc networks is that it allows for the service configuration to be adapted continuously at run time. "Service Placement in Ad Hoc Networks" proposes the SPi service placement framework as a novel approach to service placement in ad hoc networks. The SPi framework takes advantage of the interdependencies between service placement, service discovery and the routing of service requests to minimize signaling overhead. The work also proposes the Graph Cost / Single Instance and the Graph Cost / Multiple Instances placement algorithms.
This volume contains papers representing a comprehensive record of the contributions to the fifth workshop at EG '90 in Lausanne. The Eurographics hardware workshops have now become an established forum for the exchange of information about the latest developments in this field of growing importance. The first workshop took place during EG '86 in Lisbon. All participants considered this to be a very rewarding event to be repeated at future EG conferences. This view was reinforced at the EG '87 Hardware Workshop in Amsterdam and firmly established the need for such a colloquium in this specialist area within the annual EG conference. The third EG Hardware Workshop took place in Nice in 1988 and the fourth in Hamburg at EG '89. The first part of the book is devoted to rendering machines. The papers in this part address techniques for accelerating the rendering of images and efficient ways of improving their quality. The second part on ray tracing describes algorithms and architectures for producing photorealistic images, with emphasis on ways of reducing the time for this computationally intensive task. The third part on visualization systems covers a number of topics, including voxel-based systems, radiosity, animation and special rendering techniques. The contributions show that there is flourishing activity in the development of new algorithmic and architectural ideas and, in particular, in absorbing the impact of VLSI technology. The increasing diversity of applications encourages new solutions, and graphics hardware has become a research area of high activity and importance.
This Guide to Sun Administration is a reference manual written by Sun administrators for Sun administrators. The book is not intended to be a complete guide to UNIX Systems Administration; instead it will concentrate on the special issues that are particular to the Sun environment. It will take you through the basic steps necessary to install and maintain a network of Sun computers. Along the way, helpful ideas will be given concerning NFS, YP, backup and restore procedures, as well as many useful installation tips that can make a system administrator's job less painful. Specifically, SunOS 4.0 through 4.0.3 will be studied; however, many of the ideas and concepts presented are generic enough to be used on any version of SunOS. This book is not intended to be a basic introduction to SunOS. It is assumed that the reader will have at least a year of experience supporting UNIX. Book Overview: The first chapter gives a description of the system types that will be discussed throughout the book. An understanding of all of the system types is needed to comprehend the rest of the book. Chapter 2 provides the information necessary to install a workstation. The format utility and the steps involved in the suninstall process are covered in detail. Ideas and concepts about partitioning are included in this chapter. YP is the topic of the third chapter. A specific description of each YP map and each YP command is presented, along with some tips about ways to best utilize this package in your environment.
Massively Parallel Systems (MPSs), with their promise of scalable computation and storage space, are becoming increasingly important for high-performance computing. The growing acceptance of MPSs in academia is clearly apparent. However, in industrial companies, their usage remains low. The programming of MPSs is still the big obstacle, and solving this software problem is sometimes referred to as one of the most challenging tasks of the 1990s. The 1994 working conference on "Programming Environments for Massively Parallel Systems" was the latest event of the working group WG 10.3 of the International Federation for Information Processing (IFIP) in this field. It succeeded the 1992 conference in Edinburgh on "Programming Environments for Parallel Computing". The research and development work discussed at the conference addresses the entire spectrum of software problems, including virtual machines which are less cumbersome to program, more convenient programming models, advanced programming languages, and especially more sophisticated programming tools, as well as algorithms and applications.
The main objective of this workshop was to review and discuss the state of the art and the latest advances in the area of 1-10 Gbit/s throughput for local and metropolitan area networks. The first generation of local area networks had throughputs in the range 1-20 Mbit/s. Well-known examples of this first generation of networks are the Ethernet and the Token Ring. The second generation of networks allowed throughputs in the range 100-200 Mbit/s. Representatives of this generation are the FDDI double ring and the DQDB (IEEE 802.6) networks. The third generation networks will have throughputs in the range 1-10 Gbit/s. The rapid development and deployment of fiber optics worldwide, as well as the projected emergence of a market for broadband services, have given rise to the development of broadband ISDN standards. Currently, the Asynchronous Transfer Mode (ATM) appears to be a viable solution for broadband networks. The possibility of all-optical networks in the future is being examined. This would allow the tapping of the approximately 50 terahertz of bandwidth available in the lightwave range of the frequency spectrum. It is envisaged that using such a high-speed network it will be feasible to distribute high-quality video to the home, to carry out rapid retrieval of radiological and other scientific images, and to enable multimedia conferencing between various parties.