This book constitutes the thoroughly refereed post-conference proceedings of the Third International ICST Conference on Sensor Systems and Software, S-Cube 2012, held in Lisbon, Portugal, in June 2012. The 12 revised full papers presented together with four invited talks were carefully reviewed and selected from over 18 submissions; they cover a wide range of topics including middleware, frameworks, learning from sensor data streams, stock management, e-health, and the Web of Things.
This state-of-the-art survey features topics related to the impact of multicore, manycore, and coprocessor technologies in science and large-scale applications in an interdisciplinary environment. The papers included in this survey cover research in mathematical modeling, design of parallel algorithms, aspects of microprocessor architecture, parallel programming languages, hardware-aware computing, heterogeneous platforms, manycore technologies, performance tuning, and requirements for large-scale applications. The contributions presented in this volume are an outcome of an inspiring conference conceived and organized by the editors at the University of Applied Sciences (HfT) in Stuttgart, Germany, in September 2012. The 10 revised full papers selected from 21 submissions are presented together with 12 poster abstracts; they focus on the combination of new aspects of microprocessor technologies, parallel applications, numerical simulation, and software development, and they clearly show the potential of emerging technologies in the area of multicore and manycore processors that are paving the way towards personal supercomputing and very likely towards exascale computing.
This book constitutes the refereed proceedings of the 12th IFIP WG 6.1 International Conference on Distributed Applications and Interoperable Systems, DAIS 2012, held in Stockholm, Sweden, in June 2012 as one of the DisCoTec 2012 events. The 12 revised full papers and 9 short papers presented were carefully reviewed and selected from 58 submissions. The papers are organized in topical sections on peer-to-peer and large scale systems; security and reliability in web, cloud, p2p, and mobile systems; wireless, mobile, and pervasive systems; multidisciplinary approaches and case studies, ranging from Grid and parallel computing to multimedia and socio-technical systems; and service-oriented computing and e-commerce.
This book constitutes the proceedings of the 7th International ICST Conference, TridentCom 2011, held in Shanghai, China, in April 2011. Out of numerous submissions the Program Committee finally selected 26 full papers and 2 invited papers. They focus on topics such as future Internet testbeds, future wireless testbeds, federated and large-scale testbeds, network and resource virtualization, overlay network testbeds, management provisioning and tools for networking research, and experimentally driven research and user experience evaluation.

This book constitutes the thoroughly refereed post-conference proceedings of the 7th International Conference on Heterogeneous Networking for Quality, Reliability, Security and Robustness, QShine 2010. The 37 revised full papers presented, along with 7 papers from the allocated Dedicated Short Range Communications Workshop, DSRC 2010, were carefully selected from numerous submissions. Conference papers are organized into 9 technical sessions, covering the topics of cognitive radio networks, security, resource allocation, wireless protocols and algorithms, advanced networking systems, sensor networks, scheduling and optimization, routing protocols, and multimedia and stream processing. Workshop papers are organized into two sessions: DSRC networks and DSRC security.
This book constitutes the thoroughly refereed post-conference proceedings of the workshops of the 18th International Conference on Parallel Computing, Euro-Par 2012, held in Rhodes, Greece, in August 2012. The papers of these 10 workshops (BDMC, CGWS, HeteroPar, HiBB, OMHI, Paraphrase, PROPER, UCHPC, and VHPC) focus on the promotion and advancement of all aspects of parallel and distributed computing.
Most of the papers in this volume were presented at the NATO Advanced Research Workshop High Performance Computing: Technology and Application, held in Cetraro, Italy, from 24 to 26 June 1996. The main purpose of the Workshop was to discuss some key scientific and technological developments in high performance computing, identify significant trends and define desirable research objectives. The volume structure corresponds, in general, to the outline of the workshop technical agenda: general concepts and emerging systems, software technology, algorithms and applications. One of the Workshop innovations was an effort to extend slightly the scope of the meeting from scientific/engineering computing to enterprise-wide computing. The papers on performance and scalability of database servers, and Oracle DBMS, reflect this attempt. We hope that after reading this collection of papers the readers will have a good idea about some important research and technological issues in high performance computing. We wish to give our thanks to the NATO Scientific and Environmental Affairs Division for being the principal sponsor for the Workshop. Also we are pleased to acknowledge other institutions and companies that supported the Workshop: European Union: European Commission DGIII-Industry, CNR: National Research Council of Italy, University of Calabria, Alenia Spazio, Centro Italiano Ricerche Aerospaziali, ENEA: Italian National Agency for New Technology, Energy and the Environment, Fujitsu, Hewlett Packard-Convex, Hitachi, NEC, Oracle, and Silicon Graphics-Cray Research. Editors, January 1997
This book constitutes the refereed proceedings of the 14th International Conference on Passive and Active Measurement, PAM 2013, held in Hong Kong, China, in March 2013. The 24 revised full papers presented were carefully reviewed and selected from 74 submissions. The papers have been organized in the following topical sections: measurement design, experience and analysis; Internet wireless and mobility; performance measurement; protocol and application behavior; characterization of network usage; and network security and privacy. In addition, 9 poster abstracts have been included.
This preface tells the story of how Multimodal Usability responds to a special challenge. Chapter 1 describes the goals and structure of this book. The idea of describing how to make multimodal computer systems usable arose in the European Network of Excellence SIMILAR - "Taskforce for creating human-machine interfaces SIMILAR to human-human communication," 2003-2007, www.similar.cc. SIMILAR brought together people from multimodal signal processing and usability with the aim of creating enabling technologies for new kinds of multimodal systems and demonstrating results in research prototypes. Most of our colleagues in the network were, in fact, busy extracting features and figuring out how to demonstrate progress in working interactive systems, while claiming not to have too much of a notion of usability in system development and evaluation. It was proposed that the authors support the usability of the many multimodal prototypes underway by researching and presenting a methodology for building usable multimodal systems. We accepted the challenge, first and foremost, no doubt, because the formidable team spirit in SIMILAR could make people accept outrageous things. Second, having worked for nearly two decades on making multimodal systems usable, we were curious - curious at the opportunity to try to understand what happens to traditional usability work, that is, work in human-computer interaction centred around traditional graphical user interfaces (GUIs), when systems become as multimodal and as advanced in other ways as those we build in research today.
Wireless technology and handheld devices are dramatically changing the degrees of interaction throughout the world, further creating a ubiquitous network society. The emergence of advanced wireless telecommunication technologies and devices in today's society has increased accuracy and access rate, all of which are increasingly essential as the volume of information handled by users expands at an accelerated pace. The requirement for mobility leads to increasing pressure for applications and wireless systems to revolve around the concept of continuous communication with anyone, anywhere, and anytime. With the wireless technology and devices come flexibility in network design and quicker deployment time. Over the past decades, numerous wireless telecommunication topics have received increasing attention from industry professionals, academics, and government agencies. Among these topics are the wireless Internet; multimedia; 3G/4G wireless networks and systems; mobile and wireless network security; wireless network modeling, algorithms, and simulation; satellite-based systems; 802.11x; RFID; and broadband wireless access.
Model-based testing is the most powerful technique for testing hardware and software systems. Models in Hardware Testing describes the use of models at all levels of hardware testing. The relevant fault models for nanoscaled CMOS technology are introduced, and their implications on fault simulation, automatic test pattern generation, fault diagnosis, memory testing and power-aware testing are discussed. Models and the corresponding algorithms are considered with respect to the most recent state of the art, and they are put into a historical context by a concluding chapter on the use of physical fault models in fault tolerance.
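As a small illustration of the fault-model ideas this book covers, here is a hypothetical Python sketch (not code from the book) that simulates a single stuck-at fault on a tiny two-gate circuit and checks whether a given test pattern detects it; the circuit, signal names, and fault are invented for this example.

```python
# Hypothetical single stuck-at fault simulation on a tiny two-gate netlist:
# y = (a AND b) OR c. The signal names and fault below are invented examples.
def circuit(a, b, c, stuck=None):
    """Evaluate the circuit, forcing any signal listed in `stuck` to its stuck value."""
    stuck = stuck or {}
    def val(name, value):
        return stuck.get(name, value)  # a stuck signal ignores its computed value
    a, b, c = val("a", a), val("b", b), val("c", c)
    n1 = val("n1", a & b)              # internal AND-gate output
    return val("y", n1 | c)            # primary output

def detects(pattern, fault):
    """A test pattern detects a fault if the good and faulty outputs differ."""
    return circuit(*pattern) != circuit(*pattern, stuck=fault)

if __name__ == "__main__":
    print(detects((1, 1, 0), {"n1": 0}))  # True: pattern 110 exposes n1 stuck-at-0
    print(detects((0, 0, 1), {"n1": 0}))  # False: the fault is masked when c = 1
```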
This book constitutes the refereed proceedings of the 25th International Conference on Architecture of Computing Systems, ARCS 2012, held in Munich, Germany, in February/March 2012. The 20 revised full papers presented in 7 technical sessions were carefully reviewed and selected from 65 submissions. The papers are organized in topical sections on robustness and fault tolerance, power-aware processing, parallel processing, processor cores, optimization, and communication and memory.
Evolution through natural selection has been going on for a very long time. Evolution through artificial selection has been practiced by humans for a large part of our history, in the breeding of plants and livestock. Artificial evolution, where we evolve an artifact through artificial selection, has been around since electronic computers became common: about 30 years. Right from the beginning, people have suggested using artificial evolution to design electronics automatically. Only recently, though, have suitable reconfigurable silicon chips become available that make it easy for artificial evolution to work with a real, physical, electronic medium: before them, experiments had to be done entirely in software simulations. Early research concentrated on the potential applications opened up by the raw speed advantage of dedicated digital hardware over software simulation on a general-purpose computer. This book is an attempt to show that there is more to it than that. In fact, a radically new viewpoint is possible, with fascinating consequences. This book was written as a doctoral thesis, submitted in September 1996. As such, it was a rather daring exercise in ruthless brevity. Believing that the contribution I had to make was essentially a simple one, I resisted being drawn into peripheral discussions. In the places where I deliberately drop a subject, this implies neither that it's not interesting, nor that it's not relevant: just that it's not a crucial part of the tale I want to tell here.
During the last two decades, there have been many reports about the success and failure of investments in ICT and information systems. Failures in particular have drawn a lot of attention. The outcome of the implementation of information and communication systems has often been disastrous. Recent research does not show that results have improved. This raises the question why so many ICT projects perform so badly. Information, Organization and Information Systems Design: An Integrated Approach to Information Problems aims at discussing measures to improve the results of information systems. Bart Prakken identifies various factors that explain the shortfall of information systems. Subsequently, he provides a profound discussion of the measures that can be taken to remove the causes of failure. When organizations are confronted with information problems, they will almost automatically look for ICT solutions. However, Prakken argues that more fundamental and often cheaper solutions are in many cases available. When looking for solutions to information problems, the inter-relationship between organization, information and the people within the organization should explicitly be taken into account. The measures that the author proposes are based on organizational redesign, particularly using the sociotechnical approach. In cases where ICT solutions do have to be introduced, Prakken discusses a number of precautionary measures that will help their implementation. The book aims to contribute to the scientific debate on how to solve information problems, and can be used in graduate and postgraduate courses. It is also helpful to managers.
Software is continuously increasing in complexity. Paradigmatic shifts and new development frameworks make it easier to implement software - but not to test it. Software testing remains a topic with many open questions with regard to both technical low-level aspects and the organizational embedding of testing. However, a desired level of software quality cannot be achieved either by choosing a technical procedure or by optimizing testing processes; in fact, it requires a holistic approach. This Brief summarizes the current knowledge of software testing and introduces three current research approaches. The base of knowledge is presented in a form that is comprehensive in scope but concise in length; the volume can thereby be used as a reference. Research is highlighted from different points of view. Firstly, progress on developing a tool for automated test case generation (TCG) based on a program's structure is introduced. Secondly, results from a project with industry partners on testing best practices are highlighted. Thirdly, embedding testing into e-assessment of programming exercises is described.
This book presents the technical program of the International Embedded Systems Symposium (IESS) 2009. Timely topics, techniques and trends in embedded system design are covered by the chapters in this volume, including modelling, simulation, verification, test, scheduling, platforms and processors. Particular emphasis is paid to automotive systems and wireless sensor networks. Sets of actual case studies in the area of embedded system design are also included. Over recent years, embedded systems have gained an enormous amount of processing power and functionality and now enter numerous application areas, due to the fact that many of the formerly external components can now be integrated into a single System-on-Chip. This tendency has resulted in a dramatic reduction in the size and cost of embedded systems. As a unique technology, the design of embedded systems is an essential element of many innovations. Embedded systems meet their performance goals, including real-time constraints, through a combination of special-purpose hardware and software components tailored to the system requirements. Both the development of new features and the reuse of existing intellectual property components are essential to keeping up with ever more demanding customer requirements. Furthermore, design complexities are steadily growing with an increasing number of components that have to cooperate properly. Embedded system designers have to cope with multiple goals and constraints simultaneously, including timing, power, reliability, dependability, maintenance, packaging and, last but not least, price.
This book constitutes the proceedings of the 4th International Workshop on Traffic Monitoring and Analysis, TMA 2012, held in Vienna, Austria, in March 2012. The thoroughly refereed 10 full papers and 8 short papers presented in this volume were carefully reviewed and selected from 31 submissions. The contributions are organized in topical sections on traffic analysis and characterization: new results and improved measurement techniques; measurement for QoS, security and service level agreements; and tools for network measurement and experimentation.
Service provisioning in ad hoc networks is challenging given the difficulties of communicating over a wireless channel and the potential heterogeneity and mobility of the devices that form the network. Service placement is the process of selecting an optimal set of nodes to host the implementation of a service in light of a given service demand and network topology. The key advantage of active service placement in ad hoc networks is that it allows for the service configuration to be adapted continuously at run time. "Service Placement in Ad Hoc Networks" proposes the SPi service placement framework as a novel approach to service placement in ad hoc networks. The SPi framework takes advantage of the interdependencies between service placement, service discovery and the routing of service requests to minimize signaling overhead. The work also proposes the Graph Cost / Single Instance and the Graph Cost / Multiple Instances placement algorithms.
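As a rough illustration of the kind of cost-based placement decision described above (not the SPi framework's actual Graph Cost algorithms), the following minimal Python sketch picks a single host node by minimizing the total demand-weighted hop distance; the topology, demand figures, and function names are invented.

```python
# Toy single-instance service placement: choose the host node that minimizes
# the total demand-weighted hop distance to all clients. The topology and
# demand values are invented; this is not the SPi Graph Cost implementation.
from collections import deque

def hop_distances(adj, source):
    """BFS hop counts from `source` over an adjacency-list dict."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def best_single_host(adj, demand):
    """Return the node minimizing sum(demand[c] * hops(host, c))."""
    def cost(host):
        d = hop_distances(adj, host)
        return sum(demand.get(c, 0) * d[c] for c in adj)
    return min(adj, key=cost)

if __name__ == "__main__":
    adj = {"a": ["b"], "b": ["a", "c", "d"], "c": ["b", "d"], "d": ["b", "c"]}
    demand = {"a": 1, "c": 2, "d": 3}     # service requests per node (made up)
    print(best_single_host(adj, demand))  # "d": the cheapest host for this demand
```

A multi-instance variant would presumably also weigh the cost of running additional service replicas against the reduced distances they buy, and an adaptive scheme would re-evaluate this trade-off as demand and topology change at run time.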
This volume contains papers representing a comprehensive record of the contributions to the fifth workshop at EG '90 in Lausanne. The Eurographics hardware workshops have now become an established forum for the exchange of information about the latest developments in this field of growing importance. The first workshop took place during EG '86 in Lisbon. All participants considered this to be a very rewarding event to be repeated at future EG conferences. This view was reinforced at the EG '87 Hardware Workshop in Amsterdam and firmly established the need for such a colloquium in this specialist area within the annual EG conference. The third EG Hardware Workshop took place in Nice in 1988 and the fourth in Hamburg at EG '89. The first part of the book is devoted to rendering machines. The papers in this part address techniques for accelerating the rendering of images and efficient ways of improving their quality. The second part on ray tracing describes algorithms and architectures for producing photorealistic images, with emphasis on ways of reducing the time for this computationally intensive task. The third part on visualization systems covers a number of topics, including voxel-based systems, radiosity, animation and special rendering techniques. The contributions show that there is flourishing activity in the development of new algorithmic and architectural ideas and, in particular, in absorbing the impact of VLSI technology. The increasing diversity of applications encourages new solutions, and graphics hardware has become a research area of high activity and importance.
This Guide to Sun Administration is a reference manual written by Sun administrators for Sun administrators. The book is not intended to be a complete guide to UNIX Systems Administration; instead it will concentrate on the special issues that are particular to the Sun environment. It will take you through the basic steps necessary to install and maintain a network of Sun computers. Along the way, helpful ideas will be given concerning NFS, YP, backup and restore procedures, as well as many useful installation tips that can make a system administrator's job less painful. Specifically, SunOS 4.0 through 4.0.3 will be studied; however, many of the ideas and concepts presented are generic enough to be used on any version of SunOS. This book is not intended to be a basic introduction to SunOS. It is assumed that the reader will have at least a year of experience supporting UNIX. Book overview: The first chapter gives a description of the system types that will be discussed throughout the book. An understanding of all of the system types is needed to comprehend the rest of the book. Chapter 2 provides the information necessary to install a workstation. The format utility and the steps involved in the suninstall process are covered in detail. Ideas and concepts about partitioning are included in this chapter. YP is the topic of the third chapter. A specific description of each YP map and each YP command is presented, along with some tips about ways to best utilize this package in your environment.
Massively Parallel Systems (MPSs), with their scalable computation and storage space promises, are becoming increasingly important for high-performance computing. The growing acceptance of MPSs in academia is clearly apparent. However, in industrial companies, their usage remains low. The programming of MPSs is still the big obstacle, and solving this software problem is sometimes referred to as one of the most challenging tasks of the 1990s. The 1994 working conference on "Programming Environments for Massively Parallel Systems" was the latest event of the working group WG 10.3 of the International Federation for Information Processing (IFIP) in this field. It succeeded the 1992 conference in Edinburgh on "Programming Environments for Parallel Computing". The research and development work discussed at the conference addresses the entire spectrum of software problems, including virtual machines which are less cumbersome to program, more convenient programming models, advanced programming languages, and especially more sophisticated programming tools, but also algorithms and applications.
This text has been produced for the benefit of students in computer and information science and for experts involved in the design of microprocessors. It deals with the design of complex VLSI chips, specifically of microprocessor chip sets. The aim is on the one hand to provide an overview of the state of the art, and on the other hand to describe specific design know-how. The depth of detail presented goes considerably beyond the level of information usually found in computer science textbooks. The rapidly developing discipline of designing complex VLSI chips, especially microprocessors, requires a significant extension of the state of the art. We are observing the genesis of a new engineering discipline, the design and realization of very complex logical structures, and we are obviously only at the beginning. This discipline is still young and immature, alternate concepts are still evolving, and "the best way to do it" is still being explored. Therefore it is not yet possible to describe the different methods in use and to evaluate them. However, the economic impact is significant today, and the heavy investment that companies in the USA, the Far East, and in Europe are making in generating VLSI design competence is a testimony to the importance this field is expected to have in the future. Staying competitive requires mastering and extending this competence.
The pervasive creation and consumption of content, especially visual content, is ingrained into our modern world. We're constantly consuming visual media content, in printed form and in digital form, in work and in leisure pursuits. Like our caveman forefathers, we use pictures to record things which are of importance to us as memory cues for the future, but nowadays we also use pictures and images to document processes; we use them in engineering, in art, in science, in medicine, in entertainment and we also use images in advertising. Moreover, when images are in digital format, either scanned from an analogue format or more often than not born digital, we can use the power of our computing and networking to exploit images to great effect. Most of the technical problems associated with creating, compressing, storing, transmitting, rendering and protecting image data are already solved. We use accepted standards and have tremendous infrastructure, and the only outstanding challenges, apart from managing the scale issues associated with growth, are to do with locating images. That involves analysing them to determine their content, classifying them into related groupings, and searching for images. To overcome these challenges we currently rely on image metadata, the description of the images, either captured automatically at creation time or manually added afterwards.
Using Subject Headings for Online Retrieval is an indispensable tool for online system designers who are developing new systems or refining existing ones. The book describes subject analysis and subject searching in online catalogs, including the limitations of retrieval, and demonstrates how such limitations can be overcome through system design and programming. The book describes the Library of Congress Subject Headings system and system characteristics, shows how information is stored in machine-readable files, and offers examples of and recommendations for successful retrieval methods. Tables are included to support these recommendations, and diagrams, graphs, and bar charts are used to provide results of data analysis. Practitioners in institutions using or considering the installation of an online catalog will refer to this book often to generate specifications. Researchers in library systems, information retrieval, and user behavior will appreciate the book's detailing of the results of an extensive, empirical study of the subject terms entered into online systems by end users. Using Subject Headings for Online Retrieval also addresses the needs of advanced students in library schools and instructors in library automation, information retrieval, cataloging, indexing, and user behavior.
Overview and Goals: Data arriving in time order (a data stream) arises in fields ranging from physics to finance to medicine to music, just to name a few. Often the data comes from sensors (in physics and medicine for example) whose data rates continue to improve dramatically as sensor technology improves. Further, the number of sensors is increasing, so correlating data between sensors becomes ever more critical in order to distill knowledge from the data. On-line response is desirable in many applications (e.g., to aim a telescope at a burst of activity in a galaxy or to perform magnetic resonance-based real-time surgery). These factors - data size, bursts, correlation, and fast response - motivate this book. Our goal is to help you design fast, scalable algorithms for the analysis of single or multiple time series. Not only will you find useful techniques and systems built from simple primitives, but creative readers will find many other applications of these primitives and may see how to create new ones of their own. Our goal, then, is to help research mathematicians and computer scientists find new algorithms and to help working scientists and financial mathematicians design better, faster software.
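As one concrete example of the kind of primitive such a book builds on, here is a minimal Python sketch (not code from the book) that computes a sliding-window Pearson correlation between two synchronized sensor streams; the window size and sample values are invented.

```python
# Toy sliding-window Pearson correlation between two synchronized streams.
# The window size and sample data are invented for illustration.
from math import sqrt

def windowed_correlation(xs, ys, window):
    """Yield the Pearson correlation over each full sliding window."""
    for i in range(len(xs) - window + 1):
        x, y = xs[i:i + window], ys[i:i + window]
        mx, my = sum(x) / window, sum(y) / window
        cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
        vx = sum((a - mx) ** 2 for a in x)
        vy = sum((b - my) ** 2 for b in y)
        yield cov / sqrt(vx * vy) if vx and vy else 0.0

if __name__ == "__main__":
    s1 = [1.0, 2.0, 3.0, 4.0, 3.0, 2.0]
    s2 = [2.0, 4.0, 6.0, 8.0, 6.0, 4.0]                   # a scaled copy of s1
    print(list(windowed_correlation(s1, s2, window=4)))   # each window is ~1.0
```

A genuinely streaming version would maintain the window sums incrementally as samples arrive rather than recomputing them per window, which is what keeps the per-sample cost constant.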