In this work, the unique power measurement capabilities of the Cray XT architecture were exploited to gain an understanding of power and energy use, and the effects of tuning both CPU and network bandwidth. Modifications were made to deterministically halt cores when idle. Additionally, capabilities were added to alter operating P-state. At the application level, an understanding of the power requirements of a range of important DOE/NNSA production scientific computing applications running at large scale is gained by simultaneously collecting current and voltage measurements on the hosting nodes. The effects of both CPU and network bandwidth tuning are examined, and energy savings opportunities without impact on run-time performance are demonstrated. This research suggests that next-generation large-scale platforms should not only approach CPU frequency scaling differently, but could also benefit from the capability to tune other platform components to achieve more energy-efficient performance.
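The study's own measurement and control framework is not reproduced in this description. As a rough, hedged illustration of the kind of CPU frequency (P-state) tuning it investigates, the sketch below uses the standard Linux cpufreq sysfs files; the governor choice and paths are assumptions for the example, not the authors' tooling, and writing them requires root privileges.

```c
/* Minimal sketch of CPU frequency (P-state) tuning via the Linux
 * cpufreq sysfs interface -- an illustration of the kind of tuning
 * the study describes, not the authors' actual tooling. The paths
 * are standard Linux cpufreq files; run as root. */
#include <stdio.h>

int main(void)
{
    const char *gov_path =
        "/sys/devices/system/cpu/cpu0/cpufreq/scaling_governor";
    const char *freq_path =
        "/sys/devices/system/cpu/cpu0/cpufreq/scaling_cur_freq";

    /* Select the "powersave" governor to drop cpu0 to its lowest P-state. */
    FILE *gov = fopen(gov_path, "w");
    if (!gov) { perror("open governor"); return 1; }
    fprintf(gov, "powersave\n");
    fclose(gov);

    /* Read back the frequency the core is now running at (kHz). */
    unsigned long khz = 0;
    FILE *freq = fopen(freq_path, "r");
    if (!freq) { perror("open cur_freq"); return 1; }
    fscanf(freq, "%lu", &khz);
    fclose(freq);

    printf("cpu0 now at %lu kHz\n", khz);
    return 0;
}
```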
This book constitutes the refereed post-proceedings of the Second International Workshop on Foundational and Practical Aspects of Resource Analysis, FOPARA 2011, held in Madrid, Spain, in May 2011. The 8 revised full papers were carefully reviewed and selected from the papers presented at the workshop and papers submitted following an open call for contributions after the workshop. The papers are organized in the following topical sections: implicit complexity, analysis and verification of cost expressions, and worst case execution time analysis.
Traffic Measurement on the Internet presents several novel online measurement methods that are compact and fast. Traffic measurement provides critical real-world data for service providers and network administrators to perform capacity planning, accounting and billing, anomaly detection, and service provision. Statistical methods play important roles in many measurement functions, including system design, model building, formula derivation, and error analysis. One of the greatest challenges in designing an online measurement function is to minimize the per-packet processing time in order to keep up with the line speed of modern routers. This book also introduces a challenging problem, the measurement of per-flow information in high-speed networks, together with its solution. The last chapter discusses origin-destination flow measurement.
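The book's specific data structures are not named in this description. As a hedged illustration of the genre it surveys - compact flow counters with constant work per packet - here is a minimal Count-Min sketch; the width, depth, and hash mixing constants are arbitrary choices for the example.

```c
/* A Count-Min sketch: a compact structure that estimates per-flow
 * packet counts with a few hash-and-increment operations per packet.
 * Illustrative of the genre the book surveys; width/depth and the
 * hash constants here are arbitrary choices for this sketch. */
#include <stdint.h>
#include <stdio.h>

#define DEPTH 4
#define WIDTH 1024            /* must be a power of two here */

static uint32_t table[DEPTH][WIDTH];
static const uint32_t seed[DEPTH] = { 0x9e3779b1u, 0x85ebca6bu,
                                      0xc2b2ae35u, 0x27d4eb2fu };

static uint32_t hash(uint32_t flow, uint32_t s)
{
    uint32_t h = flow ^ s;
    h ^= h >> 16; h *= 0x45d9f3bu; h ^= h >> 13;
    return h & (WIDTH - 1);
}

static void update(uint32_t flow)          /* one packet observed */
{
    for (int i = 0; i < DEPTH; i++)
        table[i][hash(flow, seed[i])]++;
}

static uint32_t estimate(uint32_t flow)    /* may overestimate, never under */
{
    uint32_t best = UINT32_MAX;
    for (int i = 0; i < DEPTH; i++) {
        uint32_t c = table[i][hash(flow, seed[i])];
        if (c < best) best = c;
    }
    return best;
}

int main(void)
{
    for (int p = 0; p < 1000; p++) update(42);   /* flow id 42 */
    for (int p = 0; p < 7; p++)    update(7);
    printf("flow 42 ~ %u packets, flow 7 ~ %u packets\n",
           estimate(42), estimate(7));
    return 0;
}
```

Each packet costs only DEPTH hash-and-increment operations regardless of how many flows are active, which is the per-packet processing bound the description emphasizes.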
This book constitutes the refereed proceedings of the international competition aimed at the evaluation and assessment of Ambient Assisted Living (AAL) systems and services, EvAAL 2011, which was organized in two major events, the Competition in Valencia, Spain, in July 2011, and the Final Workshop in Lecce, Italy, in September 2011. The papers included in this book describe the organization and technical aspects of the competition, provide a complete technical description of the competing artefacts, and report on the lessons learned by the teams during the competition.
This book constitutes the thoroughly refereed post-conference proceedings of the Third International ICST Conference on Sensor Systems and Software, S-Cube 2012, held in Lisbon, Portugal, in June 2012. The 12 revised full papers presented were carefully reviewed and selected from over 18 submissions; together with four invited talks they cover a wide range of topics including middleware, frameworks, learning from sensor data streams, stock management, e-health, and the Web of Things.
This state-of-the-art survey features topics related to the impact of multicore, manycore, and coprocessor technologies in science and large-scale applications in an interdisciplinary environment. The papers included in this survey cover research in mathematical modeling, design of parallel algorithms, aspects of microprocessor architecture, parallel programming languages, hardware-aware computing, heterogeneous platforms, manycore technologies, performance tuning, and requirements for large-scale applications. The contributions presented in this volume are an outcome of an inspiring conference conceived and organized by the editors at the University of Applied Sciences (HfT) in Stuttgart, Germany, in September 2012. The 10 revised full papers selected from 21 submissions are presented together with twelve poster abstracts and focus on the combination of new aspects of microprocessor technologies, parallel applications, numerical simulation, and software development; thus they clearly show the potential of emerging technologies in the area of multicore and manycore processors that are paving the way towards personal supercomputing and very likely towards exascale computing.
This book constitutes the refereed proceedings of the 12th IFIP WG 6.1 International Conference on Distributed Applications and Interoperable Systems, DAIS 2012, held in Stockholm, Sweden, in June 2012 as one of the DisCoTec 2012 events. The 12 revised full papers and 9 short papers presented were carefully reviewed and selected from 58 submissions. The papers are organized in topical sections on peer-to-peer and large scale systems; security and reliability in web, cloud, p2p, and mobile systems; wireless, mobile, and pervasive systems; multidisciplinary approaches and case studies, ranging from Grid and parallel computing to multimedia and socio-technical systems; and service-oriented computing and e-commerce.
This book constitutes the refereed proceedings of the 5th International Conference on Data Management in Grid and Peer-to-Peer Systems, Globe 2012, held in Vienna, Austria, in September 2012 in conjunction with DEXA 2012. The 9 revised full papers presented were carefully reviewed and selected from 15 submissions. The papers are organized in topical sections on data management in the cloud, cloud MapReduce and performance evaluation, and data stream systems and distributed data mining.
This book constitutes the refereed proceedings of the 19th International Conference on Analytical and Stochastic Modelling Techniques and Applications, ASMTA 2012, held in Grenoble, France, in June 2012. The 20 revised full papers presented were carefully reviewed and selected from numerous submissions. The papers are organized in topical sections on queueing systems; networking applications; Markov chains; stochastic modelling.
This book constitutes the refereed proceedings of the 8th International Workshop on OpenMP, held in Rome, Italy, in June 2012. The 18 technical full papers presented together with 7 posters were carefully reviewed and selected from 30 submissions. The papers are organized in topical sections on proposed extensions to OpenMP, runtime environments, optimization and accelerators, task parallelism, and validations and benchmarks.
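The workshop papers themselves are not excerpted here. As a small, hedged illustration of one topical section above, task parallelism, the following program sums an array with OpenMP tasks; the cutoff and array size are arbitrary for the example (compile with, e.g., cc -fopenmp).

```c
/* A small illustration of OpenMP task parallelism: a recursive array
 * sum where each left half becomes a task. Not drawn from the
 * proceedings; just a generic example of the feature area. */
#include <stdio.h>
#include <omp.h>

static long sum(const int *a, int lo, int hi)
{
    if (hi - lo < 1024) {                 /* small ranges run serially */
        long s = 0;
        for (int i = lo; i < hi; i++) s += a[i];
        return s;
    }
    int mid = lo + (hi - lo) / 2;
    long left, right;
    #pragma omp task shared(left)         /* left half runs as a task */
    left = sum(a, lo, mid);
    right = sum(a, mid, hi);              /* right half runs here */
    #pragma omp taskwait                  /* join the child task */
    return left + right;
}

int main(void)
{
    enum { N = 1 << 20 };
    static int a[N];
    for (int i = 0; i < N; i++) a[i] = 1;

    long total = 0;
    #pragma omp parallel
    #pragma omp single                    /* one thread spawns the tasks */
    total = sum(a, 0, N);

    printf("total = %ld (expected %d)\n", total, N);
    return 0;
}
```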
This book constitutes the proceedings of the 7th International ICST Conference, TridentCom 2011, held in Shanghai, China, in April 2011. Out of numerous submissions the Program Committee finally selected 26 full papers and 2 invited papers. They focus on topics such as future Internet testbeds, future wireless testbeds, federated and large scale testbeds, network and resource virtualization, overlay network testbeds, management provisioning and tools for networking research, and experimentally driven research and user experience evaluation.

This book constitutes the thoroughly refereed post-conference proceedings of the 7th International Conference on Heterogeneous Networking for Quality, Reliability, Security and Robustness, QShine 2010. The 37 revised full papers presented along with 7 papers from the allocated Dedicated Short Range Communications Workshop, DSRC 2010, were carefully selected from numerous submissions. Conference papers are organized into 9 technical sessions, covering the topics of cognitive radio networks, security, resource allocation, wireless protocols and algorithms, advanced networking systems, sensor networks, scheduling and optimization, routing protocols, and multimedia and stream processing. Workshop papers are organized into two sessions: DSRC networks and DSRC security.
The pervasive creation and consumption of content, especially visual content, is ingrained into our modern world. We're constantly consuming visual media content, in printed form and in digital form, in work and in leisure pursuits. Like our cave-man forefathers, we use pictures to record things which are of importance to us as memory cues for the future, but nowadays we also use pictures and images to document processes; we use them in engineering, in art, in science, in medicine, in entertainment and we also use images in advertising. Moreover, when images are in digital format, either scanned from an analogue format or more often than not born digital, we can use the power of our computing and networking to exploit images to great effect. Most of the technical problems associated with creating, compressing, storing, transmitting, rendering and protecting image data are already solved. We use accepted standards and have tremendous infrastructure, and the only outstanding challenges, apart from managing the scale issues associated with growth, are to do with locating images. That involves analysing them to determine their content, classifying them into related groupings, and searching for images. To overcome these challenges we currently rely on image metadata, the description of the images, either captured automatically at creation time or manually added afterwards.
Most of the papers in this volume were presented at the NATO Advanced Research Workshop High Performance Computing: Technology and Application, held in Cetraro, Italy, from 24 to 26 June 1996. The main purpose of the Workshop was to discuss some key scientific and technological developments in high performance computing, identify significant trends and define desirable research objectives. The volume structure corresponds, in general, to the outline of the workshop technical agenda: general concepts and emerging systems, software technology, algorithms and applications. One of the Workshop innovations was an effort to extend slightly the scope of the meeting from scientific/engineering computing to enterprise-wide computing. The papers on performance and scalability of database servers, and Oracle DBMS, reflect this attempt. We hope that after reading this collection of papers the readers will have a good idea about some important research and technological issues in high performance computing. We wish to give our thanks to the NATO Scientific and Environmental Affairs Division for being the principal sponsor of the Workshop. Also we are pleased to acknowledge other institutions and companies that supported the Workshop: European Union: European Commission DGIII-Industry, CNR: National Research Council of Italy, University of Calabria, Alenia Spazio, Centro Italiano Ricerche Aerospaziali, ENEA: Italian National Agency for New Technology, Energy and the Environment, Fujitsu, Hewlett Packard-Convex, Hitachi, NEC, Oracle, and Silicon Graphics-Cray Research. Editors, January 1997
This preface tells the story of how Multimodal Usability responds to a special challenge. Chapter 1 describes the goals and structure of this book. The idea of describing how to make multimodal computer systems usable arose in the European Network of Excellence SIMILAR - "Taskforce for creating human-machine interfaces SIMILAR to human-human communication," 2003-2007, www.similar.cc. SIMILAR brought together people from multimodal signal processing and usability with the aim of creating enabling technologies for new kinds of multimodal systems and demonstrating results in research prototypes. Most of our colleagues in the network were, in fact, busy extracting features and figuring out how to demonstrate progress in working interactive systems, while claiming not to have too much of a notion of usability in system development and evaluation. It was proposed that the authors support the usability of the many multimodal prototypes underway by researching and presenting a methodology for building usable multimodal systems. We accepted the challenge, first and foremost, no doubt, because the formidable team spirit in SIMILAR could make people accept outrageous things. Second, having worked for nearly two decades on making multimodal systems usable, we were curious - curious at the opportunity to try to understand what happens to traditional usability work, that is, work in human-computer interaction centred around traditional graphical user interfaces (GUIs), when systems become as multimodal and as advanced in other ways as those we build in research today.
The main objective of pervasive computing systems is to create environments where computers become invisible by being seamlessly integrated and connected into our everyday environment, where such embedded computers can then provide information and exercise intelligent control when needed, but without being obtrusive. Pervasive computing and intelligent multimedia technologies are becoming increasingly important to the modern way of living. However, many of their potential applications have not yet been fully realized. Intelligent multimedia allows dynamic selection, composition and presentation of the most appropriate multimedia content based on user preferences. A variety of applications of pervasive computing and intelligent multimedia are being developed for all walks of personal and business life. Pervasive computing (often synonymously called ubiquitous computing, palpable computing or ambient intelligence) is an emerging field of research that brings in revolutionary paradigms for computing models in the 21st century. Pervasive computing is the trend towards increasingly ubiquitous connected computing devices in the environment, a trend being brought about by a convergence of advanced electronic - and particularly, wireless - technologies and the Internet. Recent advances in pervasive computers, networks, telecommunications and information technology, along with the proliferation of multimedia mobile devices - such as laptops, iPods, personal digital assistants (PDAs) and cellular telephones - have further stimulated the development of intelligent pervasive multimedia applications. These key technologies are creating a multimedia revolution that will have significant impact across a wide spectrum of consumer, business, healthcare and governmental domains.
This state-of-the-art survey gives a systematic presentation of recent advances in the design and validation of computer architectures. The book covers a comprehensive range of architecture design and validation methods, from computer-aided high-level design of VLSI circuits and systems to layout and testable design, including the modeling and synthesis of behavior and dataflow, cell-based logic optimization, machine-assisted verification, and virtual machine design.
Wireless technology and handheld devices are dramatically changing the degrees of interaction throughout the world, further creating a ubiquitous network society. The emergence of advanced wireless telecommunication technologies and devices in today's society has increased accuracy and access rate, all of which are increasingly essential as the volume of information handled by users expands at an accelerated pace. The requirement for mobility leads to increasing pressure for applications and wireless systems to revolve around the concept of continuous communication with anyone, anywhere, and anytime. With the wireless technology and devices come flexibility in network design and quicker deployment time. Over the past decades, numerous wireless telecommunication topics have received increasing attention from industry professionals, academics, and government agencies. Among these topics are the wireless Internet; multimedia; 3G/4G wireless networks and systems; mobile and wireless network security; wireless network modeling, algorithms, and simulation; satellite based systems; 802.11x; RFID; and broadband wireless access.
Model-based testing is the most powerful technique for testing hardware and software systems. Models in Hardware Testing describes the use of models at all the levels of hardware testing. The relevant fault models for nanoscaled CMOS technology are introduced, and their implications for fault simulation, automatic test pattern generation, fault diagnosis, memory testing and power aware testing are discussed. Models and the corresponding algorithms are considered with respect to the most recent state of the art, and they are put into a historical context by a concluding chapter on the use of physical fault models in fault tolerance.
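As a toy, hedged illustration of the single-stuck-at fault model that underlies the fault simulation and test pattern generation the book discusses, the sketch below evaluates a tiny invented circuit, injects a stuck-at fault on one internal line, and reports which input patterns expose it at the output.

```c
/* Toy illustration of the single-stuck-at fault model: evaluate
 * y = (a AND b) OR c, force the AND output to 0 or 1, and check
 * which input patterns make the faulty output differ from the
 * fault-free one. Circuit and fault list are invented here. */
#include <stdio.h>

/* fault: -1 = fault-free, 0 = AND output stuck-at-0, 1 = stuck-at-1 */
static int eval(int a, int b, int c, int fault)
{
    int and_out = a & b;
    if (fault == 0) and_out = 0;   /* inject stuck-at-0 */
    if (fault == 1) and_out = 1;   /* inject stuck-at-1 */
    return and_out | c;
}

int main(void)
{
    for (int f = 0; f <= 1; f++) {
        printf("AND output stuck-at-%d detected by:", f);
        for (int v = 0; v < 8; v++) {          /* all input patterns */
            int a = v >> 2 & 1, b = v >> 1 & 1, c = v & 1;
            if (eval(a, b, c, -1) != eval(a, b, c, f))
                printf(" (a=%d,b=%d,c=%d)", a, b, c);
        }
        printf("\n");
    }
    return 0;
}
```

A test pattern generator searches for exactly such distinguishing patterns; a fault simulator runs the loop in the other direction, checking which faults a given pattern set detects.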
This book constitutes the refereed proceedings of the 25th International Conference on Architecture of Computing Systems, ARCS 2012, held in Munich, Germany, in February/March 2012. The 20 revised full papers presented in 7 technical sessions were carefully reviewed and selected from 65 submissions. The papers are organized in topical sections on robustness and fault tolerance, power-aware processing, parallel processing, processor cores, optimization, and communication and memory.
Evolution through natural selection has been going on for a very long time. Evolution through artificial selection has been practiced by humans for a large part of our history, in the breeding of plants and livestock. Artificial evolution, where we evolve an artifact through artificial selection, has been around since electronic computers became common: about 30 years. Right from the beginning, people have suggested using artificial evolution to design electronics automatically. Only recently, though, have suitable reconfigurable silicon chips become available that make it easy for artificial evolution to work with a real, physical, electronic medium: before them, experiments had to be done entirely in software simulations. Early research concentrated on the potential applications opened up by the raw speed advantage of dedicated digital hardware over software simulation on a general purpose computer. This book is an attempt to show that there is more to it than that. In fact, a radically new viewpoint is possible, with fascinating consequences. This book was written as a doctoral thesis, submitted in September 1996. As such, it was a rather daring exercise in ruthless brevity. Believing that the contribution I had to make was essentially a simple one, I resisted being drawn into peripheral discussions. In the places where I deliberately drop a subject, this implies neither that it's not interesting, nor that it's not relevant: just that it's not a crucial part of the tale I want to tell here.
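The thesis itself evolves configurations on real silicon; as a software-only, hedged sketch of the underlying artificial-evolution loop, the following minimal genetic algorithm maximizes the number of 1-bits in a bitstring. Population size, mutation scheme and iteration count are arbitrary for the example; in evolvable hardware the fitness call would instead configure and measure a physical chip.

```c
/* Minimal artificial-evolution loop: steady-state tournament
 * selection plus point mutation on bitstrings. Fitness here just
 * counts 1-bits (OneMax); in evolvable hardware the same slot
 * would configure a reconfigurable chip and measure its behaviour. */
#include <stdio.h>
#include <stdlib.h>

#define POP   32
#define BITS  64
#define STEPS 5000

static int fitness(unsigned long long g)      /* count 1-bits */
{
    int n = 0;
    while (g) { n += g & 1; g >>= 1; }
    return n;
}

int main(void)
{
    unsigned long long pop[POP];
    srand(1);
    for (int i = 0; i < POP; i++)
        pop[i] = ((unsigned long long)rand() << 32) ^ rand();

    for (int t = 0; t < STEPS; t++) {
        /* tournament: loser is replaced by a mutated copy of the winner */
        int a = rand() % POP, b = rand() % POP;
        int winner = fitness(pop[a]) >= fitness(pop[b]) ? a : b;
        int loser  = winner == a ? b : a;
        pop[loser] = pop[winner] ^ (1ULL << (rand() % BITS));
    }

    int best = 0;
    for (int i = 1; i < POP; i++)
        if (fitness(pop[i]) > fitness(pop[best])) best = i;
    printf("best fitness after %d steps: %d/%d\n",
           STEPS, fitness(pop[best]), BITS);
    return 0;
}
```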
Parallel Language and Compiler Research in Japan offers the international community an opportunity to learn in-depth about key Japanese research efforts in the particular software domains of parallel programming and parallelizing compilers. These are important topics that strongly bear on the effectiveness and affordability of high performance computing systems. The chapters of this book convey a comprehensive and current depiction of leading edge research efforts in Japan that focus on parallel software design, development, and optimization that could be obtained only through direct and personal interaction with the researchers themselves.
During the last two decades, there have been many reports about the success and failure of investments in ICT and information systems. Failures in particular have drawn a lot of attention. The outcome of the implementation of information and communication systems has often been disastrous. Recent research does not show that results have improved. This raises the question why so many ICT projects perform so badly. Information, Organization and Information Systems Design: An Integrated Approach to Information Problems aims at discussing measures to improve the results of information systems. Bart Prakken identifies various factors that explain the shortfall of information systems. Subsequently, he provides a profound discussion of the measures that can be taken to remove the causes of failure. When organizations are confronted with information problems, they will almost automatically look for ICT solutions. However, Prakken argues that more fundamental and often cheaper solutions are in many cases available. When looking for solutions to information problems, the inter-relationship between organization, information and the people within the organization should explicitly be taken into account. The measures that the author proposes are based on organizational redesign, particularly using the sociotechnical approach. In cases where ICT solutions do have to be introduced, Prakken discusses a number of precautionary measures that will help their implementation. The book aims to contribute to the scientific debate on how to solve information problems, and can be used in graduate and postgraduate courses. It is also helpful to managers.
Software is continuously increasing in complexity. Paradigmatic shifts and new development frameworks make it easier to implement software - but not to test it. Software testing remains a topic with many open questions with regard both to technical low-level aspects and to the organizational embedding of testing. However, a desired level of software quality cannot be achieved either by choosing a technical procedure or by optimizing testing processes; in fact, it requires a holistic approach. This Brief summarizes the current knowledge of software testing and introduces three current research approaches. The base of knowledge is presented comprehensively in scope but concise in length, so the volume can be used as a reference. Research is highlighted from different points of view. Firstly, progress on developing a tool for automated test case generation (TCG) based on a program's structure is introduced. Secondly, results from a project with industry partners on testing best practices are highlighted. Thirdly, embedding testing into the e-assessment of programming exercises is described.
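The Brief's TCG tool is not described further here. As a toy, hedged illustration of structure-based test case generation, the sketch below keeps a random input only when it reaches a branch of the function under test that earlier tests missed; the function and its manual branch instrumentation are invented for the example.

```c
/* Toy sketch of structure-based test case generation: random inputs
 * are kept only when they cover a branch of the function under test
 * that no earlier test reached. Function and instrumentation are
 * invented for this example. */
#include <stdio.h>
#include <stdlib.h>

#define BRANCHES 4
static int covered[BRANCHES];

/* function under test, instrumented with branch markers */
static int classify(int x)
{
    if (x < 0)        { covered[0] = 1; return -1; }
    else              { covered[1] = 1; }
    if (x % 2 == 0)   { covered[2] = 1; return 0; }
    else              { covered[3] = 1; return 1; }
}

static int coverage(void)
{
    int n = 0;
    for (int i = 0; i < BRANCHES; i++) n += covered[i];
    return n;
}

int main(void)
{
    srand(1);
    printf("kept test cases:");
    for (int t = 0; t < 1000 && coverage() < BRANCHES; t++) {
        int before = coverage();
        int x = rand() % 200 - 100;      /* random candidate input */
        classify(x);
        if (coverage() > before)         /* new branch reached: keep it */
            printf(" %d", x);
    }
    printf("  (%d/%d branches covered)\n", coverage(), BRANCHES);
    return 0;
}
```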
This book presents the technical program of the International Embedded Systems Symposium (IESS) 2009. Timely topics, techniques and trends in embedded system design are covered by the chapters in this volume, including modelling, simulation, verification, test, scheduling, platforms and processors. Particular emphasis is placed on automotive systems and wireless sensor networks. Sets of actual case studies in the area of embedded system design are also included. Over recent years, embedded systems have gained an enormous amount of processing power and functionality and now enter numerous application areas, due to the fact that many of the formerly external components can now be integrated into a single System-on-Chip. This tendency has resulted in a dramatic reduction in the size and cost of embedded systems. As a unique technology, the design of embedded systems is an essential element of many innovations. Embedded systems meet their performance goals, including real-time constraints, through a combination of special-purpose hardware and software components tailored to the system requirements. Both the development of new features and the reuse of existing intellectual property components are essential to keeping up with ever more demanding customer requirements. Furthermore, design complexities are steadily growing with an increasing number of components that have to cooperate properly. Embedded system designers have to cope with multiple goals and constraints simultaneously, including timing, power, reliability, dependability, maintenance, packaging and, last but not least, price.
You may like...

Schur Functions, Operator Colligations… (Daniel Alpay, Etc, …) - Hardcover - R2,482 (Discovery Miles 24 820)
Integral, Measure, and Ordering (Beloslav Riecan, Tibor Neubrunn) - Hardcover - R5,805 (Discovery Miles 58 050)
Thermal Transport Characteristics of… (S. Harikrishnan, A. D. Dhass) - Hardcover - R3,582 (Discovery Miles 35 820)
Biological Control of Plant Pathogens… (M Reddi Kumar, M John Sudheer, …) - Hardcover
From Chemistry to Consciousness - The… (Harald Atmanspacher, Ulrich Muller-Herold) - Hardcover - R1,525 (Discovery Miles 15 250)