As software systems become increasingly ubiquitous, issues of dependability become ever more crucial. Given that solutions to these issues must be considered from the very beginning of the design process, it is reasonable that dependability and security are addressed at the architectural level. This book has originated from an effort to bring together the research communities of software architectures, dependability and security. This state-of-the-art survey contains expanded and peer-reviewed papers based on the carefully selected contributions to two workshops: the Workshop on Architecting Dependable Systems (WADS 2008), organized at the 2008 International Conference on Dependable Systems and Networks (DSN 2008), held in Anchorage, Alaska, USA, in June 2008, and the Third International Workshop on Views On Designing Complex Architectures (VODCA 2008) held in Bertinoro, Italy, in August 2008. It also contains invited papers written by recognized experts in the area. The 13 papers are organized in topical sections on dependable service-oriented architectures, fault-tolerance and system evaluation, and architecting security.
Scheduled transportation networks give rise to very complex and large-scale network optimization problems requiring innovative solution techniques and ideas from mathematical optimization and theoretical computer science. Examples of scheduled transportation include bus, ferry, airline, and railway networks, with the latter being a prime application domain that provides a fair amount of the most complex and largest instances of such optimization problems. Scheduled transport optimization deals with planning and scheduling problems over several time horizons, and substantial progress has been made for strategic planning and scheduling problems in all transportation domains. This state-of-the-art survey presents the outcome of an open call for contributions asking for either research papers or state-of-the-art survey articles. We received 24 submissions that underwent two rounds of the standard peer-review process, out of which 18 were finally accepted for publication. The volume is organized in four parts: Robustness and Recoverability, Robust Timetabling and Route Planning, Robust Planning Under Scarce Resources, and Online Planning: Delay and Disruption Management.
In view of the incessant growth of data and knowledge and the continued diversification of information dissemination on a global scale, scalability has become a mainstream research area in computer science and information systems. The ICST INFOSCALE conference is one of the premier forums for presenting new and exciting research related to all aspects of scalability, including system architecture, resource management, data management, networking, and performance. As the fourth conference in the series, INFOSCALE 2009 was held in Hong Kong on June 10 and 11, 2009. The articles presented in this volume focus on a wide range of scalability issues and new approaches to tackle problems arising from the ever-growing size and complexity of information of all kinds. More than 60 manuscripts were submitted, and the Program Committee selected 22 papers for presentation at the conference. Each submission was reviewed by three members of the Technical Program Committee.
Going beyond isolated research ideas and design experiences, Designing Network On-Chip Architectures in the Nanoscale Era covers the foundations and design methods of network on-chip (NoC) technology. The contributors draw on their own lessons learned to provide strong practical guidance on various design issues. Exploring the design process of the network, the first part of the book focuses on basic aspects of switch architecture and design, topology selection, and routing implementation. In the second part, contributors discuss their experiences in the industry, offering a roadmap to recent products. They describe Tilera's TILE family of multicore processors, novel Intel products and research prototypes, and the TRIPS operand network (OPN). The last part reveals state-of-the-art solutions to hardware-related issues and explains how to efficiently implement the programming model at the network interface. The appendix presents the microarchitectural details of two switch architectures targeting multiprocessor systems-on-chip (MPSoCs) and chip multiprocessors (CMPs), which can be used as an experimental platform for running tests. A stepping stone to the evolution of future chip architectures, this volume provides a how-to guide for designers of current NoCs as well as designers involved with 2015 computing platforms. It cohesively brings together fundamental design issues, alternative design paradigms and techniques, and the main design tradeoffs, consistently focusing on topics most pertinent to real-world NoC designers.
We are proud to present the proceedings of NET-COOP 2009, the international conference on network control and optimization, co-organized by EURANDOM/Eindhoven University of Technology and CWI. This year's conference at EURANDOM, held November 23-25, was the third in line after previous editions in Avignon (2007) and Paris (2008). NET-COOP 2009 was organized in conjunction with the Euro-NF workshop on "New Trends in Modeling, Quantitative Methods, and Measurements." While organized within the framework of Euro-NF, NET-COOP enjoys great interest beyond Euro-NF, as is attested by the geographic origins of the papers in these proceedings. The NET-COOP conference focuses on performance analysis, control and optimization of communication networks, including wired networks, wireless networks, peer-to-peer networks and delay-tolerant networks. In each of these domains network operators and service providers face the challenging task of efficiently providing service at their customers' standards in a highly dynamic environment. Internet traffic continues to grow tremendously in terms of volume as well as diversity. This development is fueled by the increasing availability of high-bandwidth access (both wired and wireless) to end users, opening new ground for evolving and newly emerging wide-band applications. The increase in network complexity, as well as the plurality of parties involved in network operation, calls for efficient distributed control. New models and techniques for the control and optimization of networks are needed to address the challenge of allocating communication resources efficiently and fairly, while accounting for non-cooperative behavior.
First established in August 1988, the Transaction Processing Performance Council (TPC) has shaped the landscape of modern transaction processing and database benchmarks over two decades. Now, the world is in the midst of an extraordinary information explosion led by rapid growth in the use of the Internet and connected devices. Both user-generated data and enterprise data levels continue to grow exponentially. With substantial technological breakthroughs, Moore's law will continue for at least a decade, and data storage capacities and data transfer speeds will continue to increase exponentially. These have challenged industry experts and researchers to develop innovative techniques to evaluate and benchmark both hardware and software technologies. As a result, the TPC held its First Conference on Performance Evaluation and Benchmarking (TPCTC 2009) on August 24 in Lyon, France in conjunction with the 35th International Conference on Very Large Data Bases (VLDB 2009). TPCTC 2009 provided industry experts and researchers with a forum to present and debate novel ideas and methodologies in performance evaluation, measurement and characterization for 2010 and beyond. This book contains the proceedings of this conference, including 16 papers and keynote papers from Michael Stonebraker and Karl Huppler.
The RV series of workshops brings together researchers from academia and industry who are interested in runtime verification. The goal of the RV workshops is to study the ability to apply lightweight formal verification during the execution of programs. This approach complements the offline use of formal methods, which often uses large resources. Runtime verification methods and tools include the instrumentation of code with pieces of software that can help to test and monitor it online and detect, and sometimes prevent, potential faults. RV 2009 was held during June 26-28 in Grenoble, adjacent to CAV 2009. The program included 11 accepted papers. Two invited talks were given, by Amir Pnueli on "Compositional Approach to Monitoring Linear Temporal Logic Properties" and Sriram Rajamani on "Verification, Testing and Statistics." The program also included three tutorials. We would like to thank the members of the Program Committee and additional referees for the reviewing and participation in the discussions.
Designing the future Internet requires an in-depth consideration of the management, dimensioning and traffic control issues that will be involved in the network operations of these networks. The International Workshop on Traffic Management and Traffic Engineering of the Future Internet, FITraMEn 2008, organized within the framework of the Network of Excellence Euro-NF, provided an open forum to present and discuss new ideas in this area in the context of fixed, wireless and spontaneous (ad hoc and sensor) networks. The Network of Excellence Euro-NF "Anticipating the Network of the Future - From Theory to Design" is a European project funded by the European Union within the Seventh Framework Program. The focus of Euro-NF is to develop new principles and methods to design/dimension/control/manage multi-technology architectures. The emerging networking paradigms raise new challenging scientific and technological problems embedded in complex policy, governance, and worldwide standards issues. Dealing with the diversity of these scientific and social, political and economic challenges requires the integration of a wide range of research capabilities, a role that Euro-NF aims to fulfill. This proceedings volume contains a selection of the research contributions presented at FITraMEn 2008. The workshop was held December 11-12, 2008 in Porto, Portugal, organized by Instituto de Telecomunicações.
MobiSec 2009 was the first ICST conference on security and privacy in mobile information and communication systems. Given the vast area of mobile technology research and application, the intention behind the creation of MobiSec was to make a small but unique contribution to building a bridge between top-level research and large-scale application of novel kinds of information security for mobile devices and communication. The papers at MobiSec 2009 dealt with a broad variety of subjects, ranging from issues of trust in and security of mobile devices and embedded hardware security, over efficient cryptography for resource-restricted platforms, to advanced applications such as wireless sensor networks, user authentication, and privacy in an environment of autonomously communicating objects. With hindsight, a leitmotif emerged from these contributions, which corroborated the idea behind MobiSec: a set of powerful tools has been created in various branches of the security discipline, which await combined application to build trust and security into mobile (that is, all future) networks, autonomous and personal devices, and pervasive applications.
Updated and expanded, Bayesian Artificial Intelligence, Second Edition provides a practical and accessible introduction to the main concepts, foundations, and applications of Bayesian networks. It focuses on both the causal discovery of networks and Bayesian inference procedures. Adopting a causal interpretation of Bayesian networks, the authors discuss the use of Bayesian networks for causal modeling. They also draw on their own applied research to illustrate various applications of the technology. New to the second edition are a chapter on Bayesian network classifiers; sections on object-oriented Bayesian networks, foundational problems with causal discovery and Markov blanket discovery, and methods of evaluating causal discovery programs; discussions of many common modeling errors; new applications and case studies; and more coverage of the use of causal interventions to understand and reason with causal Bayesian networks. Illustrated with real case studies, the second edition of this bestseller continues to cover the groundwork of Bayesian networks. It presents the elements of Bayesian network technology, automated causal discovery, and learning probabilities from data, and shows how to employ these technologies to develop probabilistic expert systems. The book's website at www.csse.monash.edu.au/bai/book/book.html offers a variety of supplemental materials, including example Bayesian networks and data sets. Instructors can email the authors for sample solutions to many of the problems in the text.
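The Bayesian inference procedures the blurb refers to can be illustrated with a minimal sketch. The two-node network and all probabilities below are invented for illustration and do not come from the book:

```python
# Minimal two-node Bayesian network (Rain -> WetGrass) showing posterior
# inference via Bayes' rule. All numbers are illustrative assumptions.
p_rain = 0.2            # prior P(Rain)
p_wet_given_rain = 0.9  # conditional P(WetGrass | Rain)
p_wet_given_dry = 0.1   # conditional P(WetGrass | not Rain)

# Marginal probability of the evidence: P(WetGrass)
p_wet = p_wet_given_rain * p_rain + p_wet_given_dry * (1 - p_rain)

# Posterior P(Rain | WetGrass) by Bayes' rule
p_rain_given_wet = p_wet_given_rain * p_rain / p_wet
print(round(p_rain_given_wet, 4))  # 0.6923
```

Observing wet grass raises the belief in rain from the 0.2 prior to roughly 0.69; larger networks generalize this same conditioning step across many variables.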
Patients have always been encouraged to be active participants in managing their health. New technologies, cultural shifts, trends in healthcare delivery, and policies have brought the patients' role in healthcare to the forefront. This 2-volume set reviews and advances the emerging discipline of patient ergonomics. The set focuses on patients and their performance. It presents practical recommendations and case studies useful for researchers and practitioners. It covers diverse healthcare settings outside of hospitals and clinics, and provides a combination of foundational content and specific applications in detail. The set will be ideal for academics working in healthcare and patient-centered research, their students, human factors practitioners (consultants, employees of health systems, and technology/medical device companies), healthcare professionals (physicians, nurses, pharmacists), and organizational leaders (healthcare administrators and executives).
The SAMOS workshop is an international gathering of highly qualified researchers from academia and industry, sharing ideas in a three-day lively discussion on the quiet and inspiring northern mountainside of the Mediterranean island of Samos. The workshop meeting is one of two co-located events (the other event being the IC-SAMOS). As a tradition, the workshop features presentations in the morning, while after lunch all kinds of informal discussions and nut-cracking gatherings take place. The workshop is unique in the sense that not only solved research problems are presented and discussed but also (partly) unsolved problems and in-depth topical reviews can be unleashed in the scientific arena. Consequently, the workshop provides the participants with an environment where collaboration rather than competition is fostered. The SAMOS conference and workshop were established in 2001 by Stamatis Vassiliadis with the goals outlined above in mind, and located on Samos, one of the most beautiful islands of the Aegean. The rich historical and cultural environment of the island, coupled with the intimate atmosphere and the slow pace of a small village by the sea in the middle of the Greek summer, provides a very conducive environment where ideas can be exchanged and shared freely.
The 14th International Conference on Implementation and Application of Automata (CIAA 2009) was held in NICTA's Neville Roach Laboratory at the University of New South Wales, Sydney, Australia during July 14-17, 2009. This volume of Lecture Notes in Computer Science contains the papers that were presented at CIAA 2009, as well as abstracts of the posters and short papers that were presented at the conference. The volume also includes papers or extended abstracts of the three invited talks, presented by Gonzalo Navarro on Implementation and Application of Automata in String Processing, by Christoph Koch on Applications of Automata in XML Processing, and by Helmut Seidl on Program Analysis Through Finite Tree Automata. The 23 regular papers were selected from 42 submissions covering various fields in the application, implementation, and theory of automata and related structures. This year, six additional papers were selected as "short papers"; at the conference these were allocated the same presentation length as regular papers. Each paper was reviewed by at least three Program Committee members, with the assistance of external referees. Papers were submitted by authors from the following countries: Australia, Austria, Belgium, Brazil, Canada, China, Czech Republic, Finland, France, Germany, India, Italy, Republic of Korea, Japan, Latvia, The Netherlands, Portugal, Russian Federation, Spain, South Africa, Turkey, United Arab Emirates, and the USA.
The PaCT 2009 (Parallel Computing Technologies) conference was a four-day event held in Novosibirsk. This was the tenth international conference to be held in the PaCT series. The conferences are held in Russia every odd year. The first conference, PaCT 1991, was held in Novosibirsk (Academgorodok), September 7-11, 1991. The next PaCT conferences were held in Obninsk (near Moscow), August 30 to September 4, 1993; in St. Petersburg, September 12-15, 1995; in Yaroslavl, September 9-12, 1997; in Pushkin (near St. Petersburg), September 6-10, 1999; in Academgorodok (Novosibirsk), September 3-7, 2001; in Nizhni Novgorod, September 15-19, 2003; in Krasnoyarsk, September 5-9, 2005; and in Pereslavl-Zalessky, September 3-7, 2007. Since 1995 all the PaCT proceedings have been published by Springer in the LNCS series. PaCT 2009 was jointly organized by the Institute of Computational Mathematics and Mathematical Geophysics of the Russian Academy of Sciences (RAS) and the State University of Novosibirsk. The purpose of the conference was to bring together scientists working on theory, architecture, software, hardware and the solution of large-scale problems in order to provide integrated discussions on parallel computing technologies. The conference attracted about 100 participants from around the world. Authors from 17 countries submitted 72 papers. Of those submitted, 34 were selected for the conference as regular papers; there were also 2 invited papers. In addition there were a number of posters presented. All the papers were internationally reviewed by at least three referees. A demo session was organized for the participants.
It is our great pleasure to present the proceedings of the 16th International Conference on Analytical and Stochastic Modelling Techniques and Applications (ASMTA 2009), which took place in Madrid. The conference has become an established annual event in the agenda of the experts of analytical modelling and performance evaluation in Europe and internationally. This year the proceedings continued to be published as part of Springer's prestigious Lecture Notes in Computer Science (LNCS) series. This is another sign of the growing confidence in the quality standards and procedures followed in the reviewing process and the program compilation. Following the traditions of the conference, ASMTA 2009 was honored to have a distinguished keynote speaker in the person of Kishor Trivedi. Professor Trivedi holds the Hudson Chair in the Department of Electrical and Computer Engineering at Duke University, Durham, NC, USA. He is the Duke-site director of an NSF Industry-University Cooperative Research Center between NC State University and Duke University for carrying out applied research in computing and communications. He has been on the Duke faculty since 1975. He is the author of a well-known text entitled Probability and Statistics with Reliability, Queuing and Computer Science Applications, published by Prentice-Hall, the second edition of which has just appeared. He has also published two other books, entitled Performance and Reliability Analysis of Computer Systems, published by Kluwer Academic Publishers, and Queueing Networks and Markov Chains, by John Wiley. He is also known for his work on the modelling and analysis of software aging and rejuvenation. The conference maintained the tradition of high-quality programs with an acceptance rate of about 40%.
OpenMP is an application programming interface (API) that is widely accepted as a de facto standard for high-level shared-memory parallel programming. It is a portable, scalable programming model that provides a simple and flexible interface for developing shared-memory parallel applications in Fortran, C, and C++. Since its introduction in 1997, OpenMP has gained support from the majority of high-performance compiler and hardware vendors. Under the direction of the OpenMP Architecture Review Board (ARB), the OpenMP specification is undergoing further improvement. Active research in OpenMP compilers, runtime systems, tools, and environments continues to drive OpenMP evolution. To provide a forum for the dissemination and exchange of information about and experiences with OpenMP, the community of OpenMP researchers and developers in academia and industry is organized under cOMPunity (www.compunity.org). This organization has held workshops on OpenMP since 1999. This book contains the proceedings of the 5th International Workshop on OpenMP held in Dresden in June 2009. With sessions on tools, benchmarks, applications, performance and runtime environments it covered all aspects of the current use of OpenMP. In addition, several contributions presented proposed extensions to OpenMP and evaluated reference implementations of those extensions. An invited talk provided the details on the latest specification development inside the Architecture Review Board. Together with the two keynotes about OpenMP on hardware accelerators and future generation processors, it demonstrated that OpenMP is suitable for future generation systems.
This volume contains the research papers and invited papers presented at the Third International Conference on Tests and Proofs (TAP 2009), held at ETH Zurich, Switzerland, during July 2-3, 2009. The TAP conference is devoted to the convergence of proofs and tests. It combines ideas from both sides for the advancement of software quality. To prove the correctness of a program is to demonstrate, through impeccable mathematical techniques, that it has no bugs; to test a program is to run it with the expectation of discovering bugs. The two techniques seem contradictory: if you have proved your program, it is fruitless to comb it for bugs; and if you are testing it, that is surely a sign that you have given up on any hope of proving its correctness. Accordingly, proofs and tests have, since the onset of software engineering research, been pursued by distinct communities using rather different techniques and tools. And yet the development of both approaches leads to the discovery of common issues and to the realization that each may need the other. The emergence of model checking has been one of the first signs that contradiction may yield to complementarity, but in the past few years an increasing number of research efforts have encountered the need for combining proofs and tests, dropping earlier dogmatic views of incompatibility and taking instead the best of what each of these software engineering domains has to offer.
The First International Workshop on Traffic Monitoring and Analysis (TMA 2009) was an initiative from the COST Action IC0703 "Data Traffic Monitoring and Analysis: Theory, Techniques, Tools and Applications for the Future Networks" (www.cost-tma.eu). The COST program is an intergovernmental framework for European Cooperation in Science and Technology, allowing the coordination of nationally funded research on a European level. Each COST Action contributes to reducing the fragmentation in research and opening the European Research Area to cooperation worldwide. Traffic monitoring and analysis (TMA) is now an important research topic within the field of networking. It involves many research groups worldwide that are collectively advancing our understanding of the Internet. The importance of TMA research is motivated by the fact that modern packet networks are highly complex and ever-evolving objects. Understanding, developing and managing such environments is difficult and expensive in practice. Traffic monitoring is a key methodology for understanding telecommunication technology and improving its operation, and the recent advances in this field suggest that evolved TMA-based techniques can play a key role in the operation of real networks. Moreover, TMA offers a basis for prevention and response in network security, as typically the detection of attacks and intrusions requires the analysis of detailed traffic records. On the more theoretical side, TMA is an attractive research topic for many reasons.
"Robust Control for Uncertain Networked Control Systems with Random Delays" addresses the analysis and design of networked control systems whose communication delays vary in a random fashion. Randomly varying time delays are typical of commercially used networks, such as DeviceNet (a controller area network) and Ethernet. The main technique used in this book is based on the Lyapunov-Razumikhin method, which results in delay-dependent controllers. The existence of such controllers and fault estimators is given in terms of the solvability of bilinear matrix inequalities. Iterative algorithms are proposed to convert this non-convex problem into quasi-convex optimization problems, which can be solved effectively by available mathematical tools. Finally, to demonstrate the effectiveness and advantages of the proposed design method, numerical examples are given for each designed control system.
Per Stenström (Chalmers University of Technology, Sweden) and David Whalley (Florida State University, USA). In January 2007, the second edition in the series of International Conferences on High-Performance Embedded Architectures and Compilers (HiPEAC 2007) was held in Ghent, Belgium. We were fortunate to attract around 70 submissions, of which only 19 were selected for presentation. Among these, we asked the authors of the five most highly rated contributions to make extended versions of them. They all accepted to do that, and their articles appear in this section of the second volume. The first article, by Keramidas, Xekalakis, and Kaxiras, focuses on the increased power consumption in set-associative caches. They present a novel approach to reduce dynamic power that leverages the previously proposed cache decay approach, which has been shown to reduce static (or leakage) power. In the second article, by Magarajan, Gupta, and Krishnaswamy, the focus is on techniques to encrypt data in memory to preserve data integrity. The problem with previous techniques is that the decryption latency ends up on the critical memory access path. Especially in embedded processors, caches are small and it is difficult to hide the decryption latency. The authors propose a compiler-based strategy that manages to reduce the impact of the decryption time significantly. The third article, by Kluyskens and Eeckhout, focuses on detailed architectural simulation techniques. It is well known that they are inefficient, and a remedy to the problem is to use sampling. When using sampling, one has to warm up memory structures such as caches and branch predictors. This paper introduces a novel technique called Branch History Matching for efficient warmup of branch predictors. The fourth article, by Bhadauria, McKee, Singh, and Tyson, focuses on static power consumption in large caches. They introduce a reuse-distance drowsy cache mechanism that is simple as well as effective in reducing the static power in caches.
in the algorithmic and foundational aspects, high-level approaches as well as more applied and technology-related issues regarding tools and applications of wireless sensor networks. June 2009, Jie Wu, Viktor K. Prasanna, Ivan Stojmenovic. Message from the Program Chair: This proceedings volume includes the accepted papers of the 5th International Conference on Distributed Computing in Sensor Systems. This year we introduced some changes in the composition of the three tracks to increase cross-disciplinary interactions. The Algorithms track was enhanced to include topics pertaining to performance analysis and network optimization and renamed "Algorithms and Analysis." The Systems and Applications tracks, previously separate, were combined into a single track. And a new track was introduced on "Signal Processing and Information Theory." DCOSS 2009 received 116 submissions for the three tracks. After a thorough review process, in which at least three reviews were solicited for all papers, a total of 26 papers were accepted. The research contributions in this proceedings span many aspects of sensor systems, including energy-efficient mechanisms, tracking and surveillance, activity recognition, simulation, query optimization, network coding, localization, application development, and data and code dissemination. Based on the reviews, we also identified the best paper from each track, which are as follows. Best paper in the Algorithms and Analysis track: "Efficient Sensor Placement for Surveillance Problems" by Pankaj Agarwal, Esther Ezra and Shashidhara Ganjugunte. Best paper in the Applications and Systems track: "Optimal Allocation of Time-Resources for Multihypothesis Activity-Level Detection" by Gautam Thatte, Viktor Rozgic, Ming Li, Sabyasachi Ghosh, Urbashi Mitra, Shri Narayanan, Murali Annavaram and Donna Spruijt-Metz. Best paper in the Signal Processing and Information Theory track: "Distributed Computation of Likelihood Maps for Target Tracking" by Jonathan Gallagher, Randolph Moses and Emre Ertin.
This book constitutes the thoroughly refereed post-conference proceedings of the Second International Conference on Networks for Grid Applications, GridNets 2008, held in Beijing, China in October 2008. The 19 revised full papers presented together with 4 invited presentations were carefully reviewed and selected from 37 submissions. The papers address the whole spectrum of grid networks, ranging from formal approaches for grid management to case studies in optical switching.
This volume contains the proceedings of the 12th International Conference on Hybrid Systems: Computation and Control (HSCC 2009), held in San Francisco, California during April 13-15, 2009. The annual conference on hybrid systems focuses on research in embedded, reactive systems involving the interplay between discrete switching and continuous dynamics. HSCC is a forum for academic and industrial researchers and practitioners to exchange information on the latest advancements, both practical and theoretical, in the design, analysis, control, optimization, and implementation of hybrid systems. HSCC 2009 was the 12th in a series of successful meetings. Previous versions were held in Berkeley (1998), Nijmegen (1999), Pittsburgh (2000), Rome (2001), Palo Alto (2002), Prague (2003), Philadelphia (2004), Zurich (2005), Santa Barbara (2006), Pisa (2007), and St. Louis (2008). HSCC 2009 was part of the 2nd Cyber-Physical Systems Week (CPSWeek), which consisted of the co-location of HSCC with the International Conference on Information Processing in Sensor Networks (IPSN) and the Real-Time and Embedded Technology and Applications Symposium (RTAS). Through CPSWeek, the three conferences had joint invited speakers, poster sessions, and joint social events. In addition to the workshops sponsored by CPSWeek, HSCC 2009 sponsored two workshops: NSV II, the Second International Workshop on Numerical Software Verification, and HSCB 2009, Hybrid Systems Approaches to Computational Biology. We would like to thank the authors of submitted papers, the Program Committee members, the additional reviewers, the workshop organizers, and the HSCC Steering Committee members for their help in composing a strong program. We also thank the CPSWeek Organizing Committee, in particular Rajesh Gupta, for their strenuous work in handling the local arrangements.
This volume of the Lecture Notes in Computer Science series contains all papers accepted for presentation at the 20th IFIP/IEEE International Workshop on Distributed Systems: Operations and Management (DSOM 2009), which was held in Venice, Italy, during October 27-28, 2009. DSOM 2009 was the 20th event in a series of annual workshops. It followed in the footsteps of previous successful meetings, the most recent of which were held on Samos, Greece (DSOM 2008), San José, California, USA (DSOM 2007), Dublin, Ireland (DSOM 2006), Barcelona, Spain (DSOM 2005), and Davis, California, USA (DSOM 2004). The goal of the DSOM workshops is to bring together researchers from industry and academia working in the areas of networks, systems, and service management, to discuss recent advances and foster future growth. In contrast to the larger management conferences, such as IM (International Symposium on Integrated Network Management) and NOMS (Network Operations and Management Symposium), DSOM workshops have a single-track program in order to stimulate more intense interaction among participants.