Tsutomu Sasao - Kyushu Institute of Technology, Japan: The material covered in this book is quite unique, especially for people who are reading English, since such material is quite hard to find in the U.S. literature. German and Russian people have independently developed their theories, but such work is not well known in the U.S. community. On the other hand, the theories developed in the U.S. are not conveyed to other places. Thus, the same theory is re-invented or re-discovered in various places. For example, switching theory was developed independently in the U.S., Europe, and Japan, almost at the same time [4, 18, 19]. Thus, the same notions are represented by different terminologies. For example, the Zhegalkin polynomial is often called the complement-free ring-sum, the Reed-Muller expression [10], or the positive-polarity Reed-Muller expression [19]. In any case, it is quite desirable that such a unique book be written in English, so that many people can read it without any difficulties. The authors have developed a logic system called XBOOLE. It performs logical operations on the given functions. With XBOOLE, readers can solve the problems given in the book. Many examples and complete solutions to the problems are shown, so readers can study at home. I believe that a book containing many exercises and their solutions [9] is quite useful not only for students, but also for professors.
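As a concrete illustration of the common object behind these names (our example, not one taken from the book): a positive-polarity Reed-Muller (Zhegalkin) form writes a Boolean function as an exclusive-or of positive product terms,

    f(x_1, x_2) = a_0 \oplus a_1 x_1 \oplus a_2 x_2 \oplus a_3 x_1 x_2, \qquad a_i \in \{0, 1\},

so that, for instance, x_1 \lor x_2 = x_1 \oplus x_2 \oplus x_1 x_2.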
This book gathers chapters from some of the top international empirical software engineering researchers, focusing on the practical knowledge necessary for conducting, reporting and using empirical methods in software engineering. Topics and features include guidance on how to design, conduct and report empirical studies. The volume also provides information across a range of techniques, methods, and qualitative and quantitative issues to help build a toolkit applicable to the diverse software development contexts.
System-Level Design Techniques for Energy-Efficient Embedded Systems addresses the development and validation of co-synthesis techniques that allow an effective design of embedded systems with low energy dissipation. The book provides an overview of a system-level co-design flow, illustrating through examples how system performance is influenced at various steps of the flow, including allocation, mapping, and scheduling. The book places special emphasis upon system-level co-synthesis techniques for architectures that contain voltage-scalable processors, which can dynamically trade off between computational performance and power consumption. Throughout the book, the introduced co-synthesis techniques, which target both single-mode systems and emerging multi-mode applications, are applied to numerous benchmarks and real-life examples, including a realistic smartphone.
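As a hedged sketch of the trade-off that voltage-scalable processors expose (using the standard first-order CMOS model as an assumption of ours, not equations taken from the book): dynamic energy per cycle scales as E = Ceff * Vdd^2, while the attainable clock frequency falls roughly as (Vdd - Vt)^2 / Vdd, so lowering the supply voltage saves energy quadratically at the cost of longer execution time:

    #include <stdio.h>

    int main(void) {
        /* Illustrative first-order CMOS model (our assumption):
         * energy per cycle  E = Ceff * Vdd^2
         * max frequency     f ~ k * (Vdd - Vt)^2 / Vdd          */
        const double Ceff   = 1e-9; /* switched capacitance, F   */
        const double Vt     = 0.4;  /* threshold voltage, V      */
        const double k      = 1e9;  /* technology constant       */
        const double cycles = 1e6;  /* cycles the task needs     */
        for (double vdd = 1.2; vdd > 0.59; vdd -= 0.2) {
            double f = k * (vdd - Vt) * (vdd - Vt) / vdd; /* Hz */
            double e = Ceff * vdd * vdd * cycles;         /* J  */
            printf("Vdd=%.1fV  energy=%.3eJ  runtime=%.3es\n",
                   vdd, e, cycles / f);
        }
        return 0;
    }

Running the sketch shows why co-synthesis must pick per-task voltages jointly with scheduling: the lowest-energy operating point may miss a deadline, so the optimization trades slack for dissipation.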
This informative monograph helps meet the challenge of applying distributed control to dynamical systems. It shows readers how to bring the best parts of various control paradigms to bear in making distributed control more flexible and responsive.
As future generation information technology (FGIT) becomes specialized and fragmented, it is easy to lose sight of the fact that many topics in FGIT have common threads and that, because of this, advances in one discipline may be transmitted to others. Presentation of recent results obtained in different disciplines encourages this interchange for the advancement of FGIT as a whole. Of particular interest are hybrid solutions that combine ideas taken from multiple disciplines in order to achieve something more significant than the sum of the individual parts. Through such a hybrid philosophy, a new principle can be discovered which has the propensity to propagate throughout multifaceted disciplines. FGIT 2009 was the first mega-conference that attempted to follow the above idea of hybridization in FGIT in the form of multiple events related to particular disciplines of IT, conducted by separate scientific committees but coordinated in order to expose the most important contributions. It included the following international conferences: Advanced Software Engineering and Its Applications (ASEA), Bio-Science and Bio-Technology (BSBT), Control and Automation (CA), Database Theory and Application (DTA), Disaster Recovery and Business Continuity (DRBC; published independently), Future Generation Communication and Networking (FGCN), which was combined with Advanced Communication and Networking (ACN), Grid and Distributed Computing (GDC), Multimedia, Computer Graphics and Broadcasting (MulGraB), Security Technology (SecTech), Signal Processing, Image Processing and Pattern Recognition (SIP), and u- and e-Service, Science and Technology (UNESST).
The 21st Fachgespräch Autonome Mobile Systeme (AMS 2009) is a forum that offers scientists from research and industry working in the field of autonomous mobile systems a basis for the exchange of ideas, and that promotes and initiates scientific discussion and cooperation in this research area. Its contents comprise selected contributions on the topics of humanoid robots and flying machines, perception and sensing, mapping and localization, control, navigation, learning methods, and system architectures, as well as applications of autonomous mobile systems.
Explains fault tolerance in clear terms, with concrete examples drawn from real-world settings.
This volume presents the proceedings of the 6th International ICST Conference on Heterogeneous Networking for Quality, Reliability, Security and Robustness and of the Third International ICST Workshop on Advanced Architectures and Algorithms for Internet DElivery and Applications. Both events were held in Las Palmas de Gran Canaria in November 2009, and a specific part of the volume is devoted to each. The first part is dedicated to the proceedings of ICST QShine 2009. The first four chapters deal with new issues concerning the quality of service in IP-based telephony and multimedia. A second set of four chapters addresses some important research problems in multi-hop wireless networks, with a special emphasis on the problems of routing. The following three papers deal with recent advances in the field of data management and area coverage in sensor networks, while a fourth set of chapters deals with mobility and context-aware services. The fifth set of chapters contains new works in the area of Internet delivery and switching systems. The remaining chapters of the QShine part of the volume are devoted to papers in the areas of resource management in wireless networks and overlay, P2P and SOA architectures. Some works also deal with the optimization of quality of service and energy consumption in WLAN and sensor networks and with the design of mobility support in mesh networks.
As software systems become increasingly ubiquitous, issues of dependability become ever more crucial. Given that solutions to these issues must be considered from the very beginning of the design process, it is reasonable that dependability and security are addressed at the architectural level. This book has originated from an effort to bring together the research communities of software architectures, dependability and security. This state-of-the-art survey contains expanded and peer-reviewed papers based on the carefully selected contributions to two workshops: the Workshop on Architecting Dependable Systems (WADS 2008), organized at the 2008 International Conference on Dependable Systems and Networks (DSN 2008), held in Anchorage, Alaska, USA, in June 2008, and the Third International Workshop on Views On Designing Complex Architectures (VODCA 2008) held in Bertinoro, Italy, in August 2008. It also contains invited papers written by recognized experts in the area. The 13 papers are organized in topical sections on dependable service-oriented architectures, fault-tolerance and system evaluation, and architecting security.
Scheduled transportation networks give rise to very complex and large-scale network optimization problems requiring innovative solution techniques and ideas from mathematical optimization and theoretical computer science. Examples of scheduled transportation include bus, ferry, airline, and railway networks, with the latter being a prime application domain that provides a fair amount of the most complex and largest instances of such optimization problems. Scheduled transport optimization deals with planning and scheduling problems over several time horizons, and substantial progress has been made for strategic planning and scheduling problems in all transportation domains. This state-of-the-art survey presents the outcome of an open call for contributions asking for either research papers or state-of-the-art survey articles. We received 24 submissions that underwent two rounds of the standard peer-review process, out of which 18 were finally accepted for publication. The volume is organized in four parts: Robustness and Recoverability, Robust Timetabling and Route Planning, Robust Planning Under Scarce Resources, and Online Planning: Delay and Disruption Management.
In view of the incessant growth of data and knowledge and the continued diversification of information dissemination on a global scale, scalability has become a mainstream research area in computer science and information systems. The ICST INFOSCALE conference is one of the premier forums for presenting new and exciting research related to all aspects of scalability, including system architecture, resource management, data management, networking, and performance. As the fourth conference in the series, INFOSCALE 2009 was held in Hong Kong on June 10 and 11, 2009. The articles presented in this volume focus on a wide range of scalability issues and new approaches to tackle problems arising from the ever-growing size and complexity of information of all kinds. More than 60 manuscripts were submitted, and the Program Committee selected 22 papers for presentation at the conference. Each submission was reviewed by three members of the Technical Program Committee.
We are proud to present the proceedings of NET-COOP 2009, the international conference on network control and optimization, co-organized by EURANDOM/Eindhoven University of Technology and CWI. This year's conference at EURANDOM, held November 23-25, was the third in line after previous editions in Avignon (2007) and Paris (2008). NET-COOP 2009 was organized in conjunction with the Euro-NF workshop on "New Trends in Modeling, Quantitative Methods, and Measurements." While organized within the framework of Euro-NF, NET-COOP enjoys great interest beyond Euro-NF, as is attested by the geographic origins of the papers in these proceedings. The NET-COOP conference focuses on performance analysis, control and optimization of communication networks, including wired networks, wireless networks, peer-to-peer networks and delay-tolerant networks. In each of these domains, network operators and service providers face the challenging task of efficiently providing service at their customers' standards in a highly dynamic environment. Internet traffic continues to grow tremendously in terms of volume as well as diversity. This development is fueled by the increasing availability of high-bandwidth access (both wired and wireless) to end users, opening new ground for evolving and newly emerging wide-band applications. The increase in network complexity, as well as the plurality of parties involved in network operation, calls for efficient distributed control. New models and techniques for the control and optimization of networks are needed to address the challenge of allocating communication resources efficiently and fairly, while accounting for non-cooperative behavior.
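One classical way to make "efficiently and fairly" precise (an illustrative formulation of ours, not necessarily the one adopted in these proceedings) is Kelly's proportional fairness, which allocates a rate x_i to each flow i by solving

    \max_{x \ge 0} \sum_i \log x_i \quad \text{subject to} \quad \sum_{i \,:\, l \in r_i} x_i \le c_l \ \text{for every link } l,

where r_i is the route of flow i and c_l the capacity of link l; the logarithm rewards total throughput while preventing any flow from being starved.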
First established in August 1988, the Transaction Processing Performance Council (TPC) has shaped the landscape of modern transaction processing and database benchmarks over two decades. Now, the world is in the midst of an extraordinary information explosion led by rapid growth in the use of the Internet and connected devices. Both user-generated data and enterprise data levels continue to grow exponentially. With substantial technological breakthroughs, Moore's law will continue for at least a decade, and data storage capacities and data transfer speeds will continue to increase exponentially. These developments have challenged industry experts and researchers to develop innovative techniques to evaluate and benchmark both hardware and software technologies. As a result, the TPC held its First Conference on Performance Evaluation and Benchmarking (TPCTC 2009) on August 24 in Lyon, France, in conjunction with the 35th International Conference on Very Large Data Bases (VLDB 2009). TPCTC 2009 provided industry experts and researchers with a forum to present and debate novel ideas and methodologies in performance evaluation, measurement and characterization for 2010 and beyond. This book contains the proceedings of the conference, including 16 papers and keynote papers from Michael Stonebraker and Karl Huppler.
The RV series of workshops brings together researchers from academia and industry who are interested in runtime verification. The goal of the RV workshops is to study the ability to apply lightweight formal verification during the execution of programs. This approach complements the offline use of formal methods, which often requires large resources. Runtime verification methods and tools include the instrumentation of code with pieces of software that can help to test and monitor it online and detect, and sometimes prevent, potential faults. RV 2009 was held during June 26-28 in Grenoble, adjacent to CAV 2009. The program included 11 accepted papers. Two invited talks were given, by Amir Pnueli on "Compositional Approach to Monitoring Linear Temporal Logic Properties" and by Sriram Rajamani on "Verification, Testing and Statistics." The program also included three tutorials. We would like to thank the members of the Program Committee and additional referees for their reviewing and participation in the discussions.
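As a hedged illustration of the instrumentation idea described above (the property, event names and monitor are hypothetical, not taken from any RV 2009 paper), a monitor can be a small automaton woven around the calls it observes, flagging a violation the moment it occurs:

    #include <stdio.h>
    #include <stdlib.h>

    /* Hypothetical property: "never read after close". The monitor
     * is a two-state automaton updated by instrumentation hooks.  */
    typedef enum { CLOSED, OPEN } state_t;
    static state_t state = CLOSED;

    static void monitor_event(const char *ev) {
        if (state == CLOSED && ev[0] == 'r') { /* read while closed */
            fprintf(stderr, "monitor: violation on '%s'\n", ev);
            exit(EXIT_FAILURE);                /* detect or recover */
        }
    }

    static void do_open(void)  { monitor_event("open");  state = OPEN;   }
    static void do_read(void)  { monitor_event("read");  /* real work */ }
    static void do_close(void) { monitor_event("close"); state = CLOSED; }

    int main(void) {
        do_open(); do_read(); do_close(); /* legal trace            */
        do_read();                        /* caught at run time     */
        return 0;
    }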
Euro-Par is an annual series of international conferences dedicated to the promotion and advancement of all aspects of parallel and distributed computing. Euro-Par 2009 was the 15th edition in this conference series. Throughout the years, the Euro-Par conferences have always attracted high-quality submissions and have become one of the established conferences in the area of parallel and distributed processing. Built upon the success of the annual conferences, and in order to accommodate the needs of special interest groups among the conference participants, a series of workshops in conjunction with the Euro-Par main conference has been organized starting from 2006. This was the fifth year in which workshops were organized within the Euro-Par conference format. The workshops focus on advanced specialized topics in parallel and distributed computing. These topics reflect new scientific and technological developments. While the community for such new and specific developments is still small and the topics have yet to become mature, the Euro-Par conference offers a platform in the form of a workshop to exchange ideas and discuss cooperation opportunities. The workshops in the past four years have been very successful. The number of workshop proposals and the number of finally accepted workshops have gradually increased since 2006. In 2008, nine workshops were organized in conjunction with the main Euro-Par conference. In 2009, there were again nine workshops.
Transition Engineering: Building a Sustainable Future examines new strategies emerging in response to the mega-issues of global climate change, decline in world oil supply, scarcity of key industrial minerals, and local environmental constraints. These issues pose challenges for organizations, businesses, and communities, and engineers will need to begin developing ideas and projects to implement the transition of engineered systems. This work presents a methodology for shifting away from unsustainable activities. Teaching the Transition Engineering approach and methodology is the focus of the text, and the concept is presented in a way that engineers can begin applying it in their work.
The SAMOS workshop is an international gathering of highly qualified researchers from academia and industry, sharing ideas in a three-day lively discussion on the quiet and inspiring northern mountainside of the Mediterranean island of Samos. The workshop meeting is one of two co-located events (the other being the IC-SAMOS). As a tradition, the workshop features presentations in the morning, while after lunch all kinds of informal discussions and nut-cracking gatherings take place. The workshop is unique in the sense that not only solved research problems are presented and discussed, but also (partly) unsolved problems and in-depth topical reviews can be unleashed in the scientific arena. Consequently, the workshop provides the participants with an environment where collaboration rather than competition is fostered. The SAMOS conference and workshop were established in 2001 by Stamatis Vassiliadis with the goals outlined above in mind, and located on Samos, one of the most beautiful islands of the Aegean. The rich historical and cultural environment of the island, coupled with the intimate atmosphere and the slow pace of a small village by the sea in the middle of the Greek summer, provides a very conducive environment where ideas can be exchanged and shared freely.
The PaCT 2009 (Parallel Computing Technologies) conference was a four-day event held in Novosibirsk. This was the tenth international conference to be held in the PaCT series. The conferences are held in Russia every odd year. The first conference, PaCT 1991, was held in Novosibirsk (Academgorodok), September 7-11, 1991. The following PaCT conferences were held in Obninsk (near Moscow), August 30 to September 4, 1993; in St. Petersburg, September 12-15, 1995; in Yaroslavl, September 9-12, 1997; in Pushkin (near St. Petersburg), September 6-10, 1999; in Academgorodok (Novosibirsk), September 3-7, 2001; in Nizhni Novgorod, September 15-19, 2003; in Krasnoyarsk, September 5-9, 2005; and in Pereslavl-Zalessky, September 3-7, 2007. Since 1995 all the PaCT proceedings have been published by Springer in the LNCS series. PaCT 2009 was jointly organized by the Institute of Computational Mathematics and Mathematical Geophysics of the Russian Academy of Sciences (RAS) and the State University of Novosibirsk. The purpose of the conference was to bring together scientists working on theory, architecture, software, hardware and the solution of large-scale problems in order to provide integrated discussions on parallel computing technologies. The conference attracted about 100 participants from around the world. Authors from 17 countries submitted 72 papers. Of those submitted, 34 were selected for the conference as regular papers; there were also 2 invited papers. In addition, a number of posters were presented. All the papers were internationally reviewed by at least three referees. A demo session was organized for the participants.
It is our great pleasure to present the proceedings of the 16th International Conference on Analytical and Stochastic Modelling Techniques and Applications (ASMTA 2009), which took place in Madrid. The conference has become an established annual event in the agenda of the experts of analytical modelling and performance evaluation in Europe and internationally. This year the proceedings continued to be published as part of Springer's prestigious Lecture Notes in Computer Science (LNCS) series, another sign of the growing confidence in the quality standards and procedures followed in the reviewing process and the program compilation. Following the traditions of the conference, ASMTA 2009 was honored to have a distinguished keynote speaker in the person of Kishor Trivedi. Professor Trivedi holds the Hudson Chair in the Department of Electrical and Computer Engineering at Duke University, Durham, NC, USA. He is the Duke-site director of an NSF Industry-University Cooperative Research Center between NC State University and Duke University for carrying out applied research in computing and communications. He has been on the Duke faculty since 1975. He is the author of a well-known text entitled Probability and Statistics with Reliability, Queuing and Computer Science Applications, published by Prentice-Hall, the second edition of which has just appeared. He has also published two other books, entitled Performance and Reliability Analysis of Computer Systems, published by Kluwer Academic Publishers, and Queueing Networks and Markov Chains, by John Wiley. He is also known for his work on the modelling and analysis of software aging and rejuvenation. The conference maintained the tradition of high-quality programs, with an acceptance rate of about 40%.
OpenMP is an application programming interface (API) that is widely accepted as a de facto standard for high-level shared-memory parallel programming. It is a portable, scalable programming model that provides a simple and flexible interface for developing shared-memory parallel applications in Fortran, C, and C++. Since its introduction in 1997, OpenMP has gained support from the majority of high-performance compiler and hardware vendors. Under the direction of the OpenMP Architecture Review Board (ARB), the OpenMP specification is undergoing further improvement. Active research in OpenMP compilers, runtime systems, tools, and environments continues to drive OpenMP evolution. To provide a forum for the dissemination and exchange of information about and experiences with OpenMP, the community of OpenMP researchers and developers in academia and industry is organized under cOMPunity (www.compunity.org). This organization has held workshops on OpenMP since 1999. This book contains the proceedings of the 5th International Workshop on OpenMP, held in Dresden in June 2009. With sessions on tools, benchmarks, applications, performance and runtime environments, it covered all aspects of the current use of OpenMP. In addition, several contributions presented proposed extensions to OpenMP and evaluated reference implementations of those extensions. An invited talk provided details on the latest specification development inside the Architecture Review Board. Together with the two keynotes, about OpenMP on hardware accelerators and future-generation processors, it demonstrated that OpenMP is suitable for future-generation systems.
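As a small illustration of the programming model described above (a minimal sketch, not drawn from the proceedings), the following C fragment splits a loop across threads and combines per-thread partial sums with a reduction clause; it builds with any OpenMP-enabled compiler, e.g. gcc -fopenmp:

    #include <omp.h>
    #include <stdio.h>

    int main(void) {
        const int n = 1000000;
        double sum = 0.0;
        /* Iterations are divided among the threads; reduction(+:sum)
         * gives each thread a private partial sum and combines them
         * race-free at the end of the loop.                         */
        #pragma omp parallel for reduction(+:sum)
        for (int i = 0; i < n; i++)
            sum += 1.0 / (i + 1.0);
        printf("%d threads, harmonic sum = %f\n",
               omp_get_max_threads(), sum);
        return 0;
    }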
Updated and expanded, Bayesian Artificial Intelligence, Second Edition provides a practical and accessible introduction to the main concepts, foundations, and applications of Bayesian networks. It focuses on both the causal discovery of networks and Bayesian inference procedures. Adopting a causal interpretation of Bayesian networks, the authors discuss the use of Bayesian networks for causal modeling. They also draw on their own applied research to illustrate various applications of the technology.

New to the Second Edition:
- New chapter on Bayesian network classifiers
- New section on object-oriented Bayesian networks
- New section that addresses foundational problems with causal discovery and Markov blanket discovery
- New section that covers methods of evaluating causal discovery programs
- Discussions of many common modeling errors
- New applications and case studies
- More coverage of the uses of causal interventions to understand and reason with causal Bayesian networks

Illustrated with real case studies, the second edition of this bestseller continues to cover the groundwork of Bayesian networks. It presents the elements of Bayesian network technology, automated causal discovery, and learning probabilities from data, and shows how to employ these technologies to develop probabilistic expert systems. The book's website at www.csse.monash.edu.au/bai/book/book.html offers a variety of supplemental materials, including example Bayesian networks and data sets. Instructors can email the authors for sample solutions to many of the problems in the text.
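The inference procedures the book covers ultimately rest on Bayes' theorem; as a one-line illustration (an example of ours, not taken from the book), updating a hypothesis H on evidence e gives

    P(H \mid e) = \frac{P(e \mid H)\,P(H)}{P(e \mid H)\,P(H) + P(e \mid \neg H)\,P(\neg H)},

so with a prior P(H) = 0.01, sensitivity P(e \mid H) = 0.9 and false-positive rate P(e \mid \neg H) = 0.1, the posterior is 0.009 / (0.009 + 0.099) \approx 0.083: strong-looking evidence moves a rare hypothesis to only about 8%.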
This volume contains the research papers and invited papers presented at the Third International Conference on Tests and Proofs (TAP 2009), held at ETH Zurich, Switzerland, during July 2-3, 2009. The TAP conference is devoted to the convergence of proofs and tests. It combines ideas from both sides for the advancement of software quality. To prove the correctness of a program is to demonstrate, through impeccable mathematical techniques, that it has no bugs; to test a program is to run it with the expectation of discovering bugs. The two techniques seem contradictory: if you have proved your program, it is fruitless to comb it for bugs; and if you are testing it, that is surely a sign that you have given up on any hope of proving its correctness. Accordingly, proofs and tests have, since the onset of software engineering research, been pursued by distinct communities using rather different techniques and tools. And yet the development of both approaches leads to the discovery of common issues and to the realization that each may need the other. The emergence of model checking was one of the first signs that contradiction may yield to complementarity, and in the past few years an increasing number of research efforts have encountered the need for combining proofs and tests, dropping earlier dogmatic views of incompatibility and taking instead the best of what each of these software engineering domains has to offer.