This two-volume set, LNCS 6771 and 6772, constitutes the refereed proceedings of the Symposium on Human Interface 2011, held in Orlando, FL, USA, in July 2011 in the framework of the 14th International Conference on Human-Computer Interaction, HCII 2011, together with 10 other thematically similar conferences. The 137 revised papers presented in the two volumes were carefully reviewed and selected from numerous submissions. The papers accepted for presentation thoroughly cover the thematic area of human interface and the management of information. The 62 papers of this second volume address the following major topics: access to information; supporting communication; supporting work, collaboration, decision-making and business; mobile and ubiquitous information; and information in aviation.
Holger Scherl introduces the reader to the reconstruction problem in computed tomography and its major scientific challenges that range from computational efficiency to the fulfillment of Tuy's sufficiency condition. The assessed hardware architectures include multi- and many-core systems, cell broadband engine architecture, graphics processing units, and field programmable gate arrays.
Focus on issues and principles in context awareness, sensor processing and software design (rather than sensor networks or HCI or particular commercial systems). Designed as a textbook, with readings and lab problems in most chapters. Focus on concepts, algorithms and ideas rather than particular technologies.
This book constitutes the refereed proceedings of the 11th International Conference on Next Generation Teletraffic and Wired/Wireless Advanced Networking, NEW2AN 2011, and the 4th Conference on Smart Spaces, ruSMART 2011, jointly held in St. Petersburg, Russia, in August 2011.
This book constitutes the refereed proceedings of the 8th International Workshop on Energy Minimization Methods in Computer Vision and Pattern Recognition, EMMCVPR 2011, held in St. Petersburg, Russia in July, 2011. The book presents 30 revised full papers selected from a total of 52 submissions. The book is divided in sections on discrete and continuous optimization, segmentation, motion and video, learning and shape analysis.
This book constitutes the refereed proceedings of the 11th IFIP WG 6.1 International Conference on Distributed Applications and Interoperable Systems, DAIS 2011, held in Reykjavik, Iceland, in June 2011 as one of the DisCoTec 2011 events.
An up-to-date and comprehensive overview of information and database systems design and implementation. The book provides an accessible presentation and explanation of technical architecture for systems complying with TOGAF standards, the accepted international framework. Covering nearly the full spectrum of architectural concern, the authors also illustrate and concretize the notion of traceability from business goals and strategy through to technical architecture, providing the reader with a holistic and commanding view. The work has two mutually supportive foci: first, information technology technical architecture, the in-depth, illustrative and contemporary treatment of which comprises the core and majority of the book; and second, a strategic and business context.
This book constitutes thoroughly refereed post-conference proceedings of the workshops of the 16th International Conference on Parallel Computing, Euro-Par 2010, held in Ischia, Italy, in August/September 2010. The papers of these 9 workshops HeteroPar, HPCC, HiBB, CoreGrid, UCHPC, HPCF, PROPER, CCPI, and VHPC focus on promotion and advancement of all aspects of parallel and distributed computing.
The first ESPRIT Basic Research Project on Predictably Dependable Computing Systems (No. 3092, PDCS) commenced in May 1989, and ran until March 1992. The institutions and principal investigators that were involved in PDCS were: City University, London, UK (Bev Littlewood), IEI del CNR, Pisa, Italy (Lorenzo Strigini), Universität Karlsruhe, Germany (Tom Beth), LAAS-CNRS, Toulouse, France (Jean-Claude Laprie), University of Newcastle upon Tyne, UK (Brian Randell), LRI-CNRS/Université Paris-Sud, France (Marie-Claude Gaudel), Technische Universität Wien, Austria (Hermann Kopetz), and University of York, UK (John McDermid). The work continued after March 1992, and a three-year successor project (No. 6362, PDCS2) officially started in August 1992, with a slightly changed membership: Chalmers University of Technology, Göteborg, Sweden (Erland Jonsson), City University, London, UK (Bev Littlewood), CNR, Pisa, Italy (Lorenzo Strigini), LAAS-CNRS, Toulouse, France (Jean-Claude Laprie), Université Catholique de Louvain, Belgium (Pierre-Jacques Courtois), University of Newcastle upon Tyne, UK (Brian Randell), LRI-CNRS/Université Paris-Sud, France (Marie-Claude Gaudel), Technische Universität Wien, Austria (Hermann Kopetz), and University of York, UK (John McDermid). The summary objective of both projects has been "to contribute to making the process of designing and constructing dependable computing systems much more predictable and cost-effective." In the case of PDCS2, the concentration has been on the problems of producing dependable distributed real-time systems, especially those where the dependability requirements centre on issues of safety and/or security.
This book constitutes the refereed proceedings of the 15th International Conference on Principles of Distributed Systems, OPODIS 2011, held in Toulouse, France, in December 2011. The 26 revised papers presented in this volume were carefully reviewed and selected from 96 submissions. They represent the current state of the art of the research in the field of the design, analysis and development of distributed and real-time systems.
This book constitutes the proceedings of the Third International Workshop on Traffic Monitoring and Analysis, TMA 2011, held in Vienna, Austria, on April 27, 2011, co-located with EW 2011, the 17th European Wireless Conference. The workshop is an initiative of the COST Action IC0703 "Data Traffic Monitoring and Analysis: Theory, Techniques, Tools and Applications for the Future Networks." The 10 revised full papers and 6 poster papers presented together with 4 short papers were carefully reviewed and selected from 29 submissions. The papers are organized in topical sections on traffic analysis, applications and privacy, traffic classification, and a poster session.
Advances in Systems Safety contains the papers presented at the nineteenth annual Safety-Critical Systems Symposium, held at Southampton, UK, in February 2011. The Symposium is for engineers, managers and academics in the field of system safety, across all industry sectors, so the papers making up this volume offer a wide-ranging coverage of current safety topics, and a blend of academic research and industrial experience. They include both recent developments in the field and discussion of open issues that will shape future progress. The 17 papers in this volume are presented under the headings of the Symposium's sessions: Safety Cases; Projects, Services and Systems of Systems; Systems Safety in Healthcare; Testing Safety-Critical Systems; Technological Matters and Safety Standards. The book will be of interest to both academics and practitioners working in the safety-critical systems arena.
The communication complexity of two-party protocols is a complexity measure only about 15 years old, but it is already considered one of the fundamental complexity measures of recent complexity theory. Similarly to Kolmogorov complexity in the theory of sequential computations, communication complexity is used as a method for the study of the complexity of concrete computing problems in parallel information processing. In particular, it is applied to prove lower bounds that say what computer resources (time, hardware, memory size) are necessary to compute a given task. Besides estimating the computational difficulty of computing problems, the proved lower bounds are useful for proving the optimality of algorithms that are already designed. In some cases the knowledge about the communication complexity of a given problem may even be helpful in searching for efficient algorithms for this problem. The study of communication complexity has become a well-defined independent area of complexity theory. In addition to a strong relation to several fundamental complexity measures (and so to several fundamental problems of complexity theory), communication complexity has contributed to the study and understanding of the nature of determinism, nondeterminism, and randomness in algorithmics. There already exists a non-trivial mathematical machinery to handle the communication complexity of concrete computing problems, which gives hope that the approach based on communication complexity will be instrumental in the study of several central open problems of recent complexity theory.
Efficient parallel solutions have been found to many problems. Some of them can be obtained automatically from sequential programs, using compilers. However, there is a large class of problems - irregular problems - that lack efficient solutions. IRREGULAR '94 - a workshop and summer school organized in Geneva - addressed the problems associated with the derivation of efficient solutions to irregular problems. This book, which is based on the workshop, draws on the contributions of outstanding scientists to present the state of the art in irregular problems, covering aspects ranging from scientific computing and discrete optimization to the automatic extraction of parallelism. Audience: This first book on parallel algorithms for irregular problems is of interest to advanced graduate students and researchers in parallel computer science.
Nonlinear Assignment Problems (NAPs) are natural extensions of the classic Linear Assignment Problem, and despite the efforts of many researchers over the past three decades, they still remain some of the hardest combinatorial optimization problems to solve exactly. The purpose of this book is to provide, in a single volume, the major algorithmic aspects and applications of NAPs as contributed by leading international experts. The chapters included in this book are concerned with major applications and the latest algorithmic solution approaches for NAPs. Approximation algorithms, polyhedral methods, semidefinite programming approaches and heuristic procedures for NAPs are included, while applications of this problem class are presented in the areas of multiple-target tracking in the context of military surveillance systems, experimental high energy physics, and parallel processing. Audience: Researchers and graduate students in the areas of combinatorial optimization, mathematical programming, operations research, physics, and computer science.
This book brings together experts to discuss relevant results in software process modeling, and expresses their personal view of this field. It is designed for a professional audience of researchers and practitioners in industry, and graduate-level students.
Unlike current survey articles and textbooks, here the so-called confluence and termination hierarchies play a key role. Throughout, the relationships between the properties in the hierarchies are reviewed, and it is shown that for every implication X => Y in the hierarchies, the property X is undecidable for all term rewriting systems satisfying Y. Topics covered include: the newest techniques for proving termination of rewrite systems; a comprehensive chapter on conditional term rewriting systems; a state-of-the-art survey of modularity in term rewriting, and a uniform framework for term and graph rewriting, as well as the first result on conditional graph rewriting.
Second International Workshop on Formal Aspects in Security and Trust is an essential reference for both academic and professional researchers in the field of security and trust. Because of the complexity and scale of deployment of emerging ICT systems based on web service and grid computing concepts, we also need to develop new, scalable, and more flexible foundational models of pervasive security enforcement across organizational borders and in situations where there is high uncertainty about the identity and trustworthiness of the participating networked entities. On the other hand, the increasingly complex set of building activities sharing different resources but managed with different policies calls for new and business-enabling models of trust between members of virtual organizations and communities that span the boundaries of physical enterprises and loosely structured groups of individuals. The papers presented in this volume address the challenges posed by "ambient intelligence space" as a future paradigm and the need for a set of concepts, tools and methodologies to enable the user's trust and confidence in the underlying computing infrastructure. This state-of-the-art volume presents selected papers from the 2nd International Workshop on Formal Aspects in Security and Trust, held in conjunction with the 18th IFIP World Computer Congress, August 2004, in Toulouse, France. The collection will be important not only for computer security experts and researchers but also for teachers and administrators interested in security methodologies and research.
Innovation in Manufacturing Networks. A fundamental concept of the emergent business, scientific and technological paradigms, innovation - the ability to apply new ideas to products, processes, organizational practices and business models - is crucial for the future competitiveness of organizations in an increasingly globalised, knowledge-intensive marketplace. Responsiveness, agility and high performance of manufacturing systems account for the recent changes, in addition to the call for new approaches to achieve cost-effective responsiveness at all levels of an enterprise. Moreover, creating appropriate frameworks for exploring the most effective synergies between human potential and automated systems represents an enormous challenge in terms of process characterization, modelling, and the development of adequate support tools. The implementation and use of automation systems requires an ever increasing knowledge of enabling technologies and business practices. Moreover, the digital and networked world will surely trigger new business practices. In this context, and in order to achieve the desired levels of effectiveness and efficiency, it is crucial to maintain a balance between the technical aspects and the human and social aspects when developing and applying new innovations and innovative enabling technologies. BASYS conferences have been developed and organized to promote the development of balanced automation systems and to address the majority of the current open issues.
Despite its increasing importance, the verification and validation of the human-machine interface is perhaps the most overlooked aspect of system development. Although much has been written about the design and development process, very little organized information is available on how to verify and validate highly complex and highly coupled dynamic systems. Inability to evaluate such systems adequately may become the limiting factor in our ability to employ systems that our technology and knowledge allow us to design. This volume, based on a NATO Advanced Science Institute held in 1992, is designed to provide guidance for the verification and validation of all highly complex and coupled systems. Air traffic control is used as an example to ensure that the theory is described in terms that will allow its implementation, but the results can be applied to all complex and coupled systems. The volume presents the knowledge and theory in a format that will allow readers from a wide variety of backgrounds to apply it to the systems for which they are responsible. The emphasis is on domains where significant advances have been made in the methods of identifying potential problems and in new testing methods and tools. Also emphasized are techniques to identify the assumptions on which a system is built and to spot their weaknesses.
This is the first joint working conference between the IFIP Working Groups 11.1 and 11.5. We hope this joint conference will promote collaboration among researchers who focus on the security management issues and those who are interested in integrity and control of information systems. Indeed, as management at any level may be increasingly held answerable for the reliable and secure operation of the information systems and services in their respective organizations in the same manner as they are for financial aspects of the enterprise, there is an increasing need for ensuring proper standards of integrity and control in information systems in order to ensure that data, software and, ultimately, the business processes are complete, adequate and valid for intended functionality and expectations of the owner (i.e. the user organization). As organizers, we would like to thank the members of the international program committee for their review work during the paper selection process. We would also like to thank the authors of the invited papers, who added valuable contribution to this first joint working conference. Paul Dowland X. Sean Wang December 2005 Contents Preface vii Session 1 - Security Standards Information Security Standards: Adoption Drivers (Invited Paper) 1 JEAN-NOEL EZINGEARD AND DAVID BIRCHALL Data Quality Dimensions for Information Systems Security: A Theoretical Exposition (Invited Paper) 21 GURVIRENDER TEJAY, GURPREET DHILLON, AND AMITA GOYAL CHIN From XML to RDF: Syntax, Semantics, Security, and Integrity (Invited Paper) 41 C. FARKAS, V. GOWADIA, A. JAIN, AND D.
Fault-tolerance in integrated circuits is not an exclusive concern of space designers or engineers of highly reliable applications. Rather, designers of next-generation products must cope with reduced noise margins due to technological advances. The continuous evolution of the fabrication technology of semiconductor components, in terms of transistor geometry shrinking, power supply, speed, and logic density, has significantly reduced the reliability of very deep submicron integrated circuits in the face of the various internal and external sources of noise. The very popular Field Programmable Gate Arrays, customizable by SRAM cells, are a consequence of this evolution, with millions of memory cells to implement the logic, embedded memories, routing, and, more recently, embedded microprocessor cores. These re-programmable system-on-chip platforms must be fault-tolerant to cope with present-day requirements. This book discusses fault-tolerance techniques for SRAM-based Field Programmable Gate Arrays (FPGAs). It starts by presenting the model of the problem and the upset effects in the programmable architecture. It then presents the main fault-tolerance techniques used nowadays to protect integrated circuits against errors. A large set of methods for designing fault-tolerant systems in SRAM-based FPGAs is described. Some of the presented techniques are based on developing a new fault-tolerant architecture with new robust FPGA elements; others are based on protecting the high-level hardware description before synthesis in the FPGA. The reader has the flexibility to choose the most suitable fault-tolerance technique for their project and to compare a set of fault-tolerant techniques for programmable logic applications.
This book is intended for students and practitioners who have had a calculus-based statistics course and who have an interest in safety considerations such as reliability, strength, and duration-of-load or service life. Many persons studying statistical science will be employed professionally where the problems encountered are obscure, what should be analyzed is not clear, the appropriate assumptions are equivocal, and data are scant. Accordingly, for many of the data sets in this book there is no disclosure of what type of investigation should be made or what assumptions are to be used.
This edited book serves as a companion volume to the Seventh INFORMS Telecommunications Conference held in Boca Raton, Florida, March 7-10, 2004. The 18 papers in this book were carefully selected after a thorough review process. The research presented within these articles focuses on the latest methodological developments in three key areas - pricing of telecommunications services, network design, and resource allocation - that are most relevant to current telecommunications planning. With the global deregulation of the telecommunications industry, effective pricing and revenue management, as well as an understanding of competitive pressures, are key factors that will improve revenue in telecommunications companies. Chapters 1-5 address these topics by focusing on pricing of telecommunications services. They present some novel ideas related to pricing (including auction-based pricing of network bandwidth) and modeling competition in the industry. The successful telecommunications companies of the future will likely be the ones that can minimize their costs while meeting customer expectations. In this context the optimal design/provisioning of telecommunication networks plays an important role. Chapters 6-12 address these topics by focusing on network design for a wide range of technologies including SONET, SDH, WDM, and MPLS. They include the latest research developments related to the modeling and solving of network design problems. Day-to-day management/control of telecommunications networks is dependent upon the optimal allocation of resources. Chapters 13-18 provide insightful solutions to several intriguing resource allocation problems.
From Model-Driven Design to Resource Management for Distributed Embedded Systems presents 16 original contributions and 12 invited papers presented at the Working Conference on Distributed and Parallel Embedded Systems - DIPES 2006, sponsored by the International Federation for Information Processing - IFIP. Coverage includes model-driven design, testing and evolution of embedded systems, timing analysis and predictability, scheduling, allocation, communication and resource management in distributed real-time systems.
You may like...
Ski Mountaineering in Scotland
Donald J. Bennet, William Martin Murray Wallace
Paperback
R585
Discovery Miles 5 850
The Unaccountables - The Powerful…
Michael Marchant, Mamello Mosiana, …
Paperback
Dyadic Walsh Analysis from 1924 Onwards…
Radomir Stankovic, Paul Leo Butzer, …
Hardcover
R2,935
Discovery Miles 29 350
Algebras, Lattices, Varieties - Volume…
Ralph S Freese, Ralph N. McKenzie, …
Paperback
R3,224
Discovery Miles 32 240