Holger Scherl introduces the reader to the reconstruction problem in computed tomography and its major scientific challenges that range from computational efficiency to the fulfillment of Tuy's sufficiency condition. The assessed hardware architectures include multi- and many-core systems, cell broadband engine architecture, graphics processing units, and field programmable gate arrays.
This two-volume set LNCS 6771 and 6772 constitutes the refereed proceedings of the Symposium on Human Interface 2011, held in Orlando, FL, USA in July 2011 in the framework of the 14th International Conference on Human-Computer Interaction, HCII 2011, with 10 other thematically similar conferences. The 137 revised papers presented in the two volumes were carefully reviewed and selected from numerous submissions. The papers accepted for presentation thoroughly cover the thematic area of human interface and the management of information. The 75 papers of this first volume address the following major topics: design and development methods and tools; information and user interface design; visualisation techniques and applications; security and privacy; touch and gesture interfaces; adaptation and personalisation; and measuring and recognising human behavior.
This book constitutes the refereed proceedings of the 11th International Conference on Next Generation Teletraffic and Wired/Wireless Advanced Networking, NEW2AN 2011, and the 4th Conference on Smart Spaces, ruSMART 2011, jointly held in St. Petersburg, Russia, in August 2011.
This book constitutes the refereed proceedings of the 8th International Workshop on Energy Minimization Methods in Computer Vision and Pattern Recognition, EMMCVPR 2011, held in St. Petersburg, Russia in July, 2011. The book presents 30 revised full papers selected from a total of 52 submissions. The book is divided in sections on discrete and continuous optimization, segmentation, motion and video, learning and shape analysis.
This book constitutes the refereed proceedings of the 11th IFIP WG 6.1 International Conference on Distributed Applications and Interoperable Systems, DAIS 2011, held in Reykjavik, Iceland, in June 2011 as one of the DisCoTec 2011 events.
This book constitutes the thoroughly refereed post-conference proceedings of the workshops of the 16th International Conference on Parallel Computing, Euro-Par 2010, held in Ischia, Italy, in August/September 2010. The papers of these 9 workshops - HeteroPar, HPCC, HiBB, CoreGrid, UCHPC, HPCF, PROPER, CCPI, and VHPC - focus on the promotion and advancement of all aspects of parallel and distributed computing.
An up-to-date and comprehensive overview of information and database systems design and implementation. The book provides an accessible presentation and explanation of technical architecture for systems complying with TOGAF standards, the accepted international framework. Covering nearly the full spectrum of architectural concerns, the authors also illustrate and concretize the notion of traceability from business goals and strategy through to technical architecture, providing the reader with a holistic and commanding view. The work has two mutually supportive foci: first, information technology technical architecture, whose in-depth, illustrative and contemporary treatment comprises the core and majority of the book; and second, a strategic and business context.
This book constitutes the proceedings of the Third International Workshop on Traffic Monitoring and Analysis, TMA 2011, held in Vienna, Austria, on April 27, 2011 - co-located with EW 2011, the 17th European Wireless Conference. The workshop is an initiative from the COST Action IC0703 "Data Traffic Monitoring and Analysis: Theory, Techniques, Tools and Applications for the Future Networks." The 10 revised full papers and 6 poster papers presented together with 4 short papers were carefully reviewed and selected from 29 submissions. The papers are organized in topical sections on traffic analysis, applications and privacy, traffic classification, and a poster session.
From the Foreword: "...the presentation of real-time scheduling is probably the best in terms of clarity I have ever read in the professional literature. Easy to understand, which is important for busy professionals keen to acquire (or refresh) new knowledge without being bogged down in a convoluted narrative and an excessive detail overload. The authors managed to largely avoid theoretical-only presentation of the subject, which frequently affects books on operating systems. ... an indispensable resource to gain a thorough understanding of the real-time systems from the operating systems perspective, and to stay up to date with the recent trends and actual developments of the open-source real-time operating systems." -Richard Zurawski, ISA Group, San Francisco, California, USA
Real-time embedded systems are integral to the global technological and social space, but references still rarely offer professionals the sufficient mix of theory and practical examples required to meet intensive economic, safety, and other demands on system development. Similarly, instructors have lacked a resource to help students fully understand the field. The information was out there, though often at the abstract level, fragmented and scattered throughout literature from different engineering disciplines and computing sciences. Accounting for readers' varying practical needs and experience levels, Real-Time Embedded Systems: Open-Source Operating Systems Perspective offers a holistic overview from the operating-systems perspective. It provides a long-awaited reference on real-time operating systems and their almost boundless application potential in the embedded-system domain. Balancing the already abundant coverage of operating systems with the largely ignored real-time aspects, or "physicality," the authors analyze several realistic case studies to introduce vital theoretical material. They also discuss popular open-source operating systems - Linux and FreeRTOS in particular - to help embedded-system designers identify the benefits and weaknesses involved in deciding whether to adopt more traditional, less powerful techniques for a project.
Transition Engineering: Building a Sustainable Future examines new strategies emerging in response to the mega-issues of global climate change, decline in world oil supply, scarcity of key industrial minerals, and local environmental constraints. These issues pose challenges for organizations, businesses, and communities, and engineers will need to begin developing ideas and projects to implement the transition of engineered systems. This work presents a methodology for shifting away from unsustainable activities. Teaching the Transition Engineering approach and methodology is the focus of the text, and the concept is presented in a way that engineers can begin applying it in their work.
The communication complexity of two-party protocols is a complexity measure only about 15 years old, but it is already considered one of the fundamental complexity measures of recent complexity theory. Similarly to Kolmogorov complexity in the theory of sequential computations, communication complexity is used as a method for studying the complexity of concrete computing problems in parallel information processing. In particular, it is applied to prove lower bounds that say what computer resources (time, hardware, memory size) are necessary to compute a given task. Besides estimating the computational difficulty of computing problems, the proved lower bounds are useful for proving the optimality of algorithms that are already designed. In some cases the knowledge about the communication complexity of a given problem may even be helpful in searching for efficient algorithms for that problem. The study of communication complexity has become a well-defined independent area of complexity theory. In addition to a strong relation to several fundamental complexity measures (and so to several fundamental problems of complexity theory), communication complexity has contributed to the study and understanding of the nature of determinism, nondeterminism, and randomness in algorithmics. There already exists a non-trivial mathematical machinery for handling the communication complexity of concrete computing problems, which gives hope that the approach based on communication complexity will be instrumental in the study of several central open problems of recent complexity theory.
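As a concrete illustration of the measure this blurb describes, here is a minimal sketch (not from the book; the function name and inputs are hypothetical) of the trivial two-party protocol for the equality function EQ, with the exchanged bits counted. For n-bit inputs this protocol costs n + 1 bits, and deterministic lower-bound arguments of the kind the book studies show that no deterministic protocol can do essentially better for EQ.

```python
# Illustrative sketch: Alice and Bob each hold an n-bit string and want to
# decide whether the strings are equal, while counting communicated bits.

def eq_protocol(x: str, y: str) -> tuple[bool, int]:
    """Alice sends all of x to Bob; Bob replies with one answer bit."""
    bits_sent = len(x)      # Alice -> Bob: her entire n-bit input
    answer = (x == y)       # Bob compares locally, at no communication cost
    bits_sent += 1          # Bob -> Alice: the single answer bit
    return answer, bits_sent

equal, cost = eq_protocol("10110", "10110")
print(equal, cost)  # True 6  (5 input bits + 1 answer bit)
```

The point of the lower-bound machinery is precisely to show that, for functions like EQ, this naive cost is close to unavoidable for deterministic protocols, while randomness changes the picture dramatically.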
Advances in Systems Safety contains the papers presented at the nineteenth annual Safety-Critical Systems Symposium, held at Southampton, UK, in February 2011. The Symposium is for engineers, managers and academics in the field of system safety, across all industry sectors, so the papers making up this volume offer a wide-ranging coverage of current safety topics, and a blend of academic research and industrial experience. They include both recent developments in the field and discussion of open issues that will shape future progress. The 17 papers in this volume are presented under the headings of the Symposium's sessions: Safety Cases; Projects, Services and Systems of Systems; Systems Safety in Healthcare; Testing Safety-Critical Systems; Technological Matters; and Safety Standards. The book will be of interest to both academics and practitioners working in the safety-critical systems arena.
Based on both theoretical investigations and industrial experience, this book provides an extensive approach to support the planning and optimization process for modern communication networks. The book contains a thorough survey and a detailed comparison of state-of-the-art numerical algorithms in the matrix-geometric field.
Innovation in Manufacturing Networks. Innovation - the ability to apply new ideas to products, processes, organizational practices and business models - is a fundamental concept of the emergent business, scientific and technological paradigms, and is crucial for the future competitiveness of organizations in an increasingly globalised, knowledge-intensive marketplace. The demand for responsiveness, agility and high performance in manufacturing systems drives the recent changes, in addition to the call for new approaches to achieve cost-effective responsiveness at all levels of an enterprise. Moreover, creating appropriate frameworks for exploring the most effective synergies between human potential and automated systems represents an enormous challenge in terms of process characterization, modelling, and the development of adequate support tools. The implementation and use of automation systems requires an ever-increasing knowledge of enabling technologies and business practices, and the digital and networked world will surely trigger new business practices. In this context, and in order to achieve the desired levels of effectiveness and efficiency, it is crucial to maintain a balance between the technical aspects and the human and social aspects when developing and applying new innovations and innovative enabling technologies. The BASYS conferences have been developed and organized to promote the development of balanced automation systems and to address the majority of the current open issues.
This book is intended to serve as a textbook for a second course in the implementation (i.e. microarchitecture) of computer architectures. The subject matter covered is the collection of techniques used to achieve the highest performance in single-processor machines; these techniques center on the exploitation of low-level parallelism (temporal and spatial) in the processing of machine instructions. The target audience consists of students in the final year of an undergraduate program or in the first year of a postgraduate program in computer science, computer engineering, or electrical engineering; professional computer designers will also find the book useful as an introduction to the topics covered. Typically, the author has used the material presented here as the basis of a full-semester undergraduate course or a half-semester postgraduate course, with the other half of the latter devoted to multiple-processor machines. The background assumed of the reader is a good first course in computer architecture and implementation - to the level in, say, Computer Organization and Design by D. Patterson and J. Hennessy - and familiarity with digital-logic design. The book consists of eight chapters. The first chapter is an introduction to all of the main ideas that the following chapters cover in detail: the main forms of pipelining used in high-performance uniprocessors, a taxonomy of the space of pipelined processors, and performance issues. It is also intended that this chapter should be readable as a brief "stand-alone" survey.
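The pipelining material such a first chapter surveys rests on a basic timing relation: ignoring hazards, a k-stage pipeline completes n instructions in k + n - 1 cycles, versus n * k cycles unpipelined, so speedup approaches k for long instruction streams. A minimal sketch (illustrative only, not taken from the book; function names are made up):

```python
# Idealized pipeline timing: k cycles to fill the pipeline, then one
# instruction completes per cycle. Hazards and stalls are ignored.

def pipeline_cycles(n_instructions: int, k_stages: int) -> int:
    return k_stages + n_instructions - 1

def speedup(n_instructions: int, k_stages: int) -> float:
    """Ratio of unpipelined time (n * k) to pipelined time (k + n - 1)."""
    return (n_instructions * k_stages) / pipeline_cycles(n_instructions, k_stages)

# Speedup approaches the stage count k = 5 as the stream grows:
for n in (10, 100, 10_000):
    print(n, round(speedup(n, 5), 2))
```

Real pipelines fall short of this bound because of the structural, data, and control hazards that books like this one spend much of their time on.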
Efficient parallel solutions have been found to many problems. Some of them can be obtained automatically from sequential programs, using compilers. However, there is a large class of problems - irregular problems - that lack efficient solutions. IRREGULAR 94 - a workshop and summer school organized in Geneva - addressed the problems associated with the derivation of efficient solutions to irregular problems. This book, which is based on the workshop, draws on the contributions of outstanding scientists to present the state of the art in irregular problems, covering aspects ranging from scientific computing and discrete optimization to the automatic extraction of parallelism. Audience: This first book on parallel algorithms for irregular problems is of interest to advanced graduate students and researchers in parallel computer science.
Nonlinear Assignment Problems (NAPs) are natural extensions of the classic Linear Assignment Problem, and despite the efforts of many researchers over the past three decades, they still remain some of the hardest combinatorial optimization problems to solve exactly. The purpose of this book is to provide, in a single volume, the major algorithmic aspects and applications of NAPs as contributed by leading international experts. The chapters included in this book are concerned with major applications and the latest algorithmic solution approaches for NAPs. Approximation algorithms, polyhedral methods, semidefinite programming approaches and heuristic procedures for NAPs are included, while applications of this problem class in the areas of multiple-target tracking in the context of military surveillance systems, experimental high energy physics, and parallel processing are presented. Audience: Researchers and graduate students in the areas of combinatorial optimization, mathematical programming, operations research, physics, and computer science.
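To make the problem class concrete, here is a brute-force sketch (not from the book; the flow and distance matrices are made-up example data) of a tiny Quadratic Assignment Problem, the best-known NAP. The objective is nonlinear in the assignment because cost terms pair up the decisions for two facilities at once, and exact search is factorial in the problem size, which is why the exact and approximate methods the book surveys matter.

```python
# Tiny QAP: assign 3 facilities to 3 locations. The cost couples pairs of
# assignment decisions: sum_{i,j} flow[i][j] * dist[p[i]][p[j]].
from itertools import permutations

flow = [[0, 3, 1],          # traffic between facilities (example data)
        [3, 0, 2],
        [1, 2, 0]]
dist = [[0, 5, 9],          # distances between locations (example data)
        [5, 0, 4],
        [9, 4, 0]]

def qap_cost(p):
    n = len(p)
    return sum(flow[i][j] * dist[p[i]][p[j]] for i in range(n) for j in range(n))

# Exhaustive search over all n! assignments - feasible only for tiny n.
best = min(permutations(range(3)), key=qap_cost)
print(best, qap_cost(best))  # (2, 1, 0) 62
```

Note how, unlike the linear assignment problem, the cost cannot be decomposed into independent per-facility terms, which is exactly what defeats the classic polynomial-time methods.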
This book brings together experts to discuss relevant results in software process modeling, and expresses their personal view of this field. It is designed for a professional audience of researchers and practitioners in industry, and graduate-level students.
Unlike current survey articles and textbooks, here the so-called confluence and termination hierarchies play a key role. Throughout, the relationships between the properties in the hierarchies are reviewed, and it is shown that for every implication X => Y in the hierarchies, the property X is undecidable for all term rewriting systems satisfying Y. Topics covered include: the newest techniques for proving termination of rewrite systems; a comprehensive chapter on conditional term rewriting systems; a state-of-the-art survey of modularity in term rewriting, and a uniform framework for term and graph rewriting, as well as the first result on conditional graph rewriting.
The Second International Workshop on Formal Aspects in Security and Trust is an essential reference for both academic and professional researchers in the field of security and trust. Because of the complexity and scale of deployment of emerging ICT systems based on web service and grid computing concepts, we also need to develop new, scalable, and more flexible foundational models of pervasive security enforcement across organizational borders and in situations where there is high uncertainty about the identity and trustworthiness of the participating networked entities. On the other hand, the increasingly complex set of building activities sharing different resources but managed with different policies calls for new and business-enabling models of trust between members of virtual organizations and communities that span the boundaries of physical enterprises and loosely structured groups of individuals. The papers presented in this volume address the challenges posed by "ambient intelligence space" as a future paradigm and the need for a set of concepts, tools and methodologies to enable the user's trust and confidence in the underlying computing infrastructure. This state-of-the-art volume presents selected papers from the 2nd International Workshop on Formal Aspects in Security and Trust, held in conjunction with the 18th IFIP World Computer Congress, August 2004, in Toulouse, France. The collection will be important not only for computer security experts and researchers but also for teachers and administrators interested in security methodologies and research.
Going beyond isolated research ideas and design experiences, Designing Network On-Chip Architectures in the Nanoscale Era covers the foundations and design methods of network on-chip (NoC) technology. The contributors draw on their own lessons learned to provide strong practical guidance on various design issues. Exploring the design process of the network, the first part of the book focuses on basic aspects of switch architecture and design, topology selection, and routing implementation. In the second part, contributors discuss their experiences in the industry, offering a roadmap to recent products. They describe Tilera's TILE family of multicore processors, novel Intel products and research prototypes, and the TRIPS operand network (OPN). The last part reveals state-of-the-art solutions to hardware-related issues and explains how to efficiently implement the programming model at the network interface. The appendix presents the microarchitectural details of two switch architectures targeting multiprocessor systems-on-chip (MPSoCs) and chip multiprocessors (CMPs), which can be used as an experimental platform for running tests. A stepping stone to the evolution of future chip architectures, this volume provides a how-to guide for designers of current NoCs as well as designers involved with 2015 computing platforms. It cohesively brings together fundamental design issues, alternative design paradigms and techniques, and the main design tradeoffs, consistently focusing on topics most pertinent to real-world NoC designers.
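A standard baseline for routing on the 2D-mesh topologies that NoC topology and routing chapters typically cover is dimension-ordered XY routing: a packet first corrects its X coordinate, then its Y coordinate, which makes the route deterministic and deadlock-free on a mesh. A minimal sketch (illustrative only, not tied to any specific design described in the book):

```python
# Dimension-ordered XY routing on a 2D mesh: move along X until the
# destination column is reached, then along Y. One hop per step.

def xy_route(src, dst):
    """Return the hop-by-hop path of (x, y) routers from src to dst."""
    x, y = src
    path = [(x, y)]
    while x != dst[0]:                 # correct the X dimension first
        x += 1 if dst[0] > x else -1
        path.append((x, y))
    while y != dst[1]:                 # then correct the Y dimension
        y += 1 if dst[1] > y else -1
        path.append((x, y))
    return path

print(xy_route((0, 0), (2, 1)))  # [(0, 0), (1, 0), (2, 0), (2, 1)]
```

Its determinism is both its strength (simple switches, no deadlock) and its weakness (no adaptivity under congestion), which is the tradeoff adaptive routing schemes in the literature aim to improve on.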
This is the first joint working conference between the IFIP Working Groups 11.1 and 11.5. We hope this joint conference will promote collaboration among researchers who focus on security management issues and those who are interested in the integrity and control of information systems. Indeed, as management at any level may be increasingly held answerable for the reliable and secure operation of the information systems and services in their respective organizations, in the same manner as they are for the financial aspects of the enterprise, there is an increasing need to ensure proper standards of integrity and control in information systems, so that data, software and, ultimately, the business processes are complete, adequate and valid for the intended functionality and expectations of the owner (i.e. the user organization). As organizers, we would like to thank the members of the international program committee for their review work during the paper selection process. We would also like to thank the authors of the invited papers, who added a valuable contribution to this first joint working conference. Paul Dowland, X. Sean Wang, December 2005. Contents: Preface (vii); Session 1 - Security Standards: Information Security Standards: Adoption Drivers (invited paper) by Jean-Noel Ezingeard and David Birchall (p. 1); Data Quality Dimensions for Information Systems Security: A Theoretical Exposition (invited paper) by Gurvirender Tejay, Gurpreet Dhillon, and Amita Goyal Chin (p. 21); From XML to RDF: Syntax, Semantics, Security, and Integrity (invited paper) by C. Farkas, V. Gowadia, A. Jain, and D. (p. 41)
Fault-tolerance in integrated circuits is not an exclusive concern of space designers or highly-reliable application engineers. Rather, designers of next-generation products must cope with reduced noise margins due to technological advances. The continuous evolution of the fabrication process of semiconductor components, in terms of transistor geometry shrinking, power supply, speed, and logic density, has significantly reduced the reliability of very deep submicron integrated circuits in the face of the various internal and external sources of noise. The very popular Field Programmable Gate Arrays, customizable by SRAM cells, are a consequence of this integrated circuit evolution, with millions of memory cells implementing the logic, embedded memories, and routing, and more recently with embedded microprocessor cores. These re-programmable system-on-chip platforms must be fault-tolerant to cope with present-day requirements. This book discusses fault-tolerance techniques for SRAM-based Field Programmable Gate Arrays (FPGAs). It starts by presenting the fault model and the upset effects in the programmable architecture. It then surveys the main fault-tolerance techniques used nowadays to protect integrated circuits against errors, and describes a large set of methods for designing fault-tolerant systems in SRAM-based FPGAs. Some of the presented techniques are based on developing a new fault-tolerant architecture with new robust FPGA elements; others are based on protecting the high-level hardware description before synthesis on the FPGA. The reader has the flexibility to choose the most suitable fault-tolerance technique for a given project and to compare a set of fault-tolerant techniques for programmable logic applications.
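A classic technique in this space is triple modular redundancy (TMR), widely used to mask single-event-upset (SEU) induced errors in SRAM-based FPGAs: three replicas of a module compute in parallel and a majority voter masks any single faulty output. A minimal bit-level sketch of the voter (illustrative only, not taken from the book):

```python
# TMR majority voter: for each bit position, output the value that at
# least two of the three replicas agree on, masking a single upset.

def majority(a: int, b: int, c: int) -> int:
    """Bitwise 2-of-3 majority vote over three replica outputs."""
    return (a & b) | (a & c) | (b & c)

good = 0b1011
upset = 0b0011            # one replica corrupted by a single bit flip
print(bin(majority(good, good, upset)))  # 0b1011 - the upset is masked
```

TMR masks a single faulty replica but triples area and power, and in SRAM-based FPGAs the voter and the configuration memory themselves remain vulnerable, which motivates the combined mitigation strategies such books discuss.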
Lean Manufacturing has proved to be one of the most successful and most powerful production business systems over the last decades. Its application enabled many companies to make a big leap towards better utilization of resources and thus provide better service to the customers through faster response, higher quality and lowered costs. Lean is often described as an "eyes for flow and eyes for muda" philosophy. It simply means that value is created only when all the resources flow through the system. If the flow is stopped, no value is created; only costs and time are added, which is muda (Jap. waste). Since the philosophy was born at Toyota, many solutions were tailored for the high-volume environment. But in a turbulent, fast-changing market environment and with progressing globalization, customers tend to require more customization, lower volumes and higher variety at much less cost and of better quality. This calls for adaptation of existing lean techniques and exploration of new waste-free solutions that go far beyond manufacturing. This book brings together the opinions of a number of leading academics and researchers from around the world responding to those emerging needs. They have tried to find answers to the question of how to move forward from the "Spaghetti World" of supply, production, distribution, sales, administration, product development, logistics, accounting, etc. Through the individual chapters in this book, the authors present their views, approaches, concepts and developed tools. The reader will learn the key issues currently being addressed in production management research and practice throughout the world.
From Model-Driven Design to Resource Management for Distributed Embedded Systems presents 16 original contributions and 12 invited papers presented at the Working Conference on Distributed and Parallel Embedded Systems - DIPES 2006, sponsored by the International Federation for Information Processing - IFIP. Coverage includes model-driven design, testing and evolution of embedded systems, timing analysis and predictability, scheduling, allocation, communication and resource management in distributed real-time systems.