From Model-Driven Design to Resource Management for Distributed Embedded Systems presents 16 original contributions and 12 invited papers from the Working Conference on Distributed and Parallel Embedded Systems (DIPES 2006), sponsored by the International Federation for Information Processing (IFIP). Coverage includes model-driven design, testing and evolution of embedded systems, timing analysis and predictability, scheduling, allocation, communication, and resource management in distributed real-time systems.
Reservation procedures constitute the core of many popular data transmission protocols. They consist of two steps: a request phase in which a station reserves the communication channel, and a transmission phase in which the actual data transmission takes place. Such procedures are often applied in communication networks that are characterised by a shared communication channel with large round-trip times. In this book, we propose queuing models for situations that require a reservation procedure and validate their applicability in the context of cable networks. We offer various mathematical models to better understand the performance of these reservation procedures. The book covers four key performance models, and modifications to these: contention trees, the repairman model, the bulk service queue, and tandem queues. The relevance of this book is not limited to reservation procedures and cable networks, and performance analysts from a variety of areas may benefit, as all models have found application in other fields as well.
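To make the two-phase mechanism concrete, the sketch below (in C, not taken from the book) simulates a reservation procedure in discrete time: each packet first incurs a fixed request/grant round-trip delay and then queues for the shared channel, which transmits one packet per slot. The arrival probability, round-trip time and slot count are illustrative assumptions only.

```c
/* Hypothetical sketch: discrete-time simulation of a two-phase
 * reservation procedure.  Each packet first waits a fixed request/grant
 * round-trip time (RTT), then queues for the shared channel, which
 * transmits one packet per slot.  All parameter values are made up. */
#include <stdio.h>
#include <stdlib.h>

#define SLOTS 100000
#define RTT   10          /* request/grant round trip, in slots */

int main(void)
{
    double arrival_prob = 0.3;        /* Bernoulli packet arrivals per slot */
    static long arrival[SLOTS];       /* FIFO of packet arrival times */
    long head = 0, tail = 0;
    long long total_delay = 0, sent = 0;

    srand(1);
    for (long t = 0; t < SLOTS; t++) {
        /* Request phase: a new packet issues its reservation at slot t. */
        if ((double)rand() / RAND_MAX < arrival_prob)
            arrival[tail++] = t;

        /* Transmission phase: one packet per slot, but only packets whose
         * grant has already returned (RTT slots after the request). */
        if (head < tail && arrival[head] + RTT <= t) {
            total_delay += t - arrival[head++];
            sent++;
        }
    }
    printf("mean access delay: %.1f slots\n", (double)total_delay / sent);
    return 0;
}
```

Under these assumptions the measured mean access delay is roughly the round-trip time plus the queueing delay on the shared channel, which is the kind of quantity the book's queueing models characterise analytically.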
Lean Manufacturing has proved to be one of the most successful and most powerful production business systems of the last decades. Its application has enabled many companies to make a big leap towards better utilization of resources and thus to provide better service to customers through faster response, higher quality and lower costs. Lean is often described as an "eyes for flow and eyes for muda" philosophy. It simply means that value is created only when all the resources flow through the system; if the flow is stopped, no value is added, only costs and time, which is muda (Japanese for waste). Since the philosophy was born at Toyota, many of its solutions were tailored to the high-volume environment. But in a turbulent, fast-changing market environment with progressing globalization, customers tend to require more customization, lower volumes and higher variety at much lower cost and better quality. This calls for adaptation of existing lean techniques and exploration of new waste-free solutions that go far beyond manufacturing. This book brings together the opinions of a number of leading academics and researchers from around the world responding to those emerging needs. They try to answer the question of how to move forward from the "Spaghetti World" of supply, production, distribution, sales, administration, product development, logistics, accounting, etc. Through the individual chapters in this book, the authors present their views, approaches, concepts and developed tools. The reader will learn the key issues currently being addressed in production management research and practice throughout the world.
This book synthesizes the results of the seventh in a successful series of workshops that were established by Shanghai Jiao Tong University and Technische Universität Berlin, bringing together researchers from both universities in order to present research results to an international community. Aspects covered here include, among others: models and specification; simulation of different properties; middleware for distributed real-time systems; signal analysis; control methods; and applications in airborne and medical systems.
Service computing is a cutting-edge area, popular in both industry and academia. New challenges have been introduced to develop service-oriented systems with high assurance requirements. High Assurance Services Computing captures and makes accessible the most recent practical developments in service-oriented high-assurance systems. An edited volume contributed by well-established researchers in this field worldwide, this book reports the best current practices and emerging methods in the areas of service-oriented techniques for high assurance systems. Available results from industry and government, R&D laboratories and academia are included, along with unreported results from the "hands-on" experiences of software professionals in the respective domains. Designed for practitioners and researchers working for industrial organizations and government agencies, High Assurance Services Computing is also suitable for advanced-level students in computer science and engineering.
In this volume, authors from academia and practice provide practitioners, scientists and graduate students with a good overview of basic methods and paradigms, as well as important issues and trends, across the broad spectrum of parallel and distributed processing. In particular, the book covers fundamental topics such as efficient parallel algorithms, languages for parallel processing, parallel operating systems, architecture of parallel and distributed systems, management of resources, tools for parallel computing, parallel database systems and multimedia object servers, and networking aspects of distributed and parallel computing. Three chapters are dedicated to applications: parallel and distributed scientific computing, high-performance computing in molecular sciences, and multimedia applications for parallel and distributed systems. Summing up, the Handbook is indispensable for academics and professionals who are interested in learning the leading experts' view of the topic.
A tutorial approach to using the UML modeling language in system-on-chip design. Based on the DAC 2004 tutorial and applicable for both students and professionals, the book collects the best work from the first UML for SoC workshop, with contributions by top-level international researchers. Its unique combination of UML capabilities and SoC design issues condenses research and development ideas that are otherwise found only across multiple conference proceedings and many other books into one place, and it will be the seminal reference work for this area for years to come.
This volume contains papers presented during the 13th International Conference on Information Systems Development - Advances in Theory, Practice and Education (ISD'2004), held in Vilnius, Lithuania, September 9-11, 2004. The intended audience for this book comprises researchers and practitioners interested in current trends in the Information Systems Development (ISD) field. Papers cover a wide range of topics: ISD methodologies, method engineering, business and IS modelling, web systems engineering, database related issues, information analysis and data mining, quality assessment, costing methods, security issues, impact of organizational environment, and motivation and job satisfaction among IS developers. The selection of papers was carried out by the International Program Committee. All papers were reviewed in advance by three reviewers and evaluated according to their relevance, originality and presentation quality. Papers were evaluated only on their own merits, independent of other submissions. Out of 117 submissions, the Program Committee selected 75 research papers to be presented at the conference. The 39 best papers and 5 papers presented by invited speakers are published in this volume. The 13th International Conference on Information Systems Development continues the tradition started with the first Polish-Scandinavian Seminar on Current Trends in Information Systems Development Methodologies, held in Gdansk, Poland in 1988. Through the years this seminar has evolved into one of the most prestigious conferences in the field. The ISD Conference provides an international forum for the exchange of ideas between the research community and practitioners and offers a venue where ISD-related educational issues are discussed. ISD progresses rapidly, continually creating new challenges for the professionals involved. New concepts and approaches emerge in research as well as in practice.
This book fills the critical need for an in-depth technical reference providing the methods and techniques for building and maintaining confidence in many varieties of system software. The intent is to help develop reliable answers to such critical questions as: 1) Are we building the right software for the need? and 2) Are we building the software right? Software Verification and Validation: An Engineering and Scientific Approach is structured for research scientists and practitioners in industry. The book is also suitable as a secondary textbook for advanced-level students in computer science and engineering.
Component Models and Systems for Grid Applications is the essential reference for the most current research on Grid technologies. This first volume of the CoreGRID series addresses such vital issues as the architecture of the Grid, the way software will influence the development of the Grid, and the practical applications of Grid technologies for individuals and businesses alike. Part I of the book, "Application-Oriented Designs," focuses on development methodology and how it may contribute to a more component-based use of the Grid. "Middleware Architecture," the second part, examines portable Grid engines, hierarchical infrastructures, interoperability, as well as workflow modeling environments. The final part of the book, "Communication Frameworks," looks at dynamic self-adaptation, collective operations, and higher-order components. With Component Models and Systems for Grid Applications, editors Vladimir Getov and Thilo Kielmann offer the computing professional and the computing researcher the most informative, up-to-date, and forward-looking thoughts on the fast-growing field of Grid studies.
The extreme flexibility of reconfigurable architectures and their performance potential have made them a vehicle of choice in a wide range of computing domains, from rapid circuit prototyping to high-performance computing. The increasing availability of transistors on a die has allowed the emergence of reconfigurable architectures with a large number of computing resources and interconnection topologies. To exploit the potential of these reconfigurable architectures, programmers are forced to map their applications, typically written in high-level imperative programming languages such as C or MATLAB, to hardware-oriented languages such as VHDL or Verilog. In this process, they must assume the roles of hardware designer and software programmer and navigate a maze of program transformations, mapping, and synthesis steps to produce efficient reconfigurable computing implementations. The richness and sophistication of any of these application mapping steps make the mapping of computations to these architectures an increasingly daunting process. It is thus widely believed that automatic compilation from high-level programming languages is the key to the success of reconfigurable computing. This book describes a wide range of code transformations and mapping techniques for programs described in high-level programming languages, most notably imperative languages, to reconfigurable architectures.
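As a concrete, hypothetical illustration of the kind of source-level transformation involved (not an example from the book), the C sketch below shows a dot-product loop before and after unrolling by a factor of four; the separate partial sums expose independent multiply-accumulate operations that a synthesis tool could map onto parallel datapath units of a reconfigurable fabric. Function names and the unroll factor are arbitrary.

```c
/* Hypothetical sketch: loop unrolling applied to a dot product.
 * The four partial sums in dot_unrolled() are independent, so a
 * synthesis flow could map them to four parallel multiply-accumulate
 * units; function names and the unroll factor of 4 are arbitrary. */
#include <stdio.h>

#define N 1024   /* chosen divisible by the unroll factor */

/* Original loop: one multiply-accumulate per iteration. */
static float dot(const float a[N], const float b[N])
{
    float sum = 0.0f;
    for (int i = 0; i < N; i++)
        sum += a[i] * b[i];
    return sum;
}

/* Unrolled by 4 with separate partial sums, exposing parallelism. */
static float dot_unrolled(const float a[N], const float b[N])
{
    float s0 = 0.0f, s1 = 0.0f, s2 = 0.0f, s3 = 0.0f;
    for (int i = 0; i < N; i += 4) {
        s0 += a[i]     * b[i];
        s1 += a[i + 1] * b[i + 1];
        s2 += a[i + 2] * b[i + 2];
        s3 += a[i + 3] * b[i + 3];
    }
    return s0 + s1 + s2 + s3;
}

int main(void)
{
    static float a[N], b[N];
    for (int i = 0; i < N; i++) { a[i] = (float)i; b[i] = 1.0f; }
    printf("%.1f %.1f\n", dot(a, b), dot_unrolled(a, b));
    return 0;
}
```

In an actual high-level synthesis flow this transformation would be applied automatically, together with mapping and scheduling steps, rather than by hand as shown here.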
Notes on the Text: We will occasionally footnote a portion of text with a "**" to indicate that this portion can be initially bypassed. The reasons for bypassing a portion of the text include: the subject is a special topic that will not be referenced later, the material can be skipped on first reading, or the level of mathematics is higher than in the rest of the text. In cases where a topic is self-contained, we opt to collect the material into an appendix that can be read by students at their leisure.
Notes on Problems: The material in the text cannot be fully assimilated until one makes it "their own" by applying the material to specific problems. Self-discovery is the best teacher and although they are no substitute for an inquiring mind, problems that explore the subject from different viewpoints can often help the student to think about the material in a uniquely personal way. With this in mind, we have made problems an integral part of this work and have attempted to make them interesting as well as informative.
The aim of CoreGRID is to strengthen and advance scientific and technological excellence in the area of Grid and Peer-to-Peer technologies in order to overcome the current fragmentation and duplication of effort in this area. To achieve this objective, the workshop brought together a critical mass of well-established researchers from a number of institutions which have all constructed an ambitious joint program of activities. Priority in the workshop was given to work conducted in collaboration between partners from different research institutions and to promising research proposals that could foster such collaboration in the future.
A Software Process Model Handbook for Incorporating People's Capabilities offers the most advanced approach to date, empirically validated at software development organizations. This handbook adds a valuable contribution to the much-needed literature on people-related aspects in software engineering. The primary focus is on the particular challenge of extending software process definitions to address people-related considerations more explicitly. The capability concept is neither present nor considered in most software process models. The authors have developed a capabilities-oriented software process model, which has been formalized in UML and implemented as a tool. A Software Process Model Handbook for Incorporating People's Capabilities guides readers through the incorporation of the individual's capabilities into the software process. Structured to meet the needs of research scientists and graduate-level students in computer science and engineering, this book is also suitable for practitioners in industry.
Cellular automata can be viewed both as computational models and as modelling systems for real processes. This volume emphasises the first aspect. In articles written by leading researchers, sophisticated massively parallel algorithms (firing squad, life, Fischer's primes recognition) are treated. Their computational power and the specific complexity classes they determine are surveyed, and some recent results relating to chaos, from a new dynamical systems point of view, are also presented. Audience: This book will be of interest to specialists in theoretical computer science and the parallelism challenge.
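As a small, self-contained illustration of a cellular automaton as a massively parallel computational model (a sketch under simplifying assumptions, not code from the book), the C program below runs a few synchronous steps of Conway's Game of Life on a small toroidal grid: each cell's next state depends only on its eight neighbours, so all cells can in principle be updated in parallel. The grid size and the initial glider pattern are arbitrary choices.

```c
/* Hypothetical sketch: a few synchronous steps of Conway's Game of Life
 * on a small toroidal grid.  Every cell reads only the previous
 * generation, so conceptually all N*N updates could run in parallel. */
#include <stdio.h>
#include <string.h>

#define N 8

static void step(int cur[N][N], int nxt[N][N])
{
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++) {
            int live = 0;
            for (int di = -1; di <= 1; di++)
                for (int dj = -1; dj <= 1; dj++)
                    if (di || dj)
                        live += cur[(i + di + N) % N][(j + dj + N) % N];
            /* Conway's rule: birth on exactly 3 live neighbours,
             * survival on 2 or 3. */
            nxt[i][j] = (live == 3) || (cur[i][j] && live == 2);
        }
}

int main(void)
{
    int a[N][N] = {0}, b[N][N];
    a[1][2] = a[2][3] = a[3][1] = a[3][2] = a[3][3] = 1;   /* a glider */

    for (int gen = 0; gen < 4; gen++) {     /* run four generations */
        step(a, b);
        memcpy(a, b, sizeof a);
    }
    for (int i = 0; i < N; i++, putchar('\n'))
        for (int j = 0; j < N; j++)
            putchar(a[i][j] ? '#' : '.');
    return 0;
}
```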
Effective compilers allow for a more efficient execution of application programs for a given computer architecture, while well-conceived architectural features can support more effective compiler optimization techniques. A well thought-out strategy of trade-offs between compilers and computer architectures is the key to the successful designing of highly efficient and effective computer systems. From embedded micro-controllers to large-scale multiprocessor systems, it is important to understand the interaction between compilers and computer architectures. The goal of the Annual Workshop on Interaction between Compilers and Computer Architectures (INTERACT) is to promote new ideas and to present recent developments in compiler techniques and computer architectures that enhance each other's capabilities and performance. Interaction Between Compilers and Computer Architectures is an updated and revised volume consisting of seven papers originally presented at the Fifth Workshop on Interaction between Compilers and Computer Architectures (INTERACT-5), which was held in conjunction with the IEEE HPCA-7 in Monterrey, Mexico in 2001. This volume explores recent developments and ideas for better integration of the interaction between compilers and computer architectures in designing modern processors and computer systems. Interaction Between Compilers and Computer Architectures is suitable as a secondary text for a graduate level course, and as a reference for researchers and practitioners in industry.
Despite its increasing importance, the verification and validation of the human-machine interface is perhaps the most overlooked aspect of system development. Although much has been written about the design and development process, very little organized information is available on how to verify and validate highly complex and highly coupled dynamic systems. Inability to evaluate such systems adequately may become the limiting factor in our ability to employ systems that our technology and knowledge allow us to design. This volume, based on a NATO Advanced Science Institute held in 1992, is designed to provide guidance for the verification and validation of all highly complex and coupled systems. Air traffic control is used as an example to ensure that the theory is described in terms that will allow its implementation, but the results can be applied to all complex and coupled systems. The volume presents the knowledge and theory in a format that will allow readers from a wide variety of backgrounds to apply it to the systems for which they are responsible. The emphasis is on domains where significant advances have been made in the methods of identifying potential problems and in new testing methods and tools. Also emphasized are techniques to identify the assumptions on which a system is built and to spot their weaknesses.
A practical introduction to the development of proofs and certified programs using Coq. An invaluable tool for researchers, students, and engineers interested in formal methods and the development of zero-fault software.
This book presents the most recent concerns and research results in industrial fault diagnosis using intelligent techniques. It focuses on computational intelligence applications to fault diagnosis with real-world applications used in different chapters to validate the different diagnosis methods. The book includes one chapter dealing with a novel coherent fault diagnosis distributed methodology for complex systems.
Reliability and Risk Issues in Large Scale Safety-critical Digital Control Systems provides a comprehensive coverage of reliability issues and their corresponding countermeasures in the field of large-scale digital control systems, from the hardware and software in digital systems to the human operators who supervise the overall process of large-scale systems. Unlike other books which examine theories and issues in individual fields, this book reviews important problems and countermeasures across the fields of software reliability, software verification and validation, digital systems, human factors engineering and human reliability analysis. Divided into four sections dealing with software reliability, digital system reliability, human reliability and human operators in large-scale digital systems, the book offers insights from professional researchers in each specialized field in a diverse yet unified approach.
This Handbook is about methods, tools and examples of how to architect an enterprise through considering all life cycle aspects of Enterprise Entities (such as individual enterprises, enterprise networks, virtual enterprises, projects and other complex systems including a mixture of automated and human processes). The book is based on ISO15704:2000, or the GERAM Framework (Generalised Enterprise Reference Architecture and Methodology), which generalises the requirements of Enterprise Reference Architectures. Various Architecture Frameworks (PERA, CIMOSA, Grai-GIM, Zachman, C4ISR/DoDAF) are shown in light of GERAM to allow a deeper understanding of their contributions and therefore their correct and knowledgeable use. The handbook addresses a wide variety of audiences, and covers methods and tools necessary to design or redesign enterprises, as well as to structure the implementation into manageable projects.
Memory Architecture Exploration for Programmable Embedded Systems addresses efficient exploration of alternative memory architectures, assisted by a "compiler-in-the-loop" that allows effective matching of the target application to the processor-memory architecture. This new approach for memory architecture exploration replaces the traditional black-box view of the memory system and allows for aggressive co-optimization of the programmable processor together with a customized memory system.
Recent accidents in a range of industries have increased concern over the design, development, management and control of safety-critical systems. Attention has now focused upon the role of human error both in the development and in the operation of complex processes. This volume contains 20 original and significant contributions addressing these critical questions. The papers were presented at the 7th IFIP Working Group 13.5 Working Conference on Human Error, Safety and Systems Development, which was held in August 2004 in conjunction with the 18th IFIP World Computer Congress in Toulouse, France, and sponsored by the International Federation for Information Processing (IFIP).
New concepts and technologies are being introduced continuously for application development in the World-Wide Web. Selecting the right implementation strategies and tools when building a Web application has become a tedious task, requiring in-depth knowledge and significant experience from both software developers and software managers. The mission of this book is to guide the reader through the opaque jungle of Web technologies. Based on their long industrial and academic experience, Stefan Jablonski and his coauthors provide a framework architecture for Web applications which helps choose the best strategy for a given project. The authors classify common technologies and standards like .NET, CORBA, J2EE, DCOM, WSDL and many more with respect to platform, architectural layer, and application package, and guide the reader through a three-phase development process consisting of preparation, design, and technology selection steps. The whole approach is exemplified using a real-world case: the architectural design of an order-entry management system.
This book is the latest contribution to the Chip Design Languages series and consists of selected papers presented at the Forum on Specifications and Design Languages (FDL'06) in September 2006. The book represents the state of the art in research and practice, and it identifies new research directions. It highlights the role of specification and modelling languages, and presents practical experiences with them.