Software is continuously increasing in complexity. Paradigmatic shifts and new development frameworks make it easier to implement software - but not to test it. Software testing remains a topic with many open questions, with regard both to low-level technical aspects and to the organizational embedding of testing. However, a desired level of software quality cannot be achieved either by choosing a technical procedure or by optimizing testing processes alone; in fact, it requires a holistic approach. This Brief summarizes the current knowledge of software testing and introduces three current research approaches. The body of knowledge is presented comprehensively in scope but concisely in length, so the volume can be used as a reference. Research is highlighted from different points of view. Firstly, progress on developing a tool for automated test case generation (TCG) based on a program's structure is introduced. Secondly, results from a project with industry partners on testing best practices are highlighted. Thirdly, embedding testing into e-assessment of programming exercises is described.
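As a toy sketch of the idea behind structure-based test case generation (a hypothetical illustration of the general technique, not the TCG tool described in the book): search an input domain for values that cover each branch of a predicate.

```python
# Toy sketch of structure-based test case generation (hypothetical
# example, not the book's tool): brute-force search of an input domain
# for values that cover each branch outcome of a predicate.

def classify(x: int) -> str:
    if x % 2 == 0:        # the branch whose two outcomes we want covered
        return "even"
    return "odd"

def generate_branch_tests(domain):
    """Return one input per outcome of classify()'s branch predicate."""
    tests = {}
    for x in domain:
        outcome = (x % 2 == 0)     # evaluate the branch condition
        if outcome not in tests:   # first input reaching this outcome
            tests[outcome] = x
        if len(tests) == 2:        # both branches covered: stop searching
            break
    return tests

print(generate_branch_tests(range(10)))   # {True: 0, False: 1}
```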
Event-Triggered and Time-Triggered Control Paradigms presents a valuable survey of existing architectures for safety-critical applications and discusses the issues that must be considered when moving from a federated to an integrated architecture. The book focuses on one key topic - the amalgamation of the event-triggered and the time-triggered control paradigm into a coherent integrated architecture. The architecture provides for the integration of independent distributed application subsystems by introducing multi-criticality nodes and virtual networks of known temporal properties. The feasibility and the tangible advantages of this new architecture are demonstrated with practical examples taken from the automotive industry. Event-Triggered and Time-Triggered Control Paradigms offers significant insights into the architecture and design of integrated embedded systems, both at the conceptual and at the practical level.
This book is intended for students and practitioners who have had a calculus-based statistics course and who have an interest in safety considerations such as reliability, strength, and duration-of-load or service life. Many persons studying statistical science will be employed professionally where the problems encountered are obscure, what should be analyzed is not clear, the appropriate assumptions are equivocal, and data are scant. Accordingly, for many of the data sets in this book it is deliberately not disclosed what type of investigation should be made or what assumptions are to be used.
This edited book serves as a companion volume to the Seventh INFORMS Telecommunications Conference held in Boca Raton, Florida, March 7-10, 2004. The 18 papers in this book were carefully selected after a thorough review process. The research presented within these articles focuses on the latest methodological developments in three key areas - pricing of telecommunications services, network design, and resource allocation - that are most relevant to current telecommunications planning. With the global deregulation of the telecommunications industry, effective pricing and revenue management, as well as an understanding of competitive pressures, are key factors that will improve revenue in telecommunications companies. Chapters 1-5 address these topics by focusing on pricing of telecommunications services. They present some novel ideas related to pricing (including auction-based pricing of network bandwidth) and modeling competition in the industry. The successful telecommunications companies of the future will likely be the ones that can minimize their costs while meeting customer expectations. In this context the optimal design/provisioning of telecommunication networks plays an important role. Chapters 6-12 address these topics by focusing on network design for a wide range of technologies including SONET, SDH, WDM, and MPLS. They include the latest research developments related to the modeling and solving of network design problems. Day-to-day management/control of telecommunications networks is dependent upon the optimal allocation of resources. Chapters 13-18 provide insightful solutions to several intriguing resource allocation problems.
From Model-Driven Design to Resource Management for Distributed Embedded Systems presents 16 original contributions and 12 invited papers presented at the Working Conference on Distributed and Parallel Embedded Systems - DIPES 2006, sponsored by the International Federation for Information Processing - IFIP. Coverage includes model-driven design, testing and evolution of embedded systems, timing analysis and predictability, scheduling, allocation, communication and resource management in distributed real-time systems.
Reservation procedures constitute the core of many popular data transmission protocols. They consist of two steps: a request phase in which a station reserves the communication channel, and a transmission phase in which the actual data transmission takes place. Such procedures are often applied in communication networks that are characterised by a shared communication channel with large round-trip times. In this book, we propose queueing models for situations that require a reservation procedure and validate their applicability in the context of cable networks. We offer various mathematical models to better understand the performance of these reservation procedures. The book covers four key performance models and modifications to them: contention trees, the repairman model, the bulk service queue, and tandem queues. The relevance of this book is not limited to reservation procedures and cable networks, and performance analysts from a variety of areas may benefit, as all models have found application in other fields as well.
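To make the flavor of these models concrete, here is a minimal slotted simulation of a bulk service queue, one of the four models named above. The arrival rate and batch size are illustrative assumptions, not parameters taken from the book.

```python
import math
import random

# Minimal slotted sketch of a bulk service queue: up to `batch`
# reservation requests are served per slot. Parameter values are
# illustrative assumptions, not values from the book.

def poisson(rng, lam):
    """Sample a Poisson variate via Knuth's method (fine for small lam)."""
    L = math.exp(-lam)
    k, p = 0, 1.0
    while p > L:
        k += 1
        p *= rng.random()
    return k - 1

def simulate_bulk_queue(lam=1.5, batch=2, slots=200_000, seed=1):
    """Mean queue length when up to `batch` requests are served per slot."""
    rng = random.Random(seed)
    queue, area = 0, 0
    for _ in range(slots):
        queue += poisson(rng, lam)       # new reservation requests arrive
        queue = max(0, queue - batch)    # the channel serves one batch
        area += queue
    return area / slots

# Stable since the arrival rate (1.5/slot) is below the capacity (2/slot).
print(f"mean queue length ~ {simulate_bulk_queue():.2f}")
```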
Lean Manufacturing has proved to be one of the most successful and most powerful production business systems over the last decades. Its application enabled many companies to make a big leap towards better utilization of resources and thus provide better service to the customers through faster response, higher quality and lowered costs. Lean is often described as an "eyes for flow and eyes for muda" philosophy. It simply means that value is created only when all the resources flow through the system. If the flow is stopped, no value is added, but only costs and time, which is muda (Japanese for waste). Since the philosophy was born at Toyota, many solutions were tailored for the high-volume environment. But in a turbulent, fast-changing market environment and with progressing globalization, customers tend to require more customization, lower volumes and higher variety at much lower cost and better quality. This calls for adaptation of existing lean techniques and exploration of new waste-free solutions that go far beyond manufacturing. This book brings together the opinions of a number of leading academics and researchers from around the world responding to those emerging needs. They try to answer the question of how to move forward from the "Spaghetti World" of supply, production, distribution, sales, administration, product development, logistics, accounting, etc. Through the individual chapters in this book, the authors present their views, approaches, concepts and developed tools. The reader will learn the key issues currently being addressed in production management research and practice throughout the world.
This book constitutes the proceedings of the 4th International Workshop on Traffic Monitoring and Analysis, TMA 2012, held in Vienna, Austria, in March 2012. The thoroughly refereed 10 full papers and 8 short papers presented in this volume were carefully reviewed and selected from 31 submissions. The contributions are organized in topical sections on traffic analysis and characterization: new results and improved measurement techniques; measurement for QoS, security and service level agreements; and tools for network measurement and experimentation.
Input/Output in Parallel and Distributed Computer Systems has attracted increasing attention over the last few years, as it has become apparent that input/output performance, rather than CPU performance, may be the key limiting factor in the performance of future systems. This I/O bottleneck is caused by the increasing speed mismatch between processing units and storage devices, the use of multiple processors operating simultaneously in parallel and distributed systems, and by the increasing I/O demands of new classes of applications, like multimedia. It is also important to note that, to varying degrees, the I/O bottleneck exists at multiple levels of the memory hierarchy. All indications are that the I/O bottleneck will be with us for some time to come, and is likely to increase in importance. Input/Output in Parallel and Distributed Computer Systems is based on papers presented at the 1994 and 1995 IOPADS workshops held in conjunction with the International Parallel Processing Symposium. This book is divided into three parts. Part I, the Introduction, contains four invited chapters which provide a tutorial survey of I/O issues in parallel and distributed systems. The chapters in Parts II and III contain selected research papers from the 1994 and 1995 IOPADS workshops; many of these papers have been substantially revised and updated for inclusion in this volume. Part II collects the papers from both years which deal with various aspects of system software, and Part III addresses architectural issues. Input/Output in Parallel and Distributed Computer Systems is suitable as a secondary text for graduate level courses in computer architecture, software engineering, and multimedia systems, and as a reference for researchers and practitioners in industry.
In this volume, authors from academia and practice provide practitioners, scientists and graduate students with a good overview of basic methods and paradigms, as well as important issues and trends across the broad spectrum of parallel and distributed processing. In particular, the book covers fundamental topics such as efficient parallel algorithms, languages for parallel processing, parallel operating systems, architecture of parallel and distributed systems, management of resources, tools for parallel computing, parallel database systems and multimedia object servers, and networking aspects of distributed and parallel computing. Three chapters are dedicated to applications: parallel and distributed scientific computing, high-performance computing in molecular sciences, and multimedia applications for parallel and distributed systems. Summing up, the Handbook is indispensable for academics and professionals who are interested in learning the leading experts' view of the topic.
Service computing is a cutting-edge area, popular in both industry and academia. New challenges have been introduced to develop service-oriented systems with high assurance requirements. High Assurance Services Computing captures and makes accessible the most recent practical developments in service-oriented high-assurance systems. An edited volume contributed by well-established researchers in this field worldwide, this book reports the best current practices and emerging methods in the areas of service-oriented techniques for high assurance systems. Available results from industry and government, R&D laboratories and academia are included, along with unreported results from the "hands-on" experiences of software professionals in the respective domains. Designed for practitioners and researchers working for industrial organizations and government agencies, High Assurance Services Computing is also suitable for advanced-level students in computer science and engineering.
This book offers a tutorial approach to using the UML modeling language in system-on-chip design. Based on the DAC 2004 tutorial, it is applicable for students and professionals alike, with contributions by top-level international researchers and the best work from the first UML for SoC workshop. Its unique combination of UML capabilities and SoC design issues condenses research and development ideas that are otherwise found only across multiple conference proceedings and many other books into one place, and it is intended to be the seminal reference work for this area for years to come.
This book fills the critical need for an in-depth technical reference providing the methods and techniques for building and maintaining confidence in many varieties of system software. The intent is to help develop reliable answers to such critical questions as: 1) Are we building the right software for the need? and 2) Are we building the software right? Software Verification and Validation: An Engineering and Scientific Approach is structured for research scientists and practitioners in industry. The book is also suitable as a secondary textbook for advanced-level students in computer science and engineering.
The extreme flexibility of reconfigurable architectures and their performance potential have made them a vehicle of choice in a wide range of computing domains, from rapid circuit prototyping to high-performance computing. The increasing availability of transistors on a die has allowed the emergence of reconfigurable architectures with a large number of computing resources and interconnection topologies. To exploit the potential of these reconfigurable architectures, programmers are forced to map their applications, typically written in high-level imperative programming languages, such as C or MATLAB, to hardware-oriented languages such as VHDL or Verilog. In this process, they must assume the role of hardware designers and software programmers and navigate a maze of program transformations, mapping, and synthesis steps to produce efficient reconfigurable computing implementations. The richness and sophistication of any of these application mapping steps make the mapping of computations to these architectures an increasingly daunting process. It is thus widely believed that automatic compilation from high-level programming languages is the key to the success of reconfigurable computing. This book describes a wide range of code transformations and mapping techniques for programs described in high-level programming languages, most notably imperative languages, to reconfigurable architectures.
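As a toy example of the kind of code transformation involved (illustrative only, not drawn from the book): loop unrolling exposes independent copies of a loop body that a synthesis tool could map to parallel hardware units.

```python
# Toy source-to-source transformation example (not taken from the
# book): unrolling a dot-product loop by 4 exposes four independent
# multiply-accumulates per iteration that an FPGA backend could
# schedule onto four parallel multipliers.

def dot(a, b):
    acc = 0
    for i in range(len(a)):
        acc += a[i] * b[i]
    return acc

def dot_unrolled4(a, b):
    # Four independent accumulators: no cross-iteration dependence
    # between them, so the body maps naturally to parallel hardware.
    acc0 = acc1 = acc2 = acc3 = 0
    n = len(a) - len(a) % 4
    for i in range(0, n, 4):
        acc0 += a[i] * b[i]
        acc1 += a[i + 1] * b[i + 1]
        acc2 += a[i + 2] * b[i + 2]
        acc3 += a[i + 3] * b[i + 3]
    for i in range(n, len(a)):           # remainder loop
        acc0 += a[i] * b[i]
    return acc0 + acc1 + acc2 + acc3

assert dot([1, 2, 3, 4, 5], [5, 4, 3, 2, 1]) == \
       dot_unrolled4([1, 2, 3, 4, 5], [5, 4, 3, 2, 1])
```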
Notes on the Text: We will occasionally footnote a portion of text with a "*" to indicate that this portion can be initially bypassed. The reasons for bypassing a portion of the text include: the subject is a special topic that will not be referenced later, the material can be skipped on first reading, or the level of mathematics is higher than the rest of the text. In cases where a topic is self-contained, we opt to collect the material into an appendix that can be read by students at their leisure. Notes on Problems: The material in the text cannot be fully assimilated until one makes it "their own" by applying the material to specific problems. Self-discovery is the best teacher, and although no substitute for an inquiring mind, problems that explore the subject from different viewpoints can often help the student to think about the material in a uniquely personal way. With this in mind, we have made problems an integral part of this work and have attempted to make them interesting as well as informative.
This volume comprises the edited proceedings of the second CoreGRID Integration Workshop, CGIW'2006, held October 2006 in Krakow, Poland. A "Network of Excellence" funded by the European Commission's Sixth Framework Program, CoreGRID aims to strengthen and advance scientific and technological excellence in the area of Grid and Peer-to-Peer technologies by bringing together a critical mass of well-established researchers from 41 European research institutions. Designed for a professional audience of industry practitioners and researchers, the volume is also suitable for advanced-level students in computer science.
The aim of CoreGRID is to strengthen and advance scientific and technological excellence in the area of Grid and Peer-to-Peer technologies in order to overcome the current fragmentation and duplication of effort in this area. To achieve this objective, the workshop brought together a critical mass of well-established researchers from a number of institutions, who have together constructed an ambitious joint program of activities. Priority in the workshop was given to work conducted in collaboration between partners from different research institutions and to promising research proposals that could foster such collaboration in the future.
A Software Process Model Handbook for Incorporating People's Capabilities offers the most advanced approach to date, empirically validated at software development organizations. This handbook adds a valuable contribution to the much-needed literature on people-related aspects in software engineering. The primary focus is on the particular challenge of extending software process definitions to more explicitly address people-related considerations. The capability concept is not present in, nor has it been considered by, most software process models. The authors have developed a capabilities-oriented software process model, which has been formalized in UML and implemented as a tool. A Software Process Model Handbook for Incorporating People's Capabilities guides readers through the incorporation of the individual's capabilities into the software process. Structured to meet the needs of research scientists and graduate-level students in computer science and engineering, this book is also suitable for practitioners in industry.
Effective compilers allow for a more efficient execution of application programs for a given computer architecture, while well-conceived architectural features can support more effective compiler optimization techniques. A well thought-out strategy of trade-offs between compilers and computer architectures is the key to the successful designing of highly efficient and effective computer systems. From embedded micro-controllers to large-scale multiprocessor systems, it is important to understand the interaction between compilers and computer architectures. The goal of the Annual Workshop on Interaction between Compilers and Computer Architectures (INTERACT) is to promote new ideas and to present recent developments in compiler techniques and computer architectures that enhance each other's capabilities and performance. Interaction Between Compilers and Computer Architectures is an updated and revised volume consisting of seven papers originally presented at the Fifth Workshop on Interaction between Compilers and Computer Architectures (INTERACT-5), which was held in conjunction with the IEEE HPCA-7 in Monterrey, Mexico in 2001. This volume explores recent developments and ideas for better integration of the interaction between compilers and computer architectures in designing modern processors and computer systems. Interaction Between Compilers and Computer Architectures is suitable as a secondary text for a graduate level course, and as a reference for researchers and practitioners in industry.
This book is intended to serve as a textbook for a second course in the implementation (i.e. microarchitecture) of computer architectures. The subject matter covered is the collection of techniques that are used to achieve the highest performance in single-processor machines; these techniques center on the exploitation of low-level parallelism (temporal and spatial) in the processing of machine instructions. The target audience consists of students in the final year of an undergraduate program or in the first year of a postgraduate program in computer science, computer engineering, or electrical engineering; professional computer designers will also find the book useful as an introduction to the topics covered. Typically, the author has used the material presented here as the basis of a full-semester undergraduate course or a half-semester postgraduate course, with the other half of the latter devoted to multiple-processor machines. The background assumed of the reader is a good first course in computer architecture and implementation - to the level in, say, Computer Organization and Design, by D. Patterson and J. Hennessy - and familiarity with digital-logic design. The book consists of eight chapters. The first chapter is an introduction to all of the main ideas that the following chapters cover in detail; the topics covered are the main forms of pipelining used in high-performance uniprocessors, a taxonomy of the space of pipelined processors, and performance issues. It is also intended that this chapter should be readable as a brief "stand-alone" survey.
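For a feel of the performance issues involved, here is the standard back-of-the-envelope pipeline model (textbook-generic, not taken from this book): a k-stage pipeline completes n instructions in k + n - 1 cycles rather than the n * k cycles an unpipelined datapath would need.

```python
# Textbook-generic pipeline arithmetic (not specific to this book):
# a k-stage pipeline finishes n instructions in k + n - 1 cycles,
# versus n * k cycles for the same datapath without pipelining.

def pipeline_speedup(k: int, n: int) -> float:
    return (n * k) / (k + n - 1)

for n in (1, 10, 1000):
    print(f"k=5 stages, n={n:>4} instructions: speedup = "
          f"{pipeline_speedup(5, n):.2f}")
# The speedup approaches k (here 5) as n grows; a lone instruction
# gains nothing, since it must still traverse all five stages.
```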
Based on both theoretical investigations and industrial experience, this book provides an extensive approach to support the planning and optimization process for modern communication networks. The book contains a thorough survey and a detailed comparison of state-of-the-art numerical algorithms in the matrix-geometric field.
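For orientation, the central relation in the matrix-geometric field is standard theory (due to Neuts; stated here generically, not as a result of this book): the stationary level vectors of a quasi-birth-death process obey a geometric recursion.

```latex
% Standard matrix-geometric relation for a discrete-time QBD process
% (textbook-generic, not specific to this book): the stationary
% probability vectors of successive levels satisfy
\pi_{n+1} = \pi_n R, \qquad n \ge 1,
% where R is the minimal nonnegative solution of
R = A_0 + R A_1 + R^2 A_2,
% with A_0, A_1, A_2 the level-up, local, and level-down blocks of the
% transition matrix. Numerical algorithms in this field typically
% center on computing R efficiently.
```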
Model-based testing is the most powerful technique for testing hardware and software systems. Models in Hardware Testing describes the use of models at all the levels of hardware testing. The relevant fault models for nanoscaled CMOS technology are introduced, and their implications for fault simulation, automatic test pattern generation, fault diagnosis, memory testing and power-aware testing are discussed. Models and the corresponding algorithms are considered with respect to the most recent state of the art, and they are put into a historical context by a concluding chapter on the use of physical fault models in fault tolerance.
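As a toy illustration of fault simulation under the classic single stuck-at model (used here purely for illustration; the book's nanoscale CMOS fault models go well beyond this): compare a fault-free circuit against a copy whose internal line is stuck at 0 and collect the input patterns that expose the difference.

```python
from itertools import product

# Toy single-stuck-at fault simulation (illustrative sketch, not the
# book's treatment). Compare a fault-free circuit against a copy
# whose internal line w1 is stuck at 0, and collect the test patterns
# that detect the fault.

def circuit(a, b, c, stuck_w1=None):
    """y = (a AND b) OR c, with an optional stuck-at fault on w1."""
    w1 = a & b
    if stuck_w1 is not None:
        w1 = stuck_w1                     # inject the stuck-at fault
    return w1 | c

detecting = [(a, b, c)
             for a, b, c in product((0, 1), repeat=3)
             if circuit(a, b, c) != circuit(a, b, c, stuck_w1=0)]
print(detecting)   # [(1, 1, 0)] -- the only pattern exposing w1 stuck-at-0
```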
This book is the latest contribution to the Chip Design Languages series and it consists of selected papers presented at the Forum on Specifications and Design Languages (FDL'06), in September 2006. The book represents the state-of-the-art in research and practice, and it identifies new research directions. It highlights the role of specification and modelling languages, and presents practical experiences with specification and modelling languages.
This Handbook is about methods, tools and examples of how to architect an enterprise through considering all life cycle aspects of Enterprise Entities (such as individual enterprises, enterprise networks, virtual enterprises, projects and other complex systems including a mixture of automated and human processes). The book is based on ISO 15704:2000, the GERAM Framework (Generalised Enterprise Reference Architecture and Methodology), which generalises the requirements of Enterprise Reference Architectures. Various Architecture Frameworks (PERA, CIMOSA, GRAI-GIM, Zachman, C4ISR/DoDAF) are shown in light of GERAM to allow a deeper understanding of their contributions and therefore their correct and knowledgeable use. The handbook addresses a wide variety of audiences, and covers methods and tools necessary to design or redesign enterprises, as well as to structure the implementation into manageable projects.
The present textbook contains the records of a two-semester course on queueing theory, including an introduction to matrix-analytic methods. The course comprises four hours of lectures and two hours of exercises per week and has been taught at the University of Trier, Germany, for about ten years in sequence. The course is directed to last-year undergraduate and first-year graduate students of applied probability and computer science who have already completed an introduction to probability theory. Its purpose is to present material that is close enough to concrete queueing models and their applications, while providing a sound mathematical foundation for the analysis of these models. Thus the goal of the present book is two-fold. On the one hand, students who are mainly interested in applications easily feel bored by elaborate mathematical questions in the theory of stochastic processes. The presentation of the mathematical foundations in our courses is chosen to cover only the necessary results, which are needed for a solid foundation of the methods of queueing analysis. Further, students oriented towards applications expect to have a justification for their mathematical efforts in terms of immediate use in queueing analysis. This is the main reason why we have decided to introduce new mathematical concepts only when they will be used in the immediate sequel. On the other hand, students of applied probability do not want any heuristic derivations just for the sake of yielding fast results for the model at hand.