This book synthesizes the results of the seventh in a successful series of workshops established by Shanghai Jiao Tong University and Technische Universität Berlin, bringing together researchers from both universities to present research results to an international community. Aspects covered here include, among others, models and specification; simulation of different properties; middleware for distributed real-time systems; signal analysis; control methods; and applications in airborne and medical systems.
The 2008 TUB-SJTU joint workshop on Autonomous Systems Self-Organization, Management, and Control was held on October 6, 2008 at Shanghai Jiao Tong University, Shanghai, China. The workshop, sponsored by Shanghai Jiao Tong University and the Technical University of Berlin, brought together scientists and researchers from both universities to present and discuss the latest progress on autonomous systems and their applications in diverse areas. Autonomous systems are designed to integrate machines, computing, sensing, and software to create intelligent systems capable of interacting with the complexities of the real world; they represent the physical embodiment of machine intelligence. Topics of interest include, but are not limited to, theory and modeling for autonomous systems; organization of autonomous systems; learning and perception; complex systems; multi-agent systems; robotics and control; and applications of autonomous systems.
A tutorial approach to using the UML modeling language in system-on-chip design. Based on the DAC 2004 tutorial and applicable for both students and professionals, the book collects contributions by top-level international researchers, including the best work from the first UML for SoC workshop. It offers a unique combination of UML capabilities and SoC design issues, condensing research and development ideas otherwise found only across multiple conference proceedings and many other books into one place, and is intended to serve as the seminal reference work for this area for years to come.
This book fills the critical need for an in-depth technical reference providing the methods and techniques for building and maintaining confidence in many varieties of system software. The intent is to help develop reliable answers to such critical questions as: 1) Are we building the right software for the need? and 2) Are we building the software right? Software Verification and Validation: An Engineering and Scientific Approach is structured for research scientists and practitioners in industry. The book is also suitable as a secondary textbook for advanced-level students in computer science and engineering.
A practical introduction to the development of proofs and certified programs using Coq. An invaluable tool for researchers, students, and engineers interested in formal methods and the development of zero-fault software.
Component Models and Systems for Grid Applications is the essential reference for the most current research on Grid technologies. This first volume of the CoreGRID series addresses such vital issues as the architecture of the Grid, the way software will influence the development of the Grid, and the practical applications of Grid technologies for individuals and businesses alike. Part I of the book, "Application-Oriented Designs," focuses on development methodology and how it may contribute to a more component-based use of the Grid. "Middleware Architecture," the second part, examines portable Grid engines, hierarchical infrastructures, interoperability, as well as workflow modeling environments. The final part of the book, "Communication Frameworks," looks at dynamic self-adaptation, collective operations, and higher-order components. With Component Models and Systems for Grid Applications, editors Vladimir Getov and Thilo Kielmann offer the computing professional and the computing researcher the most informative, up-to-date, and forward-looking thoughts on the fast-growing field of Grid studies.
The extreme flexibility of reconfigurable architectures and their performance potential have made them a vehicle of choice in a wide range of computing domains, from rapid circuit prototyping to high-performance computing. The increasing availability of transistors on a die has allowed the emergence of reconfigurable architectures with a large number of computing resources and interconnection topologies. To exploit the potential of these reconfigurable architectures, programmers are forced to map their applications, typically written in high-level imperative programming languages such as C or MATLAB, to hardware-oriented languages such as VHDL or Verilog. In this process, they must assume the role of hardware designers and software programmers and navigate a maze of program transformations, mapping, and synthesis steps to produce efficient reconfigurable computing implementations. The richness and sophistication of any of these application mapping steps make the mapping of computations to these architectures an increasingly daunting process. It is thus widely believed that automatic compilation from high-level programming languages is the key to the success of reconfigurable computing. This book describes a wide range of code transformations and mapping techniques for programs described in high-level programming languages, most notably imperative languages, to reconfigurable architectures.
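As a hedged illustration of the kind of code transformation such compilers apply (this sketch is generic and not taken from the book; the function names are hypothetical), the C fragment below shows loop unrolling, a classic rewrite that exposes independent operations a reconfigurable target could map onto parallel multiply-accumulate units:

#include <stddef.h>

/* Original form: one accumulation per iteration. */
float dot_product(const float *a, const float *b, size_t n) {
    float acc = 0.0f;
    for (size_t i = 0; i < n; ++i)
        acc += a[i] * b[i];
    return acc;
}

/* Unrolled-by-four form: four independent partial sums that can be
 * computed by parallel hardware operators. For brevity the sketch
 * assumes n is a multiple of 4. */
float dot_product_unrolled(const float *a, const float *b, size_t n) {
    float s0 = 0.0f, s1 = 0.0f, s2 = 0.0f, s3 = 0.0f;
    for (size_t i = 0; i < n; i += 4) {
        s0 += a[i]     * b[i];
        s1 += a[i + 1] * b[i + 1];
        s2 += a[i + 2] * b[i + 2];
        s3 += a[i + 3] * b[i + 3];
    }
    return s0 + s1 + s2 + s3;
}

An automatic flow would perform such transformations itself and then synthesize the resulting datapath, which is the burden the book aims to lift from the programmer.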
Notes on the Text: We will occasionally footnote a portion of text with a "**" to indicate that this portion can be initially bypassed. The reasons for bypassing a portion of the text include: the subject is a special topic that will not be referenced later, the material can be skipped on first reading, or the level of mathematics is higher than the rest of the text. In cases where a topic is self-contained, we opt to collect the material into an appendix that can be read by students at their leisure. Notes on Problems: The material in the text cannot be fully assimilated until one makes it "their own" by applying the material to specific problems. Self-discovery is the best teacher and, although they are no substitute for an inquiring mind, problems that explore the subject from different viewpoints can often help the student to think about the material in a uniquely personal way. With this in mind, we have made problems an integral part of this work and have attempted to make them interesting as well as informative.
This volume comprises the edited proceedings of the second CoreGRID Integration Workshop, CGIW'2006, held October 2006 in Krakow, Poland. A "Network of Excellence" funded by the European Commission's Sixth Framework Program, CoreGRID aims to strengthen and advance scientific and technological excellence in the area of Grid and Peer-to-Peer technologies by bringing together a critical mass of well-established researchers from 41 European research institutions. Designed for a professional audience of industry practitioners and researchers, the volume is also suitable for advanced-level students in computer science.
The aim of CoreGRID is to strengthen and advance scientific and technological excellence in the area of Grid and Peer-to-Peer technologies in order to overcome the current fragmentation and duplication of effort in this area. To achieve this objective, the workshop brought together a critical mass of well-established researchers from a number of institutions which have all constructed an ambitious joint program of activities. Priority in the workshop was given to work conducted in collaboration between partners from different research institutions and to promising research proposals that could foster such collaboration in the future.
A Software Process Model Handbook for Incorporating People's Capabilities offers the most advanced approach to date, empirically validated at software development organizations. This handbook adds a valuable contribution to the much-needed literature on people-related aspects in software engineering. The primary focus is on the particular challenge of extending software process definitions to address people-related considerations more explicitly. The capability concept is neither present nor considered in most software process models. The authors have developed a capabilities-oriented software process model, which has been formalized in UML and implemented as a tool. The handbook guides readers through the incorporation of the individual's capabilities into the software process. Structured to meet the needs of research scientists and graduate-level students in computer science and engineering, this book is also suitable for practitioners in industry.
Cellular automata can be viewed both as computational models and as modelling systems for real processes. This volume emphasises the first aspect. In articles written by leading researchers, sophisticated massively parallel algorithms (firing squad synchronization, Life, Fischer's prime recognition) are treated. Their computational power and the specific complexity classes they determine are surveyed, and some recent results relating them to chaos from a new dynamical-systems point of view are also presented. Audience: This book will be of interest to specialists in theoretical computer science and in the challenge of parallelism.
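To make the notion of a locally defined, massively parallel update rule concrete, here is a minimal C sketch (an illustrative aside, not material from the volume) that computes one synchronous step of the Life automaton mentioned above on a small toroidal grid:

#include <stdio.h>

#define W 8
#define H 8

/* One synchronous Game of Life step on a toroidal W x H grid:
 * every cell is updated from the same previous generation, which is
 * what makes the rule inherently parallel. */
static void life_step(int cur[H][W], int next[H][W]) {
    for (int y = 0; y < H; ++y) {
        for (int x = 0; x < W; ++x) {
            int alive = 0;
            for (int dy = -1; dy <= 1; ++dy)
                for (int dx = -1; dx <= 1; ++dx)
                    if (dx || dy)
                        alive += cur[(y + dy + H) % H][(x + dx + W) % W];
            /* Survival with 2 or 3 live neighbours, birth with exactly 3. */
            next[y][x] = (alive == 3) || (alive == 2 && cur[y][x]);
        }
    }
}

int main(void) {
    int a[H][W] = {0}, b[H][W];
    a[1][2] = a[2][3] = a[3][1] = a[3][2] = a[3][3] = 1;  /* a glider */
    life_step(a, b);
    for (int y = 0; y < H; ++y, putchar('\n'))
        for (int x = 0; x < W; ++x)
            putchar(b[y][x] ? '#' : '.');
    return 0;
}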
This book is intended to serve as a textbook for a second course in the implementation (i.e. microarchitecture) of computer architectures. The subject matter covered is the collection of techniques that are used to achieve the highest performance in single-processor machines; these techniques center on the exploitation of low-level parallelism (temporal and spatial) in the processing of machine instructions. The target audience consists of students in the final year of an undergraduate program or in the first year of a postgraduate program in computer science, computer engineering, or electrical engineering; professional computer designers will also find the book useful as an introduction to the topics covered. Typically, the author has used the material presented here as the basis of a full-semester undergraduate course or a half-semester postgraduate course, with the other half of the latter devoted to multiple-processor machines. The background assumed of the reader is a good first course in computer architecture and implementation - to the level in, say, Computer Organization and Design by D. Patterson and J. Hennessy - and familiarity with digital-logic design. The book consists of eight chapters. The first chapter is an introduction to all of the main ideas that the following chapters cover in detail: the topics covered are the main forms of pipelining used in high-performance uniprocessors, a taxonomy of the space of pipelined processors, and performance issues. It is also intended that this chapter should be readable as a brief "stand-alone" survey.
Based on both theoretical investigations and industrial experience, this book provides an extensive approach to support the planning and optimization process for modern communication networks. The book contains a thorough survey and a detailed comparison of state-of-the-art numerical algorithms in the matrix-geometric field.
Parallel and distributed computation has been gaining a great deal of attention in recent decades. During this period, the advances attained in computing and communication technologies, and the reduction in the costs of those technologies, played a central role in the rapid growth of interest in the use of parallel and distributed computation in a number of areas of engineering and the sciences. Many actual applications have been successfully implemented on various platforms, ranging from pure shared-memory to totally distributed models, passing through hybrid approaches such as distributed shared-memory architectures. Parallel and distributed computation differs from classical sequential computation in some of the following major aspects: the number of processing units, an independent local clock for each unit, the number of memory units, and the programming model. To represent this diversity, and depending on the level at which the problem is viewed, researchers have proposed models that abstract the main characteristics or parameters (physical components or logical mechanisms) of parallel computers. The problem of establishing a suitable model is to find a reasonable trade-off among simplicity, expressive power, and universality, so as to be able to study and analyze the behavior of parallel applications more precisely.
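Purely as a hedged illustration of the shared-memory end of that spectrum (this example is not drawn from the book; the array size and thread count are arbitrary), the C sketch below sums an array with two POSIX threads, each worker operating on its own slice of the shared data:

#include <pthread.h>
#include <stdio.h>

#define N 1000000
#define THREADS 2

static double data[N];

struct chunk { int begin, end; double sum; };

/* Each worker sums its own slice of the shared array. */
static void *partial_sum(void *arg) {
    struct chunk *c = arg;
    c->sum = 0.0;
    for (int i = c->begin; i < c->end; ++i)
        c->sum += data[i];
    return NULL;
}

int main(void) {
    for (int i = 0; i < N; ++i)
        data[i] = 1.0;

    pthread_t tid[THREADS];
    struct chunk work[THREADS];
    for (int t = 0; t < THREADS; ++t) {
        work[t].begin = t * (N / THREADS);
        work[t].end   = (t + 1) * (N / THREADS);
        pthread_create(&tid[t], NULL, partial_sum, &work[t]);
    }

    double total = 0.0;
    for (int t = 0; t < THREADS; ++t) {
        pthread_join(tid[t], NULL);
        total += work[t].sum;
    }
    printf("total = %f\n", total);  /* expected: 1000000.0 */
    return 0;
}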
Effective compilers allow for a more efficient execution of application programs for a given computer architecture, while well-conceived architectural features can support more effective compiler optimization techniques. A well thought-out strategy of trade-offs between compilers and computer architectures is the key to the successful designing of highly efficient and effective computer systems. From embedded micro-controllers to large-scale multiprocessor systems, it is important to understand the interaction between compilers and computer architectures. The goal of the Annual Workshop on Interaction between Compilers and Computer Architectures (INTERACT) is to promote new ideas and to present recent developments in compiler techniques and computer architectures that enhance each other's capabilities and performance. Interaction Between Compilers and Computer Architectures is an updated and revised volume consisting of seven papers originally presented at the Fifth Workshop on Interaction between Compilers and Computer Architectures (INTERACT-5), which was held in conjunction with the IEEE HPCA-7 in Monterrey, Mexico in 2001. This volume explores recent developments and ideas for better integration of the interaction between compilers and computer architectures in designing modern processors and computer systems. Interaction Between Compilers and Computer Architectures is suitable as a secondary text for a graduate level course, and as a reference for researchers and practitioners in industry.
Reliability and Risk Issues in Large Scale Safety-critical Digital Control Systems provides a comprehensive coverage of reliability issues and their corresponding countermeasures in the field of large-scale digital control systems, from the hardware and software in digital systems to the human operators who supervise the overall process of large-scale systems. Unlike other books which examine theories and issues in individual fields, this book reviews important problems and countermeasures across the fields of software reliability, software verification and validation, digital systems, human factors engineering and human reliability analysis. Divided into four sections dealing with software reliability, digital system reliability, human reliability and human operators in large-scale digital systems, the book offers insights from professional researchers in each specialized field in a diverse yet unified approach.
Memory Architecture Exploration for Programmable Embedded Systems addresses efficient exploration of alternative memory architectures, assisted by a "compiler-in-the-loop" that allows effective matching of the target application to the processor-memory architecture. This new approach for memory architecture exploration replaces the traditional black-box view of the memory system and allows for aggressive co-optimization of the programmable processor together with a customized memory system.
Recent accidents in a range of industries have increased concern over the design, development, management and control of safety-critical systems. Attention has now focused upon the role of human error both in the development and in the operation of complex processes. This volume contains 20 original and significant contributions addressing these critical questions. The papers were presented at the 7th IFIP Working Group 13.5 Working Conference on Human Error, Safety and Systems Development, which was held in August 2004 in conjunction with the 18th IFIP World Computer Congress in Toulouse, France, and sponsored by the International Federation for Information Processing (IFIP).
This Handbook is about methods, tools and examples of how to architect an enterprise through considering all life cycle aspects of Enterprise Entities (such as individual enterprises, enterprise networks, virtual enterprises, projects and other complex systems including a mixture of automated and human processes). The book is based on ISO 15704:2000, the GERAM Framework (Generalised Enterprise Reference Architecture and Methodology), which generalises the requirements of Enterprise Reference Architectures. Various Architecture Frameworks (PERA, CIMOSA, GRAI-GIM, Zachman, C4ISR/DoDAF) are shown in the light of GERAM to allow a deeper understanding of their contributions and therefore their correct and knowledgeable use. The handbook addresses a wide variety of audiences and covers the methods and tools necessary to design or redesign enterprises, as well as to structure the implementation into manageable projects.
New concepts and technologies are being introduced continuously for application development in the World-Wide Web. Selecting the right implementation strategies and tools when building a Web application has become a tedious task, requiring in-depth knowledge and significant experience from both software developers and software managers. The mission of this book is to guide the reader through the opaque jungle of Web technologies. Based on their long industrial and academic experience, Stefan Jablonski and his coauthors provide a framework architecture for Web applications which helps choose the best strategy for a given project. The authors classify common technologies and standards like .NET, CORBA, J2EE, DCOM, WSDL and many more with respect to platform, architectural layer, and application package, and guide the reader through a three-phase development process consisting of preparation, design, and technology selection steps. The whole approach is exemplified using a real-world case: the architectural design of an order-entry management system.
This book is the latest contribution to the Chip Design Languages series and it consists of selected papers presented at the Forum on Specifications and Design Languages (FDL'06), in September 2006. The book represents the state-of-the-art in research and practice, and it identifies new research directions. It highlights the role of specification and modelling languages, and presents practical experiences with specification and modelling languages.
This book explores the optimization potential of cross-layer design approaches for wireless ad hoc and sensor network performance, covering both theory and practice. A theoretical section provides an overview of design issues in both strictly layered and cross-layer approaches. A practical section builds on these issues to explore three case studies of diverse ad hoc and sensor network applications and communication technologies.
This year, the IFIP Working Conference on Distributed and Parallel Embedded Systems (DIPES 2008) is held as part of the IFIP World Computer Congress in Milan on September 7-10, 2008. The embedded systems world has a great deal of experience with parallel and distributed computing. Many embedded computing systems require the high performance that can be delivered by parallel computing, and parallel and distributed computing are often the only ways to deliver adequate real-time performance at low power levels. This year's conference attracted 30 submissions, of which 21 were accepted. Prof. Jörg Henkel of the University of Karlsruhe graciously contributed a keynote address on embedded computing and reliability. We would like to thank all of the program committee members for their diligence. Wayne Wolf, Bernd Kleinjohann, and Lisa Kleinjohann. Acknowledgements: We would like to thank all people involved in the organization of the IFIP World Computer Congress 2008, especially the IPC Co-Chairs Judith Bishop and Ivo De Lotto, the Organization Chair Giulio Occhini, as well as the Publications Chair John Impagliazzo. Further thanks go to the authors for their valuable contributions to DIPES 2008. Last but not least, we would like to acknowledge the considerable amount of work and enthusiasm spent by our colleague Claudius Stern in preparing the proceedings of DIPES 2008. He made it possible to produce them in their current professional and homogeneous style.
The present textbook contains the records of a two-semester course on queueing theory, including an introduction to matrix-analytic methods. This course comprises four hours of lectures and two hours of exercises per week and has been taught at the University of Trier, Germany, for about ten years in sequence. The course is directed to last-year undergraduate and first-year graduate students of applied probability and computer science, who have already completed an introduction to probability theory. Its purpose is to present material that is close enough to concrete queueing models and their applications, while providing a sound mathematical foundation for the analysis of these. Thus the goal of the present book is two-fold. On the one hand, students who are mainly interested in applications easily feel bored by elaborate mathematical questions in the theory of stochastic processes. The presentation of the mathematical foundations in our courses is chosen to cover only the necessary results, which are needed for a solid foundation of the methods of queueing analysis. Further, students oriented towards applications expect to have a justification for their mathematical efforts in terms of immediate use in queueing analysis. This is the main reason why we have decided to introduce new mathematical concepts only when they will be used in the immediate sequel. On the other hand, students of applied probability do not want any heuristic derivations just for the sake of yielding fast results for the model at hand.
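As a small worked illustration of the kind of concrete queueing result such a course builds up to (a standard textbook fact, not a quotation from this book): for the M/M/1 queue with Poisson arrival rate \lambda, exponential service rate \mu and utilization \rho = \lambda/\mu < 1, the stationary mean number of customers in the system and the mean sojourn time are

L = \frac{\rho}{1-\rho}, \qquad W = \frac{L}{\lambda} = \frac{1}{\mu - \lambda}.

For example, \lambda = 8 and \mu = 10 customers per hour give \rho = 0.8, L = 4 customers and W = 0.5 hours. Matrix-analytic methods extend this kind of analysis to far more general arrival and service processes.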