Overview and Goals: Data arriving in time order (a data stream) arises in fields ranging from physics to finance to medicine to music, just to name a few. Often the data comes from sensors (in physics and medicine, for example) whose data rates continue to improve dramatically as sensor technology improves. Further, the number of sensors is increasing, so correlating data between sensors becomes ever more critical in order to distill knowledge from the data. On-line response is desirable in many applications (e.g., to aim a telescope at a burst of activity in a galaxy or to perform magnetic-resonance-based real-time surgery). These factors (data size, bursts, correlation, and fast response) motivate this book. Our goal is to help you design fast, scalable algorithms for the analysis of single or multiple time series. Not only will you find useful techniques and systems built from simple primitives, but creative readers will find many other applications of these primitives and may see how to create new ones of their own. Our goal, then, is to help research mathematicians and computer scientists find new algorithms and to help working scientists and financial mathematicians design better, faster software.
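To make the idea of simple streaming primitives concrete, here is a minimal sketch of our own (not code from the book): an incremental Pearson correlation over a sliding window, so that two sensor streams can be correlated with constant work per arriving sample. The class name, window length, and synthetic signals are illustrative assumptions.

```python
# Illustrative sketch: sliding-window Pearson correlation of two streams,
# updated in O(1) per new pair of samples (not taken from the book).
from collections import deque
import math
import random


class SlidingCorrelation:
    def __init__(self, window):
        self.window = window
        self.xs, self.ys = deque(), deque()
        self.sx = self.sy = self.sxx = self.syy = self.sxy = 0.0

    def update(self, x, y):
        # Add the new pair's contributions to the running sums.
        self.xs.append(x); self.ys.append(y)
        self.sx += x; self.sy += y
        self.sxx += x * x; self.syy += y * y; self.sxy += x * y
        # Evict the oldest pair once the window is full.
        if len(self.xs) > self.window:
            ox, oy = self.xs.popleft(), self.ys.popleft()
            self.sx -= ox; self.sy -= oy
            self.sxx -= ox * ox; self.syy -= oy * oy; self.sxy -= ox * oy
        n = len(self.xs)
        if n < 2:
            return None
        cov = self.sxy - self.sx * self.sy / n
        vx = self.sxx - self.sx * self.sx / n
        vy = self.syy - self.sy * self.sy / n
        if vx <= 0 or vy <= 0:
            return None
        return cov / math.sqrt(vx * vy)


# Example: two noisy sensors observing the same underlying signal.
random.seed(0)
corr = SlidingCorrelation(window=50)
for t in range(200):
    signal = math.sin(t / 10.0)
    r = corr.update(signal + random.gauss(0, 0.1), signal + random.gauss(0, 0.1))
print(f"correlation over the last 50 samples: {r:.3f}")
```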
This volume contains the complete proceedings of a NATO Advanced Study Institute on various aspects of the reliability of electronic and other systems. The aim of the Institute was to bring together specialists in this subject. An important outcome of this Conference, as many of the delegates have pointed out to me, was complementing theoretical concepts and practical applications in both software and hardware. The reader will find papers on the mathematical background, on reliability problems in establishments where system failure may be hazardous, on reliability assessment in mechanical systems, and also on life cycle cost models and spares allocation. The proceedings contain the texts of all the lectures delivered and also verbatim accounts of panel discussions on subjects chosen from a wide range of important issues. In this introduction I will give a short account of each contribution, stressing what I feel are the most interesting topics introduced by a lecturer or a panel member. To visualise better the extent and structure of the Institute, I present a tree-like diagram showing the subjects which my co-directors and I would have wished to include in our deliberations (Figures 1 and 2). The names of our lecturers appear underlined under suitable headings. It can be seen that we have managed to cover most of the issues which seemed important to us. [Figure: tree diagram of system effectiveness, with branches for performance, safety, reliability, maintenance, and logistic support.]
The book provides an in-depth understanding of the fundamentals of superconducting electronics and the practical considerations for the fabrication of superconducting electronic structures. Additionally, it covers in detail the opportunities afforded by superconductivity for uniquely sensitive electronic devices and illustrates how these devices (in some cases employing high-temperature, ceramic superconductors) can be applied in analog and digital signal processing, laboratory instruments, biomagnetism, geophysics, nondestructive evaluation and radioastronomy. Improvements in cryocooler technology for application to cryoelectronics are also covered. This is the first book in several years to treat the fundamentals and applications of superconducting electronics in a comprehensive manner, and it is the very first book to consider the implications of high-temperature, ceramic superconductors for superconducting electronic devices. Not only does this new class of superconductors create new opportunities, but recently impressive milestones have been reached in superconducting analog and digital signal processing which promise to lead to a new generation of sensing, processing and computational systems. The 15 chapters are authored by acknowledged leaders in the fundamental science and in the applications of this increasingly active field, and many of the authors provide a timely assessment of the potential for devices and applications based upon ceramic-oxide superconductors or hybrid structures incorporating these new superconductors with other materials. The book takes the reader from a basic discussion of applicable (BCS and Ginzburg-Landau) theories and tunneling phenomena, through the structure and characteristics of Josephson devices and circuits, to applications that utilize the world's most sensitive magnetometer, most sensitive microwave detector, and fastest arithmetic logic unit.
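For orientation (standard textbook physics, not text reproduced from this book), the Josephson devices mentioned above are governed by two well-known relations linking the junction current and voltage to the superconducting phase difference across the junction:

```latex
% DC and AC Josephson relations (standard results; I_c is the critical
% current, \varphi the phase difference, V the junction voltage).
\[
  I = I_c \sin\varphi,
  \qquad
  \frac{d\varphi}{dt} = \frac{2eV}{\hbar}
\]
```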
The main objective of this workshop was to review and discuss the state of the art and the latest advances in the area of 1-10 Gbit/s throughput for local and metropolitan area networks. The first generation of local area networks had throughputs in the range 1-20 Mbit/s. Well-known examples of this first generation are the Ethernet and the Token Ring. The second generation of networks allowed throughputs in the range 100-200 Mbit/s. Representatives of this generation are the FDDI double ring and the DQDB (IEEE 802.6) networks. The third generation of networks will have throughputs in the range 1-10 Gbit/s. The rapid development and deployment of fiber optics worldwide, as well as the projected emergence of a market for broadband services, have given rise to the development of broadband ISDN standards. Currently, the Asynchronous Transfer Mode (ATM) appears to be a viable solution for broadband networks. The possibility of all-optical networks in the future is being examined. This would allow the tapping of the approximately 50 terahertz available in the lightwave range of the frequency spectrum. It is envisaged that using such a high-speed network it will be feasible to distribute high-quality video to the home, to carry out rapid retrieval of radiological and other scientific images, and to enable multi-media conferencing between various parties.
The first ESPRIT Basic Research Project on Predictably Dependable Computing Systems (No. 3092, PDCS) commenced in May 1989, and ran until March 1992. The institutions and principal investigators that were involved in PDCS were: City University, London, UK (Bev Littlewood), IEI del CNR, Pisa, Italy (Lorenzo Strigini), Universität Karlsruhe, Germany (Tom Beth), LAAS-CNRS, Toulouse, France (Jean-Claude Laprie), University of Newcastle upon Tyne, UK (Brian Randell), LRI-CNRS/Université Paris-Sud, France (Marie-Claude Gaudel), Technische Universität Wien, Austria (Hermann Kopetz), and University of York, UK (John McDermid). The work continued after March 1992, and a three-year successor project (No. 6362, PDCS2) officially started in August 1992, with a slightly changed membership: Chalmers University of Technology, Göteborg, Sweden (Erland Jonsson), City University, London, UK (Bev Littlewood), CNR, Pisa, Italy (Lorenzo Strigini), LAAS-CNRS, Toulouse, France (Jean-Claude Laprie), Université Catholique de Louvain, Belgium (Pierre-Jacques Courtois), University of Newcastle upon Tyne, UK (Brian Randell), LRI-CNRS/Université Paris-Sud, France (Marie-Claude Gaudel), Technische Universität Wien, Austria (Hermann Kopetz), and University of York, UK (John McDermid). The summary objective of both projects has been "to contribute to making the process of designing and constructing dependable computing systems much more predictable and cost-effective." In the case of PDCS2, the concentration has been on the problems of producing dependable distributed real-time systems, and especially those where the dependability requirements centre on issues of safety and/or security.
This book constitutes the refereed proceedings of the 15th International Conference on Principles of Distributed Systems, OPODIS 2011, held in Toulouse, France, in December 2011. The 26 revised papers presented in this volume were carefully reviewed and selected from 96 submissions. They represent the current state of the art in the design, analysis and development of distributed and real-time systems.
Helps in the development of large software projects. Uses a well-known open-source software prototype system (Vesta, developed at the Digital and Compaq Systems Research Lab).
This book constitutes the refereed proceedings of the 14th International Conference on Model Driven Engineering Languages and Systems, MODELS 2011, held in Wellington, New Zealand, in October 2011. The papers address a wide range of topics in research (foundations track) and practice (applications track). For the first time, a new category of research papers, vision papers, is included, presenting "outside the box" thinking. The foundations track received 167 full paper submissions, of which 34 were selected for presentation; 3 of these were vision papers. The applications track received 27 submissions, of which 13 papers were selected for presentation. The papers are organized in topical sections on model transformation, model complexity, aspect oriented modeling, analysis and comprehension of models, domain specific modeling, models for embedded systems, model synchronization, model based resource management, analysis of class diagrams, verification and validation, refactoring models, modeling visions, logics and modeling, development methods, and model integration and collaboration.
This series in Computers and Medicine had its origins when I met Jerry Stone of Springer-Verlag at a SCAMC meeting in 1982. We determined that there was a need for good collections of papers that would help disseminate the results of research and application in this field. I had already decided to do what is now Information Systems for Patient Care, and Jerry contributed the idea of making it part of a series. In 1984 the first book was published, and, thanks to Jerry's efforts, Computers and Medicine was underway. Since that time, there have been many changes. Sadly, Jerry died at a very early age and cannot share in the success of the series that he helped found. On the bright side, however, many of the early goals of the series have been met. As the result of equipment improvements and the consequent lowering of costs, computers are being used in a growing number of medical applications, and the health care community is very computer literate. Thus, the focus of concern has turned from learning about the technology to understanding how that technology can be exploited in a medical environment.
Protection of enterprise networks from malicious intrusions is critical to the economy and security of our nation. This article gives an overview of the techniques and challenges of security risk analysis for enterprise networks. A standard model for security analysis will enable us to answer questions such as "are we more secure than yesterday?" or "how does the security of one network configuration compare with another?". In this article, we present a methodology for quantitative security risk analysis that is based on the model of attack graphs and the Common Vulnerability Scoring System (CVSS). Our techniques analyze all attack paths through a network that an attacker could follow to reach certain goals.
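As a rough illustration of the attack-graph idea (a sketch under our own simplifying assumptions, not the article's actual model), the snippet below enumerates the simple attack paths through a small host graph and scores each path by mapping CVSS base scores to exploit success probabilities; the network, the scores, and the score-to-probability mapping are all hypothetical.

```python
# Illustrative sketch only: attack-path enumeration over a host digraph whose
# edges carry the CVSS base score of the exploited vulnerability.
def attack_paths(graph, start, goal, path=None):
    """Enumerate simple attack paths from start to goal by depth-first search."""
    path = (path or []) + [start]
    if start == goal:
        yield path
        return
    for nxt in graph.get(start, {}):
        if nxt not in path:  # avoid revisiting hosts (no cycles)
            yield from attack_paths(graph, nxt, goal, path)


def path_risk(graph, path):
    """Probability that every exploit along the path succeeds (independence assumed)."""
    p = 1.0
    for a, b in zip(path, path[1:]):
        p *= graph[a][b] / 10.0  # crude mapping: CVSS base score -> [0, 1]
    return p


# Hypothetical network: edge weights are CVSS base scores.
network = {
    "internet": {"web_server": 7.5},
    "web_server": {"app_server": 8.1, "db_server": 5.3},
    "app_server": {"db_server": 9.8},
}

paths = list(attack_paths(network, "internet", "db_server"))
worst = max(paths, key=lambda p: path_risk(network, p))
print("attack paths found:", len(paths))
print("highest-risk path:", " -> ".join(worst), f"(p = {path_risk(network, worst):.2f})")
```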
Human error plays a significant role in many accidents involving safety-critical systems, and it is now a standard requirement in both the US and Europe for Human Factors (HF) to be taken into account in system design and safety assessment. This book will be an essential guide for anyone who uses HF in their everyday work, providing them with consistent and ready-to-use procedures and methods that can be applied to real-life problems. The first part of the book looks at the theoretical framework, methods and techniques that the engineer or safety analyst needs to use when working on a HF-related project. The second part presents four case studies that show the reader how the above framework and guidelines work in practice. The case studies are based on real-life projects carried out by the author for a major European railway system, and in collaboration with international companies such as the International Civil Aviation Organisation, Volvo, Daimler-Chrysler and FIAT.
This book constitutes the proceedings of the International Conference on Information Security and Assurance, held in Brno, Czech Republic in August 2011.
As computer systems evolve, the volume of data to be processed increases significantly, either as a consequence of the expanding amount of available information, or due to the possibility of performing highly complex operations that were not feasible in the past. Nevertheless, tasks that depend on the manipulation of large amounts of information are still performed at large computational cost, i.e., either the processing time will be large, or they will require intensive use of computer resources. In this scenario, the efficient use of available computational resources is paramount, and creates a demand for systems that can optimize the use of resources in relation to the amount of data to be processed. This problem becomes increasingly critical when the volume of information to be processed is variable, i.e., when there is a seasonal variation of demand. Such demand variations are caused by a variety of factors, such as an unanticipated burst of client requests, a time-critical simulation, or high volumes of simultaneous video uploads, e.g. as a consequence of a public contest. In these cases, there are moments when the demand is very low (resources are almost idle) while, conversely, at other moments the processing demand exceeds the capacity of the resources. Moreover, from an economic perspective, seasonal demands do not justify a massive investment in infrastructure just to provide enough computing power for peak situations. In this light, the ability to build adaptive systems, capable of using on-demand resources provided by Cloud Computing infrastructures, is very attractive.
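A minimal sketch of the kind of adaptivity described above, assuming a simple threshold-based policy (our own illustration, not the system developed in the book): instances are acquired when observed utilization crosses an upper threshold and released when it falls below a lower one. The thresholds, per-instance capacity, and demand trace are made up for the example.

```python
# Illustrative threshold-based autoscaling policy (hypothetical parameters).
def scale(instances, demand, capacity_per_instance=100,
          upper=0.8, lower=0.3, min_instances=1):
    """Return the new instance count for the observed demand (requests/s)."""
    utilization = demand / (instances * capacity_per_instance)
    if utilization > upper:                          # saturated: add capacity
        instances += 1
    elif utilization < lower and instances > min_instances:
        instances -= 1                               # mostly idle: release capacity
    return instances


# Simulate a seasonal demand curve: a quiet baseline with a burst of requests.
demand_trace = [40, 55, 60, 180, 400, 650, 700, 500, 220, 90, 50, 45]
instances = 1
for demand in demand_trace:
    instances = scale(instances, demand)
    print(f"demand={demand:4d} req/s -> {instances} instance(s)")
```

Even this crude policy exposes the trade-off the book addresses: scaling one instance at a time lags behind a sharp burst, which is why more elaborate, workload-aware strategies are attractive.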
Supervisory Control Theory (SCT) provides a tool to model and control human-engineered complex systems, such as computer networks, the World Wide Web, identification and spread of malicious executables, and command, control, communication, and information systems. Although there are some excellent monographs and books on SCT for the control and diagnosis of discrete-event systems, there is a need for a research monograph that provides a coherent quantitative treatment of SCT for decision and control of complex systems. This new monograph will assimilate many new concepts that have recently been reported, or are in the process of being reported, in the open literature. The major objectives here are to present (a) a quantitative approach, supported by a formal theory, for discrete-event decision and control of human-engineered complex systems; and (b) a set of applications to emerging technological areas such as control of software systems, malicious executables, and complex engineering systems. The monograph will provide the necessary background material in automata theory and languages for supervisory control. It will introduce a new paradigm of language measure to quantitatively compare the performance of different automata models of a physical system. A novel feature of this approach is to generate discrete-event robust optimal decision and control algorithms for both military and commercial systems.
This book serves both as an introduction to computer architecture and as a guide to using a hardware description language (HDL) to design, model and simulate real digital systems. The book starts with an introduction to Verilog - the HDL chosen for the book since it is widely used in industry and straightforward to learn. Next, the instruction set architecture (ISA) for the simple VeSPA (Very Small Processor Architecture) processor is defined - this is a real working device that has been built and tested at the University of Minnesota by the authors. The VeSPA ISA is used throughout the remainder of the book to demonstrate how behavioural and structural models can be developed and intermingled in Verilog. Although Verilog is used throughout, the lessons learned will be equally applicable to other HDLs. Written for senior and graduate students, this book is also an ideal introduction to Verilog for practising engineers.
Input/Output in Parallel and Distributed Computer Systems has attracted increasing attention over the last few years, as it has become apparent that input/output performance, rather than CPU performance, may be the key limiting factor in the performance of future systems. This I/O bottleneck is caused by the increasing speed mismatch between processing units and storage devices, the use of multiple processors operating simultaneously in parallel and distributed systems, and by the increasing I/O demands of new classes of applications, like multimedia. It is also important to note that, to varying degrees, the I/O bottleneck exists at multiple levels of the memory hierarchy. All indications are that the I/O bottleneck will be with us for some time to come, and is likely to increase in importance. Input/Output in Parallel and Distributed Computer Systems is based on papers presented at the 1994 and 1995 IOPADS workshops held in conjunction with the International Parallel Processing Symposium. This book is divided into three parts. Part I, the Introduction, contains four invited chapters which provide a tutorial survey of I/O issues in parallel and distributed systems. The chapters in Parts II and III contain selected research papers from the 1994 and 1995 IOPADS workshops; many of these papers have been substantially revised and updated for inclusion in this volume. Part II collects the papers from both years which deal with various aspects of system software, and Part III addresses architectural issues. Input/Output in Parallel and Distributed Computer Systems is suitable as a secondary text for graduate level courses in computer architecture, software engineering, and multimedia systems, and as a reference for researchers and practitioners in industry.
This report describes the partially completed correctness proof of the Viper 'block model'. Viper [7,8,9,11,23] is a microprocessor designed by W. J. Cullyer, C. Pygott and J. Kershaw at the Royal Signals and Radar Establishment in Malvern, England (henceforth 'RSRE'), for use in safety-critical applications such as civil aviation and nuclear power plant control. It is currently finding uses in areas such as the deployment of weapons from tactical aircraft. To support safety-critical applications, Viper has a particularly simple design about which it is relatively easy to reason using current techniques and models. The designers, who deserve much credit for the promotion of formal methods, intended from the start that Viper be formally verified. Their idea was to model Viper in a sequence of decreasingly abstract levels, each of which concentrated on some aspect of the design, such as the flow of control, the processing of instructions, and so on. That is, each model would be a specification of the next (less abstract) model, and an implementation of the previous model (if any). The verification effort would then be simplified by being structured according to the sequence of abstraction levels. These models (or levels) of description were characterized by the design team. The first two levels, and part of the third, were written by them in a logical language amenable to reasoning and proof.
Holger Scherl introduces the reader to the reconstruction problem in computed tomography and its major scientific challenges, which range from computational efficiency to the fulfillment of Tuy's sufficiency condition. The assessed hardware architectures include multi- and many-core systems, the Cell Broadband Engine architecture, graphics processing units, and field-programmable gate arrays.
This two-volume set LNCS 6771 and 6772 constitutes the refereed proceedings of the Symposium on Human Interface 2011, held in Orlando, FL, USA in July 2011 in the framework of the 14th International Conference on Human-Computer Interaction, HCII 2011, with 10 other thematically similar conferences. The 137 revised papers presented in the two volumes were carefully reviewed and selected from numerous submissions. The papers accepted for presentation thoroughly cover the thematic area of human interface and the management of information. The 75 papers of this first volume address the following major topics: design and development methods and tools; information and user interfaces design; visualisation techniques and applications; security and privacy; touch and gesture interfaces; adaptation and personalisation; and measuring and recognising human behavior.
This two-volume set LNCS 6771 and 6772 constitutes the refereed proceedings of the Symposium on Human Interface 2011, held in Orlando, FL, USA in July 2011 in the framework of the 14th International Conference on Human-Computer Interaction, HCII 2011, with 10 other thematically similar conferences. The 137 revised papers presented in the two volumes were carefully reviewed and selected from numerous submissions. The papers accepted for presentation thoroughly cover the thematic area of human interface and the management of information. The 62 papers of this second volume address the following major topics: access to information; supporting communication; supporting work, collaboration, decision-making and business; mobile and ubiquitous information; and information in aviation.
Focus on issues and principles in context awareness, sensor processing and software design (rather than sensor networks or HCI or particular commercial systems). Designed as a textbook, with readings and lab problems in most chapters. Focus on concepts, algorithms and ideas rather than particular technologies.
This book constitutes the refereed proceedings of the 11th International Conference on Next Generation Teletraffic and Wired/Wireless Advanced Networking, NEW2AN 2011, and the 4th Conference on Smart Spaces, ruSMART 2011, jointly held in St. Petersburg, Russia, in August 2011.
This book constitutes the refereed proceedings of the 8th International Workshop on Energy Minimization Methods in Computer Vision and Pattern Recognition, EMMCVPR 2011, held in St. Petersburg, Russia, in July 2011. The book presents 30 revised full papers selected from a total of 52 submissions. The book is divided into sections on discrete and continuous optimization, segmentation, motion and video, and learning and shape analysis.
This book constitutes the refereed proceedings of the 11th IFIP WG 6.1 International Conference on Distributed Applications and Interoperable Systems, DAIS 2011, held in Reykjavik, Iceland, in June 2011 as one of the DisCoTec 2011 events.
An up-to-date and comprehensive overview of information and database systems design and implementation. The book provides an accessible presentation and explanation of technical architecture for systems complying with TOGAF standards, the accepted international framework. Covering nearly the full spectrum of architectural concerns, the authors also illustrate and concretize the notion of traceability from business goals and strategy through to technical architecture, providing the reader with a holistic and commanding view. The work has two mutually supportive foci: first, information technology technical architecture, the in-depth, illustrative and contemporary treatment of which comprises the core and majority of the book; and second, a strategic and business context.