Motivation: It is now possible to build powerful single-processor and multiprocessor systems and use them efficiently for data processing, which has seen an explosive expansion in many areas of computer science and engineering. One approach to meeting the performance requirements of the applications has been to utilize the most powerful single-processor system that is available. When such a system does not provide the performance requirements, pipelined and parallel processing structures can be employed. The concept of parallel processing is a departure from sequential processing. In sequential computation one processor is involved and performs one operation at a time. On the other hand, in parallel computation several processors cooperate to solve a problem, which reduces computing time because several operations can be carried out simultaneously. Using several processors that work together on a given computation illustrates a new paradigm in computer problem solving which is completely different from sequential processing. From the practical point of view, this provides sufficient justification to investigate the concept of parallel processing and related issues, such as parallel algorithms. Parallel processing involves utilizing several factors, such as parallel architectures, parallel algorithms, parallel programming languages and performance analysis, which are strongly interrelated. In general, four steps are involved in performing a computational problem in parallel. The first step is to understand the nature of computations in the specific application domain.
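A minimal sketch of that sequential-versus-parallel contrast, not taken from the book (the worker count and chunking are illustrative choices), in Python:

```python
# Sequential summation: one worker, one operation at a time.
# Parallel summation: several workers each sum a chunk simultaneously,
# then the partial results are combined.
from concurrent.futures import ProcessPoolExecutor

def sequential_sum(data):
    total = 0
    for x in data:          # one operation at a time
        total += x
    return total

def parallel_sum(data, workers=4):
    chunk = (len(data) + workers - 1) // workers
    parts = [data[i:i + chunk] for i in range(0, len(data), chunk)]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        # Each chunk is summed in its own process; we combine the parts.
        return sum(pool.map(sum, parts))

if __name__ == "__main__":
    data = list(range(1_000_000))
    assert sequential_sum(data) == parallel_sum(data)
```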
The Marktoberdorf Summer School 1995 'Logic of Computation' was the 16th in a series of Advanced Study Institutes under the sponsorship of the NATO Scientific Affairs Division held in Marktoberdorf. Its scientific goal was to survey recent progress on the impact of logical methods in software development. The courses dealt with many different aspects of this interplay, where major progress has been made. Of particular importance were the following. * The proofs-as-programs paradigm, which makes it possible to extract verified programs directly from proofs. Here a higher order logic or type theoretic setup of the underlying language has developed into a standard. * Extensions of logic programming, e.g. by allowing more general formulas and/or higher order languages. * Proof theoretic methods, which provide tools to deal with questions of feasibility of computations and also to develop a general mathematical understanding of complexity questions. * Rewrite systems and unification, again in a higher order context. Closely related is the now well-established Gröbner basis theory, which recently has found interesting applications. * Category theoretic and more generally algebraic methods and techniques to analyze the semantics of programming languages. All these issues were covered by a team of leading researchers. Their courses were grouped under the following headings.
This volume contains papers presented at the NATO sponsored Advanced Research Workshop on "Software for Parallel Computation" held at the University of Calabria, Cosenza, Italy, from June 22 to June 26, 1992. The purpose of the workshop was to evaluate the current state of the art of software for parallel computation, identify the main factors inhibiting practical applications of parallel computers and suggest possible remedies. In particular it focused on parallel software, programming tools, and practical experience of using parallel computers for solving demanding problems. Critical issues relative to the practical use of parallel computing included: portability, reusability and debugging, parallelization of sequential programs, construction of parallel algorithms, and performance of parallel programs and systems. In addition to NATO, the principal sponsor, the following organizations provided generous support for the workshop: CERFACS, France, C.I.R.A., Italy, C.N.R., Italy, University of Calabria, Italy, ALENIA, Italy, The Boeing Company, U.S.A., CISE, Italy, ENEL - D.S.R., Italy, Alliant Computer Systems, Bull HN Sud, Italy, Convex Computer, Digital Equipment Corporation, Hewlett-Packard, Meiko Scientific, U.K., PARSYTEC Computer, Germany, TELMAT Informatique, France, Thinking Machines Corporation.
An up-to-date and comprehensive overview of information and database systems design and implementation. The book provides an accessible presentation and explanation of technical architecture for systems complying with TOGAF standards, the accepted international framework. Covering nearly the full spectrum of architectural concern, the authors also illustrate and concretize the notion of traceability from business goals, strategy through to technical architecture, providing the reader with a holistic and commanding view. The work has two mutually supportive foci. First, information technology technical architecture, the in-depth, illustrative and contemporary treatment of which comprises the core and majority of the book; and secondly, a strategic and business context.
Computer Systems and Software Engineering is a compilation of sixteen state-of-the-art lectures and keynote speeches given at the COMPEURO '92 conference. The contributions are from leading researchers, each of whom gives a new insight into subjects ranging from hardware design through parallelism to computer applications. The pragmatic flavour of the contributions makes the book a valuable asset for both researchers and designers alike. The book covers the following subjects: Hardware Design: memory technology, logic design, algorithms and architecture; Parallel Processing: programming, cellular neural networks and load balancing; Software Engineering: machine learning, logic programming and program correctness; Visualization: the graphical computer interface.
This book constitutes the thoroughly refereed post-conference proceedings of the First International Workshop on Energy Efficient Data Centers (E2DC 2012) held in Madrid, Spain, in May 2012. The 13 revised full papers presented were carefully selected from 32 submissions. The papers cover topics from information and communication technologies of green data centers to business models and GreenSLA solutions. The first section presents contributions in form of position and short papers, related to various European projects. The other two sections comprise papers with more in-depth technical details. The topics covered include energy-efficient data center management and service delivery as well as energy monitoring and optimization techniques for data centers.
The State of Memory Technology: Over the past decade there has been rapid growth in the speed of microprocessors. CPU speeds are approximately doubling every eighteen months, while main memory speed doubles about every ten years. The International Technology Roadmap for Semiconductors (ITRS) study suggests that memory will remain on its current growth path. The ITRS short- and long-term targets indicate continued scaling improvements at about the current rate by 2016. This translates to bit densities increasing at two times every two years until the introduction of 8 gigabit dynamic random access memory (DRAM) chips, after which densities will increase four times every five years. A similar growth pattern is forecast for other high-density chip areas and high-performance logic (e.g., microprocessors and application specific integrated circuits (ASICs)). In the future, molecular devices, 64 gigabit DRAMs and 28 GHz clock signals are targeted. Although densities continue to grow, we still do not see significant advances that will improve memory speed. These trends have created a problem that has been labeled the "Memory Wall" or "Memory Gap."
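The arithmetic behind the gap is easy to check. Here is a small sketch (ours, using only the doubling periods quoted above: 18 months for CPUs, roughly ten years for memory) showing how quickly the two curves diverge:

```python
# Compound growth with a fixed doubling period: factor = 2^(t / T).
def speedup(years, doubling_period_years):
    return 2 ** (years / doubling_period_years)

for years in (5, 10, 15):
    cpu = speedup(years, 1.5)    # CPU speed doubles every 18 months
    mem = speedup(years, 10.0)   # memory speed doubles about every 10 years
    print(f"after {years:2d} years: CPU x{cpu:8.1f}, "
          f"memory x{mem:4.1f}, gap x{cpu / mem:7.1f}")
```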
The main objective of pervasive computing systems is to create environments where computers become invisible by being seamlessly integrated and connected into our everyday environment, where such embedded computers can then provide information and exercise intelligent control when needed, but without being obtrusive. Pervasive computing and intelligent multimedia technologies are becoming increasingly important to the modern way of living. However, many of their potential applications have not yet been fully realized. Intelligent multimedia allows dynamic selection, composition and presentation of the most appropriate multimedia content based on user preferences. A variety of applications of pervasive computing and intelligent multimedia are being developed for all walks of personal and business life. Pervasive computing (often synonymously called ubiquitous computing, palpable computing or ambient intelligence) is an emerging field of research that brings in revolutionary paradigms for computing models in the 21st century. Pervasive computing is the trend towards increasingly ubiquitous connected computing devices in the environment, a trend being brought about by a convergence of advanced electronic - and particularly, wireless - technologies and the Internet. Recent advances in pervasive computers, networks, telecommunications and information technology, along with the proliferation of multimedia mobile devices - such as laptops, iPods, personal digital assistants (PDAs) and cellular telephones - have further stimulated the development of intelligent pervasive multimedia applications. These key technologies are creating a multimedia revolution that will have significant impact across a wide spectrum of consumer, business, healthcare and governmental domains.
This book constitutes the joint thoroughly refereed post-proceedings of the Second International Workshop on Modeling Social Media, MSM 2011, held in Boston, MA, USA, in October 2011, and the Second International Workshop on Mining Ubiquitous and Social Environments, MUSE 2011, held in Athens, Greece, in September 2011. The 9 full papers included in the book are revised and significantly extended versions of papers submitted to the workshops. They cover a wide range of topics organized in three main themes: communities and networks in ubiquitous social media; mining approaches; and issues of user modeling, privacy and security.
In this work, the unique power measurement capabilities of the Cray XT architecture were exploited to gain an understanding of power and energy use, and the effects of tuning both CPU and network bandwidth. Modifications were made to deterministically halt cores when idle. Additionally, capabilities were added to alter operating P-state. At the application level, an understanding of the power requirements of a range of important DOE/NNSA production scientific computing applications running at large scale is gained by simultaneously collecting current and voltage measurements on the hosting nodes. The effects of both CPU and network bandwidth tuning are examined, and energy savings opportunities without impact on run-time performance are demonstrated. This research suggests that next-generation large-scale platforms should not only approach CPU frequency scaling differently, but could also benefit from the capability to tune other platform components to achieve more energy-efficient performance.
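The measurement and tuning hooks described here are Cray XT-specific, but the P-state idea can be illustrated with the standard Linux cpufreq sysfs interface; the sketch below is an assumption-laden stand-in for that general mechanism, not the study's actual instrumentation:

```python
# Reading (and, with root, writing) CPU frequency-scaling state via the
# stock Linux cpufreq sysfs layout. Paths assume a standard kernel.
from pathlib import Path

CPUFREQ = Path("/sys/devices/system/cpu/cpu0/cpufreq")

def read(name):
    return (CPUFREQ / name).read_text().strip()

if __name__ == "__main__":
    print("governor:   ", read("scaling_governor"))
    print("current kHz:", read("scaling_cur_freq"))
    # Lowering the frequency ceiling trades speed for power -- the kind of
    # per-component tuning knob the study evaluates (requires root):
    # (CPUFREQ / "scaling_max_freq").write_text(read("scaling_min_freq"))
```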
This book constitutes the proceedings of the Third International Workshop on Traffic Monitoring and Analysis, TMA 2011, held in Vienna, Austria, on April 27, 2011 - co-located with EW 2011, the 17th European Wireless Conference. The workshop is an initiative from the COST Action IC0703 "Data Traffic Monitoring and Analysis: Theory, Techniques, Tools and Applications for the Future Networks." The 10 revised full papers and 6 poster papers presented together with 4 short papers were carefully reviewed and selected from 29 submissions. The papers are organized in topical sections on traffic analysis, applications and privacy, traffic classification, and a poster session.
This book constitutes the refereed proceedings of the 12th IFIP WG 6.1 International Conference on Distributed Applications and Interoperable Systems, DAIS 2012, held in Stockholm, Sweden, in June 2012 as one of the DisCoTec 2012 events. The 12 revised full papers and 9 short papers presented were carefully reviewed and selected from 58 submissions. The papers are organized in topical sections on peer-to-peer and large scale systems; security and reliability in web, cloud, p2p, and mobile systems; wireless, mobile, and pervasive systems; multidisciplinary approaches and case studies, ranging from Grid and parallel computing to multimedia and socio-technical systems; and service-oriented computing and e-commerce.
This book constitutes the refereed proceedings of the 8th International Workshop on OpenMP, held in Rome, Italy, in June 2012. The 18 technical full papers presented together with 7 posters were carefully reviewed and selected from 30 submissions. The papers are organized in topical sections on proposed extensions to OpenMP, runtime environments, optimization and accelerators, task parallelism, and validations and benchmarks.
This book contains the results of an Advanced Research Workshop that took place in Grenoble, France, in June 1988. The objective of this NATO ARW on Advanced Information Technologies for Industrial Material Flow Systems (MFS) was to bring together eminent research professionals from academia, industry and government who specialize in the study and application of information technology for material flow control. The current world status was reviewed and an agenda for needed research was discussed and established. The workshop focused on the following subjects: The nature of information within the material flow domain. Status of contemporary databases for engineering and material flow. Distributed databases and information integration. Artificial intelligence techniques and models for material flow problem solving. Digital communications for material flow systems. Robotics, intelligent systems, and material flow control. Material handling and storage systems information and control. Implementation, organization, and economic research issues as related to the above. Material flow control is as important as manufacturing and other process control in the computer integrated environment. Important developments have been occurring internationally in information technology, robotics, artificial intelligence and their application in material flow/material handling systems. In a traditional sense, material flow in manufacturing (and other industrial operations) consists of the independent movement of work-in-process between processing entities in order to fulfill the requirements of the appropriate production and process plans. Generally, information, in this environment, has been communicated from processors to movers.
The first ESPRIT Basic Research Project on Predictably Dependable Computing Systems (No. 3092, PDCS) commenced in May 1989, and ran until March 1992. The institutions and principal investigators that were involved in PDCS were: City University, London, UK (Bev Littlewood), IEI del CNR, Pisa, Italy (Lorenzo Strigini), Universität Karlsruhe, Germany (Tom Beth), LAAS-CNRS, Toulouse, France (Jean-Claude Laprie), University of Newcastle upon Tyne, UK (Brian Randell), LRI-CNRS/Université Paris-Sud, France (Marie-Claude Gaudel), Technische Universität Wien, Austria (Hermann Kopetz), and University of York, UK (John McDermid). The work continued after March 1992, and a three-year successor project (No. 6362, PDCS2) officially started in August 1992, with a slightly changed membership: Chalmers University of Technology, Göteborg, Sweden (Erland Jonsson), City University, London, UK (Bev Littlewood), CNR, Pisa, Italy (Lorenzo Strigini), LAAS-CNRS, Toulouse, France (Jean-Claude Laprie), Université Catholique de Louvain, Belgium (Pierre-Jacques Courtois), University of Newcastle upon Tyne, UK (Brian Randell), LRI-CNRS/Université Paris-Sud, France (Marie-Claude Gaudel), Technische Universität Wien, Austria (Hermann Kopetz), and University of York, UK (John McDermid). The summary objective of both projects has been "to contribute to making the process of designing and constructing dependable computing systems much more predictable and cost-effective." In the case of PDCS2, the concentration has been on the problems of producing dependable distributed real-time systems and especially those where the dependability requirements centre on issues of safety and/or security.
This book contains the papers presented at the international research symposium "Solid Modeling by Computers: From Theory to Applications," held at the General Motors Research Laboratories on September 25-27, 1983. This was the 28th symposium in a series which the Research Laboratories began sponsoring in 1957. Each symposium has focused on a topic that is both under active study at the Research Laboratories and is also of interest to the larger technical community. Solid modeling is still a very young research area, young even when compared with other computer-related research fields. Ten years ago, few people recognized the importance of being able to create complete and unambiguous computer models of mechanical parts. Today there is wide recognition that computer representations of solids are a prerequisite for the automation of many engineering analyses and manufacturing applications. In September 1983, the time was ripe for a symposium on this subject. Research had already demonstrated the efficacy of solid modeling as a tool in computer automated design and manufacturing, and there were significant results which could be presented at the symposium. Yet the field was still young enough that we could bring together theorists in solid modeling and practitioners applying solid modeling to other research areas in a group small enough to allow a stimulating exchange of ideas.
This volume contains the complete proceedings of a NATO Advanced Study Institute on various aspects of the reliability of electronic and other systems. The aim of the Institute was to bring together specialists in this subject. An important outcome of this Conference, as many of the delegates have pointed out to me, was complementing theoretical concepts and practical applications in both software and hardware. The reader will find papers on the mathematical background, on reliability problems in establishments where system failure may be hazardous, on reliability assessment in mechanical systems, and also on life cycle cost models and spares allocation. The proceedings contain the texts of all the lectures delivered and also verbatim accounts of panel discussions on subjects chosen from a wide range of important issues. In this introduction I will give a short account of each contribution, stressing what I feel are the most interesting topics introduced by a lecturer or a panel member. To visualise better the extent and structure of the Institute, I present a tree-like diagram showing the subjects which my co-directors and I would have wished to include in our deliberations (Figures 1 and 2). The names of our lecturers appear underlined under suitable headings. It can be seen that we have managed to cover most of the issues which seemed important to us. [Figure: a tree diagram of system effectiveness, branching into performance, safety, reliability, maintenance and logistic support.]
This book constitutes the thoroughly refereed post-conference proceedings of the 7th International Conference on Heterogeneous Networking for Quality, Reliability, Security and Robustness, QShine 2010. The 37 revised full papers presented along with 7 papers from the allocated Dedicated Short Range Communications Workshop, DSRC 2010, were carefully selected from numerous submissions. Conference papers are organized into 9 technical sessions, covering the topics of cognitive radio networks, security, resource allocation, wireless protocols and algorithms, advanced networking systems, sensor networks, scheduling and optimization, routing protocols, multimedia and stream processing. Workshop papers are organized into two sessions: DSRC networks and DSRC security.
This book constitutes the refereed proceedings of the 19th International Conference on Analytical and Stochastic Modelling Techniques and Applications, ASMTA 2012, held in Grenoble, France, in June 2012. The 20 revised full papers presented were carefully reviewed and selected from numerous submissions. The papers are organized in topical sections on queueing systems; networking applications; Markov chains; stochastic modelling.
The communication complexity of two-party protocols is an only 15-year-old complexity measure, but it is already considered to be one of the fundamental complexity measures of recent complexity theory. Similarly to Kolmogorov complexity in the theory of sequential computations, communication complexity is used as a method for the study of the complexity of concrete computing problems in parallel information processing. Especially, it is applied to prove lower bounds that say what computer resources (time, hardware, memory size) are necessary to compute the given task. Besides the estimation of the computational difficulty of computing problems, the proved lower bounds are useful for proving the optimality of algorithms that are already designed. In some cases the knowledge about the communication complexity of a given problem may even be helpful in searching for efficient algorithms for this problem. The study of communication complexity has become a well-defined independent area of complexity theory. In addition to a strong relation to several fundamental complexity measures (and so to several fundamental problems of complexity theory), communication complexity has contributed to the study and to the understanding of the nature of determinism, nondeterminism, and randomness in algorithmics. There already exists a non-trivial mathematical machinery to handle the communication complexity of concrete computing problems, which gives hope that the approach based on communication complexity will be instrumental in the study of several central open problems of recent complexity theory.
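A standard textbook illustration of the measure (our sketch, not drawn from this book) is the EQUALITY problem: deterministically, Alice must send Bob all n bits of her input, while a randomized fingerprinting protocol gets by with O(log n) bits at the price of a small error probability:

```python
# Randomized two-party protocol for EQUALITY via fingerprinting:
# Alice sends a random prime p (~2 log n bits) and x mod p; Bob replies
# with one bit. If x != y the fingerprints collide only rarely.
import random

def random_prime(lo, hi):
    def is_prime(k):
        return k > 1 and all(k % d for d in range(2, int(k ** 0.5) + 1))
    while True:
        k = random.randrange(lo, hi)
        if is_prime(k):
            return k

def equality_protocol(x_bits, y_bits):
    x, y = int(x_bits, 2), int(y_bits, 2)
    n = len(x_bits)
    p = random_prime(n ** 2, 2 * n ** 2)   # Alice's short message
    return x % p == y % p                  # Bob's one-bit answer

print(equality_protocol("1011" * 8, "1011" * 8))  # True
print(equality_protocol("1011" * 8, "1010" * 8))  # almost surely False
```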
This book constitutes the refereed proceedings of the 25th International Conference on Architecture of Computing Systems, ARCS 2012, held in Munich, Germany, in February/March 2012. The 20 revised full papers presented in 7 technical sessions were carefully reviewed and selected from 65 submissions. The papers are organized in topical sections on robustness and fault tolerance, power-aware processing, parallel processing, processor cores, optimization, and communication and memory.
Nonlinear Assignment Problems (NAPs) are natural extensions of the classic Linear Assignment Problem, and despite the efforts of many researchers over the past three decades, they still remain some of the hardest combinatorial optimization problems to solve exactly. The purpose of this book is to provide in a single volume, major algorithmic aspects and applications of NAPs as contributed by leading international experts. The chapters included in this book are concerned with major applications and the latest algorithmic solution approaches for NAPs. Approximation algorithms, polyhedral methods, semidefinite programming approaches and heuristic procedures for NAPs are included, while applications of this problem class in the areas of multiple-target tracking in the context of military surveillance systems, of experimental high energy physics, and of parallel processing are presented. Audience: Researchers and graduate students in the areas of combinatorial optimization, mathematical programming, operations research, physics, and computer science.
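For readers unfamiliar with the base case these problems extend, here is the classic Linear Assignment Problem solved with SciPy's Hungarian-style routine (SciPy is our assumed dependency, not something the book mandates). NAPs such as the quadratic assignment problem add pairwise interaction costs on top of this model, which is what defeats exact methods:

```python
# Linear Assignment Problem: pick a one-to-one matching of rows to
# columns that minimizes the total cost.
import numpy as np
from scipy.optimize import linear_sum_assignment

cost = np.array([[4, 1, 3],
                 [2, 0, 5],
                 [3, 2, 2]])
rows, cols = linear_sum_assignment(cost)
print(list(zip(rows, cols)))        # optimal pairing, e.g. (0,1),(1,0),(2,2)
print(cost[rows, cols].sum())       # minimal total cost: 5
```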
Efficient parallel solutions have been found to many problems. Some of them can be obtained automatically from sequential programs, using compilers. However, there is a large class of problems - irregular problems - that lack efficient solutions. IRREGULAR 94 - a workshop and summer school organized in Geneva - addressed the problems associated with the derivation of efficient solutions to irregular problems. This book, which is based on the workshop, draws on the contributions of outstanding scientists to present the state of the art in irregular problems, covering aspects ranging from scientific computing and discrete optimization to the automatic extraction of parallelism. Audience: This first book on parallel algorithms for irregular problems is of interest to advanced graduate students and researchers in parallel computer science.
This book brings together experts to discuss relevant results in software process modeling, and expresses their personal view of this field. It is designed for a professional audience of researchers and practitioners in industry, and graduate-level students.
Unlike current survey articles and textbooks, here the so-called confluence and termination hierarchies play a key role. Throughout, the relationships between the properties in the hierarchies are reviewed, and it is shown that for every implication X => Y in the hierarchies, the property X is undecidable for all term rewriting systems satisfying Y. Topics covered include: the newest techniques for proving termination of rewrite systems; a comprehensive chapter on conditional term rewriting systems; a state-of-the-art survey of modularity in term rewriting, and a uniform framework for term and graph rewriting, as well as the first result on conditional graph rewriting.
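As a toy illustration of the properties in these hierarchies (our sketch, not an example from the book), the one-rule string rewriting system {ab -> ba} is both terminating (each step removes an inversion) and confluent, so every string reaches a unique normal form regardless of which redex is rewritten first:

```python
# Rewrite "ab" -> "ba" until no redex remains; the normal form sorts
# every string into b...ba...a.
def rewrite_step(s):
    i = s.find("ab")
    return None if i < 0 else s[:i] + "ba" + s[i + 2:]

def normal_form(s):
    while (t := rewrite_step(s)) is not None:
        s = t
    return s

print(normal_form("abab"))   # bbaa
print(normal_form("aabba"))  # bbaaa
```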