Books > Computing & IT > General theory of computing > Systems analysis & design

Wireless Technology - Applications, Management, and Security (Paperback, 2009 ed.)
Steven Powell, J.P. Shim
R4,343 Discovery Miles 43 430 Ships in 10 - 15 working days

Wireless technology and handheld devices are dramatically changing the degrees of interaction throughout the world, further creating a ubiquitous network society. The emergence of advanced wireless telecommunication technologies and devices in today's society has increased accuracy and access rate, all of which are increasingly essential as the volume of information handled by users expands at an accelerated pace. The requirement for mobility leads to increasing pressure for applications and wireless systems to revolve around the concept of continuous communication with anyone, anywhere, and anytime. With wireless technology and devices come flexibility in network design and quicker deployment time. Over the past decades, numerous wireless telecommunication topics have received increasing attention from industry professionals, academics, and government agencies. Among these topics are the wireless Internet; multimedia; 3G/4G wireless networks and systems; mobile and wireless network security; wireless network modeling, algorithms, and simulation; satellite-based systems; 802.11x; RFID; and broadband wireless access.

Models in Hardware Testing - Lecture Notes of the Forum in Honor of Christian Landrault (Paperback, 2010 ed.)
Hans-Joachim Wunderlich
R2,873 Discovery Miles 28 730 Ships in 10 - 15 working days

Model based testing is the most powerful technique for testing hardware and software systems. Models in Hardware Testing describes the use of models at all the levels of hardware testing. The relevant fault models for nanoscaled CMOS technology are introduced, and their implications on fault simulation, automatic test pattern generation, fault diagnosis, memory testing and power aware testing are discussed. Models and the corresponding algorithms are considered with respect to the most recent state of the art, and they are put into a historical context by a concluding chapter on the use of physical fault models in fault tolerance.
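
The fault-model ideas mentioned in this description can be made concrete with a small sketch. The code below is not from the book; it simulates an invented three-input circuit with and without a single stuck-at fault and checks which test patterns detect the fault (the net names and patterns are made up for illustration).

```python
# Minimal illustration of single stuck-at fault simulation (not from the book).
# A tiny combinational circuit: out = (a AND b) OR (NOT c)

def simulate(a, b, c, stuck=None):
    """Evaluate the circuit; `stuck` optionally forces one net to 0 or 1,
    e.g. stuck=("n_and", 0) models net n_and stuck-at-0."""
    nets = {}
    nets["n_and"] = a & b
    nets["n_not"] = 1 - c
    if stuck is not None:
        name, value = stuck
        nets[name] = value
    return nets["n_and"] | nets["n_not"]

def detects(pattern, fault):
    """A test pattern detects a fault if faulty and fault-free outputs differ."""
    a, b, c = pattern
    return simulate(a, b, c) != simulate(a, b, c, stuck=fault)

if __name__ == "__main__":
    fault = ("n_and", 0)          # net n_and stuck-at-0
    for pattern in [(1, 1, 1), (0, 0, 0), (1, 0, 1)]:
        print(pattern, "detects" if detects(pattern, fault) else "misses", fault)
```

In this toy example only the pattern (1, 1, 1) exposes the stuck-at-0 fault, which is the basic question automatic test pattern generation tries to answer at scale.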

Using Subject Headings for Online Retrieval - Theory, Practice and Potential (Hardcover)
Karen Markey Drabenstott, Diane Vizine-Goetz
R5,565 R4,230 Discovery Miles 42 300 Save R1,335 (24%) Ships in 12 - 17 working days

Using Subject Headings for Online Retrieval is an indispensable tool for online system designers who are developing new systems or refining existing ones. The book describes subject analysis and subject searching in online catalogs, including the limitations of retrieval, and demonstrates how such limitations can be overcome through system design and programming. The book describes the Library of Congress Subject Headings system and system characteristics, shows how information is stored in machine-readable files, and offers examples of and recommendations for successful retrieval methods. Tables are included to support these recommendations, and diagrams, graphs, and bar charts are used to provide results of data analysis. Practitioners in institutions using or considering the installation of an online catalog will refer to this book often to generate specifications. Researchers in library systems, information retrieval, and user behavior will appreciate the book's detailing of the results of an extensive, empirical study of the subject terms entered into online systems by end users. Using Subject Headings for Online Retrieval also addresses the needs of advanced students in library schools and instructors in library automation, information retrieval, cataloging, indexing, and user behavior.

Service Placement in Ad Hoc Networks (Paperback, 2012)
Georg Wittenburg, Jochen Schiller
R1,521 Discovery Miles 15 210 Ships in 10 - 15 working days

Service provisioning in ad hoc networks is challenging given the difficulties of communicating over a wireless channel and the potential heterogeneity and mobility of the devices that form the network. Service placement is the process of selecting an optimal set of nodes to host the implementation of a service in light of a given service demand and network topology. The key advantage of active service placement in ad hoc networks is that it allows for the service configuration to be adapted continuously at run time. "Service Placement in Ad Hoc Networks" proposes the SPi service placement framework as a novel approach to service placement in ad hoc networks. The SPi framework takes advantage of the interdependencies between service placement, service discovery and the routing of service requests to minimize signaling overhead. The work also proposes the Graph Cost / Single Instance and the Graph Cost / Multiple Instances placement algorithms.
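
As a rough illustration of the placement problem described above - not of the SPi framework or the Graph Cost algorithms themselves - the sketch below greedily chooses service host nodes that minimize the total demand-weighted hop count on a small, invented topology; all node names and demand values are assumptions.

```python
# Hypothetical greedy sketch of service placement on an ad hoc network topology.
# Goal: pick k host nodes minimizing sum over clients of demand * hops to nearest host.
from collections import deque

def hops_from(graph, src):
    """BFS hop counts from src to every reachable node."""
    dist = {src: 0}
    queue = deque([src])
    while queue:
        u = queue.popleft()
        for v in graph[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def placement_cost(graph, demand, hosts):
    """Total demand-weighted distance from each client to its nearest host."""
    dists = [hops_from(graph, h) for h in hosts]
    return sum(d * min(dist[n] for dist in dists) for n, d in demand.items())

def greedy_place(graph, demand, k):
    """Repeatedly add the host node that lowers the placement cost the most."""
    hosts = []
    for _ in range(k):
        best = min((n for n in graph if n not in hosts),
                   key=lambda n: placement_cost(graph, demand, hosts + [n]))
        hosts.append(best)
    return hosts

if __name__ == "__main__":
    graph = {"A": ["B"], "B": ["A", "C", "D"], "C": ["B", "E"], "D": ["B"], "E": ["C"]}
    demand = {"A": 3, "C": 1, "D": 2, "E": 4}   # requests per node (made up)
    hosts = greedy_place(graph, demand, k=2)
    print("hosts:", hosts, "cost:", placement_cost(graph, demand, hosts))
```

In a real ad hoc network the topology and demand change over time, which is why the book argues for re-running placement decisions continuously at run time rather than once at deployment.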

Solid Modeling by Computers - From Theory to Applications (Paperback, Softcover reprint of the original 1st ed. 1984)
Mary S. Pickett, John W. Boyse
R1,581 Discovery Miles 15 810 Ships in 10 - 15 working days

This book contains the papers presented at the international research symposium "Solid Modeling by Computers: From Theory to Applications," held at the General Motors Research Laboratories on September 25-27, 1983. This was the 28th symposium in a series which the Research Laboratories began sponsoring in 1957. Each symposium has focused on a topic that is both under active study at the Research Laboratories and also of interest to the larger technical community. Solid modeling is still a very young research area, young even when compared with other computer-related research fields. Ten years ago, few people recognized the importance of being able to create complete and unambiguous computer models of mechanical parts. Today there is wide recognition that computer representations of solids are a prerequisite for the automation of many engineering analyses and manufacturing applications. In September 1983, the time was ripe for a symposium on this subject. Research had already demonstrated the efficacy of solid modeling as a tool in computer automated design and manufacturing, and there were significant results which could be presented at the symposium. Yet the field was still young enough that we could bring together theorists in solid modeling and practitioners applying solid modeling to other research areas in a group small enough to allow a stimulating exchange of ideas.

High-Capacity Local and Metropolitan Area Networks - Architecture and Performance Issues (Paperback, Softcover reprint of the original 1st ed. 1991)
Guy Pujolle
R2,969 Discovery Miles 29 690 Ships in 10 - 15 working days

The main objective of this workshop was to review and discuss the state of the art and the latest advances in the area of 1-10 Gbit/s throughput for local and metropolitan area networks. The first generation of local area networks had throughputs in the range 1-20 Mbit/s. Well-known examples of this first generation of networks are the Ethernet and the Token Ring. The second generation of networks allowed throughputs in the range 100-200 Mbit/s. Representatives of this generation are the FDDI double ring and the DQDB (IEEE 802.6) networks. The third generation networks will have throughputs in the range 1-10 Gbit/s. The rapid development and deployment of fiber optics worldwide, as well as the projected emergence of a market for broadband services, have given rise to the development of broadband ISDN standards. Currently, the Asynchronous Transfer Mode (ATM) appears to be a viable solution to broadband networks. The possibility of all-optical networks in the future is being examined. This would allow the tapping of the approximately 50 terahertz available in the lightwave range of the frequency spectrum. It is envisaged that using such a high-speed network it will be feasible to distribute high-quality video to the home, to carry out rapid retrieval of radiological and other scientific images, and to enable multi-media conferencing between various parties.

Advances in Computer Graphics Hardware V - Rendering, Ray Tracing and Visualization Systems (Paperback, Softcover reprint of the original 1st ed. 1992)
Richard L Grimsdale, Arie Kaufman
R1,498 Discovery Miles 14 980 Ships in 10 - 15 working days

This volume contains papers representing a comprehensive record of the contributions to the fifth workshop at EG '90 in Lausanne. The Eurographics hardware workshops have now become an established forum for the exchange of information about the latest developments in this field of growing importance. The first workshop took place during EG '86 in Lisbon. All participants considered this to be a very rewarding event to be repeated at future EG conferences. This view was reinforced at the EG '87 Hardware Workshop in Amsterdam and firmly established the need for such a colloquium in this specialist area within the annual EG conference. The third EG Hardware Workshop took place in Nice in 1988 and the fourth in Hamburg at EG '89. The first part of the book is devoted to rendering machines. The papers in this part address techniques for accelerating the rendering of images and efficient ways of improving their quality. The second part on ray tracing describes algorithms and architectures for producing photorealistic images, with emphasis on ways of reducing the time for this computationally intensive task. The third part on visualization systems covers a number of topics, including voxel-based systems, radiosity, animation and special rendering techniques. The contributions show that there is flourishing activity in the development of new algorithmic and architectural ideas and, in particular, in absorbing the impact of VLSI technology. The increasing diversity of applications encourages new solutions, and graphics hardware has become a research area of high activity and importance.

Data Management in Cloud, Grid and P2P Systems - 5th International Conference, Globe 2012, Vienna, Austria, September 5-6, 2012, Proceedings (Paperback, 2012 ed.)
Abdelkader Hameurlain, Farookh Khadeer Hussain, Franck Morvan, A. Min Tjoa
R1,939 Discovery Miles 19 390 Ships in 10 - 15 working days

This book constitutes the refereed proceedings of the 5th International Conference on Data Management in Grid and Peer-to-Peer Systems, Globe 2012, held in Vienna, Austria, in September 2012 in conjunction with DEXA 2012. The 9 revised full papers presented were carefully reviewed and selected from 15 submissions. The papers are organized in topical sections on data management in the cloud, cloud MapReduce and performance evaluation, and data stream systems and distributed data mining.

High Performance Discovery In Time Series - Techniques and Case Studies (Paperback, Softcover reprint of the original 1st ed. 2004)
New York University; Edited by Donna Ryan
R2,848 Discovery Miles 28 480 Ships in 10 - 15 working days

Overview and Goals: Data arriving in time order (a data stream) arises in fields ranging from physics to finance to medicine to music, just to name a few. Often the data comes from sensors (in physics and medicine for example) whose data rates continue to improve dramatically as sensor technology improves. Further, the number of sensors is increasing, so correlating data between sensors becomes ever more critical in order to distill knowledge from the data. On-line response is desirable in many applications (e.g., to aim a telescope at a burst of activity in a galaxy or to perform magnetic resonance-based real-time surgery). These factors - data size, bursts, correlation, and fast response - motivate this book. Our goal is to help you design fast, scalable algorithms for the analysis of single or multiple time series. Not only will you find useful techniques and systems built from simple primitives, but creative readers will find many other applications of these primitives and may see how to create new ones of their own. Our goal, then, is to help research mathematicians and computer scientists find new algorithms and to help working scientists and financial mathematicians design better, faster software.
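
To make the cross-sensor correlation point above concrete, here is a small sketch (not an algorithm from the book) that computes a naive sliding-window Pearson correlation between two sensor streams; the window size and sample data are arbitrary assumptions.

```python
# Hypothetical sketch: sliding-window Pearson correlation between two sensor streams.
import math

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy) if sx and sy else 0.0

def sliding_correlation(a, b, window):
    """Correlation of the two series over each consecutive window of samples."""
    return [pearson(a[i:i + window], b[i:i + window])
            for i in range(len(a) - window + 1)]

if __name__ == "__main__":
    s1 = [1, 2, 3, 4, 5, 4, 3, 2, 1]
    s2 = [2, 4, 6, 8, 10, 8, 6, 4, 2]   # perfectly correlated with s1
    print(sliding_correlation(s1, s2, window=4))
```

The techniques the book covers aim to do this kind of computation incrementally over many streams at once, rather than recomputing each window from scratch as this naive version does.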

Electronic Systems Effectiveness and Life Cycle Costing (Paperback, Softcover reprint of the original 1st ed. 1983)
J.K. Skwirzynski
R3,034 Discovery Miles 30 340 Ships in 10 - 15 working days

This volume contains the complete proceedings of a NATO Advanced Study Institute on various aspects of the reliability of electronic and other systems. The aim of the Institute was to bring together specialists in this subject. An important outcome of this Conference, as many of the delegates have pointed out to me, was the complementing of theoretical concepts and practical applications in both software and hardware. The reader will find papers on the mathematical background, on reliability problems in establishments where system failure may be hazardous, on reliability assessment in mechanical systems, and also on life cycle cost models and spares allocation. The proceedings contain the texts of all the lectures delivered and also verbatim accounts of panel discussions on subjects chosen from a wide range of important issues. In this introduction I will give a short account of each contribution, stressing what I feel are the most interesting topics introduced by a lecturer or a panel member. To visualise better the extent and structure of the Institute, I present a tree-like diagram showing the subjects which my co-directors and I would have wished to include in our deliberations (Figures 1 and 2). The names of our lecturers appear underlined under suitable headings. It can be seen that we have managed to cover most of the issues which seemed important to us. [The original preface closes with a tree diagram of system effectiveness topics: performance, safety, reliability, maintenance, logistic support, and related factors.]

Advanced Information Technologies for Industrial Material Flow Systems (Paperback, Softcover reprint of the original 1st ed. 1989)
Shimon Y. Nof, Colin L. Moodie
R3,027 Discovery Miles 30 270 Ships in 10 - 15 working days

This book contains the results of an Advanced Research Workshop that took place in Grenoble, France, in June 1988. The objective of this NATO ARW on Advanced Information Technologies for Industrial Material Flow Systems (MFS) was to bring together eminent research professionals from academia, industry and government who specialize in the study and application of information technology for material flow control. The current world status was reviewed and an agenda for needed research was discussed and established. The workshop focused on the following subjects:
  • The nature of information within the material flow domain.
  • Status of contemporary databases for engineering and material flow.
  • Distributed databases and information integration.
  • Artificial intelligence techniques and models for material flow problem solving.
  • Digital communications for material flow systems.
  • Robotics, intelligent systems, and material flow control.
  • Material handling and storage systems information and control.
  • Implementation, organization, and economic research issues as related to the above.
Material flow control is as important as manufacturing and other process control in the computer integrated environment. Important developments have been occurring internationally in information technology, robotics, artificial intelligence and their application in material flow/material handling systems. In a traditional sense, material flow in manufacturing (and other industrial operations) consists of the independent movement of work-in-process between processing entities in order to fulfill the requirements of the appropriate production and process plans. Generally, information, in this environment, has been communicated from processors to movers.

Superconducting Electronics (Paperback, Softcover reprint of the original 1st ed. 1989)
Harold Weinstock, Martin Nisenoff
R2,938 Discovery Miles 29 380 Ships in 10 - 15 working days

The book provides an in-depth understanding of the fundamentals of superconducting electronics and the practical considerations for the fabrication of superconducting electronic structures. Additionally, it covers in detail the opportunities afforded by superconductivity for uniquely sensitive electronic devices and illustrates how these devices (in some cases employing high-temperature, ceramic superconductors) can be applied in analog and digital signal processing, laboratory instruments, biomagnetism, geophysics, nondestructive evaluation and radioastronomy. Improvements in cryocooler technology for application to cryoelectronics are also covered. This is the first book in several years to treat the fundamentals and applications of superconducting electronics in a comprehensive manner, and it is the very first book to consider the implications of high-temperature, ceramic superconductors for superconducting electronic devices. Not only does this new class of superconductors create new opportunities, but recently impressive milestones have been reached in superconducting analog and digital signal processing which promise to lead to a new generation of sensing, processing and computational systems. The 15 chapters are authored by acknowledged leaders in the fundamental science and in the applications of this increasingly active field, and many of the authors provide a timely assessment of the potential for devices and applications based upon ceramic-oxide superconductors or hybrid structures incorporating these new superconductors with other materials. The book takes the reader from a basic discussion of applicable (BCS and Ginzburg-Landau) theories and tunneling phenomena, through the structure and characteristics of Josephson devices and circuits, to applications that utilize the world's most sensitive magnetometer, most sensitive microwave detector, and fastest arithmetic logic unit.

Predictably Dependable Computing Systems (Paperback, Softcover reprint of the original 1st ed. 1995)
Brian Randell, Jean Claude Laprie, Hermann Kopetz, Bev Littlewood
R2,964 Discovery Miles 29 640 Ships in 10 - 15 working days

The first ESPRIT Basic Research Project on Predictably Dependable Computing Systems (No. 3092, PDCS) commenced in May 1989, and ran until March 1992. The institutions and principal investigators that were involved in PDCS were: City University, London, UK (Bev Littlewood), IEI del CNR, Pisa, Italy (Lorenzo Strigini), Universität Karlsruhe, Germany (Tom Beth), LAAS-CNRS, Toulouse, France (Jean-Claude Laprie), University of Newcastle upon Tyne, UK (Brian Randell), LRI-CNRS/Université Paris-Sud, France (Marie-Claude Gaudel), Technische Universität Wien, Austria (Hermann Kopetz), and University of York, UK (John McDermid). The work continued after March 1992, and a three-year successor project (No. 6362, PDCS2) officially started in August 1992, with a slightly changed membership: Chalmers University of Technology, Göteborg, Sweden (Erland Jonsson), City University, London, UK (Bev Littlewood), CNR, Pisa, Italy (Lorenzo Strigini), LAAS-CNRS, Toulouse, France (Jean-Claude Laprie), Université Catholique de Louvain, Belgium (Pierre-Jacques Courtois), University of Newcastle upon Tyne, UK (Brian Randell), LRI-CNRS/Université Paris-Sud, France (Marie-Claude Gaudel), Technische Universität Wien, Austria (Hermann Kopetz), and University of York, UK (John McDermid). The summary objective of both projects has been "to contribute to making the process of designing and constructing dependable computing systems much more predictable and cost-effective." In the case of PDCS2, the concentration has been on the problems of producing dependable distributed real-time systems and especially those where the dependability requirements centre on issues of safety and/or security.

Principles of Distributed Systems - 15th International Conference, OPODIS 2011, Toulouse, France, December 13-16, 2011, Proceedings (Paperback, 2011 ed.)
Antonio Fernandez Anta, Giuseppe Lipari, Matthieu Roy
R1,602 Discovery Miles 16 020 Ships in 10 - 15 working days

This book constitutes the refereed proceedings of the 15th International Conference on Principles of Distributed Systems, OPODIS 2011, held in Toulouse, France, in December 2011. The 26 revised papers presented in this volume were carefully reviewed and selected from 96 submissions. They represent the current state of the art of the research in the field of the design, analysis and development of distributed and real-time systems.

Software Configuration Management Using Vesta (Paperback, 2006)
Clark Allan Heydon, Roy Levin, Timothy P. Mann, Yuan Yu
R2,869 Discovery Miles 28 690 Ships in 10 - 15 working days

  • Helps in the development of large software projects.
  • Uses a well-known open-source software prototype system (Vesta, developed at the Digital and Compaq Systems Research Lab).

Implementing Health Care Information Systems (Paperback, Softcover reprint of the original 1st ed. 1989)
Helmuth F Orthner, Bruce I. Blum
R2,921 Discovery Miles 29 210 Ships in 10 - 15 working days

This series in Computers and Medicine had its origins when I met Jerry Stone of Springer-Verlag at a SCAMC meeting in 1982. We determined that there was a need for good collections of papers that would help disseminate the results of research and application in this field. I had already decided to do what is now Information Systems for Patient Care, and Jerry contributed the idea of making it part of a series. In 1984 the first book was published, and - thanks to Jerry's efforts - Computers and Medicine was underway. Since that time, there have been many changes. Sadly, Jerry died at a very early age and cannot share in the success of the series that he helped found. On the bright side, however, many of the early goals of the series have been met. As the result of equipment improvements and the consequent lowering of costs, computers are being used in a growing number of medical applications, and the health care community is very computer literate. Thus, the focus of concern has turned from learning about the technology to understanding how that technology can be exploited in a medical environment.

Model Driven Engineering Languages and Systems - 14th International Conference, MODELS 2011, Wellington, New Zealand, October 16-21, 2011, Proceedings (Paperback)
Jon Whittle, Tony Clark, Thomas Kuhne
R1,679 Discovery Miles 16 790 Ships in 10 - 15 working days

This book constitutes the refereed proceedings of the 14th International Conference on Model Driven Engineering Languages and Systems, MODELS 2011, held in Wellington, New Zealand, in October 2011. The papers address a wide range of topics in research (foundations track) and practice (applications track). For the first time, a new category of research papers, vision papers, is included, presenting "outside the box" thinking. The foundations track received 167 full paper submissions, of which 34 were selected for presentation. Out of these, 3 papers were vision papers. The applications track received 27 submissions, of which 13 papers were selected for presentation. The papers are organized in topical sections on model transformation, model complexity, aspect oriented modeling, analysis and comprehension of models, domain specific modeling, models for embedded systems, model synchronization, model based resource management, analysis of class diagrams, verification and validation, refactoring models, modeling visions, logics and modeling, development methods, and model integration and collaboration.

Quantitative Security Risk Assessment of Enterprise Networks (Paperback, 2011 ed.)
Xinming Ou, Anoop Singhal
R1,521 Discovery Miles 15 210 Ships in 10 - 15 working days

Protection of enterprise networks from malicious intrusions is critical to the economy and security of our nation. This article gives an overview of the techniques and challenges for security risk analysis of enterprise networks. A standard model for security analysis will enable us to answer questions such as "are we more secure than yesterday" or "how does the security of one network configuration compare with another one". In this article, we will present a methodology for quantitative security risk analysis that is based on the model of attack graphs and the Common Vulnerability Scoring System (CVSS). Our techniques analyze all attack paths through a network, for an attacker to reach certain goal(s).
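
The attack-path analysis described here can be illustrated with a rough sketch. The code below is not the authors' model or metric: it enumerates simple paths through an invented attack graph, scores each path by multiplying per-step CVSS-derived likelihoods, and reports the riskiest path; the graph, scores, and combining rule are all assumptions made for illustration.

```python
# Hypothetical sketch of attack-graph risk scoring (not the authors' actual metric).
# Each edge is an exploit step annotated with a CVSS base score (0-10);
# a path's likelihood is the product of per-step score/10, risk = max over paths.

def all_paths(graph, src, goal, path=None):
    """Enumerate simple paths from src to goal in a directed attack graph."""
    path = (path or []) + [src]
    if src == goal:
        yield path
        return
    for nxt in graph.get(src, {}):
        if nxt not in path:
            yield from all_paths(graph, nxt, goal, path)

def path_likelihood(graph, path):
    """Combine per-step CVSS scores (mapped to [0, 1]) along one attack path."""
    like = 1.0
    for u, v in zip(path, path[1:]):
        like *= graph[u][v] / 10.0
    return like

if __name__ == "__main__":
    # attacker entry -> webserver -> database, plus a direct (harder) route
    attack_graph = {
        "internet": {"webserver": 7.5, "database": 2.0},
        "webserver": {"database": 9.0},
    }
    paths = list(all_paths(attack_graph, "internet", "database"))
    for p in paths:
        print(" -> ".join(p), round(path_likelihood(attack_graph, p), 3))
    print("overall risk (max over paths):",
          round(max(path_likelihood(attack_graph, p) for p in paths), 3))
```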

Guide to Applying Human Factors Methods - Human Error and Accident Management in Safety-Critical Systems (Paperback, Softcover reprint of the original 1st ed. 2004)
Carlo Cacciabue
R1,663 Discovery Miles 16 630 Ships in 10 - 15 working days

Human error plays a significant role in many accidents involving safety-critical systems, and it is now a standard requirement in both the US and Europe for Human Factors (HF) to be taken into account in system design and safety assessment. This book will be an essential guide for anyone who uses HF in their everyday work, providing them with consistent and ready-to-use procedures and methods that can be applied to real-life problems. The first part of the book looks at the theoretical framework, methods and techniques that the engineer or safety analyst needs to use when working on a HF-related project. The second part presents four case studies that show the reader how the above framework and guidelines work in practice. The case studies are based on real-life projects carried out by the author for a major European railway system, and in collaboration with international companies such as the International Civil Aviation Organisation, Volvo, Daimler-Chrysler and FIAT.

Current Trends in Hardware Verification and Automated Theorem Proving (Paperback, Softcover reprint of the original 1st ed. 1989)
Graham Birtwistle, P.A. Subrahmanyam
R2,933 Discovery Miles 29 330 Ships in 10 - 15 working days

This report describes the partially completed correctness proof of the Viper 'block model'. Viper [7, 8, 9, 11, 23] is a microprocessor designed by W. J. Cullyer, C. Pygott and J. Kershaw at the Royal Signals and Radar Establishment in Malvern, England (henceforth 'RSRE'), for use in safety-critical applications such as civil aviation and nuclear power plant control. It is currently finding uses in areas such as the deployment of weapons from tactical aircraft. To support safety-critical applications, Viper has a particularly simple design about which it is relatively easy to reason using current techniques and models. The designers, who deserve much credit for the promotion of formal methods, intended from the start that Viper be formally verified. Their idea was to model Viper in a sequence of decreasingly abstract levels, each of which concentrated on some aspect of the design, such as the flow of control, the processing of instructions, and so on. That is, each model would be a specification of the next (less abstract) model, and an implementation of the previous model (if any). The verification effort would then be simplified by being structured according to the sequence of abstraction levels. These models (or levels) of description were characterized by the design team. The first two levels, and part of the third, were written by them in a logical language amenable to reasoning and proof.

Information Security and Assurance - International Conference, ISA 2011, Brno, Czech Republic, August 15-17, 2011, Proceedings (Paperback, 2011)
Tai-Hoon Kim, Hojjat Adeli, Rosslin John Robles, Maricel Balitanas
R1,558 Discovery Miles 15 580 Ships in 10 - 15 working days

This book constitutes the proceedings of the International Conference on Information Security and Assurance, held in Brno, Czech Republic in August 2011.

Video Processing in the Cloud (Paperback, 2011 ed.)
Rafael Silva Pereira, Karin K. Breitman
R1,521 Discovery Miles 15 210 Ships in 10 - 15 working days

As computer systems evolve, the volume of data to be processed increases significantly, either as a consequence of the expanding amount of available information, or due to the possibility of performing highly complex operations that were not feasible in the past. Nevertheless, tasks that depend on the manipulation of large amounts of information are still performed at large computational cost, i.e., either the processing time will be large, or they will require intensive use of computer resources. In this scenario, the efficient use of available computational resources is paramount, and creates a demand for systems that can optimize the use of resources in relation to the amount of data to be processed. This problem becomes increasingly critical when the volume of information to be processed is variable, i.e., there is a seasonal variation of demand. Such demand variations are caused by a variety of factors, such as an unanticipated burst of client requests, a time-critical simulation, or high volumes of simultaneous video uploads, e.g. as a consequence of a public contest. In these cases, there are moments when the demand is very low (resources are almost idle) while, conversely, at other moments, the processing demand exceeds the resources capacity. Moreover, from an economical perspective, seasonal demands do not justify a massive investment in infrastructure, just to provide enough computing power for peak situations. In this light, the ability to build adaptive systems, capable of using on demand resources provided by Cloud Computing infrastructures is very attractive.

Input/Output in Parallel and Distributed Computer Systems (Paperback, Softcover reprint of the original 1st ed. 1996)
Ravi Jain, John Werth, James C. Browne
R5,612 Discovery Miles 56 120 Ships in 10 - 15 working days

Input/Output in Parallel and Distributed Computer Systems has attracted increasing attention over the last few years, as it has become apparent that input/output performance, rather than CPU performance, may be the key limiting factor in the performance of future systems. This I/O bottleneck is caused by the increasing speed mismatch between processing units and storage devices, the use of multiple processors operating simultaneously in parallel and distributed systems, and by the increasing I/O demands of new classes of applications, like multimedia. It is also important to note that, to varying degrees, the I/O bottleneck exists at multiple levels of the memory hierarchy. All indications are that the I/O bottleneck will be with us for some time to come, and is likely to increase in importance. Input/Output in Parallel and Distributed Computer Systems is based on papers presented at the 1994 and 1995 IOPADS workshops held in conjunction with the International Parallel Processing Symposium. This book is divided into three parts. Part I, the Introduction, contains four invited chapters which provide a tutorial survey of I/O issues in parallel and distributed systems. The chapters in Parts II and III contain selected research papers from the 1994 and 1995 IOPADS workshops; many of these papers have been substantially revised and updated for inclusion in this volume. Part II collects the papers from both years which deal with various aspects of system software, and Part III addresses architectural issues. Input/Output in Parallel and Distributed Computer Systems is suitable as a secondary text for graduate level courses in computer architecture, software engineering, and multimedia systems, and as a reference for researchers and practitioners in industry.

Quantitative Measure for Discrete Event Supervisory Control (Paperback, 2005)
Asok Ray, Vir V Phoha, Shashi Phoha
R2,871 Discovery Miles 28 710 Ships in 10 - 15 working days

Supervisory Control Theory (SCT) provides a tool to model and control human-engineered complex systems, such as computer networks, World Wide Web, identification and spread of malicious executables, and command, control, communication, and information systems. Although there are some excellent monographs and books on SCT to control and diagnose discrete-event systems, there is a need for a research monograph that provides a coherent quantitative treatment of SCT theory for decision and control of complex systems. This new monograph will assimilate many new concepts that have been recently reported or are in the process of being reported in open literature. The major objectives here are to present a) a quantitative approach, supported by a formal theory, for discrete-event decision and control of human-engineered complex systems; and b) a set of applications to emerging technological areas such as control of software systems, malicious executables, and complex engineering systems. The monograph will provide the necessary background materials in automata theory and languages for supervisory control. It will introduce a new paradigm of language measure to quantitatively compare the performance of different automata models of a physical system. A novel feature of this approach is to generate discrete-event robust optimal decision and control algorithms for both military and commercial systems.

Sensing and Systems in Pervasive Computing - Engineering Context Aware Systems (Paperback, Edition.)
Dan Chalmers
R1,152 Discovery Miles 11 520 Ships in 10 - 15 working days

  • Focus on issues and principles in context awareness, sensor processing and software design (rather than sensor networks or HCI or particular commercial systems).
  • Designed as a textbook, with readings and lab problems in most chapters.
  • Focus on concepts, algorithms and ideas rather than particular technologies.
