
Wireless Technology - Applications, Management, and Security (Paperback, 2009 ed.)
Steven Powell, J.P. Shim
R4,006 Discovery Miles 40 060 Ships in 18 - 22 working days

Wireless technology and handheld devices are dramatically changing the degrees of interaction throughout the world, further creating a ubiquitous network society. The emergence of advanced wireless telecommunication technologies and devices in today's society has increased accuracy and access rate, all of which are increasingly essential as the volume of information handled by users expands at an accelerated pace. The requirement for mobility leads to increasing pressure for applications and wireless systems to revolve around the concept of continuous communication with anyone, anywhere, and anytime. With the wireless technology and devices come flexibility in network design and quicker deployment time. Over the past decades, numerous wireless telecommunication topics have received increasing attention from industry professionals, academics, and government agencies. Among these topics are the wireless Internet; multimedia; 3G/4G wireless networks and systems; mobile and wireless network security; wireless network modeling, algorithms, and simulation; satellite-based systems; 802.11x; RFID; and broadband wireless access.

Information, Organization and Information Systems Design - An Integrated Approach to Information Problems (Paperback, Softcover reprint of the original 1st ed. 2000)
Bart Prakken
R4,011 Discovery Miles 40 110 Ships in 18 - 22 working days

During the last two decades, there have been many reports about the success and failure of investments in ICT and information systems. Failures in particular have drawn a lot of attention. The outcome of the implementation of information and communication systems has often been disastrous. Recent research does not show that results have improved. This raises the question why so many ICT projects perform so badly. Information, Organization and Information Systems Design: An Integrated Approach to Information Problems aims at discussing measures to improve the results of information systems. Bart Prakken identifies various factors that explain the shortfall of information systems. Subsequently, he provides a profound discussion of the measures that can be taken to remove the causes of failure. When organizations are confronted with information problems, they will almost automatically look for ICT solutions. However, Prakken argues that more fundamental and often cheaper solutions are in many cases available. When looking for solutions to information problems, the inter-relationship between organization, information and the people within the organization should explicitly be taken into account. The measures that the author proposes are based on organizational redesign, particularly using the sociotechnical approach. In cases where ICT solutions do have to be introduced, Prakken discusses a number of precautionary measures that will help their implementation. The book aims to contribute to the scientific debate on how to solve information problems, and can be used in graduate and postgraduate courses. It is also helpful to managers.

Hardware Evolution - Automatic Design of Electronic Circuits in Reconfigurable Hardware by Artificial Evolution (Paperback, Softcover reprint of the original 1st ed. 1998)
Adrian Thompson
R2,610 Discovery Miles 26 100 Ships in 18 - 22 working days

Evolution through natural selection has been going on for a very long time. Evolution through artificial selection has been practiced by humans for a large part of our history, in the breeding of plants and livestock. Artificial evolution, where we evolve an artifact through artificial selection, has been around since electronic computers became common: about 30 years. Right from the beginning, people have suggested using artificial evolution to design electronics automatically. Only recently, though, have suitable reconfigurable silicon chips become available that make it easy for artificial evolution to work with a real, physical, electronic medium: before them, experiments had to be done entirely in software simulations. Early research concentrated on the potential applications opened up by the raw speed advantage of dedicated digital hardware over software simulation on a general purpose computer. This book is an attempt to show that there is more to it than that. In fact, a radically new viewpoint is possible, with fascinating consequences. This book was written as a doctoral thesis, submitted in September 1996. As such, it was a rather daring exercise in ruthless brevity. Believing that the contribution I had to make was essentially a simple one, I resisted being drawn into peripheral discussions. In the places where I deliberately drop a subject, this implies neither that it's not interesting, nor that it's not relevant: just that it's not a crucial part of the tale I want to tell here.

Analysis, Architectures and Modelling of Embedded Systems - Third IFIP TC 10 International Embedded Systems Symposium, IESS 2009, Langenargen, Germany, September 14-16, 2009, Proceedings (Paperback, 2009 ed.)
Achim Rettberg, Mauro C. Zanella, Michael Amann, Michael Keckeisen, Franz J. Rammig
R2,662 Discovery Miles 26 620 Ships in 18 - 22 working days

This book presents the technical program of the International Embedded Systems Symposium (IESS) 2009. Timely topics, techniques and trends in embedded system design are covered by the chapters in this volume, including modelling, simulation, verification, test, scheduling, platforms and processors. Particular emphasis is paid to automotive systems and wireless sensor networks. Sets of actual case studies in the area of embedded system design are also included. Over recent years, embedded systems have gained an enormous amount of processing power and functionality and now enter numerous application areas, due to the fact that many of the formerly external components can now be integrated into a single System-on-Chip. This tendency has resulted in a dramatic reduction in the size and cost of embedded systems. As a unique technology, the design of embedded systems is an essential element of many innovations. Embedded systems meet their performance goals, including real-time constraints, through a combination of special-purpose hardware and software components tailored to the system requirements. Both the development of new features and the reuse of existing intellectual property components are essential to keeping up with ever more demanding customer requirements. Furthermore, design complexities are steadily growing with an increasing number of components that have to cooperate properly. Embedded system designers have to cope with multiple goals and constraints simultaneously, including timing, power, reliability, dependability, maintenance, packaging and, last but not least, price.

From Fault Classification to Fault Tolerance for Multi-Agent Systems (Paperback, 2013 ed.)
Katia Potiron, Amal EL Fallah-Seghrouchni, Patrick Taillibert
R1,592 Discovery Miles 15 920 Ships in 18 - 22 working days

Faults are a concern for Multi-Agent Systems (MAS) designers, especially if the MAS are built for industrial or military use, because there must be some guarantee of dependability. Some fault classification exists for classical systems and is used to define faults. When dependability is at stake, such fault classification may be used from the beginning of the system's conception to define fault classes and specify which types of faults are expected. Thus, one may want to use fault classification for MAS; however, From Fault Classification to Fault Tolerance for Multi-Agent Systems argues that working with autonomous and proactive agents implies a special analysis of the faults potentially occurring in the system. Moreover, the field of Fault Tolerance (FT) provides numerous methods adapted to handle different kinds of faults. Some handling methods have been studied within the MAS domain, adapted to its specificities and capabilities, further increasing the already large number of FT methods. Therefore, unless one is an expert in fault tolerance, it is difficult to choose, evaluate or compare fault tolerance methods, which prevents many applications not only from being more pleasant to use but, more importantly, from being at least tolerant to common faults. From Fault Classification to Fault Tolerance for Multi-Agent Systems shows that specification-phase guidelines and fault handler studies can be derived from the fault classification extension made for MAS. From this perspective, fault classification can become a unifying concept among fault tolerance methods in MAS.

VLSI for Neural Networks and Artificial Intelligence (Paperback, Softcover reprint of the original 1st ed. 1994)
Jose G. Delgado-Frias, W. R. Moore
R4,021 Discovery Miles 40 210 Ships in 18 - 22 working days

Neural network and artificial intelligence algorithms and computing have increased not only in complexity but also in the number of applications. This in turn has posed a tremendous need for larger computational power that conventional scalar processors may not be able to deliver efficiently. These processors are oriented towards numeric and data manipulations. Due to the neurocomputing requirements (such as non-programming and learning) and the artificial intelligence requirements (such as symbolic manipulation and knowledge representation), a different set of constraints and demands is imposed on the computer architectures/organizations for these applications. Research and development of new computer architectures and VLSI circuits for neural networks and artificial intelligence have increased in order to meet the new performance requirements. This book presents novel approaches and trends in VLSI implementations of machines for these applications. Papers have been drawn from a number of research communities; the subjects span analog and digital VLSI design, computer design, computer architectures, neurocomputing and artificial intelligence techniques. This book has been organized into four subject areas that cover the two major categories of this book; the areas are: analog circuits for neural networks, digital implementations of neural networks, neural networks on multiprocessor systems and applications, and VLSI machines for artificial intelligence. The topics that are covered in each area are briefly introduced below.

Programming Environments for Massively Parallel Distributed Systems - Working Conference of the IFIP WG 10.3, April 25-29, 1994 (Paperback, Softcover reprint of the original 1st ed. 1994)
Karsten M. Decker, Rene M. Rehmann
R1,447 Discovery Miles 14 470 Ships in 18 - 22 working days

Massively Parallel Systems (MPSs), with their promise of scalable computation and storage space, are becoming increasingly important for high-performance computing. The growing acceptance of MPSs in academia is clearly apparent. However, in industrial companies, their usage remains low. The programming of MPSs is still the big obstacle, and solving this software problem is sometimes referred to as one of the most challenging tasks of the 1990s. The 1994 working conference on "Programming Environments for Massively Parallel Systems" was the latest event of the working group WG 10.3 of the International Federation for Information Processing (IFIP) in this field. It succeeded the 1992 conference in Edinburgh on "Programming Environments for Parallel Computing". The research and development work discussed at the conference addresses the entire spectrum of software problems, including virtual machines which are less cumbersome to program; more convenient programming models; advanced programming languages, and especially more sophisticated programming tools; but also algorithms and applications.

Advances in Computer Graphics Hardware V - Rendering, Ray Tracing and Visualization Systems (Paperback, Softcover reprint of the original 1st ed. 1992)
Richard L Grimsdale, Arie Kaufman
R1,386 Discovery Miles 13 860 Ships in 18 - 22 working days

This volume contains papers representing a comprehensive record of the contributions to the fifth workshop at EG '90 in Lausanne. The Eurographics hardware workshops have now become an established forum for the exchange of information about the latest developments in this field of growing importance. The first workshop took place during EG '86 in Lisbon. All participants considered this to be a very rewarding event to be repeated at future EG conferences. This view was reinforced at the EG '87 Hardware Workshop in Amsterdam and firmly established the need for such a colloquium in this specialist area within the annual EG conference. The third EG Hardware Workshop took place in Nice in 1988 and the fourth in Hamburg at EG '89. The first part of the book is devoted to rendering machines. The papers in this part address techniques for accelerating the rendering of images and efficient ways of improving their quality. The second part on ray tracing describes algorithms and architectures for producing photorealistic images, with emphasis on ways of reducing the time for this computationally intensive task. The third part on visualization systems covers a number of topics, including voxel-based systems, radiosity, animation and special rendering techniques. The contributions show that there is flourishing activity in the development of new algorithmic and architectural ideas and, in particular, in absorbing the impact of VLSI technology. The increasing diversity of applications encourages new solutions, and graphics hardware has become a research area of high activity and importance.

ImageCLEF - Experimental Evaluation in Visual Information Retrieval (Paperback, 2010 ed.)
Henning Muller, Paul Clough, Thomas Deselaers, Barbara Caputo
R2,728 Discovery Miles 27 280 Ships in 18 - 22 working days

The pervasive creation and consumption of content, especially visual content, is ingrained into our modern world. We're constantly consuming visual media content, in printed form and in digital form, in work and in leisure pursuits. Like our caveman forefathers, we use pictures to record things which are of importance to us as memory cues for the future, but nowadays we also use pictures and images to document processes; we use them in engineering, in art, in science, in medicine, in entertainment and we also use images in advertising. Moreover, when images are in digital format, either scanned from an analogue format or more often than not born digital, we can use the power of our computing and networking to exploit images to great effect. Most of the technical problems associated with creating, compressing, storing, transmitting, rendering and protecting image data are already solved. We use accepted standards and have tremendous infrastructure and the only outstanding challenges, apart from managing the scale issues associated with growth, are to do with locating images. That involves analysing them to determine their content, classifying them into related groupings, and searching for images. To overcome these challenges we currently rely on image metadata, the description of the images, either captured automatically at creation time or manually added afterwards.

The Design of a Microprocessor (Paperback, Softcover reprint of the original 1st ed. 1989)
Wilhelm G Spruth
R1,441 Discovery Miles 14 410 Ships in 18 - 22 working days

This text has been produced for the benefit of students in computer and information science and for experts involved in the design of microprocessors. It deals with the design of complex VLSI chips, specifically of microprocessor chip sets. The aim is on the one hand to provide an overview of the state of the art, and on the other hand to describe specific design know-how. The depth of detail presented goes considerably beyond the level of information usually found in computer science text books. The rapidly developing discipline of designing complex VLSI chips, especially microprocessors, requires a significant extension of the state of the art. We are observing the genesis of a new engineering discipline, the design and realization of very complex logical structures, and we are obviously only at the beginning. This discipline is still young and immature, alternate concepts are still evolving, and "the best way to do it" is still being explored. Therefore it is not yet possible to describe the different methods in use and to evaluate them. However, the economic impact is significant today, and the heavy investment that companies in the USA, the Far East, and in Europe, are making in generating VLSI design competence is a testimony to the importance this field is expected to have in the future. Staying competitive requires mastering and extending this competence.

Dynamic Reconfiguration - Architectures and Algorithms (Paperback, Softcover reprint of the original 1st ed. 2004)
Ramachandran Vaidyanathan, Jerry Trahan
R1,474 Discovery Miles 14 740 Ships in 18 - 22 working days

Dynamic Reconfiguration: Architectures and Algorithms offers a comprehensive treatment of dynamically reconfigurable computer architectures and algorithms for them. The coverage is broad starting from fundamental algorithmic techniques, ranging across algorithms for a wide array of problems and applications, to simulations between models. The presentation employs a single reconfigurable model (the reconfigurable mesh) for most algorithms, to enable the reader to distill key ideas without the cumbersome details of a myriad of models. In addition to algorithms, the book discusses topics that provide a better understanding of dynamic reconfiguration such as scalability and computational power, and more recent advances such as optical models, run-time reconfiguration (on FPGA and related platforms), and implementing dynamic reconfiguration. The book, featuring many examples and a large set of exercises, is an excellent textbook or reference for a graduate course. It is also a useful reference to researchers and system developers in the area.

Architecture Design and Validation Methods (Paperback, Softcover reprint of the original 1st ed. 2000)
Egon Boerger
R1,429 Discovery Miles 14 290 Ships in 18 - 22 working days

This state-of-the-art survey gives a systematic presentation of recent advances in the design and validation of computer architectures. The book covers a comprehensive range of architecture design and validation methods, from computer aided high-level design of VLSI circuits and systems to layout and testable design, including the modeling and synthesis of behavior and dataflow, cell-based logic optimization, machine assisted verification, and virtual machine design.

High Performance Discovery In Time Series - Techniques and Case Studies (Paperback, Softcover reprint of the original 1st ed. 2004)
New York University; Edited by Donna Ryan
R2,630 Discovery Miles 26 300 Ships in 18 - 22 working days

Overview and Goals: Data arriving in time order (a data stream) arises in fields ranging from physics to finance to medicine to music, just to name a few. Often the data comes from sensors (in physics and medicine, for example) whose data rates continue to improve dramatically as sensor technology improves. Further, the number of sensors is increasing, so correlating data between sensors becomes ever more critical in order to distill knowledge from the data. On-line response is desirable in many applications (e.g., to aim a telescope at a burst of activity in a galaxy or to perform magnetic resonance-based real-time surgery). These factors - data size, bursts, correlation, and fast response - motivate this book. Our goal is to help you design fast, scalable algorithms for the analysis of single or multiple time series. Not only will you find useful techniques and systems built from simple primitives, but creative readers will find many other applications of these primitives and may see how to create new ones of their own. Our goal, then, is to help research mathematicians and computer scientists find new algorithms and to help working scientists and financial mathematicians design better, faster software.

VHDL for Simulation, Synthesis and Formal Proofs of Hardware (Paperback, Softcover reprint of the original 1st ed. 1992)
Jean Mermet
R5,132 Discovery Miles 51 320 Ships in 18 - 22 working days

The success of VHDL since it was balloted in 1987 as an IEEE standard may look incomprehensible to the large population of hardware designers who had never heard of Hardware Description Languages before (at least 90% of them), as well as to the few hundred specialists who had been working on these languages for a long time (25 years for some of them). Until 1988, only a very small subset of designers, in a few large companies, used to describe their designs using a proprietary HDL, or sometimes an HDL inherited from a university when some software environment happened to be developed around it, allowing usability by third parties. A number of benefits were definitely recognized in this practice, such as functional verification of a specification through simulation, first performance evaluation of a tentative design, and sometimes automatic microprogram generation or even automatic high-level synthesis. As there was apparently no market for HDLs, the ECAD vendors did not care about them, start-up companies were seldom able to survive in this area, and large users of proprietary tools were spending more and more people and money just to maintain their internal systems.

Superconducting Electronics (Paperback, Softcover reprint of the original 1st ed. 1989)
Harold Weinstock, Martin Nisenoff
R2,712 Discovery Miles 27 120 Ships in 18 - 22 working days

The book provides an in-depth understanding of the fundamentals of superconducting electronics and the practical considerations for the fabrication of superconducting electronic structures. Additionally, it covers in detail the opportunities afforded by superconductivity for uniquely sensitive electronic devices and illustrates how these devices (in some cases employing high-temperature, ceramic superconductors) can be applied in analog and digital signal processing, laboratory instruments, biomagnetism, geophysics, nondestructive evaluation and radioastronomy. Improvements in cryocooler technology for application to cryoelectronics are also covered. This is the first book in several years to treat the fundamentals and applications of superconducting electronics in a comprehensive manner, and it is the very first book to consider the implications of high-temperature, ceramic superconductors for superconducting electronic devices. Not only does this new class of superconductors create new opportunities, but recently impressive milestones have been reached in superconducting analog and digital signal processing which promise to lead to a new generation of sensing, processing and computational systems. The 15 chapters are authored by acknowledged leaders in the fundamental science and in the applications of this increasingly active field, and many of the authors provide a timely assessment of the potential for devices and applications based upon ceramic-oxide superconductors or hybrid structures incorporating these new superconductors with other materials. The book takes the reader from a basic discussion of applicable (BCS and Ginzburg-Landau) theories and tunneling phenomena, through the structure and characteristics of Josephson devices and circuits, to applications that utilize the world's most sensitive magnetometer, most sensitive microwave detector, and fastest arithmetic logic unit.

Parallel Language and Compiler Research in Japan (Paperback, Softcover reprint of the original 1st ed. 1995)
Lubomir Bic, Alexandru Nicolau, Mitsuhisa Sato
R5,208 Discovery Miles 52 080 Ships in 18 - 22 working days

Parallel Language and Compiler Research in Japan offers the international community an opportunity to learn in-depth about key Japanese research efforts in the particular software domains of parallel programming and parallelizing compilers. These are important topics that strongly bear on the effectiveness and affordability of high performance computing systems. The chapters of this book convey a comprehensive and current depiction of leading edge research efforts in Japan that focus on parallel software design, development, and optimization that could be obtained only through direct and personal interaction with the researchers themselves.

Cooperative Networking in a Heterogeneous Wireless Medium (Paperback, 2013 ed.)
Muhammad Ismail, Weihua Zhuang
R1,356 Discovery Miles 13 560 Ships in 18 - 22 working days

This brief focuses on radio resource allocation in a heterogeneous wireless medium. It presents radio resource allocation algorithms with decentralized implementation, which support both single-network and multi-homing services. The brief provides a set of cooperative networking algorithms, which rely on the concepts of short-term call traffic load prediction, network cooperation, convex optimization, and decomposition theory. In the proposed solutions, mobile terminals play an active role in the resource allocation operation, instead of their traditional role as passive service recipients in the networking environment.

High-Capacity Local and Metropolitan Area Networks - Architecture and Performance Issues (Paperback, Softcover reprint of the original 1st ed. 1991)
Guy Pujolle
R2,741 Discovery Miles 27 410 Ships in 18 - 22 working days

The main objective of this workshop was to review and discuss the state of the art and the latest advances in the area of 1-10 Gbit/s throughput for local and metropolitan area networks. The first generation of local area networks had throughputs in the range 1-20 Mbit/s. Well-known examples of this first generation networks are the Ethernet and the Token Ring. The second generation of networks allowed throughputs in the range 100-200 Mbit/s. Representatives of this generation are the FDDI double ring and the DQDB (IEEE 802.6) networks. The third generation networks will have throughputs in the range 1-10 Gbit/s. The rapid development and deployment of fiber optics worldwide, as well as the projected emergence of a market for broadband services, have given rise to the development of broadband ISDN standards. Currently, the Asynchronous Transfer Mode (ATM) appears to be a viable solution to broadband networks. The possibility of all-optical networks in the future is being examined. This would allow the tapping of approximately 50 terahertz or so available in the lightwave range of the frequency spectrum. It is envisaged that using such a high-speed network it will be feasible to distribute high-quality video to the home, to carry out rapid retrieval of radiological and other scientific images, and to enable multi-media conferencing between various parties.

Using Subject Headings for Online Retrieval - Theory, Practice and Potential (Hardcover)
Karen Markey Drabenstott, Diane Vizine-Goetz
R5,449 R4,143 Discovery Miles 41 430 Save R1,306 (24%) Ships in 10 - 15 working days

Using Subject Headings for Online Retrieval is an indispensable tool for online system designers who are developing new systems or refining existing ones. The book describes subject analysis and subject searching in online catalogs, including the limitations of retrieval, and demonstrates how such limitations can be overcome through system design and programming. The book describes the Library of Congress Subject Headings system and system characteristics, shows how information is stored in machine-readable files, and offers examples of and recommendations for successful retrieval methods. Tables are included to support these recommendations, and diagrams, graphs, and bar charts are used to provide results of data analysis. Practitioners in institutions using or considering the installation of an online catalog will refer to this book often to generate specifications. Researchers in library systems, information retrieval, and user behavior will appreciate the book's detailing of the results of an extensive, empirical study of the subject terms entered into online systems by end users. Using Subject Headings for Online Retrieval also addresses the needs of advanced students in library schools and instructors in library automation, information retrieval, cataloging, indexing, and user behavior.

Architecture of Computing Systems -- ARCS 2013 - 26th International Conference, Prague, Czech Republic, February 19-22, 2013 Proceedings (Paperback, 2013 ed.)
Hana Kubatova, Christian Hochberger, Martin Danek, Bernhard Sick
R1,427 Discovery Miles 14 270 Ships in 18 - 22 working days

This book constitutes the refereed proceedings of the 26th International Conference on Architecture of Computing Systems, ARCS 2013, held in Prague, Czech Republic, in February 2013. The 29 papers presented were carefully reviewed and selected from 73 submissions. The topics covered are computer architecture topics such as multi-cores, memory systems, and parallel computing, adaptive system architectures such as reconfigurable systems in hardware and software, customization and application specific accelerators in heterogeneous architectures, organic and autonomic computing including both theoretical and practical results on self-organization, self-configuration, self-optimization, self-healing, and self-protection techniques, operating systems including but not limited to scheduling, memory management, power management, RTOS, energy-awareness, and green computing.

Evaluating AAL Systems Through Competitive Benchmarking - Indoor Localization and Tracking - International Competition, EvAAL 2011, Competition in Valencia, Spain, July 25-29, 2011, and Final Workshop in Lecce, Italy, September 26, 2011. Revised Selected Papers (Paperback, 2012 ed.)
Stefano Chessa, Stefan Knauth
R1,361 Discovery Miles 13 610 Ships in 18 - 22 working days

This book constitutes the refereed proceedings of the international competition aimed at the evaluation and assessment of Ambient Assisted Living (AAL) systems and services, EvAAL 2011, which was organized as two major events: the Competition in Valencia, Spain, in July 2011, and the Final Workshop in Lecce, Italy, in September 2011. The papers included in this book describe the organization and technical aspects of the competition, provide a complete technical description of the competing artefacts, and report on the lessons learned by the teams during the competition.

Software Configuration Management Using Vesta (Paperback, 2006)
Clark Allan Heydon, Roy Levin, Timothy P. Mann, Yuan Yu
R2,649 Discovery Miles 26 490 Ships in 18 - 22 working days

Helps in the development of large software projects.

Uses a well-known open-source software prototype system (Vesta, developed at the Digital and Compaq Systems Research Lab).

Data Management in Cloud, Grid and P2P Systems - 5th International Conference, Globe 2012, Vienna, Austria, September 5-6, 2012, Proceedings (Paperback, 2012 ed.)
Abdelkader Hameurlain, Farookh Khadeer Hussain, Franck Morvan, A. Min Tjoa
R1,793 Discovery Miles 17 930 Ships in 18 - 22 working days

This book constitutes the refereed proceedings of the 5th International Conference on Data Management in Cloud, Grid and P2P Systems, Globe 2012, held in Vienna, Austria, in September 2012 in conjunction with DEXA 2012. The 9 revised full papers presented were carefully reviewed and selected from 15 submissions. The papers are organized in topical sections on data management in the cloud, cloud MapReduce and performance evaluation, and data stream systems and distributed data mining.

Implementing Health Care Information Systems (Paperback, Softcover reprint of the original 1st ed. 1989)
Helmuth F Orthner, Bruce I. Blum
R2,697 Discovery Miles 26 970 Ships in 18 - 22 working days

This series in Computers and Medicine had its origins when I met Jerry Stone of Springer-Verlag at a SCAMC meeting in 1982. We determined that there was a need for good collections of papers that would help disseminate the results of research and application in this field. I had already decided to do what is now Information Systems for Patient Care, and Jerry contributed the idea of making it part of a series. In 1984 the first book was published, and, thanks to Jerry's efforts, Computers and Medicine was underway. Since that time, there have been many changes. Sadly, Jerry died at a very early age and cannot share in the success of the series that he helped found. On the bright side, however, many of the early goals of the series have been met. As the result of equipment improvements and the consequent lowering of costs, computers are being used in a growing number of medical applications, and the health care community is very computer literate. Thus, the focus of concern has turned from learning about the technology to understanding how that technology can be exploited in a medical environment.

Selected Topics in Performance Evaluation and Benchmarking - 4th TPC Technology Conference, TPCTC 2012, Istanbul, Turkey, August 27, 2012, Revised Selected Papers (Paperback, 2013 ed.)
Raghunath Nambiar, Meikel Poess
R1,294 Discovery Miles 12 940 Ships in 18 - 22 working days

This book constitutes the refereed proceedings of the 4th TPC Technology Conference, TPCTC 2012, held in Istanbul, Turkey, in August 2012.

It contains 10 selected peer-reviewed papers, 2 invited talks, a report from the TPC Public Relations Committee, and a report from the Workshop on Big Data Benchmarking, WBDB 2012. The papers present novel ideas and methodologies in performance evaluation, measurement, and characterization.
