This volume contains the papers presented at the "Second International Symposium on Foundations of Information and Knowledge Systems" (FoIKS 2002), which was held in Schloss Salzau, Germany from February 20th to 23rd, 2002. FoIKS is a biennial event focusing on theoretical foundations of information and knowledge systems. It aims to bring together researchers working on the theoretical foundations of information and knowledge systems and to attract researchers working in mathematical fields such as discrete mathematics, combinatorics, logics, and finite model theory who are interested in applying their theories to research on database and knowledge base theory. FoIKS took up the tradition of the conference series "Mathematical Fundamentals of Database Systems" (MFDBS), which enabled East-West collaboration in the field of database theory. The first FoIKS symposium was held in Burg, Spreewald (Germany) in 2000. Former MFDBS conferences were held in Dresden (Germany) in 1987, Visegrád (Hungary) in 1989, and in Rostock (Germany) in 1991. Proceedings of these previous events were published by Springer-Verlag as volumes 305, 364, 495, and 1762 of the LNCS series. In addition, the FoIKS symposium is intended to be a forum for intensive discussions. For this reason the time slots for long and short contributions are 60 and 30 minutes, followed by 30 and 15 minutes for discussion, respectively. Furthermore, participants are asked in advance to prepare as correspondents to a contribution by another author. There are also special sessions for the presentation and discussion of open research problems.
This book constitutes the thoroughly refereed joint post-proceedings of five international workshops organized by the Japanese Society for Artificial Intelligence (JSAI) in 2001. The 75 revised papers presented were carefully reviewed and selected for inclusion in the volume. In accordance with the five workshops documented, the book offers topical sections on social intelligence design, agent-based approaches in economic and complex social systems, rough set theory and granular computing, chance discovery, and challenges in knowledge discovery and data mining.
This book constitutes the refereed proceedings of the Second International Conference on Research in Smart Cards, E-smart 2001, held in Cannes, France, in September 2001. The 20 revised full papers presented were carefully reviewed and selected from 38 submissions. Among the topics addressed are biometrics, cryptography and electronic signatures on smart card security, formal methods for smart card evaluation and certification, architectures for multi-applications and secure open platforms, and middleware for smart cards and novel applications of smart cards.
The growing capability of mobile devices (e.g. PDAs, smart cards) enables the efficient use of mobile database technology as part of a comprehensive integration of mobile applications into existing heterogeneous systems. The characteristics of the mobile environment, together with the limited physical resources of mobile devices, place specific demands on the architecture and implementation of mobile database systems. The book provides a well-founded introduction to generic architectures and concrete implementation concepts, as well as an overview of existing technologies and concepts. Central topics are specific and classical replication techniques and their suitability in the mobile environment, mobile transaction models, special implementation concepts for pico database systems, and particularly efficient storage concepts and query execution methods. Realized application architectures and implementation concepts are illustrated using commercially mature database systems.
In today's era of "high availability" of technology and services, risk management and recovery (contingency) planning are an elementary prerequisite for a company's competitiveness and, in some cases, its very survival. The globalization of markets and the interconnection of corporate groups in particular make cross-border concepts imperative, concepts that take national laws into account and in part go beyond them. By examining this highly interesting subject from a wide variety of angles, this book offers all interested readers, whether their focus is practical or theoretical, a wealth of information, be it for designing their own projects or for preparing internal and external audits.
This is a book for PC users who would like to understand how their PCs work. It is written for readers who are not computer or electrical engineers but who want enough information to make intelligent buying or upgrading decisions, maximize their productivity, and become less dependent on others for help with their computer questions and problems. The book provides a thorough yet concise description of the entire IBM-type PC, including its subsystems, components, and peripherals. It concentrates on PCs based on the Pentium and Pentium Pro class processors. The book contains easy-to-do experiments that readers can perform to actually see how things work. Understanding PC Computer Hardware can be read cover to cover or used as a reference source.
The AVR RISC Microcontroller Handbook is a comprehensive guide to
designing with Atmel's new controller family, which is designed to
offer high speed and low power consumption at a lower cost. The
main text is divided into three sections: hardware, which covers
all internal peripherals; software, which covers programming and
the instruction set; and tools, which explains using Atmel's
Assembler and Simulator (available on the Web) as well as IAR's C
compiler.
The Second Edition of The Cache Memory Book introduces systems
designers to the concepts behind cache design. The book teaches the
basic cache concepts and more exotic techniques. It leads readers
through some of the most intricate protocols used in complex
multiprocessor caches. Written in an accessible, informal style,
this text demystifies cache memory design by translating cache
concepts and jargon into practical methodologies and real-life
examples. It also provides adequate detail to serve as a reference
book for ongoing work in cache memory design.
This book constitutes the refereed proceedings of the 7th International Workshop on Field Programmable Logic and Applications, FPL '97, held in London, UK, in September 1997. The 51 revised full papers in the volume were carefully selected from a large number of high-quality papers. The book is divided into sections on devices and architectures, devices and systems, reconfiguration, design tools, custom computing and codesign, signal processing, image and video processing, sensors and graphics, color and robotics, and applications.
Formal methods for hardware design still find limited use in industry. Yet current practice has to change to cope with decreasing design times and increasing quality requirements. This research report presents results from the Esprit project FORMAT (formal methods in hardware verification) which involved the collaboration of the enterprises Siemens, Italtel, Telefonica I+D, TGI, and AHL, the research institute OFFIS, and the universities of Madrid and Passau. The work presented involves advanced specification languages for hardware design that are intuitive to the designer, like timing diagrams and state based languages, as well as their relation to VHDL and formal languages like temporal logic and a process-algebraic calculus. The results of experimental tests of the tools are also presented.
We first began looking at pointing devices and human performance in 1990, when the senior author, Sarah Douglas, was asked to evaluate the human performance of a rather novel device: a finger-controlled isometric joystick placed under a key on the keyboard. Since 1990 we have been involved in the development and evaluation of other isometric joysticks, a foot-controlled mouse, a trackball, and a wearable computer with head-mounted display. We unabashedly believe that design and evaluation of pointing devices should evolve from a broad spectrum of values which place the human being at the center. These values include performance issues such as pointing time and errors, physical issues such as comfort and health, and contextual issues such as task usability and user acceptance. This book chronicles this six-year history of our relationship as teacher (Douglas) and student (Mithal), as we moved from more traditional evaluation using Fitts' law as the paradigm, to understanding the basic research literature on psychomotor behavior. During that process we became profoundly aware that many designers of pointing devices fail to understand the constraints of human performance, and often do not even consider experimental evaluation critical to usability decisions before marketing a device. We also became aware of the fact that, contrary to popular belief in the human-computer interaction community, the problem of predicting pointing device performance has not been solved by Fitts' law. Similarly, our expectations were biased by the cognitive revolution of the past 15 years, with the belief that pointing device research was 'low-level' and uninteresting.
This book constitutes the refereed proceedings of the First
International Conference on Formal Methods in Computer-Aided
Design, FMCAD '96, held in Palo Alto, California, USA, in November
1996.
This book constitutes the refereed proceedings of the Second
International Workshop on Memory Management, IWMM '95, held in
Kinross, Scotland, in September 1995. It contains 17 full revised
papers on all current aspects of memory management; among the
topics addressed are garbage collection for parallel, real-time,
and distributed memory systems, memory management of distributed
and of persistent objects, programming language aspects,
hardware-assisted garbage collection, and open-network garbage
collection.
Of related interest …

Digital Telephony
John Bellamy
"As a departure from conventional treatment of communication theory, the book stresses how systems operate and the rationale behind their design, rather than presenting rigorous analytical formulations." —Telecommunications Journal
Both a reference for telecommunication engineers and a text for graduate level engineering and computer science students, this book provides an introduction to all aspects of digital communication, with emphasis on voice digitization, digital transmission, digital switching, network synchronization, network control, and network analysis. Its aim is to present system level design considerations, and then relate the specific equipment to telephone networks around the world, particularly North America. 526 pp. (0 471-08089-6) 1982

A Reference Manual for Telecommunications Engineering
Roger L. Freeman
Here's a comprehensive reference for those who design, build, purchase, use, or maintain telecommunications systems, offering the only system design database devoted exclusively to the field. It pulls together a vast amount of information from such diverse sources as CCITT/CCIR, EIA, US Military Standards and Handbooks, NBS, BTL/ATT, REA, and periodicals and monographs published by over twenty principal manufacturers. Covers telephone traffic, transmission factors in telephony, outside plant-metallic pair systems, noise and modulation, radio-frequency data and regulatory information, facsimile transmission, and more. 1504 pp. (0 471-86753-5) 1985
Discover a fun new hobby with helpful possibilities
This is the first book entirely dedicated to the problem of memory management in programming language implementation. Its originality stems from the diversity of languages and approaches presented: functional programming, logic programming, object oriented programming, and parallel and sequential programming. The book contains 29 selected and refereed papers including 3 survey papers, 4 on distributed systems, 4 on parallelism, 4 on functional languages, 3 on logic programming languages, 3 on object oriented languages, 3 on incremental garbage collection, 2 on improving locality, 2 on massively parallel architectures, and an invited paper on the thermodynamics of garbage collection. The book provides a snapshot of the latest research in the domain of memory management for high-level programming language implementations.
Verilog HDL is the standard hardware description language for the design of digital systems and VLSI devices. This volume shows designers how to describe pieces of hardware functionally in Verilog using a top-down design approach, which is illustrated with a number of large design examples. The work is organized to present material in a progressive manner, beginning with an introduction to Verilog HDL and ending with a complete example of the modelling and testing of a large subsystem.
These proceedings contain the papers presented at a workshop on Designing Correct Circuits, jointly organised by the Universities of Oxford and Glasgow, and held in Oxford on 26-28 September 1990. There is a growing interest in the application to hardware design of the techniques of software engineering. As the complexity of hardware systems grows, and as the cost both in money and time of making design errors becomes more apparent, so there is an eagerness to build on the success of mathematical techniques in program development. The harsher constraints on hardware designers mean both that there is a greater need for good abstractions and rigorous assurances of the trustworthiness of designs, and also that there is greater reason to expect that these benefits can be realised. The papers presented at this workshop consider the application of mathematics to hardware design at several different levels of abstraction. At the lowest level of this spectrum, Zhou and Hoare show how to describe and reason about synchronous switching circuits using UNITY, a formalism that was developed for reasoning about parallel programs. Aagaard and Leeser use standard mathematical techniques to prove correct their implementation of an algorithm for Boolean simplification. The circuits generated by their formal synthesis system are thus correct by construction. Thuau and Pilaud show how the declarative language LUSTRE, which was designed for programming real-time systems, can be used to specify synchronous circuits.
This book contains the contributions to the 4th GI/ITG/GMA conference on fault-tolerant computing systems, held in September 1989 and following the series of conferences in Munich 1982, Bonn 1984, and Bremerhaven 1987. The 31 contributions, 4 of them invited, are written partly in German but predominantly in English. Taken together, they document the development of the design and implementation of fault-tolerant systems over the preceding two years, above all in Europe. All contributions report new research or development results.
The book Auf dem Weg zur Integration Factory summarizes the state of the art and the future perspectives in the field of integrated information logistics. On the one hand, the authors examine the extent to which existing approaches to data warehousing and enterprise application integration have proven, in the medium term, to be technically, organizationally, and economically suitable solutions. The core topics here are architectures, process models, business intelligence, online analysis, customer relationship management, supply chain management, and application integration. On the other hand, newer approaches are presented which aim at integrating the data warehouse into the overall information logistics and thereby enable the realization of new application types and business models. In addition, particular attention is paid to improving process support through a higher degree of application integration. The book is aimed primarily at practitioners in business administration and business informatics.
In the last few years, a large number of books on microprocessors have appeared on the market. Most of them originated in the context of the 4-bit and the 8-bit microprocessors and their comparatively simple structure. However, the technological development from 8-bit to 16-bit microprocessors led to processor components with a substantially more complex structure and with an expanded functionality, and also to an increase in the system architecture's complexity. This book takes this advancement into account. It examines 16-bit microprocessor systems and describes their structure, their behavior, and their programming. The principles of computer organization are treated at the component level. This is done by means of a detailed examination of the characteristic functionality of microprocessors. Furthermore, the interactions between hardware and software that are typical of microprocessor technology are introduced. Interfacing techniques are one of the focal points of these considerations. This publication is organized as a textbook and is intended as a self-teaching course on 16-bit microprocessors for students of computer science and communications, design engineers, and users in a wide variety of technical and scientific fields. Basic knowledge of Boolean algebra is assumed. The choice of material is based on the 16-bit microprocessors that are currently available on the market; on the other hand, the presentation is not bound to any one of these microprocessors.
The Illiac IV was the first large-scale array computer. As the forerunner of today's advanced computers, it brought whole classes of scientific computations into the realm of practicality. Conceived initially as a grand experiment in computer science, the revolutionary architecture incorporated both a high level of parallelism and pipelining. After a difficult gestation, the Illiac IV became operational in November 1975. It has for a decade been a substantial driving force behind the development of computer technology. Today the Illiac IV continues to service large-scale scientific application areas including computational fluid dynamics, seismic stress wave propagation modeling, climate simulation, digital image processing, astrophysics, numerical analysis, spectroscopy, and other diverse areas. This volume brings together previously published material, adapted in an effort to provide the reader with a perspective on the strengths and weaknesses of the Illiac IV and the impact this unique computational resource has had on the development of technology. The history and current status of the Illiac system, the design and architecture of the hardware, the programming languages, and a considerable sampling of applications are all covered at some length. A final section is devoted to commentary.
The popularity of the First Edition of this book has been very gratifying. It confirms that there is a genuine need for a text covering the magnetic bubble technology. We are pleased that the readers have found that this book satisfies that need. It has been used as a text for courses in both universities and industry, and as a reference manual by workers active in the field. To meet the need for more copies of the book it seemed preferable to publish a second edition rather than merely a second printing. There has been some significant progress, even in the short time since the initial printing, and we wanted to include that. At the same time we would like to provide the new copies at the lowest possible cost so that they are more easily obtained by students. For this reason the new edition is in soft cover and the recent progress has been described in a final chapter rather than incorporated into the original chapters. This eliminates the expense of resetting and repaging the original text. At the same time up-to-date references have been added and typographical errors have been corrected in the original chapters. It is our hope that this edition will be useful to those with an interest in the fascinating field of magnetic bubbles.
The book is intended to help mainframe users understand the philosophy of databases and formulate database queries themselves. For this purpose, IBM's Query Management Facility (QMF) provides a very skillfully composed collection of tools. The book is aimed at end users and shows how QMF can be used to solve business problems without having to involve the development department. The accompanying CD-ROM contains the tables underlying all the examples as ASCII files, making it possible to work through the case studies in practice on a computer.
You may like...
Magnetic Information Storage Technology… by Shan X. Wang, Alex M. Taratorin (Hardcover, R3,549)
Perpendicular Magnetic Recording by Sakhrat Khizroev, Dmitri Litvinov (Hardcover, R2,892)
Hardware Based Packet Classification for… by Chad R. Meiners, Alex X. Liu, … (Hardcover, R2,862)