This book introduces a novel design methodology which can significantly reduce the ASIP development effort through high degrees of design automation. The key elements of this new design methodology are a powerful application profiler and an automated instruction-set customization tool which considerably lighten the burden of mapping a target application to an ASIP architecture in the initial design stages. The book includes several design case studies with real life embedded applications to demonstrate how the methodology and the tools can be used in practice for accelerating the overall ASIP design process.
This volume comprises a collection of twenty written versions of invited as well as contributed papers presented at the conference held from 20-24 May 1996 in Beijing, China. It covers many areas of logic and the foundations of mathematics, as well as computer science. Also included is an article by M. Yasugi on the Asian Logic Conference which first appeared in Japanese, to provide a glimpse into the history and development of the series.
Quantum Communication, Quantum Networks, and Quantum Sensing is a self-contained introduction to quantum communication, quantum error correction, quantum networks, and quantum sensing. It starts with basic concepts from classical detection theory, information theory, and channel coding before continuing with the basic principles of quantum mechanics, including state vectors, operators, density operators, measurements, and the dynamics of a quantum system. It continues with the fundamental principles of quantum information processing, basic quantum gates, the no-cloning theorem, and the theorem on the indistinguishability of arbitrary quantum states. The book then focuses on quantum information theory, quantum detection and Gaussian quantum information theory, and quantum key distribution (QKD). It goes on to cover quantum error correction codes (QECCs) before introducing quantum networks, and concludes with quantum sensing and quantum radar, quantum machine learning, and fault-tolerant quantum error correction concepts.
Recent developments in computer science clearly show the need for a
better theoretical foundation for some central issues. Methods and
results from mathematical logic, in particular proof theory and
model theory, are of great help here and will be used much more in
future than previously. This book provides an excellent
introduction to the interplay of mathematical logic and computer
science. It contains extensively reworked versions of the lectures
given at the 1997 Marktoberdorf Summer School by leading
researchers in the field.
Systems Performance, Second Edition, covers concepts, strategy, tools, and tuning for operating systems and applications, using Linux-based operating systems as the primary example. A deep understanding of these tools and techniques is critical for developers today. Implementing the strategies described in this thoroughly revised and updated edition can lead to a better end-user experience and lower costs, especially for cloud computing environments that charge by the OS instance. Systems performance expert and best-selling author Brendan Gregg summarizes relevant operating system, hardware, and application theory to quickly get professionals up to speed even if they have never analyzed performance before. Gregg then provides in-depth explanations of the latest tools and techniques, including extended BPF, and shows how to get the most out of cloud, web, and large-scale enterprise systems. Key topics covered include:
- Hardware, kernel, and application internals, and how they perform
- Methodologies for rapid performance analysis of complex systems
- Optimizing CPU, memory, file system, disk, and networking usage
- Sophisticated profiling and tracing with perf, Ftrace, and BPF (BCC and bpftrace)
- Performance challenges associated with cloud computing hypervisors
- Benchmarking more effectively
Featuring up-to-date coverage of Linux operating systems and environments, Systems Performance, Second Edition, also addresses issues that apply to any computer system. The book will be a go-to reference for many years to come and, like the first edition, required reading at leading tech companies. Register your book for convenient access to downloads, updates, and/or corrections as they become available. See inside the book for details.
At the beginning of the 1990s, research started into how to combine soft computing with reconfigurable hardware in a quite unique way. One of the methods that was developed has been called evolvable hardware. Thanks to evolutionary algorithms, researchers have started to evolve electronic circuits routinely. A number of interesting circuits - with features unreachable by means of conventional techniques - have been developed. Evolvable hardware is quite popular right now; more than fifty research groups are spread out over the world. Evolvable hardware has become a part of the curriculum at some universities. Evolvable hardware is being commercialized and there are specialized conferences devoted to evolvable hardware. On the other hand, surprisingly, we can feel the lack of a theoretical background and consistent design methodology in the area. Furthermore, it is quite difficult to implement really innovative and practically successful evolvable systems using contemporary digital reconfigurable technology.
The main links between your PC and the outside world are the Centronics
port, used for connecting the printer, the RS232 port, used for the
mouse, and the games port, used for a joystick. This book explores how
these input/output (I/O) ports can be put to use through a range of
other interfacing applications. This is especially useful for
laptop and palmtop PCs which cannot be fitted with internal I/O
cards. A novel approach is taken by this book, combining the
hardware through which the ports can be explored, and the software
programming needed to carry out a range of experiments.
Distributed applications are a necessity in most central application sectors of the contemporary information society, including e-commerce, e-banking, e-learning, e-health, telecommunication and transportation. This results from the tremendous growth of the role that the Internet plays in business, administration and our everyday activities. This trend is going to expand even further in the context of advances in broadband wireless communication. New Developments in Distributed Applications and Interoperable Systems focuses on the techniques available or under development with the goal of easing the burden of constructing reliable and maintainable interoperable information systems that provide services in the global communicating environment. The topics covered in this book include: Context-aware applications; Integration and interoperability of distributed systems; Software architectures and services for open distributed systems; Management, security and quality of service issues in distributed systems; Software agents and mobility; Internet and other related problem areas. The book contains the proceedings of the Third International Working Conference on Distributed Applications and Interoperable Systems (DAIS'2001), which was held in September 2001 in Krakow, Poland, and sponsored by the International Federation for Information Processing (IFIP). The conference program presents the state of the art in research concerning distributed and interoperable systems. This is a topical research area where much activity is currently in progress. Interesting new aspects and innovative contributions are still arising regularly. The DAIS series of conferences is one of the main international forums where these important findings are reported.
This book describes how engineers can make optimum use of the two industry standard analysis/design tools, SystemC and SystemC-AMS. The authors use a system-level design approach, emphasizing how SystemC and SystemC-AMS features can be exploited most effectively to analyze/understand a given electronic system and explore the design space. The approach taken by this book enables system engineers to concentrate on only those SystemC/SystemC-AMS features that apply to their particular problem, leading to more efficient design. The presentation includes numerous, realistic and complete examples, which are graded in levels of difficulty to illustrate how a variety of systems can be analyzed with these tools.
A Flash memory is a Non-Volatile Memory (NVM) whose "unit cells" are fabricated in CMOS technology and programmed and erased electrically. In 1971, Frohman-Bentchkowsky developed a floating polysilicon gate transistor [1, 2], in which hot electrons were injected into the floating gate and removed by either Ultra-Violet (UV) internal photoemission or by Fowler-Nordheim tunneling. This is the "unit cell" of the EPROM (Electrically Programmable Read Only Memory), which, consisting of a single transistor, can be very densely integrated. EPROM memories are electrically programmed and erased by UV exposure for 20-30 minutes. In the late 1970s, there were many efforts to develop an electrically erasable EPROM, which resulted in EEPROMs (Electrically Erasable Programmable ROMs). EEPROMs use hot electron tunneling for program and Fowler-Nordheim tunneling for erase. The EEPROM cell consists of two transistors and a tunnel oxide, so it is two or three times the size of an EPROM cell. Subsequently, the combination of hot carrier programming and tunnel erase was rediscovered to achieve a single-transistor EEPROM, called Flash EEPROM. The first cell based on this concept was presented in 1979 [3]; the first commercial product, a 256K memory chip, was presented by Toshiba in 1984 [4]. The market did not take off until this technology was proven to be reliable and manufacturable [5].
This book brings together concepts and approaches from the fields of photogrammetry and computer vision. In particular, it examines techniques relating to quantitative image analysis, such as orientation, camera modelling, system calibration, self-calibration and error handling. The chapters have been contributed by experts in the relevant fields, and there are examples from automated inspection systems and other real-world cases. The book provides study material for students, researchers, developers and practitioners.
This book gives the reader a comprehensive overview of the state of the art in system-level memory management (data transfer and storage) issues for complex data-dominated real-time signal and data processing applications. The authors introduce their own system-level data transfer and storage exploration methodology for data-dominated video applications. This methodology tackles the power and area reduction cost components in the architecture for this target domain, namely the system-level busses and the background memories. For the most critical tasks in the methodology, prototype tools have been developed to reduce the design time. The approach is also very heavily application-driven, which is illustrated by several realistic demonstrators, partly used as red-thread examples in the book. Its quite general applicability and effectiveness have been substantiated for several industrial data-dominated applications, including H.263 video conferencing decoding and medical computed tomography (CT) back projection. To the researcher, the book will serve as an excellent reference source, both for the overall description of the methodology and for the detailed descriptions of the system-level methodologies and synthesis techniques and algorithms. To design engineers and CAD managers, it offers an invaluable insight into the anticipated evolution of commercially available design tools, as well as allowing them to utilize the book's concepts in their own research and development.
The kernel of any operating system is its most critical component, as the rest of the system depends on it. This book shows how the formal specification of kernels can be followed by a completely formal refinement process that leads to the extraction of executable code. This formal refinement process ensures that the code precisely meets the specification. The author documents the complete process, including proofs.
This book investigates the design of compilers for procedural languages, based on the algebraic laws which these languages satisfy. The particular strategy adopted is to reduce an arbitrary source program to a general normal form, capable of representing an arbitrary target machine. This is achieved by a series of normal form reduction theorems which are proved algebraically from the more basic laws. The normal form and the related reduction theorems can then be instantiated to design compilers for distinct target machines. This constitutes the main novelty of the author's approach to compilation, together with the fact that the entire process is formalised within a single and uniform semantic framework of a procedural language and its algebraic laws. Furthermore, by mechanising the approach using the OBJ3 term rewriting system, it is shown that a prototype compiler can be developed as a byproduct of its own proof of correctness.
The new organizational paradigms of global cooperation and collaboration require new ways and means for their support. Information and Communication Technology (ICT) can and will play a significant role in this support. However, the many currently available and seemingly conflicting solutions, the confusing terminology, the lack of business justification, and last but not least the insufficient understanding of the technology by the end-user community have significantly hampered the large-scale application of the relevant ICT support and thereby the acceptance of the new paradigms. Many of these issues have been addressed in the workshops of the international initiative on Enterprise Inter- and Intra-Organizational Integration, which has been supported by the European IST Programme and NIST. The main subjects of the initiative were: relations between knowledge management and business process modeling, interoperability of business processes and process models, enterprise engineering and integration, and representation of process models. Ontologies and agent technologies - the latter with their relations to ontologies and models - have been further subjects of discussion in several workshops. Results of the initiative are reported in this volume, which comprises the proceedings of the International Conference on Enterprise Integration and Modeling Technology (ICEIMT'02). The conference was sponsored by the International Federation for Information Processing (IFIP) and held in Valencia, Spain, in April 2002. Enterprise Inter- and Intra-Organizational Integration: Building International Consensus provides not only a wealth of information on the state of the art of the subjects of the initiative, it also identifies opportunities for research and development. Potential projects are identified in the work group reports, and some of those will be taken up by organizations involved.
The terms groupware and CSCW (Computer-Supported Cooperative Work) have received significant attention in computer science and related disciplines for quite some time now. This book has two main objectives: first, to outline the meaning of both terms, and second, to point out both the numerous opportunities for users of CSCW systems and the risks of applying them. The book introduces in detail an interdisciplinary application area of distributed systems, namely the computer support of individuals trying to solve a problem in cooperation with each other but not necessarily having identical work places or working times. CSCW can be viewed as a synergism between the areas of distributed systems and (multimedia) communications on the one hand and those of information science and socio-organizational theory on the other hand. Thus, the book is addressed to students of all these disciplines, as well as to users and developers of systems with group communication and cooperation as top priorities.
This book looks at the future of advertising from the perspective of pervasive computing. Pervasive computing encompasses the integration of computers into everyday devices, such as the covering of surfaces with interactive displays and networked mobile phones. Advertising is the communication of sponsored messages to inform, convince, and persuade people to buy. We believe that our future cities will be digital, giving us instant access to any information we need everywhere, such as at bus stops, on the sidewalk, inside the subway and in shopping malls. We will be able to play with and change the appearance of our cities effortlessly, like making flowers grow along a building wall or changing the colour of the street we are in. Like the internet as we know it, this digitalization will be paid for by adverts, which unobtrusively provide us with suggestions for nearby restaurants or cafes. If any content annoys us, we will be able to say so and change it effortlessly with simple gestures, and content providers and advertisers will know what we like and be able to act accordingly. This book presents the technological foundations to make this vision a reality.
This collection of papers is the result of a workshop sponsored by NATO's Defense Research Group Panel 8 during the Fall of 1993. The workshop was held at the University of the German Armed Forces at Neubiberg (Munich), Germany, 29 September-1 October 1993. Robert J. Seidel, U.S. Army Research Institute for the Behavioral and Social Sciences, Washington, D.C.; Paul R. Chatelier, Executive Office of the President, Office of Science and Technology Policy, Washington, D.C. We would like to thank the authors of the papers for providing excellent coverage of this rapidly developing technology, the session chairpersons for providing excellent structure and management for each group of papers, and each session's discussants for their summaries and personal views of their session's papers. Our special thanks go to Dr. Rolfe Otte, the German Ministry of Defense's research study group member and the person responsible for our being able to hold this workshop in Munich. We are also grateful to Dr. H. Closhen of the IABG for technical and administrative assistance throughout the planning and conduct of the workshop.
Since its establishment in 1998, Microsoft Research Asia's trademark and long-term commitment has been to foster innovative research and advanced education in the Asia-Pacific region. Through open collaboration and partnership with universities, government and other academic partners, MSRA has been consistently advancing the state of the art in computer science. This book was compiled to record these outstanding collaborations, as Microsoft Research Asia celebrates its 10th anniversary. The selected papers are all authored or co-authored by faculty members or students through collaboration with MSRA lab researchers, or with the financial support of MSRA. Papers previously published in top-tier international conference proceedings and journals are compiled here into one accessible volume of outstanding research. Innovation Together highlights the outstanding work of Microsoft Research Asia as it celebrates ten years of achievement and looks forward to the next decade of success.
This book provides a comprehensive overview of the
state-of-the-art, data flow-based techniques for the analysis,
modeling and mapping of concurrent applications onto
multi-processors. The authors present a flow for designing embedded
hard/firm real-time multiprocessor streaming applications, based on
data flow formalisms, with a particular focus on wireless modem
applications. Architectures are described for the design tools and
run-time scheduling and resource management of such a platform.
Lo, soul! seest thou not God's purpose from the first? The earth to be spann'd, connected by net-work. (Walt Whitman, "Passage to India", Leaves of Grass, 1900.) The Internet is growing at a tremendous rate today. New services, such as telephony and multimedia, are being added to the pure data-delivery framework of yesterday. Such high demands on capacity could lead to a "bandwidth crunch" at the core wide-area network, resulting in degradation of service quality. Fortunately, technological innovations have emerged which can provide relief to the end-user to overcome the Internet's well-known delay and bandwidth limitations. At the physical layer, a major overhaul of existing networks has been envisaged, from electronic media (such as twisted-pair and cable) to optical fibers - in the wide area, in the metropolitan area, and even in local area settings. In order to exploit the immense bandwidth potential of the optical fiber, interesting multiplexing techniques have been developed over the years. Wavelength division multiplexing (WDM) is one such promising technique, in which multiple channels are operated along a single fiber simultaneously, each on a different wavelength. These channels can be independently modulated to accommodate dissimilar bit rates and data formats, if so desired. Thus, WDM carves up the huge bandwidth of an optical fiber into channels whose bandwidths (1-10 Gbps) are compatible with peak electronic processing speed.
I love virtual machines (VMs) and I have done for a long time. If that makes me "sad" or an "anorak," so be it. I love them because they are so much fun, as well as being so useful. They have an element of original sin (writing assembly programs and being in control of an entire machine), while still being able to claim that one is being a respectable member of the community (being structured, modular, high-level, object-oriented, and so on). They also allow one to design machines of one's own, unencumbered by the restrictions of a particular physical processor (at least, until one starts optimising it for some processor or other). I have been building virtual machines, on and off, since 1980 or thereabouts. It has always been something of a hobby for me; it has also turned out to be a technique of great power and applicability. I hope to continue working on them, perhaps on some of the ideas outlined in the last chapter (I certainly want to do some more work with register-based VMs and concurrency). I originally wanted to write the book from a purely semantic viewpoint.
This book presents the state-of-the-art in simulation on supercomputers. Leading researchers present results achieved on systems of the High Performance Computing Center Stuttgart (HLRS) for the year 2006. The reports cover all fields of computational science and engineering ranging from CFD via computational physics and chemistry to computer science with a special emphasis on industrially relevant applications. The book comes with illustrations and tables.
This volume contains 27 contributions to the Second Russian-German Advanced Research Workshop on Computational Science and High Performance Computing, presented in March 2005 at Stuttgart, Germany. Contributions range from computer science, mathematics and high performance computing to applications in mechanical and aerospace engineering.