The kernel of any operating system is its most critical component, as the rest of the system depends on it. This book shows how the formal specification of kernels can be followed by a completely formal refinement process that leads to the extraction of executable code. This formal refinement process ensures that the code precisely meets the specification. The author documents the complete process, including proofs.
This book describes a comprehensive approach to the synthesis and optimization of logic-in-memory computing hardware and architectures using memristive devices, creating a firm foundation for practical applications. Readers will become familiar with a new generation of computer architectures that can potentially perform faster, since computation takes place where the data resides and the communication bottleneck between processor and memory is avoided. The discussion includes various synthesis methodologies and optimization algorithms targeting implementation cost metrics, including latency and area overhead, as well as the reliability issues caused by limited memory lifetime. Presents a comprehensive synthesis flow for the emerging field of logic-in-memory computing; describes automated compilation of programmable logic-in-memory computer architectures; includes several effective optimization algorithms that are also applicable to classical logic synthesis; investigates unbalanced write traffic in logic-in-memory architectures and describes wear leveling approaches to alleviate it.
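As a hedged illustration of the wear leveling idea mentioned above (a generic table-based sketch, not one of the book's algorithms; the row count, rotation period and all names are assumptions):

```c
#include <stdio.h>

#define ROWS 8

static int remap[ROWS];   /* logical row -> physical row */
static long wear[ROWS];   /* writes absorbed by each physical row */

/* Rotate the whole mapping by one position so a hot logical row
 * is steered to a different physical row. */
static void rotate_mapping(void) {
    int first = remap[0];
    for (int i = 0; i < ROWS - 1; i++)
        remap[i] = remap[i + 1];
    remap[ROWS - 1] = first;
}

static void write_row(int logical) {
    int physical = remap[logical];
    wear[physical]++;               /* model one write to that row */
    if (wear[physical] % 100 == 0)  /* re-balance every 100 writes */
        rotate_mapping();
}

int main(void) {
    for (int i = 0; i < ROWS; i++)
        remap[i] = i;
    for (long n = 0; n < 100000; n++)
        write_row(0);               /* worst case: a single hot row */
    for (int i = 0; i < ROWS; i++)
        printf("physical row %d absorbed %ld writes\n", i, wear[i]);
    return 0;
}
```

Even with every write aimed at one logical row, the periodic rotation spreads the wear almost evenly across the eight physical rows.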
This textbook provides a first introduction to mathematical logic that is closely attuned to the applications of logic in computer science. In it the authors emphasize the notion that deduction is a form of computation. While all the traditional subjects of logic are covered thoroughly (syntax, semantics, completeness, and compactness), much of the book deals with less traditional topics such as resolution theorem proving, logic programming and non-classical logics (modal and intuitionistic) which are becoming increasingly important in computer science. No previous exposure to logic is assumed, so the book is suitable for upper-level undergraduates or beginning graduate students in computer science or mathematics. From reviews of the first edition: "... must surely rank as one of the most fruitful textbooks introduced into computer science ... We strongly suggest it as a textbook ..." SIGACT News
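To give the flavour of the resolution theorem proving mentioned above (a standard textbook example, not one taken from this book): from the clauses (P ∨ Q) and (¬Q ∨ R), a single resolution step on Q yields (P ∨ R); a refutation proof simply repeats this step on the negated goal until the empty clause is derived.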
Distributed applications are a necessity in most central application sectors of the contemporary information society, including e-commerce, e-banking, e-learning, e-health, telecommunication and transportation. This results from a tremendous growth of the role that the Internet plays in business, administration and our everyday activities, a trend that will expand even further with advances in broadband wireless communication. New Developments in Distributed Applications and Interoperable Systems focuses on techniques, available or under development, that ease the burden of constructing reliable and maintainable interoperable information systems providing services in the global communicating environment. The topics covered in this book include: context-aware applications; integration and interoperability of distributed systems; software architectures and services for open distributed systems; management, security and quality of service issues in distributed systems; software agents and mobility; the Internet and other related problem areas. The book contains the proceedings of the Third International Working Conference on Distributed Applications and Interoperable Systems (DAIS'2001), which was held in September 2001 in Krakow, Poland, and sponsored by the International Federation for Information Processing (IFIP). The conference program presents the state of the art in research on distributed and interoperable systems, a topical research area where much activity is currently in progress and where interesting new aspects and innovative contributions are still arising regularly. The DAIS series of conferences is one of the main international forums where these important findings are reported.
This book describes how engineers can make optimum use of the two industry standard analysis/design tools, SystemC and SystemC-AMS. The authors use a system-level design approach, emphasizing how SystemC and SystemC-AMS features can be exploited most effectively to analyze/understand a given electronic system and explore the design space. The approach taken by this book enables system engineers to concentrate on only those SystemC/SystemC-AMS features that apply to their particular problem, leading to more efficient design. The presentation includes numerous, realistic and complete examples, which are graded in levels of difficulty to illustrate how a variety of systems can be analyzed with these tools.
A Flash memory is a Non-Volatile Memory (NVM) whose "unit cells" are fabricated in CMOS technology and programmed and erased electrically. In 1971, Frohman-Bentchkowsky developed a floating polysilicon gate transistor [1, 2], in which hot electrons were injected into the floating gate and removed by either Ultra-Violet (UV) internal photoemission or by Fowler-Nordheim tunneling. This is the "unit cell" of EPROM (Electrically Programmable Read Only Memory), which, consisting of a single transistor, can be very densely integrated. EPROM memories are electrically programmed and erased by UV exposure for 20-30 minutes. In the late 1970s, there were many efforts to develop an electrically erasable EPROM, which resulted in EEPROMs (Electrically Erasable Programmable ROMs). EEPROMs use hot electron tunneling for program and Fowler-Nordheim tunneling for erase. The EEPROM cell consists of two transistors and a tunnel oxide, and is thus two or three times the size of an EPROM cell. Subsequently, the combination of hot-carrier programming and tunnel erase was rediscovered to achieve a single-transistor EEPROM, called Flash EEPROM. The first cell based on this concept was presented in 1979 [3]; the first commercial product, a 256K memory chip, was presented by Toshiba in 1984 [4]. The market did not take off until this technology was proven to be reliable and manufacturable [5].
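As a hedged aside (standard device physics, not quoted from this book): the Fowler-Nordheim tunneling current density through the oxide follows approximately J ≈ A·E²·exp(−B/E), where E is the electric field across the oxide and A, B are constants set by the barrier height and effective mass. This extremely steep field dependence is what allows a floating gate to be charged or discharged quickly at high fields yet retain its charge for years at low fields.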
This book explains in layman's terms how CMOS transistors work. The author explains step-by-step how CMOS transistors are built, along with an explanation of the purpose of each process step. He describes for readers the key inventions and developments in science and engineering that overcame huge obstacles, enabling engineers to shrink transistor area by over 1 million fold and build billions of transistor switches that switch over a billion times a second, all on a piece of silicon smaller than a thumbnail.
Covering all the essential components of Unix/Linux, including process management, concurrent programming, timer and time service, file systems and network programming, this textbook emphasizes programming practice in the Unix/Linux environment. Systems Programming in Unix/Linux is intended as a textbook for systems programming courses in technically-oriented Computer Science/Engineering curricula that emphasize both theory and programming practice. The book contains many detailed working example programs with complete source code. It is also suitable for self-study by advanced programmers and computer enthusiasts. Systems programming is an indispensable part of Computer Science/Engineering education. Building on an introductory programming course, the book furthers that knowledge by detailing how dynamic data structures are used in practice, with programming exercises and programming projects on such topics as C structures, pointers, linked lists and trees. It provides a wide range of knowledge about computer system software and advanced programming skills, allowing readers to interface with the operating system kernel, make efficient use of system resources and develop application software. It also prepares readers with the needed background to pursue advanced studies in Computer Science/Engineering, such as operating systems, embedded systems, database systems, data mining, artificial intelligence, computer networks, network security, distributed and parallel computing.
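In the spirit of the C structures, pointers and linked lists mentioned above, here is a minimal, self-contained sketch (the names and structure are illustrative, not taken from the book's source code):

```c
#include <stdio.h>
#include <stdlib.h>

struct node {
    int value;
    struct node *next;
};

/* Push a new node onto the front of the list. */
static struct node *push(struct node *head, int value) {
    struct node *n = malloc(sizeof *n);
    if (n == NULL) {
        perror("malloc");
        exit(EXIT_FAILURE);
    }
    n->value = value;
    n->next = head;
    return n;
}

int main(void) {
    struct node *head = NULL;
    for (int i = 1; i <= 5; i++)
        head = push(head, i);
    for (struct node *p = head; p != NULL; p = p->next)
        printf("%d ", p->value);    /* prints: 5 4 3 2 1 */
    putchar('\n');
    while (head != NULL) {          /* release the list */
        struct node *next = head->next;
        free(head);
        head = next;
    }
    return 0;
}
```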
This book grants the reader a comprehensive overview of the state-of-the-art in system-level memory management (data transfer and storage) issues for complex, data-dominated real-time signal and data processing applications. The authors introduce their own system-level data transfer and storage exploration methodology for data-dominated video applications. This methodology tackles the power and area cost components of the architecture for this target domain, namely the system-level busses and the background memories. For the most critical tasks in the methodology, prototype tools have been developed to reduce design time. The approach is also heavily application-driven, as illustrated by several realistic demonstrators, some of which are used as running examples throughout the book. Its quite general applicability and effectiveness have been substantiated for several industrial data-dominated applications, including H.263 video conferencing decoding and medical computer tomography (CT) back projection. To the researcher, the book will serve as an excellent reference source, both for the overall description of the methodology and for the detailed descriptions of the system-level methodologies, synthesis techniques and algorithms. To design engineers and CAD managers, it offers invaluable insight into the anticipated evolution of commercially available design tools, and allows them to apply the book's concepts in their own research and development.
This book provides a comprehensive overview of state-of-the-art, data flow based techniques for the analysis, modeling and mapping of concurrent applications on multiprocessors. The authors present a flow for designing embedded hard/firm real-time multiprocessor streaming applications, based on data flow formalisms, with a particular focus on wireless modem applications. Architectures are described for the design tools and for the run-time scheduling and resource management of such a platform.
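As a hedged illustration of the kind of data flow formalism involved (a generic synchronous data flow balance check, not the authors' design flow; the token rates are invented):

```c
#include <stdio.h>

/* Greatest common divisor, Euclid's algorithm. */
static int gcd(int a, int b) {
    while (b != 0) {
        int t = a % b;
        a = b;
        b = t;
    }
    return a;
}

int main(void) {
    /* One edge: actor A produces 2 tokens per firing,
     * actor B consumes 3 per firing (illustrative rates). */
    int prod = 2, cons = 3;
    int g = gcd(prod, cons);
    int qA = cons / g;   /* smallest counts with prod*qA == cons*qB */
    int qB = prod / g;
    printf("per iteration: fire A %d times, B %d times\n", qA, qB);
    return 0;
}
```

Solving such balance equations for every edge yields the repetition vector of the graph, the starting point for the kind of scheduling and resource allocation the book discusses.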
Web Dynpro ABAP, a NetWeaver web application user interface tool from SAP, enables web programming connected to SAP Systems. The authors' main focus was to create a book based on their own practical experience. Each chapter includes examples which lead through the content step-by-step and enable the reader to gradually explore and grasp the Web Dynpro ABAP process. The authors explain in particular how to design Web Dynpro components, the data binding and interface methods, and the view controller methods. They also describe the other SAP NetWeaver Elements (ABAP Dictionary, Authorization) and the integration of the Web Dynpro Application into the SAP NetWeaver Portal. The new edition has been expanded to include chapters on subjects such as POWER Lists; creating the Modal Windows and External Windows; using Web Dynpro application parameters and Shared Objects to communicate between the Web Dynpro ABAP Application and Business Server Pages; and creating multi-language mails using Web Dynpro ABAP.
The terms groupware and CSCW (Computer-Supported Cooperative Work) have received significant attention in computer science and related disciplines for quite some time now. This book has two main objectives: first, to outline the meaning of both terms, and second, to point out both the numerous opportunities for users of CSCW systems and the risks of applying them. The book introduces in detail an interdisciplinary application area of distributed systems, namely the computer support of individuals trying to solve a problem in cooperation with each other but not necessarily having identical work places or working times. CSCW can be viewed as a synergism between the areas of distributed systems and (multimedia) communications on the one hand and those of information science and socio-organizational theory on the other hand. Thus, the book is addressed to students of all these disciplines, as well as to users and developers of systems with group communication and cooperation as top priorities.
The new organizational paradigms of global cooperation and collaboration require new ways and means for their support. Information and Communication Technology (ICT) can and will play a significant role in this support. However, the many currently available and seemingly conflicting solutions, the confusing terminology, the lack of business justification, and last but not least the insufficient understanding of the technology by the end user community have significantly hampered the large scale application of the relevant ICT support and thereby the acceptance of the new paradigms. Many of these issues have been addressed in the workshops of the international initiative on Enterprise Inter- and Intra-Organizational Integration, which has been supported by the European IST Programme and NIST. The main subjects of the initiative are: relations between knowledge management and business process modeling, interoperability of business processes and process models, enterprise engineering and integration, and representation of process models. Ontologies and agent technologies - the latter with their relations to ontologies and models - have been further subjects of discussion in several workshops. Results of the initiative are reported in this volume, which comprises the proceedings of the International Conference on Enterprise Integration and Modeling Technology (ICEIMT'02). The conference was sponsored by the International Federation for Information Processing (IFIP) and held in Valencia, Spain in April 2002. Enterprise Inter- and Intra-Organizational Integration: Building International Consensus provides not only a wealth of information on the state of the art of the subjects of the initiative; it also identifies opportunities for research and development. Potential projects are identified in the work group reports and some of those will be taken up by organizations involved.
This book presents research in an interdisciplinary field, resulting from the vigorous and fruitful cross-pollination between traditional deontic logic and computer science. AI researchers have used deontic logic as one of the tools in modelling legal reasoning. Computer scientists have discovered that computer systems (including their interaction with other computer systems and with human agents) can often be productively modelled as norm-governed. So, for example, deontic logic has been applied by computer scientists for specifying bureaucratic systems, access and security policies, and soft design or integrity constraints, and for modelling fault tolerance. In turn, computer scientists and AI researchers have also discovered (and made it clear to the rest of us) that various formal tools (e.g. nonmonotonic, temporal and dynamic logics) developed in computer science and artificial intelligence have interesting applications to traditional issues in deontic logic. This volume presents some of the best work done in this area, with the selection at once reflecting the general interdisciplinary (and international) character that this area of research has taken on, as well as reflecting the more specific recent inter-disciplinary developments between traditional deontic logic and computer science.
The advance in robotics has boosted the application of autonomous vehicles to perform tedious and risky tasks or to be cost-effective substitutes for their human counterparts. Based on their working environment, a rough classification of autonomous vehicles would include unmanned aerial vehicles (UAVs), unmanned ground vehicles (UGVs), autonomous underwater vehicles (AUVs), and autonomous surface vehicles (ASVs). UAVs, UGVs, AUVs, and ASVs are nowadays collectively called UVs (unmanned vehicles). In recent decades, the development of unmanned autonomous vehicles has been of great interest, and different kinds of autonomous vehicles have been studied and developed all over the world. In particular, UAVs have many applications in emergency situations; humans often cannot come close to a dangerous natural disaster such as an earthquake, a flood, an active volcano, or a nuclear disaster. Since the development of the first UAVs, research efforts have been focused on military applications. Recently, however, demand has arisen for UAVs such as aero-robots and flying robots that can be used in emergency situations and in industrial applications. Among the wide variety of UAVs that have been developed, small-scale HUAVs (helicopter-based UAVs) have the ability to take off and land vertically as well as the ability to cruise in flight, but their most important capability is hovering. Hovering at a point enables us to make more effective observations of a target. Furthermore, small-scale HUAVs offer the advantages of low cost and easy operation.
This collection of papers is the result of a workshop sponsored by NATO's Defense Research Group Panel 8 during the Fall of 1993. The workshop was held at the University of the German Armed Forces at Neubiberg (Munich), Germany, 29 September - 1 October 1993. Robert J. Seidel, U.S. Army Research Institute for the Behavioral and Social Sciences, Washington, D.C.; Paul R. Chatelier, Executive Office of the President, Office of Science and Technology Policy, Washington, D.C. PREFACE: We would like to thank the authors of the papers for providing excellent coverage of this rapidly developing technology, the session chairpersons for providing excellent structure and management for each group of papers, and each session's discussants for their summaries and personal views of their session's papers. Our special thanks go to Dr. Rolfe Otte, the German Ministry of Defense's research study group member and the person responsible for our being able to hold this workshop in Munich. We are also grateful to Dr. H. Closhen of the IABG for technical and administrative assistance throughout the planning and conduct of the workshop.
"Lo, soul! seest thou not God's purpose from the first? The earth to be spann'd, connected by net-work" - from "Passage to India", Walt Whitman, Leaves of Grass, 1900. The Internet is growing at a tremendous rate today. New services, such as telephony and multimedia, are being added to the pure data-delivery framework of yesterday. Such high demands on capacity could lead to a "bandwidth crunch" at the core wide-area network, resulting in degradation of service quality. Fortunately, technological innovations have emerged which can provide relief to the end user and overcome the Internet's well-known delay and bandwidth limitations. At the physical layer, a major overhaul of existing networks has been envisaged, from electronic media (such as twisted pair and cable) to optical fibers - in the wide area, in the metropolitan area, and even in local area settings. To exploit the immense bandwidth potential of the optical fiber, interesting multiplexing techniques have been developed over the years. Wavelength division multiplexing (WDM) is one such promising technique, in which multiple channels are operated along a single fiber simultaneously, each on a different wavelength. These channels can be independently modulated to accommodate dissimilar bit rates and data formats, if so desired. Thus, WDM carves up the huge bandwidth of an optical fiber into channels whose bandwidths (1-10 Gbps) are compatible with peak electronic processing speed.
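As a back-of-the-envelope illustration of that last point (the channel count and per-channel rate below are assumptions, not figures from the book):

```c
#include <stdio.h>

int main(void) {
    int channels = 40;               /* wavelengths on one fiber (assumed) */
    double gbps_per_channel = 10.0;  /* within the 1-10 Gbps range cited */
    printf("aggregate fiber capacity: %.0f Gbps\n",
           channels * gbps_per_channel);
    return 0;
}
```

Each wavelength stays within electronic processing speeds, while the fiber as a whole carries their sum.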
This book looks at the future of advertising from the perspective of pervasive computing. Pervasive computing encompasses the integration of computers into everyday objects and environments: surfaces covered with interactive displays, networked mobile phones, and the like. Advertising is the communication of sponsored messages to inform, convince, and persuade people to buy. We believe that our future cities will be digital, giving us instant access to any information we need everywhere: at bus stops, on the sidewalk, inside the subway and in shopping malls. We will be able to play with and change the appearance of our cities effortlessly, making flowers grow along a building wall or changing the colour of the street we are in. Like the internet as we know it, this digitalization will be paid for by adverts, which unobtrusively provide us with suggestions for nearby restaurants or cafes. If any content annoys us, we will be able to say so and change it with simple gestures, and content providers and advertisers will know what we like and be able to act accordingly. This book presents the technological foundations to make this vision a reality.
I love virtual machines (VMs) and I have done for a long time. If that makes me "sad" or an "anorak," so be it. I love them because they are so much fun, as well as being so useful. They have an element of original sin (writing assembly programs and being in control of an entire machine), while still being able to claim that one is being a respectable member of the community (being structured, modular, high-level, object-oriented, and so on). They also allow one to design machines of one's own, unencumbered by the restrictions of a particular physical processor (at least, until one starts optimising it for some processor or other). I have been building virtual machines, on and off, since 1980 or thereabouts. It has always been something of a hobby for me; it has also turned out to be a technique of great power and applicability. I hope to continue working on them, perhaps on some of the ideas outlined in the last chapter (I certainly want to do some more work with register-based VMs and concurrency). I originally wanted to write the book from a purely semantic viewpoint.
This volume contains 27 contributions to the Second Russian-German Advanced Research Workshop on Computational Science and High Performance Computing presented in March 2005 at Stuttgart, Germany. Contributions range from computer science, mathematics and high performance computing to applications in mechanical and aerospace engineering.
Open Radio Access Network (O-RAN) Systems Architecture and Design gives a jump-start to engineers developing O-RAN hardware and software systems, providing a top-down approach to O-RAN systems design. It gives an introduction into why wireless systems look the way they do today before introducing relevant O-RAN and 3GPP standards. The remainder of the book discusses hardware and software aspects of O-RAN system design, including dimensioning and performance targets.
Since its establishment in 1998, Microsoft Research Asia's trademark and long-term commitment has been to foster innovative research and advanced education in the Asia-Pacific region. Through open collaboration and partnership with universities, government and other academic partners, MSRA has consistently advanced the state of the art in computer science. This book was compiled to record these outstanding collaborations, as Microsoft Research Asia celebrates its 10th anniversary. The selected papers are all authored or co-authored by faculty members or students through collaboration with MSRA lab researchers, or with the financial support of MSRA. Papers previously published in top-tier international conference proceedings and journals are compiled here into one accessible volume of outstanding research. Innovation Together highlights the outstanding work of Microsoft Research Asia as it celebrates ten years of achievement and looks forward to the next decade of success.