Term rewriting techniques are applicable to various fields of computer science, including software engineering, programming languages, computer algebra, program verification, automated theorem proving and Boolean algebra. These powerful techniques can be successfully applied in all areas that demand efficient methods for reasoning with equations. One of the major problems encountered is the characterization of classes of rewrite systems that have a desirable property, like confluence or termination. In a system that is both terminating and confluent, every computation leads to a result that is unique, regardless of the order in which the rewrite rules are applied. This volume provides a comprehensive and unified presentation of termination and confluence, as well as related properties. Topics and features:
* unified presentation and notation for important advanced topics
* comprehensive coverage of conditional term-rewriting systems
* state-of-the-art survey of modularity in term rewriting
* presentation of a unified framework for term and graph rewriting
* up-to-date discussion of transformational methods for proving termination of logic programs, including the TALP system
This unique book offers a comprehensive and unified view of the subject that is suitable for all computer scientists, program designers, and software engineers who study and use term rewriting techniques. Practitioners, researchers and professionals will find the book an essential and authoritative resource and guide for the latest developments and results in the field.
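To make the termination and confluence properties concrete, here is a minimal sketch (not from the book) of a two-rule rewriting system for Peano addition; because the system terminates and is confluent, rewriting reaches the same unique normal form regardless of the order in which the rules are applied.

```python
# A minimal sketch of a terminating, confluent term-rewriting system:
# Peano addition. Terms are nested tuples whose first element is the
# function symbol; single lowercase letters are rule variables.
# All names here are illustrative, not from the book.

RULES = [
    (("add", "0", "y"), "y"),                              # add(0, y)    -> y
    (("add", ("s", "x"), "y"), ("s", ("add", "x", "y"))),  # add(s(x), y) -> s(add(x, y))
]

def is_var(p):
    return isinstance(p, str) and len(p) == 1 and p.isalpha() and p.islower()

def match(pattern, term, env):
    """Bind rule variables to subterms, or return None on mismatch."""
    if is_var(pattern):
        if pattern in env:
            return env if env[pattern] == term else None
        return {**env, pattern: term}
    if isinstance(pattern, str) or isinstance(term, str):
        return env if pattern == term else None
    if len(pattern) != len(term) or pattern[0] != term[0]:
        return None
    for p, t in zip(pattern[1:], term[1:]):
        env = match(p, t, env)
        if env is None:
            return None
    return env

def substitute(term, env):
    if isinstance(term, str):
        return env.get(term, term)
    return tuple(substitute(t, env) for t in term)

def rewrite_once(term):
    """Apply one rule anywhere in the term; None if already in normal form."""
    if not isinstance(term, str):
        for i, sub in enumerate(term):
            new = rewrite_once(sub)
            if new is not None:
                return term[:i] + (new,) + term[i + 1:]
    for lhs, rhs in RULES:
        env = match(lhs, term, {})
        if env is not None:
            return substitute(rhs, env)
    return None

def normal_form(term):
    """Termination guarantees this loop ends; confluence guarantees the
    result is the same whatever order the rules were applied in."""
    while (new := rewrite_once(term)) is not None:
        term = new
    return term

two = ("s", ("s", "0"))
print(normal_form(("add", two, two)))  # ('s', ('s', ('s', ('s', '0')))), i.e. 2 + 2 = 4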
The International Symposium on Supercomputing - New Horizon of Computational Science was held on September 1-3, 1997 at the Science Museum in Tokyo, to celebrate the 60th birthday of Professor Daiichiro Sugimoto, who has been leading theoretical and numerical astrophysics for 30 years. The conference covered an exceptionally wide range of subjects, to follow Sugimoto's accomplishments in many fields. On the first day we had three talks on stellar evolution and six talks on stellar dynamics. On the second day, six talks on special-purpose computing and four talks on large-scale computing in Molecular Dynamics were given. On the third and last day, three talks on dedicated computers for Lattice QCD calculations and six talks on the present and future of general-purpose HPC systems were given. In addition, some 30 posters were presented on various subjects in computational science. In stellar evolution, D. Arnett (Univ. of Arizona) gave an excellent talk on the recent development in three-dimensional simulation of supernovae, in particular on quantitative comparison between different techniques such as grid-based methods and SPH (Smoothed Particle Hydrodynamics). Y. Kondo (NASA) discussed recent advances in the modeling of the evolution of binary stars, and I. Hachisu (Univ. of Tokyo) discussed Rayleigh-Taylor instabilities in supernovae (contribution not included). In stellar dynamics, P. Hut (IAS) gave a superb review on the long-term evolution of stellar systems, and J. Makino (Univ. of Tokyo) described briefly the results obtained on the GRAPE-4 special-purpose computer and the follow-up project, GRAPE-6, which was approved as of June 1997. GRAPE-6 will be completed by the year 2001 with a peak speed of around 200 Tflops. R. Spurzem (Rechen-Inst.) and D. Heggie (Univ. of Edinburgh) talked on recent advances in the study of star clusters, and E. Athanassoula (Marseille Observatory) described the work done using their GRAPE-3 systems. S. Ida (Tokyo Inst. of Technology) described the results of the simulation of the formation of the Moon. The first talk of the second day was given by F.-H. Hsu of the IBM T. J. Watson Research Center, on "Deep Blue", the special-purpose computer for chess, which, for the first time in history, won the match with the best human player, Mr. Garry Kasparov (unfortunately, Hsu's contribution is not included in this volume). Then A. Bakker of Delft Inst. of Technology looked back on his 20 years of developing special-purpose computers for molecular dynamics and simulation of spin systems. J. Arnold gave an overview of the emerging new field of reconfigurable computing, which falls in between traditional general-purpose computers and special-purpose computers. S. Okumura (NAO) described the history of ultra-high-performance digital signal processors for radio astronomy. They had built a machine with 20 GOPS performance in the early 80s, and keep improving the speed. M. Taiji (ISM) spoke on general aspects of GRAPE-type systems, and T. Narumi (Univ. of Tokyo) on the 100-Tflops GRAPE-type machine for MD calculations, which will be finished by 1999.
Go-to guide for using Microsoft's updated Hyper-V as a virtualization solution. Windows Server 2012 Hyper-V offers greater scalability, new components, and more options than ever before for large enterprise systems and small/medium businesses. "Windows Server 2012 Hyper-V Installation and Configuration Guide" is the place to start learning about this new cloud operating system. You'll get up to speed on the architecture, basic deployment and upgrading, creating virtual workloads, designing and implementing advanced network architectures, creating multitenant clouds, backup, disaster recovery, and more. The international team of expert authors offers deep technical detail, as well as hands-on exercises and plenty of real-world scenarios, so you thoroughly understand all features and how best to use them. The book:
- Explains how to deploy, use, manage, and maintain the Windows Server 2012 Hyper-V virtualization solutions in large enterprises and small- to medium-sized businesses
- Provides deep technical detail and plenty of exercises showing you how to work with Hyper-V in real-world settings
- Shows you how to quickly configure Hyper-V from the GUI and use PowerShell to script and automate common tasks
- Covers deploying Hyper-V hosts, managing virtual machines, network fabrics, cloud computing, and using file servers
- Also explores virtual SAN storage, creating guest clusters, backup and disaster recovery, using Hyper-V for Virtual Desktop Infrastructure (VDI), and other topics
Help make your Hyper-V virtualization solution a success with "Windows Server 2012 Hyper-V Installation and Configuration Guide."
In "SharePoint 2003 Advanced Concepts," two world-class SharePoint consultants show how to make SharePoint " jump through hoops" for you-and do exactly what you want. Jason Nadrowski and Stacy Draper have built some of the most diverse SharePoint enterprise implementations. Now, drawing on their extraordinary " in the trenches" experience, they present solutions, techniques, and examples you simply won' t find anywhere else. "SharePoint 2003 Advanced Concepts" addresses every facet of SharePoint customization, from site definitions and templates to document libraries and custom properties. The authors cover both Windows SharePoint Services and SharePoint Portal Server 2003 and illuminate SharePoint' s interactions with other technologies-helping you troubleshoot problems far more effectively. Next time you encounter a tough SharePoint development challenge, don' t waste time: get your proven solution right here, in "SharePoint 2003 Advanced Concepts," - Construct more powerful site and list templates - Control how SharePoint uses ghosted and unghosted pages - Use custom site definitions to gain finer control over your site - Build list definitions with custom metadata, views, and forms - Troubleshoot WEBTEMP, ONET.XML, SCHEMA.XML, SharePoint databases, and their interactions - Create custom property types to extend SharePoint' s functionality - Integrate with other systems and SharePoint sites so that you can use their information more effectively - Customize themes and interactive Help, one step at a time - Customize email alerts and system notifications - Extend the capabilities of document libraries - Control document display and behavior based on extensions
Dimensions of Uncertainty in Communication Engineering is a comprehensive and self-contained introduction to the problems of nonaleatory uncertainty and the mathematical tools needed to solve them. The book gathers together tools derived from statistics, information theory, moment theory, interval analysis and probability boxes, dependence bounds, nonadditive measures, and Dempster-Shafer theory. While the book is mainly devoted to communication engineering, the techniques described are also of interest to other application areas, and commonalities to these are often alluded to through a number of references to books and research papers. This is an ideal supplementary book for courses in wireless communications, providing techniques for addressing epistemic uncertainty, as well as an important resource for researchers and industry engineers. Students and researchers in other fields such as statistics, financial mathematics, and transport theory will gain an overview and understanding of the methods relevant to their field.
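As a taste of one of the listed tools, the sketch below implements Dempster's rule of combination from Dempster-Shafer theory; the two "sensor" mass functions are invented for illustration, and the code is a minimal sketch rather than anything from the book.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Dempster's rule of combination for two mass functions whose focal
    elements are frozensets. A minimal sketch; real toolkits handle many
    more corner cases."""
    conflict = 0.0
    combined = {}
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb          # mass falling on the empty set
    if conflict >= 1.0:
        raise ValueError("total conflict: sources cannot be combined")
    return {a: w / (1.0 - conflict) for a, w in combined.items()}

# Two hypothetical sensors reporting on whether a channel is 'good' or 'bad':
m1 = {frozenset({"good"}): 0.6, frozenset({"good", "bad"}): 0.4}
m2 = {frozenset({"good"}): 0.5, frozenset({"bad"}): 0.3,
      frozenset({"good", "bad"}): 0.2}
print(dempster_combine(m1, m2))  # 'good' ends up with mass ~0.756
```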
This book puts the spotlight on how a real-time kernel works, using Micrium's µC/OS-III as a reference. The book consists of two complete parts. The first describes real-time kernels in generic terms. Part II provides examples for the reader, using the Infineon XMC4500. Together with the IAR Systems Embedded Workbench for ARM development tools, the evaluation board provides everything necessary to enable the reader to be up and running quickly, as well as a fun and educational experience, resulting in a high level of proficiency in a short time. This book is written for serious embedded systems programmers, consultants, hobbyists, and students interested in understanding the inner workings of a real-time kernel. µC/OS-III is not just a great learning platform, but also a full commercial-grade software package, ready to be part of a wide range of products. µC/OS-III is a highly portable, ROMable, scalable, preemptive, real-time, multitasking kernel designed specifically to address the demanding requirements of today's embedded systems. µC/OS-III is the successor to the highly popular µC/OS-II real-time kernel but can use most of µC/OS-II's ports with minor modifications. Some of the features of µC/OS-III are:
- Preemptive multitasking with round-robin scheduling of tasks at the same priority (see the sketch below)
- Unlimited number of tasks and other kernel objects
- Rich set of services: semaphores, mutual exclusion semaphores with full priority inheritance, event flags, message queues, timers, fixed-size memory block management, and more
- Built-in performance measurements
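A toy sketch of the round-robin idea from the first feature above, written in Python rather than C and with invented task names; this is not the µC/OS-III API, only an illustration of equal-priority tasks rotating through a shared time quantum.

```python
from collections import deque

def round_robin(ready, quantum, run):
    """ready: deque of (name, remaining_work) tasks at the same priority.
    run(name, ticks) stands in for actually executing the task."""
    while ready:
        name, remaining = ready.popleft()
        slice_ = min(quantum, remaining)
        run(name, slice_)
        remaining -= slice_
        if remaining > 0:
            # Unfinished tasks rotate to the back of their priority level.
            ready.append((name, remaining))

# Hypothetical tasks with some amount of work left, in ticks:
ready = deque([("sensor", 5), ("logger", 3), ("comms", 7)])
round_robin(ready, quantum=2, run=lambda n, t: print(f"{n} runs {t} tick(s)"))
```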
This volume contains the papers presented at the NATO Advanced Study Institute on the Interlinking of Computer Networks held between August 28th and September 8th 1978 at Bonas, France. The development of computer networks has proceeded over the last few decades to the point where a number of scientific and commercial networks are firmly established - albeit using different philosophies of design and operation. Many of these networks are serving similar communities having the same basic computer needs, as well as communities whose computer resources are complementary. Consequently there is now considerable interest in the possibility of linking computer networks to provide resource sharing over quite wide geographical distances. The purpose of the Institute organisers was to consider the problems that arise when this form of interlinking is attempted. The problems fall into three categories, namely technical problems, compatibility and management. Only within the last few years have the technical problems been understood sufficiently well to enable interlinking to take place. Consequently considerable value was given during the meeting to discussing the compatibility and management problems that require solution before global interlinking becomes an accepted and cost-effective operation. Existing computer networks were examined in depth and case-histories of their operations were presented by delegates drawn from the international community. The scope and detail of the papers presented should provide a valuable contribution to this emerging field and be useful to Communications Specialists and Managers as well as those concerned with Computer Operations and Development.
This book presents the state-of-the-art in simulation on supercomputers. Leading researchers present results achieved on systems of the Stuttgart High Performance Computing Center in 2007. The reports cover all fields of computational science and engineering, with emphasis on industrially relevant applications. Presenting results for both vector-based and microprocessor-based systems, the book allows comparison between performance levels and usability of various architectures.
Timing issues are of growing importance for the conceptualization and design of computer-based systems. Timing may simply be essential for the correct behaviour of a system, e.g. of a controller. Even if timing is not essential for the correct behaviour of a system, there may be good reasons to introduce it in such a way that suitable timing becomes relevant for the correct behaviour of a complex system. This book is unique in presenting four algebraic theories about processes, each dealing with timing from a different point of view, in a coherent and systematic way. The timing of actions is either relative or absolute and the underlying time scale is either discrete or continuous. All presented theories are extensions of the algebra of communicating processes. The book is essential reading for researchers and advanced students interested in timing issues in the context of the design and analysis of concurrent and communicating processes.
CMOS Memory Circuits is a systematic and comprehensive reference work designed to aid in the understanding of CMOS memory circuits, architectures, and design techniques. CMOS technology is the dominant fabrication method and almost the exclusive choice for semiconductor memory designers. Both the quantity and the variety of complementary-metal-oxide-semiconductor (CMOS) memories are staggering. CMOS memories are traded as mass-products worldwide and are diversified to satisfy nearly all practical requirements in operational speed, power, size, and environmental tolerance. Without the outstanding speed, power, and packing density characteristics of CMOS memories, neither personal computing, nor space exploration, nor superior defense systems, nor many other feats of human ingenuity could be accomplished. Electronic systems need continuous improvements in speed performance, power consumption, packing density, size, weight, and costs. These needs continue to spur the rapid advancement of CMOS memory processing and circuit technologies. CMOS Memory Circuits is essential for those who intend to (1) understand, (2) apply, (3) design and (4) develop CMOS memories.
With the development of Very-Deep Sub-Micron technologies, process variability is becoming increasingly important and is a very significant issue in the design of complex circuits. Process variability is the statistical variation of process parameters, meaning that these parameters do not always have the same value, but become random variables with a given mean value and standard deviation. This effect can lead to several issues in digital circuit design. The logical consequence of this parameter variation is that circuit characteristics, such as delay and power, also become random variables. Because of the delay variability, not all circuits will have the same performance: some will be faster and some slower. However, the slowest circuits may be so slow that they are not appropriate for sale. On the other hand, the fastest circuits, which could be sold for a higher price, can be very leaky, and also not appropriate for sale. A main consequence of power variability is that the power consumption of some circuits will be different than expected, reducing reliability, average life expectancy and warranty of products. Sometimes the circuits will not work at all, due to reasons associated with process variations. In the end, these effects result in lower yield and lower profitability. To understand these effects, it is necessary to study the consequences of variability in several aspects of circuit design, like logic gates, storage elements, clock distribution, and any other that can be affected by process variations. The main focus of this book is storage elements.
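A small Monte Carlo experiment can make the yield argument concrete. The Gaussian threshold-voltage spread, the toy delay and leakage models, and the pass/fail limits below are all invented for illustration; they are not data or models from the book.

```python
import random

# Sketch: a process parameter (threshold voltage) becomes a random variable,
# so delay and leakage become random variables too, and parts at both tails
# fall outside the sellable window.
random.seed(1)
N = 100_000
ok = 0
DELAY_LIMIT, LEAK_LIMIT = 1.75, 0.006    # invented spec limits
for _ in range(N):
    vth = random.gauss(0.40, 0.03)       # threshold voltage, die to die
    delay = 1.0 / (1.0 - vth)            # toy model: slower when Vth is high
    leak = 2.0 ** (-vth / 0.05)          # toy model: leakier when Vth is low
    ok += delay <= DELAY_LIMIT and leak <= LEAK_LIMIT
print(f"parametric yield: {ok / N:.1%}")  # parts are lost at both tails
```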
The goal of this book is to crystallize the emerging mobile computing technologies and trends by focusing on the most promising solutions in services computing. The book provides clear proof that mobile technologies are playing an increasingly important and critical role in supporting toy computing, and it brings together academics and practitioners to describe the use and synergy between the above-mentioned technologies. This book is intended for researchers and students working in computer science and engineering, as well as toy industry technology providers, having particular interests in mobile services.
This Handbook is about methods, tools and examples of how to architect an enterprise through considering all life-cycle aspects of Enterprise Entities (such as individual enterprises, enterprise networks, virtual enterprises, projects and other complex systems including a mixture of automated and human processes). The book is based on ISO 15704:2000, the GERAM Framework (Generalised Enterprise Reference Architecture and Methodology), which generalises the requirements of Enterprise Reference Architectures. Various Architecture Frameworks (PERA, CIMOSA, GRAI-GIM, Zachman, C4ISR/DoDAF) are shown in the light of GERAM to allow a deeper understanding of their contributions and therefore their correct and knowledgeable use. The handbook addresses a wide audience and covers methods and tools necessary to design or redesign enterprises, as well as to structure the implementation into manageable projects.
Many firms are now developing policies for outsourcing IT and other basic functions. This book analyzes the issue from the perspective of both the outsourcer and the insourcer. Dimitris N. Chorafas describes management needs and shows how technology can be used to meet them. The book also highlights the benefits and risks that companies face when they attempt to differentiate themselves through new technology. It is based on an extensive research project in the US, UK, Germany, France, Switzerland, and Sweden.
This book focuses on Long Term Evolution (LTE) and beyond. The chapters describe different aspects of research and development in LTE, LTE-Advanced (4G systems) and LTE-450 MHz, such as the telecommunications regulatory framework, voice over LTE, link adaptation, power control, interference mitigation mechanisms, performance evaluation for different types of antennas, cognitive mesh networks, integration of LTE networks and satellites, test environments, power amplifiers and so on. It is useful for researchers in the field of mobile communications.
No other area of biology has grown as fast and become as relevant over the last decade as virology. It is with no little amount of amazement that, the more we learn about fundamental biological questions and mechanisms of diseases, the more obvious it becomes that viruses permeate all facets of our lives. While on one hand viruses are known to cause acute and chronic, mild and fatal, focal and generalized diseases, on the other hand they are used as tools for gaining an understanding of the structure and function of higher organisms, and as vehicles for carrying protective or curative therapies. The wide scope of approaches to different biological and medical virological questions was well represented by the speakers that participated in this year's Symposium. While the epidemic caused by the human immunodeficiency virus type 1 continues to spread without hope for much relief in sight, intriguing questions and answers in the area of diagnostics, clinical manifestations and therapeutic approaches to viral infections are unveiled daily. Let us hope that, with the increasing awareness by our society of the role played by viruses, not only as causative agents of diseases but also as models for better understanding basic biological principles, more efforts and resources are placed into their study. Luis M. de la Maza, Irvine, California; Ellena M.
This book is devoted to some topical problems and various applications of Operator Theory and to its interplay with many other fields of analysis, such as modern approximation theory, theory of dynamic systems, harmonic analysis and complex analysis. It consists of 20 carefully selected surveys and research-expository papers. Their scope gives a representative status report on the field, drawing a picture of a rapidly developing domain of analysis. An abundance of references completes the picture. All papers included in the volume originate from lectures delivered at the 11th edition of the International Workshop on Operator Theory and its Applications (IWOTA 2000, June 13-16, Bordeaux). Some information about the conference, including the complete list of participants, can be found on forthcoming pages. The editors are indebted to A. Sudakov for helping them in polishing and assembling the original TeX files. A. Borichev and N. Nikolski, Talence, May 2001. The International Workshop on Operator Theory and its Applications (IWOTA), held June 13-16, 2000, at Universite Bordeaux 1, is a satellite meeting of the international symposium on the Mathematical Theory of Networks and Systems (MTNS). In 2000, the MTNS was held in Perpignan, France, June 19-23. IWOTA 2000 was the eleventh workshop of this kind.
Operating Systems and Services brings together in one place important contributions and up-to-date research results in this fast-moving area. It serves as an excellent reference, providing insight into some of the most challenging research issues in the field.
Media processing applications, such as three-dimensional graphics, video compression, and image processing, currently demand 10-100 billion operations per second of sustained computation. Fortunately, hundreds of arithmetic units can easily fit on a modestly sized 1 cm² chip in modern VLSI. The challenge is to provide these arithmetic units with enough data to enable them to meet the computation demands of media processing applications. Conventional storage hierarchies, which frequently include caches, are unable to bridge the data bandwidth gap between modern DRAM and tens to hundreds of arithmetic units. A data bandwidth hierarchy, however, can bridge this gap by scaling the provided bandwidth across the levels of the storage hierarchy. The stream programming model enables media processing applications to exploit a data bandwidth hierarchy effectively. Media processing applications can naturally be expressed as a sequence of computation kernels that operate on data streams. This programming model exposes the locality and concurrency inherent in these applications and enables them to be mapped efficiently to the data bandwidth hierarchy. Stream programs are able to utilize inexpensive local data bandwidth when possible and consume expensive global data bandwidth only when necessary. Stream Processor Architecture presents the architecture of the Imagine streaming media processor, which delivers a peak performance of 20 billion floating-point operations per second. Imagine efficiently supports 48 arithmetic units with a three-tiered data bandwidth hierarchy. At the base of the hierarchy, the streaming memory system employs memory access scheduling to maximize the sustained bandwidth of external DRAM. At the center of the hierarchy, the global stream register file enables streams of data to be recirculated directly from one computation kernel to the next without returning data to memory. Finally, local distributed register files that directly feed the arithmetic units enable temporary data to be stored locally so that it does not need to consume costly global register bandwidth. The bandwidth hierarchy enables Imagine to achieve up to 96% of the performance of a stream processor with infinite bandwidth from memory and the global register file.
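As a rough illustration of the stream programming model described above (an illustration of the model only; Imagine itself is programmed quite differently), the sketch below chains three invented kernels over Python generators, so that intermediate records flow kernel-to-kernel instead of being materialized in memory, much as the stream register file recirculates data between kernels.

```python
# Sketch: computation expressed as kernels applied to data streams.
# Kernel names and the pixel pipeline are illustrative assumptions.

def kernel(fn):
    """A kernel consumes one record of its input stream at a time and
    yields output records lazily."""
    def run(stream):
        for record in stream:
            yield fn(record)
    return run

# Three kernels chained directly, kernel output feeding kernel input:
deinterleave = kernel(lambda px: (px >> 16 & 0xFF, px >> 8 & 0xFF, px & 0xFF))
to_luma      = kernel(lambda rgb: 0.299 * rgb[0] + 0.587 * rgb[1] + 0.114 * rgb[2])
threshold    = kernel(lambda y: 1 if y > 128 else 0)

pixels = [0xFF8040, 0x102030, 0x80FF80]
print(list(threshold(to_luma(deinterleave(pixels)))))  # [1, 0, 1]
```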
Due to the decreasing production costs of IT systems, applications that formerly had to be realised as expensive PCBs can now be realised as a system-on-chip. Furthermore, low-cost broadband communication media for wide area communication as well as for the realisation of local distributed systems are available. Typically the market requires IT systems that realise a set of specific features for the end user in a given environment, so-called embedded systems. Some examples of such embedded systems are control systems in cars, airplanes, houses or plants, information and communication devices like digital TV or mobile phones, and autonomous systems like service or edutainment robots. For the design of embedded systems the designer has to tackle three major aspects:
- the application itself, including the man-machine interface,
- the (target) architecture of the system, including all functional and non-functional constraints, and
- the design methodology, including modelling, specification, synthesis, test and validation.
The last two points are a major focus of this book. This book documents the high-quality approaches and results that were presented at the International Workshop on Distributed and Parallel Embedded Systems (DIPES 2000), which was sponsored by the International Federation for Information Processing (IFIP), and organised by IFIP working groups WG10.3, WG10.4 and WG10.5. The workshop took place on October 18-19, 2000, in Schloss Eringerfeld near Paderborn, Germany. Architecture and Design of Distributed Embedded Systems is organised similarly to the workshop. Chapters 1 and 4 (Methodology I and II) deal with different modelling and specification paradigms and the corresponding design methodologies. Generic system architectures for different classes of embedded systems are presented in Chapter 2. In Chapter 3 several design environments for the support of specific design methodologies are presented. Problems concerning test and validation are discussed in Chapter 5. The last two chapters include distribution and communication aspects (Chapter 6) and synthesis techniques for embedded systems (Chapter 7). This book is essential reading for computer science researchers and application developers.
Emphasizing leadership principles and practices, Antipatterns: Managing Software Organizations and People, Second Edition catalogs 49 business practices that are often precursors to failure. This updated edition of a bestseller not only illustrates bad management approaches, but also covers the bad work environments and cultural traits commonly found in IT, software development, and other business domains. For each antipattern, it describes the situation and symptoms, gives examples, and offers a refactoring solution. The authors, graduate faculty at Penn State University, avoid an overly scholarly style and infuse the text with entertaining sidebars, cartoons, stories, and jokes. They provide names for the antipatterns that are visual, humorous, and memorable. Using real-world anecdotes, they illustrate key concepts in an engaging manner. This updated edition sheds light on new management and environmental antipatterns and includes a new chapter, six updated chapters, and new discussion questions. Topics covered include leadership principles, environmental antipatterns, group patterns, management antipatterns, and team leadership. Following introductory material on management theory and human behavior, the text catalogs the full range of management, cultural, and environmental antipatterns. It includes thought-provoking exercises that each describe a situation, ask which antipatterns are present, and explain how to refactor the situation. It provides time-tested advice to help you overcome bad practices through successful interaction with your clients, customers, peers, supervisors, and subordinates.
Transaction processing is an established technique for the concurrent and fault-tolerant access of persistent data. While this technique has been successful in standard database systems, factors such as time-critical applications, emerging technologies, and a re-examination of existing systems suggest that the performance, functionality and applicability of transactions may be substantially enhanced if temporal considerations are taken into account. That is, transactions should not only execute in a "legal" (i.e., logically correct) manner, but they should meet certain constraints with regard to their invocation and completion times. Typically, these logical and temporal constraints are application-dependent, and we address some fundamental issues for the management of transactions in the presence of such constraints. Our model for transaction processing is based on extensions to established models, and we briefly outline how logical and temporal constraints may be expressed in it. For scheduling the transactions, we describe how legal schedules differ from one another in terms of meeting the temporal constraints. Existing scheduling mechanisms do not differentiate among legal schedules, and are thereby inadequate with regard to meeting temporal constraints. This provides the basis for seeking scheduling strategies that attempt to meet the temporal constraints while continuing to produce legal schedules.
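A toy sketch of the scheduling point: both orders below are legal serial schedules of the same three invented transactions, but only the deadline-aware order meets every temporal constraint. The transactions, durations, and deadlines are illustrative assumptions, not the book's model.

```python
# Sketch: legal schedules are not interchangeable once completion-time
# constraints matter. All values below are invented.
transactions = [
    {"name": "T1", "duration": 4, "deadline": 12},
    {"name": "T2", "duration": 2, "deadline": 3},
    {"name": "T3", "duration": 3, "deadline": 9},
]

def misses(schedule):
    """Count transactions whose completion time exceeds their deadline."""
    clock, late = 0, 0
    for t in schedule:
        clock += t["duration"]
        late += clock > t["deadline"]
    return late

fifo = transactions                                       # legal, deadline-blind
edf = sorted(transactions, key=lambda t: t["deadline"])   # legal, deadline-aware
print(misses(fifo), misses(edf))  # 1 0 -> both legal, only EDF meets the constraints
```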
Nonlinear Assignment Problems (NAPs) are natural extensions of the classic Linear Assignment Problem, and despite the efforts of many researchers over the past three decades, they still remain some of the hardest combinatorial optimization problems to solve exactly. The purpose of this book is to provide, in a single volume, major algorithmic aspects and applications of NAPs as contributed by leading international experts. The chapters included in this book are concerned with major applications and the latest algorithmic solution approaches for NAPs. Approximation algorithms, polyhedral methods, semidefinite programming approaches and heuristic procedures for NAPs are included, while applications of this problem class in the areas of multiple-target tracking in the context of military surveillance systems, of experimental high-energy physics, and of parallel processing are presented. Audience: researchers and graduate students in the areas of combinatorial optimization, mathematical programming, operations research, physics, and computer science.
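To see what "nonlinear" means here, consider the quadratic assignment problem (QAP), the best-known NAP. The sketch below, with invented flow and distance matrices, brute-forces the objective in which cost terms couple pairs of assignments, which is exactly what makes the problem so much harder than the linear assignment problem.

```python
from itertools import permutations

# Invented example data: flow between facilities, distance between locations.
flow = [[0, 3, 1],
        [3, 0, 2],
        [1, 2, 0]]
dist = [[0, 1, 4],
        [1, 0, 2],
        [4, 2, 0]]

def qap_cost(p):
    """QAP objective: the cost of placing facility i at location p[i]
    depends on where every other facility j was placed too."""
    n = len(p)
    return sum(flow[i][j] * dist[p[i]][p[j]] for i in range(n) for j in range(n))

# Brute force is only viable for tiny n; exact solvers struggle beyond n ~ 30.
best = min(permutations(range(3)), key=qap_cost)
print(best, qap_cost(best))  # (0, 1, 2) 22
```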
Despite five decades of research, parallel computing remains an exotic, frontier technology on the fringes of mainstream computing. Its much-heralded triumph over sequential computing has yet to materialize. This is in spite of the fact that the processing needs of many signal processing applications continue to eclipse the capabilities of sequential computing. The culprit is largely the software development environment. Fundamental shortcomings in the development environment of many parallel computer architectures thwart the adoption of parallel computing. Foremost, parallel computing has no unifying model to accurately predict the execution time of algorithms on parallel architectures. Cost and scarce programming resources prohibit deploying multiple algorithms and partitioning strategies in an attempt to find the fastest solution. As a consequence, algorithm design is largely an intuitive art form dominated by practitioners who specialize in a particular computer architecture. This, coupled with the fact that parallel computer architectures rarely last more than a couple of years, makes for a complex and challenging design environment. To navigate this environment, algorithm designers need a road map, a detailed procedure they can use to efficiently develop high-performance, portable parallel algorithms. The focus of this book is to draw such a road map. The Parallel Algorithm Synthesis Procedure can be used to design reusable building blocks of adaptable, scalable software modules from which high-performance signal processing applications can be constructed. The hallmark of the procedure is a semi-systematic process for introducing parameters to control the partitioning and scheduling of computation and communication. This facilitates the tailoring of software modules to exploit different configurations of multiple processors, multiple floating-point units, and hierarchical memories. To showcase the efficacy of this procedure, the book presents three case studies requiring various degrees of optimization for parallel execution. This book can be used as a reference for algorithm designers or as a text for an advanced course on parallel programming.
Artificial Intelligence is entering the mainstream of computer applications, and as techniques are developed and integrated into a wide variety of areas they are beginning to tax the processing power of conventional architectures. To meet this demand, specialized architectures providing support for the unique features of symbolic processing languages are emerging. The goal of the research presented here is to show that an architecture specialized for Prolog can achieve a ten-fold improvement in performance over conventional, general-purpose architectures. This book presents such an architecture for high-performance execution of Prolog programs. The architecture is based on the abstract machine description introduced by David H. D. Warren known as the Warren Abstract Machine (WAM). The execution model of the WAM is described and extended to provide a complete Instruction Set Architecture (ISA) for Prolog known as the PLM. This ISA is then realized in a microarchitecture and finally in a hardware design. The work described here represents one of the first efforts to implement the WAM model in hardware. The approach taken is that of direct implementation of the high-level WAM instruction set in hardware, resulting in a CISC-style architecture.
You may like...
- Practical TCP/IP and Ethernet Networking… by Deon Reynders, Edwin Wright. Paperback, R1,491 (Discovery Miles 14 910)
- BTEC Nationals Information Technology… by Jenny Phillips, Alan Jarvis, … Paperback, R1,018 (Discovery Miles 10 180)
- Wireless Communication Networks… by Hailong Huang, Andrey V. Savkin, … Paperback, R2,763 (Discovery Miles 27 630)
- The System Designer's Guide to VHDL-AMS… by Peter J Ashenden, Gregory D. Peterson, … Paperback, R2,281 (Discovery Miles 22 810)
- Creativity in Computing and DataFlow… by Suyel Namasudra, Veljko Milutinovic. Hardcover, R4,204 (Discovery Miles 42 040)