Books > Computing & IT > Computer hardware & operating systems > Computer architecture & logic design

Meaning and Proscription in Formal Logic - Variations on the Propositional Logic of William T. Parry (Hardcover, 1st ed. 2017)
Thomas Macaulay Ferguson
R1,414 Discovery Miles 14 140 Ships in 18 - 22 working days

This book aids in the rehabilitation of the wrongfully deprecated work of William Parry, and is the only full-length investigation into Parry-type propositional logics. A central tenet of the monograph is that the sheer diversity of the contexts in which the mereological analogy emerges - its effervescence with respect to fields ranging from metaphysics to computer programming - provides compelling evidence that the study of logics of analytic implication can be instrumental in identifying connections between topics that would otherwise remain hidden. More concretely, the book identifies and discusses a host of cases in which analytic implication can play an important role in revealing distinct problems to be facets of a larger, cross-disciplinary problem. It introduces an element of constancy and cohesion that has previously been absent in a regrettably fractured field, shoring up those who are sympathetic to the worth of mereological analogy. Moreover, it generates new interest in the field by illustrating a wide range of interesting features present in such logics - and highlighting these features to appeal to researchers in many fields.
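
As a rough illustration of the "proscriptive" idea behind Parry's analytic implication - that a consequent may not introduce subject matter absent from the antecedent - here is a minimal sketch. The tuple encoding of formulas and the function names are assumptions made for this example, not anything taken from the book.

```python
# A minimal sketch of Parry's proscriptive principle: an implication
# A -> B is admissible only if every propositional variable of the
# consequent B already occurs in the antecedent A. Formulas are nested
# tuples, e.g. ("and", "p", ("or", "q", "r")); this encoding is an
# illustrative assumption, not the book's formalism.

def variables(formula):
    """Collect the propositional variables of a formula."""
    if isinstance(formula, str):           # an atom such as "p"
        return {formula}
    connective, *subformulas = formula     # e.g. ("and", A, B) or ("not", A)
    return set().union(*(variables(f) for f in subformulas))

def satisfies_proscription(antecedent, consequent):
    """True iff vars(consequent) is a subset of vars(antecedent)."""
    return variables(consequent) <= variables(antecedent)

# (p and q) -> p respects the containment condition ...
assert satisfies_proscription(("and", "p", "q"), "p")
# ... but p -> (p or q) does not: q is foreign to the antecedent.
assert not satisfies_proscription("p", ("or", "p", "q"))
```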

Communication Complexity and Parallel Computing (Hardcover, 1997 ed.)
Juraj Hromkovic
R1,595 Discovery Miles 15 950 Ships in 18 - 22 working days

The communication complexity of two-party protocols is a complexity measure that is only 15 years old, but it is already considered one of the fundamental complexity measures of recent complexity theory. Similarly to Kolmogorov complexity in the theory of sequential computations, communication complexity is used as a method for the study of the complexity of concrete computing problems in parallel information processing. Especially, it is applied to prove lower bounds that say what computer resources (time, hardware, memory size) are necessary to compute the given task. Besides the estimation of the computational difficulty of computing problems, the proved lower bounds are useful for proving the optimality of algorithms that are already designed. In some cases the knowledge about the communication complexity of a given problem may even be helpful in searching for efficient algorithms for this problem. The study of communication complexity has become a well-defined independent area of complexity theory. In addition to a strong relation to several fundamental complexity measures (and so to several fundamental problems of complexity theory), communication complexity has contributed to the study and understanding of the nature of determinism, nondeterminism, and randomness in algorithmics. There already exists a non-trivial mathematical machinery to handle the communication complexity of concrete computing problems, which gives hope that the approach based on communication complexity will be instrumental in the study of several central open problems of recent complexity theory.
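
For a flavour of the lower-bound arguments referred to here, the standard fooling-set bound for the equality function is the textbook example; it is a worked example from the field, not material quoted from this particular book.

```latex
% Fooling-set lower bound for EQ_n(x, y) = [x = y].
\[
  F = \{ (x, x) : x \in \{0,1\}^n \}
\]
% For distinct $(x,x), (y,y) \in F$, a protocol giving both pairs the
% same transcript would give that transcript to $(x,y)$ as well and
% answer it incorrectly; hence all $2^n$ pairs in $F$ need distinct
% transcripts, and the deterministic communication complexity obeys
\[
  D(\mathrm{EQ}_n) \;\ge\; \log_2 |F| \;=\; n .
\]
```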

Architectures for Baseband Signal Processing (Hardcover, 2014 ed.)
Frank Kienle
R3,640 R3,380 Discovery Miles 33 800 Save R260 (7%) Ships in 10 - 15 working days

This book addresses challenges faced by both the algorithm designer and the chip designer, who need to deal with the ongoing increase of algorithmic complexity and required data throughput for today's mobile applications. The focus is on implementation aspects and implementation constraints of individual components that are needed in transceivers for current standards, such as UMTS, LTE, WiMAX and DVB-S2. The application domain is the so-called outer receiver, which comprises the channel coding, interleaving stages, modulator, and multiple antenna transmission. Throughout the book, the focus is on advanced algorithms that are actually in use in modern communications systems. Their basic principles are always derived with a focus on the resulting communications and implementation performance. As a result, this book serves as a valuable reference for two typically disparate audiences in communication systems and hardware design.

Quality by Design for Electronics (Hardcover, 1996 ed.)
W. Fleischammer
R4,850 Discovery Miles 48 500 Ships in 18 - 22 working days

This book concentrates on the quality of electronic products. Electronics in general, including semiconductor technology and software, has become the key technology for wide areas of industrial production. In nearly all expanding branches of industry electronics, especially digital electronics, is involved. And the spread of electronic technology has not yet come to an end. This rapid development, coupled with growing competition and the shorter innovation cycle, has caused economic problems which tend to have adverse effects on quality. Therefore, good quality at low cost is a very attractive goal in industry today. The demand for better quality continues along with a demand for more studies in quality assurance. At the same time, many companies are experiencing a drop in profits just when better quality of their products is essential in order to survive against the competition. There have been many proposals in the past to improve quality without increase in cost, or to reduce cost for quality assurance without loss of quality. This book tries to summarize the practical content of many of these proposals and to give some advice, above all to the designer and manufacturer of electronic devices. It mainly addresses practically minded engineers and managers. It is probably of less interest to pure scientists. The book covers all aspects of quality assurance of components used in electronic devices. Integrated circuits (ICs) are considered to be the most important components because the degree of integration is still rising.

Advanced Topics in Term Rewriting (Hardcover, 2002 ed.)
Enno Ohlebusch
R1,639 Discovery Miles 16 390 Ships in 18 - 22 working days

Term rewriting techniques are applicable to various fields of computer science, including software engineering, programming languages, computer algebra, program verification, automated theorem proving and Boolean algebra. These powerful techniques can be successfully applied in all areas that demand efficient methods for reasoning with equations. One of the major problems encountered is the characterization of classes of rewrite systems that have a desirable property, like confluence or termination. In a system that is both terminating and confluent, every computation leads to a result that is unique, regardless of the order in which the rewrite rules are applied. This volume provides a comprehensive and unified presentation of termination and confluence, as well as related properties. Topics and features:

* unified presentation and notation for important advanced topics
* comprehensive coverage of conditional term-rewriting systems
* state-of-the-art survey of modularity in term rewriting
* presentation of a unified framework for term and graph rewriting
* up-to-date discussion of transformational methods for proving termination of logic programs, including the TALP system

This unique book offers a comprehensive and unified view of the subject that is suitable for all computer scientists, program designers, and software engineers who study and use term rewriting techniques. Practitioners, researchers and professionals will find the book an essential and authoritative resource and guide for the latest developments and results in the field.
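
To make the "terminating and confluent" case concrete, here is a minimal sketch assuming a tuple encoding of terms and the two usual rules for Peano addition; the representation and strategy are chosen for brevity and are not the book's notation.

```python
# A terminating, confluent term-rewriting system: Peano addition with
#   add(0, y)    -> y
#   add(s(x), y) -> s(add(x, y))
# Terms are nested tuples; "0" is zero, ("s", t) is successor.

def rewrite_step(term):
    """Apply one rule at the leftmost-outermost redex, or return None."""
    if isinstance(term, tuple) and term[0] == "add":
        _, left, right = term
        if left == "0":
            return right                             # add(0, y) -> y
        if isinstance(left, tuple) and left[0] == "s":
            return ("s", ("add", left[1], right))    # add(s(x), y) -> s(add(x, y))
    if isinstance(term, tuple):                      # otherwise recurse into subterms
        for i, sub in enumerate(term[1:], start=1):
            reduced = rewrite_step(sub)
            if reduced is not None:
                return term[:i] + (reduced,) + term[i + 1:]
    return None

def normal_form(term):
    """Rewrite until no rule applies; termination guarantees this halts."""
    while (next_term := rewrite_step(term)) is not None:
        term = next_term
    return term

# 2 + 1 = 3; confluence means the result is unique whatever the rule order.
two, one = ("s", ("s", "0")), ("s", "0")
assert normal_form(("add", two, one)) == ("s", ("s", ("s", "0")))
```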

Separation Logic for High-level Synthesis (Hardcover, 1st ed. 2017)
Felix Winterstein
R3,214 Discovery Miles 32 140 Ships in 18 - 22 working days

Cryptography and Network Security (Hardcover)
Marcelo Sampaio De Alencar
R3,456 Discovery Miles 34 560 Ships in 9 - 17 working days

Microsystem Technology and Microrobotics (Hardcover, 1997 ed.)
Sergej Fatikow, Ulrich Rembold
R4,239 Discovery Miles 42 390 Ships in 18 - 22 working days

Microsystem technology (MST) integrates very small (up to a few nanometers) mechanical, electronic, optical, and other components on a substrate to construct functional devices. These devices are used as intelligent sensors, actuators, and controllers for medical, automotive, household and many other purposes. This book is a basic introduction to MST for students, engineers, and scientists. It is the first of its kind to cover MST in its entirety. It gives a comprehensive treatment of all important parts of MST such as microfabrication technologies, microactuators, microsensors, development and testing of microsystems, and information processing in microsystems. It surveys products built to date and experimental products and gives a comprehensive view of all developments leading to MST devices and robots.

Interlinking of Computer Networks - Proceedings of the NATO Advanced Study Institute held at Bonas, France, August 28 - September 8, 1978 (Hardcover, 1979 ed.)
K.G. Beauchamp
R5,405 Discovery Miles 54 050 Ships in 18 - 22 working days

This volume contains the papers presented at the NATO Advanced Study Institute on the Interlinking of Computer Networks held between August 28th and September 8th 1978 at Bonas, France. The development of computer networks has proceeded over the last few decades to the point where a number of scientific and commercial networks are firmly established - albeit using different philosophies of design and operation. Many of these networks are serving similar communities having the same basic computer needs and those communities where the computer resources are complementary. Consequently there is now a considerable interest in the possibility of linking computer networks to provide resource sharing over quite wide geographical distances. The purpose of the Institute organisers was to consider the problems that arise when this form of interlinking is attempted. The problems fall into three categories, namely technical problems, compatibility and management. Only within the last few years have the technical problems been understood sufficiently well to enable interlinking to take place. Consequently considerable value was given during the meeting to discussing the compatibility and management problems that require solution before global interlinking becomes an accepted and cost effective operation. Existing computer networks were examined in depth and case-histories of their operations were presented by delegates drawn from the international community. The scope and detail of the papers presented should provide a valuable contribution to this emerging field and be useful to Communications Specialists and Managers as well as those concerned with Computer Operations and Development.

Process Algebra with Timing (Hardcover, 2002 ed.)
J. C. M. Baeten, C.A. Middelburg
R1,574 Discovery Miles 15 740 Ships in 18 - 22 working days

Timing issues are of growing importance for the conceptualization and design of computer-based systems. Timing may simply be essential for the correct behaviour of a system, e.g. of a controller. Even if timing is not essential for the correct behaviour of a system, there may be good reasons to introduce it in such a way that suitable timing becomes relevant for the correct behaviour of a complex system. This book is unique in presenting four algebraic theories about processes, each dealing with timing from a different point of view, in a coherent and systematic way. The timing of actions is either relative or absolute and the underlying time scale is either discrete or continuous. All presented theories are extensions of the algebra of communicating processes. The book is essential reading for researchers and advanced students interested in timing issues in the context of the design and analysis of concurrent and communicating processes.

Protecting Chips Against Hold Time Violations Due to Variability (Hardcover, 2012)
Gustavo Neuberger, Gilson Wirth, Ricardo Reis
R2,635 Discovery Miles 26 350 Ships in 18 - 22 working days

With the development of Very-Deep Sub-Micron technologies, process variability is becoming an increasingly important issue in the design of complex circuits. Process variability is the statistical variation of process parameters, meaning that these parameters do not always have the same value, but become random variables with a given mean value and standard deviation. This effect can lead to several issues in digital circuit design.

The logical consequence of this parameter variation is that circuit characteristics, such as delay and power, also become random variables. Because of the delay variability, not all circuits will have the same performance: some will be faster and some slower. However, the slowest circuits may be so slow that they will not be appropriate for sale. On the other hand, the fastest circuits, which could be sold at a higher price, can be very leaky, and thus also not appropriate for sale. A main consequence of power variability is that the power consumption of some circuits will differ from what was expected, reducing the reliability, average life expectancy and warranty of products. Sometimes the circuits will not work at all, due to causes associated with process variations. In the end, these effects result in lower yield and lower profitability.

To understand these effects, it is necessary to study the consequences of variability in several aspects of circuit design, such as logic gates, storage elements, clock distribution, and any other element that can be affected by process variations. The main focus of this book is storage elements.
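
A minimal Monte Carlo sketch of this premise, assuming independent Gaussian delay contributions with invented numbers; nothing below is data from the book.

```python
# Under process variability a path delay is a random variable, so a
# hold-time check becomes a probability. All values are made up solely
# to make the example run (picoseconds for means and sigmas).
import random

def hold_violation_rate(trials=100_000):
    violations = 0
    for _ in range(trials):
        clk_to_q   = random.gauss(80, 8)    # launching flip-flop clock-to-Q
        data_path  = random.gauss(60, 15)   # short combinational path
        clock_skew = random.gauss(30, 10)   # capture clock arrives late
        hold_time  = 40                     # capture flip-flop requirement
        # Hold check: new data must arrive after the capture edge's hold window.
        if clk_to_q + data_path < clock_skew + hold_time:
            violations += 1
    return violations / trials

print(f"estimated hold-violation probability: {hold_violation_rate():.3%}")
```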

Handbook on Enterprise Architecture (Hardcover, 2003 ed.)
Peter Bernus, Laszlo Nemes, Gunter Schmidt
R7,793 Discovery Miles 77 930 Ships in 18 - 22 working days

This Handbook is about methods, tools and examples of how to architect an enterprise through considering all life cycle aspects of Enterprise Entities (such as individual enterprises, enterprise networks, virtual enterprises, projects and other complex systems including a mixture of automated and human processes). The book is based on ISO 15704:2000, the GERAM Framework (Generalised Enterprise Reference Architecture and Methodology), which generalises the requirements of Enterprise Reference Architectures. Various Architecture Frameworks (PERA, CIMOSA, GRAI-GIM, Zachman, C4ISR/DoDAF) are shown in the light of GERAM to allow a deeper understanding of their contributions and therefore their correct and knowledgeable use. The handbook addresses a wide variety of audiences, and covers the methods and tools necessary to design or redesign enterprises, as well as to structure the implementation into manageable projects.

Post-Silicon Validation and Debug (Hardcover, 1st ed. 2019)
Prabhat Mishra, Farimah Farahmandi
R4,008 Discovery Miles 40 080 Ships in 10 - 15 working days

This book provides comprehensive coverage of System-on-Chip (SoC) post-silicon validation and debug challenges and state-of-the-art solutions, with contributions from SoC designers, academic researchers, and SoC verification experts. Readers will get a clear understanding of the existing debug infrastructure and how it can be effectively utilized to verify and debug SoCs.

Low Power Interconnect Design (Hardcover, 2012)
Sandeep Saini
R2,658 Discovery Miles 26 580 Ships in 18 - 22 working days

This book provides practical solutions for delay and power reduction for on-chip interconnects and buses. It gives an in-depth description of the problem of signal delay and extra power consumption, and of possible solutions for delay and glitch removal, while considering the power reduction of the total system. Coverage focuses on the use of the Schmitt trigger as an alternative approach to buffer insertion for delay and power reduction in VLSI interconnects. In the last section of the book, various bus coding techniques are discussed to minimize delay and power in address and data buses.
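
As one concrete instance of the genre of bus coding techniques mentioned here, the following sketches classic bus-invert coding; it is a representative example of such codes, not necessarily the specific scheme the book develops.

```python
# Bus-invert coding: if more than half the bus lines would toggle,
# transmit the complemented word plus an extra "invert" line, so at
# most about half the lines switch per transfer (this toy version
# does not count the invert line's own toggles).

def bus_invert(words, width=8):
    """Yield (encoded_word, invert_flag) pairs minimizing transitions."""
    previous = 0                             # bus state, as an int bit-vector
    for word in words:
        toggles = bin(previous ^ word).count("1")
        if toggles > width // 2:             # cheaper to send the complement
            word ^= (1 << width) - 1
            yield word, 1
        else:
            yield word, 0
        previous = word                      # measure the next transition
                                             # against what was actually driven

data = [0b00000000, 0b11111110, 0b11110000]
for encoded, inv in bus_invert(data):
    print(f"{encoded:08b} invert={inv}")
```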

Stream Processor Architecture (Hardcover, 2001 ed.)
Scott Rixner
R2,698 Discovery Miles 26 980 Ships in 18 - 22 working days

Media processing applications, such as three-dimensional graphics, video compression, and image processing, currently demand 10-100 billion operations per second of sustained computation. Fortunately, hundreds of arithmetic units can easily fit on a modestly sized 1 cm² chip in modern VLSI. The challenge is to provide these arithmetic units with enough data to enable them to meet the computation demands of media processing applications. Conventional storage hierarchies, which frequently include caches, are unable to bridge the data bandwidth gap between modern DRAM and tens to hundreds of arithmetic units. A data bandwidth hierarchy, however, can bridge this gap by scaling the provided bandwidth across the levels of the storage hierarchy. The stream programming model enables media processing applications to exploit a data bandwidth hierarchy effectively. Media processing applications can naturally be expressed as a sequence of computation kernels that operate on data streams. This programming model exposes the locality and concurrency inherent in these applications and enables them to be mapped efficiently to the data bandwidth hierarchy. Stream programs are able to utilize inexpensive local data bandwidth when possible and consume expensive global data bandwidth only when necessary. Stream Processor Architecture presents the architecture of the Imagine streaming media processor, which delivers a peak performance of 20 billion floating-point operations per second. Imagine efficiently supports 48 arithmetic units with a three-tiered data bandwidth hierarchy. At the base of the hierarchy, the streaming memory system employs memory access scheduling to maximize the sustained bandwidth of external DRAM. At the center of the hierarchy, the global stream register file enables streams of data to be recirculated directly from one computation kernel to the next without returning data to memory. Finally, local distributed register files that directly feed the arithmetic units enable temporary data to be stored locally so that it does not need to consume costly global register bandwidth. The bandwidth hierarchy enables Imagine to achieve up to 96% of the performance of a stream processor with infinite bandwidth from memory and the global register file.
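
A toy sketch of the stream programming model described above, with Python generators standing in for kernels and streams; the kernel names and numbers are illustrative only.

```python
# Media processing as a pipeline of kernels over a data stream.
# Generators pass records from one kernel to the next without a
# round trip to "memory", mimicking kernel-to-kernel recirculation.

def deinterleave(samples):          # kernel 1: split into (even, odd) pairs
    it = iter(samples)
    for even in it:
        yield even, next(it)

def scale(pairs, gain):             # kernel 2: elementwise arithmetic
    for a, b in pairs:
        yield a * gain, b * gain

def mix(pairs):                     # kernel 3: reduce each record
    for a, b in pairs:
        yield (a + b) / 2

stream = range(8)                   # the input data stream
pipeline = mix(scale(deinterleave(stream), gain=2))
print(list(pipeline))               # [1.0, 5.0, 9.0, 13.0]
```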

Architecture and Design of Distributed Embedded Systems - IFIP WG10.3/WG10.4/WG10.5 International Workshop on Distributed and Parallel Embedded Systems (DIPES 2000) October 18-19, 2000, Schloss Eringerfeld, Germany (Hardcover, 2001 ed.)
Bernd Kleinjohann
R4,141 Discovery Miles 41 410 Ships in 18 - 22 working days

Due to the decreasing production costs of IT systems, applications that formerly had to be realised as expensive PCBs can now be realised as a system-on-chip. Furthermore, low cost broadband communication media for wide area communication as well as for the realisation of local distributed systems are available. Typically the market requires IT systems that realise a set of specific features for the end user in a given environment, so-called embedded systems. Some examples of such embedded systems are control systems in cars, airplanes, houses or plants, information and communication devices like digital TV, mobile phones, or autonomous systems like service or edutainment robots. For the design of embedded systems the designer has to tackle three major aspects: the application itself, including the man-machine interface; the (target) architecture of the system, including all functional and non-functional constraints; and the design methodology, including modelling, specification, synthesis, test and validation. The last two points are a major focus of this book. This book documents the high quality approaches and results that were presented at the International Workshop on Distributed and Parallel Embedded Systems (DIPES 2000), which was sponsored by the International Federation for Information Processing (IFIP), and organised by IFIP working groups WG10.3, WG10.4 and WG10.5. The workshop took place on October 18-19, 2000, in Schloss Eringerfeld near Paderborn, Germany. Architecture and Design of Distributed Embedded Systems is organised similarly to the workshop. Chapters 1 and 4 (Methodology I and II) deal with different modelling and specification paradigms and the corresponding design methodologies. Generic system architectures for different classes of embedded systems are presented in Chapter 2. In Chapter 3 several design environments for the support of specific design methodologies are presented. Problems concerning test and validation are discussed in Chapter 5. The last two chapters include distribution and communication aspects (Chapter 6) and synthesis techniques for embedded systems (Chapter 7). This book is essential reading for computer science researchers and application developers.

Nonlinear Assignment Problems - Algorithms and Applications (Hardcover, 2001 ed.)
Panos M. Pardalos, L.S. Pitsoulis
R4,049 Discovery Miles 40 490 Ships in 18 - 22 working days

Nonlinear Assignment Problems (NAPs) are natural extensions of the classic Linear Assignment Problem, and despite the efforts of many researchers over the past three decades, they still remain some of the hardest combinatorial optimization problems to solve exactly. The purpose of this book is to provide, in a single volume, the major algorithmic aspects and applications of NAPs as contributed by leading international experts. The chapters included in this book are concerned with major applications and the latest algorithmic solution approaches for NAPs. Approximation algorithms, polyhedral methods, semidefinite programming approaches and heuristic procedures for NAPs are included, while applications of this problem class are presented in the areas of multiple-target tracking in the context of military surveillance systems, experimental high energy physics, and parallel processing. Audience: Researchers and graduate students in the areas of combinatorial optimization, mathematical programming, operations research, physics, and computer science.
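
As a concrete instance, the quadratic assignment problem is the best-known NAP; a brute-force sketch follows, workable only at toy sizes, which is precisely the hardness the book starts from. The flow/distance data below are invented.

```python
# Quadratic assignment problem (QAP): assign n facilities to n
# locations minimizing sum_{i,j} flow[i][j] * distance[p(i)][p(j)].
# Brute force over all n! permutations - fine for n = 3, hopeless in general.
from itertools import permutations

def solve_qap(flow, distance):
    n = len(flow)
    def cost(p):
        return sum(flow[i][j] * distance[p[i]][p[j]]
                   for i in range(n) for j in range(n))
    return min(permutations(range(n)), key=cost)

flow     = [[0, 3, 1],
            [3, 0, 2],
            [1, 2, 0]]
distance = [[0, 1, 4],
            [1, 0, 2],
            [4, 2, 0]]
print("best assignment:", solve_qap(flow, distance))
```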

Time-Constrained Transaction Management - Real-Time Constraints in Database Transaction Systems (Hardcover, 1996 ed.)
Nandit R. Soparkar, Henry F. Korth, Abraham Silberschatz
R2,732 Discovery Miles 27 320 Ships in 18 - 22 working days

Transaction processing is an established technique for the concurrent and fault tolerant access of persistent data. While this technique has been successful in standard database systems, factors such as time-critical applications, emerging technologies, and a re-examination of existing systems suggest that the performance, functionality and applicability of transactions may be substantially enhanced if temporal considerations are taken into account. That is, transactions should not only execute in a "legal" (i.e., logically correct) manner, but they should meet certain constraints with regard to their invocation and completion times. Typically, these logical and temporal constraints are application-dependent, and we address some fundamental issues for the management of transactions in the presence of such constraints. Our model for transaction-processing is based on extensions to established models, and we briefly outline how logical and temporal constraints may be expressed in it. For scheduling the transactions, we describe how legal schedules differ from one another in terms of meeting the temporal constraints. Existing scheduling mechanisms do not differentiate among legal schedules, and are thereby inadequate with regard to meeting temporal constraints. This provides the basis for seeking scheduling strategies that attempt to meet the temporal constraints while continuing to produce legal schedules.
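
As a rough illustration of a scheduler that does differentiate among schedules by their temporal behaviour, here is an earliest-deadline-first sketch; EDF is a standard real-time policy used purely for illustration, not the mechanism this book proposes, and the transaction tuples and numbers are made up.

```python
# Dispatch transactions (name, deadline, duration) in earliest-deadline-
# first order and report which deadlines were met.
import heapq

def edf_schedule(transactions, start=0):
    queue = [(deadline, name, duration)
             for name, deadline, duration in transactions]
    heapq.heapify(queue)                 # ordered by nearest deadline
    clock, log = start, []
    while queue:
        deadline, name, duration = heapq.heappop(queue)
        clock += duration                # run the transaction to completion
        log.append((name, clock, clock <= deadline))
    return log                           # (transaction, finish time, met?)

txns = [("T1", 7, 3), ("T2", 4, 2), ("T3", 10, 4)]
for name, finish, met in edf_schedule(txns):
    print(f"{name} finished at t={finish}, deadline met: {met}")
```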

Loop Parallelization (Hardcover, 1994 ed.)
Utpal Banerjee
R4,110 Discovery Miles 41 100 Ships in 18 - 22 working days

Automatic transformation of a sequential program into a parallel form is a subject that presents a great intellectual challenge and promises a great practical reward. There is a tremendous investment in existing sequential programs, and scientists and engineers continue to write their application programs in sequential languages (primarily in Fortran). The demand for higher speedups increases. The job of a restructuring compiler is to discover the dependence structure and the characteristics of the given machine. Much attention has been focused on the Fortran do loop. This is where one expects to find major chunks of computation that need to be performed repeatedly for different values of the index variable. Many loop transformations have been designed over the years, and several of them can be found in any parallelizing compiler currently in use in industry or at a university research facility. The book series Loop Transformations for Restructuring Compilers provides a rigorous theory of loop transformations and dependence analysis. We want to develop the transformations in a consistent mathematical framework using objects like directed graphs, matrices, and linear equations. Then, the algorithms that implement the transformations can be precisely described in terms of certain abstract mathematical algorithms. The first volume, Loop Transformations for Restructuring Compilers: The Foundations, provided the general mathematical background needed for loop transformations (including those basic mathematical algorithms), discussed data dependence, and introduced the major transformations. The current volume, Loop Parallelization, builds a detailed theory of iteration-level loop transformations based on the material developed in the previous book.
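
To make the dependence question concrete, here is a toy pair of loops - one safely parallelizable, one carrying a flow dependence; this is an illustration of the concept, not an excerpt from the book.

```python
# The question a restructuring compiler asks about a do loop: do any
# iterations depend on earlier ones? The first loop's iterations are
# independent, so they may run in any order (or in parallel); the
# second reads c[i-1], written one iteration earlier, so naive
# parallelization would change its result.

n = 8
a = list(range(n))

# No loop-carried dependence: b[i] depends only on iteration i's input.
b = [0] * n
for i in range(n):
    b[i] = 2 * a[i]

# Loop-carried (flow) dependence: a sequential order is imposed.
c = [0] * n
for i in range(1, n):
    c[i] = c[i - 1] + a[i]

print(b)  # safe to compute in parallel
print(c)  # prefix sums: inherently ordered as written
```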

A Parallel Algorithm Synthesis Procedure for High-Performance Computer Architectures (Hardcover, 2003 ed.)
Ian N. Dunn, Gerard G.L. Meyer
R2,715 Discovery Miles 27 150 Ships in 18 - 22 working days

Despite five decades of research, parallel computing remains an exotic, frontier technology on the fringes of mainstream computing. Its much-heralded triumph over sequential computing has yet to materialize. This is in spite of the fact that the processing needs of many signal processing applications continue to eclipse the capabilities of sequential computing. The culprit is largely the software development environment. Fundamental shortcomings in the development environment of many parallel computer architectures thwart the adoption of parallel computing. Foremost, parallel computing has no unifying model to accurately predict the execution time of algorithms on parallel architectures. Cost and scarce programming resources prohibit deploying multiple algorithms and partitioning strategies in an attempt to find the fastest solution. As a consequence, algorithm design is largely an intuitive art form dominated by practitioners who specialize in a particular computer architecture. This, coupled with the fact that parallel computer architectures rarely last more than a couple of years, makes for a complex and challenging design environment.

To navigate this environment, algorithm designers need a road map, a detailed procedure they can use to efficiently develop high performance, portable parallel algorithms. The focus of this book is to draw such a road map. The Parallel Algorithm Synthesis Procedure can be used to design reusable building blocks of adaptable, scalable software modules from which high performance signal processing applications can be constructed. The hallmark of the procedure is a semi-systematic process for introducing parameters to control the partitioning and scheduling of computation and communication. This facilitates the tailoring of software modules to exploit different configurations of multiple processors, multiple floating-point units, and hierarchical memories. To showcase the efficacy of this procedure, the book presents three case studies requiring various degrees of optimization for parallel execution.
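
A sketch of what such a partitioning parameter might look like, with block and cyclic strategies as the tunable choices; the function and its names are hypothetical, not the book's API.

```python
# The same work list is split across p workers either in contiguous
# blocks or cyclically, and the choice is left as a tunable knob
# rather than hard-coded - the flavour of parameter the procedure
# introduces to tailor modules to different machine configurations.

def partition(items, workers, strategy="block"):
    """Return one list of items per worker under the chosen strategy."""
    if strategy == "cyclic":          # item i goes to worker i mod p
        return [items[w::workers] for w in range(workers)]
    if strategy == "block":           # contiguous chunks, balanced sizes
        q, r = divmod(len(items), workers)
        bounds = [0]
        for w in range(workers):
            bounds.append(bounds[-1] + q + (1 if w < r else 0))
        return [items[bounds[w]:bounds[w + 1]] for w in range(workers)]
    raise ValueError(f"unknown strategy: {strategy}")

work = list(range(10))
print(partition(work, 3, "block"))    # [[0,1,2,3], [4,5,6], [7,8,9]]
print(partition(work, 3, "cyclic"))   # [[0,3,6,9], [1,4,7], [2,5,8]]
```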

This book can be used as a reference for algorithm designers or as a text for an advanced course on parallel programming.

VHDL: A logic synthesis approach (Hardcover, 1997 ed.)
D. Naylor, S. Jones
R4,199 Discovery Miles 41 990 Ships in 18 - 22 working days

This book is structured in a practical, example-driven manner. The use of VHDL for constructing logic synthesisers is one of the aims of the book; the second is the application of the tools to the design process. Worked examples, questions and answers are provided, together with dos and don'ts of good practice. An appendix on logic design and the source code are available free of charge over the Internet.

Theory of Digital Automata (Hardcover, 2013 ed.)
Bohdan Borowik, Mykola Karpinskyy, Valery Lahno, Oleksandr Petrov
R4,145 R3,345 Discovery Miles 33 450 Save R800 (19%) Ships in 10 - 15 working days

This book serves a dual purpose: firstly to combine the treatment of circuits and digital electronics, and secondly, to establish a strong connection with the contemporary world of digital systems. The need for this approach arises from the observation that introducing digital electronics through a course in traditional circuit analysis is fast becoming obsolete. Our world has gone digital. Automata theory helps with the design of digital circuits such as parts of computers, telephone systems and control systems. A complete perspective is emphasized, because even the most elegant computer architecture will not function without adequate supporting circuits. The focus is on explaining the real-world implementation of complete digital systems. In doing so, the reader is prepared to immediately begin design and implementation work. This work serves as a bridge to take readers from the theoretical world to the everyday design world where solutions must be complete to be successful.
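
A minimal sketch of the automata-to-circuit connection the description mentions: a Moore machine that raises its output after two consecutive 1s on a serial input, exactly the kind of state machine one would then map onto flip-flops and gates. The dictionary encoding is an illustrative assumption.

```python
# A Moore machine: the output depends only on the current state.
TRANSITIONS = {             # state -> {input bit: next state}
    "idle":     {0: "idle", 1: "one_seen"},
    "one_seen": {0: "idle", 1: "two_seen"},
    "two_seen": {0: "idle", 1: "two_seen"},
}
OUTPUT = {"idle": 0, "one_seen": 0, "two_seen": 1}

def run(bits, state="idle"):
    """Feed a serial bit stream through the machine, collecting outputs."""
    outputs = []
    for bit in bits:
        state = TRANSITIONS[state][bit]
        outputs.append(OUTPUT[state])
    return outputs

print(run([1, 1, 0, 1, 1, 1]))   # [0, 1, 0, 0, 1, 1]
```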

A High Performance Architecture for Prolog (Hardcover, 1990 ed.)
T.P. Dobry
R2,777 Discovery Miles 27 770 Ships in 18 - 22 working days

Artificial Intelligence is entering the mainstream of computer applications, and as techniques are developed and integrated into a wide variety of areas they are beginning to tax the processing power of conventional architectures. To meet this demand, specialized architectures providing support for the unique features of symbolic processing languages are emerging. The goal of the research presented here is to show that an architecture specialized for Prolog can achieve a ten-fold improvement in performance over conventional, general-purpose architectures. This book presents such an architecture for high performance execution of Prolog programs. The architecture is based on the abstract machine description introduced by David H.D. Warren known as the Warren Abstract Machine (WAM). The execution model of the WAM is described and extended to provide a complete Instruction Set Architecture (ISA) for Prolog known as the PLM. This ISA is then realized in a microarchitecture and finally in a hardware design. The work described here represents one of the first efforts to implement the WAM model in hardware. The approach taken is that of direct implementation of the high-level WAM instruction set in hardware, resulting in a CISC-style architecture.
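
Unification is the operation at the heart of Prolog execution that the WAM's instruction set compiles into hardware-level steps; the following is a plain recursive sketch of it (without the occurs check or the WAM's tagged heap cells), not the PLM's algorithm. The term encoding is an assumption for this example.

```python
# Terms: strings starting with an uppercase letter are variables;
# tuples like ("f", X, Y) are compound terms with a functor and args.

def is_var(t):
    return isinstance(t, str) and t[:1].isupper()

def walk(t, subst):
    """Follow variable bindings to a representative term."""
    while is_var(t) and t in subst:
        t = subst[t]
    return t

def unify(a, b, subst=None):
    """Return an extended substitution, or None on failure."""
    subst = dict(subst or {})
    a, b = walk(a, subst), walk(b, subst)
    if a == b:
        return subst
    if is_var(a):
        return {**subst, a: b}
    if is_var(b):
        return {**subst, b: a}
    if isinstance(a, tuple) and isinstance(b, tuple) and len(a) == len(b):
        for x, y in zip(a, b):          # same functor/arity: unify argwise
            subst = unify(x, y, subst)
            if subst is None:
                return None
        return subst
    return None                         # functor clash: no unifier

# f(X, b) unifies with f(a, Y) under {X: "a", Y: "b"}.
print(unify(("f", "X", "b"), ("f", "a", "Y")))
```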

Paraconsistent Intelligent-Based Systems - New Trends in the Applications of Paraconsistency (Hardcover, 2015 ed.)
Jair Minoro Abe
R4,224 R3,423 Discovery Miles 34 230 Save R801 (19%) Ships in 10 - 15 working days

This book presents some of the latest applications of new theories based on the concept of paraconsistency and correlated topics in informatics, such as pattern recognition (bioinformatics), robotics, decision-making themes, and sample size. Each chapter is self-contained, and an introductory chapter covering the logic theoretical basis is also included. The aim of the text is twofold: to serve as an introductory text on the theories and applications of new logic, and as a textbook for undergraduate or graduate-level courses in AI. Today AI frequently has to cope with problems of vagueness, incomplete and conflicting (inconsistent) information. One of the most notable formal theories for addressing them is paraconsistent (paracomplete and non-alethic) logic.
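
For a taste of how a paraconsistent logic tolerates contradiction, here is Priest's three-valued Logic of Paradox (LP), chosen only because its truth tables are compact; the systems treated in the book are richer than this sketch.

```python
# LP values: T (true), B (both true and false), F (false).
# A formula "holds" when its value is designated (T or B).
ORDER = {"F": 0, "B": 1, "T": 2}
DESIGNATED = {"T", "B"}

def neg(v):
    return {"T": "F", "B": "B", "F": "T"}[v]

def conj(u, v):
    return min(u, v, key=ORDER.get)      # meet under F < B < T

# With p = B, the contradiction (p and not-p) still "holds" ...
p = "B"
assert conj(p, neg(p)) in DESIGNATED
# ... yet an unrelated q = F is not thereby true: the explosion rule
# "from a contradiction, everything follows" fails in LP.
q = "F"
assert q not in DESIGNATED
```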

The Architecture of Information - Architecture, Interaction Design and the Patterning of Digital Information (Hardcover)
Martyn Dade-Robertson
R5,481 Discovery Miles 54 810 Ships in 10 - 15 working days

This book looks at relationships between the organisation of physical objects in space and the organisation of ideas. Historical, philosophical, psychological and architectural knowledge are united to develop an understanding of the relationship between information and its representation. Despite its potential to break the mould, digital information has relied on metaphors from a pre-digital era. In particular, architectural ideas have pervaded discussions of digital information, from the urbanisation of cyberspace in science fiction, through to the adoption of spatial visualisations in the design of graphical user interfaces. This book tackles:

* the historical importance of physical places to the organisation and expression of knowledge
* the limitations of using the physical organisation of objects as the basis for systems of categorisation and taxonomy
* the emergence of digital technologies and the 20th century's new conceptual understandings of knowledge and its organisation
* the concept of disconnecting the storage of information objects from their presentation and retrieval
* ideas surrounding 'semantic space'
* the realities of the types of user interface which now dominate modern computing.
