In Logic Programming, as in many other areas, theory is often best tested by application, and attempted application frequently necessitates advances in theory, so both theoretical and practical work is essential for effective progress. This is clearly evident in the following papers presented to the second UK Logic Programming Conference, which was sponsored by the United Kingdom branch of the Association for Logic Programming and convened at Bristol University in March 1990. This book contains 13 papers from that conference, grouped under four headings. Theory supporting practice, practice motivating theory: in this first group of papers, difficulties experienced in the practical application of Prolog and in debugging Prolog programs have motivated work on extensions to the language and its development environment. Program development advances are represented by two papers on debugging and one on a development methodology for CLP programs. On the theoretical side, a Pure(r) logic language is proposed, as well as extensions to make logic more effective for integrity checking in deductive databases. Applications: the next group contains three papers. The first describes the use of Prolog to develop a Control Engineering workStation (CES). The second investigates the use of a logic-programming-based KBMS for developing a prototype Financial Management Information System. The last shows how a subset of Prolog can provide a vehicle for the animation of Discrete Mathematics.
Object-Z is an object-oriented extension of the formal specification language Z. It adds to Z the notions of classes and objects, inheritance, and polymorphism. By extending Z's semantic basis, it enables the specification of systems as collections of independent objects in which self- and mutual referencing are possible. The Object-Z Specification Language presents a comprehensive description of Object-Z including discussions of semantic issues, definitions of all language constructs, type rules and other rules of usage, specification guidelines, and a full concrete syntax. It will enable you to confidently construct Object-Z specifications and is intended as a reference manual to keep by your side as you use and learn to use Object-Z. The Object-Z Specification Language is suitable as a textbook or as a secondary text for a graduate-level course, and as a reference for researchers and practitioners in industry.
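As a rough taste of the notation (a minimal linear-text sketch based on the general shape of Object-Z classes, not an example taken from the book), a class bundles a visibility list, a state schema, an initial-state schema, and operations whose Δ-lists name the state variables they may change:

    Counter
      (count, INIT, Inc)        -- visibility list: the externally visible features
      count : N                 -- state schema: one natural-number variable
      INIT
        count = 0               -- initial-state schema
      Inc
        Δ(count)                -- Inc may change count
        count' = count + 1      -- primed names denote the after-state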
Variational Object-Oriented Programming Beyond Classes and Inheritance presents an approach for improving the standard object-oriented programming model. The proposal is aimed at supporting a larger range of incremental behavior variations and thus promises to be more effective in mastering the complexity of today's software. The material presented in this book is of interest both to beginners and to students or professionals with an advanced knowledge of object-oriented programming:

* The first part of the book can be used as supplementary material for students and professionals being introduced to object-oriented programming. It provides them with a very concise description of the main concepts of object-oriented programming, which are presented from a conceptual point of view rather than tied to the features of a particular object-oriented programming language. The description of the main concepts is a synthesis of considerations from several leading works in data abstraction and object-oriented technology. Parts of the book are currently used as supplementary material for teaching a graduate course on object-oriented design.

* The book provides experienced programmers with a conceptual view of the relationship between object-oriented programming, data abstraction, and previous programming models that promotes a deep understanding of the essence of object-oriented programming.

* The book presents a synthesis of both the main achievements and the main shortcomings of object-oriented programming with respect to supporting incremental programming and promoting software reuse. It illustrates the behavior variations that can be performed incrementally and those that are not supported properly; the workarounds currently used for dealing with the latter case are described.

* Recent developments from ongoing research in object-oriented programming are presented, showing that the problems they deal with can actually be traced to some form of context-dependent behavior. The developments considered include design patterns, subject-oriented programming, adaptive programming, reflection, open implementations, and aspect-oriented programming.

* Advanced students interested in language design are provided not only with a comprehensive informal description of the new model, but also with a formal model and the description of a prototype implementation of RONDO embedded in the Smalltalk-80 environment. This can serve as a basis for experimenting with new concepts or with modifications of the proposed model.

* The last chapter of the book is particularly beneficial to practitioners of object technology, since it deals with issues in maintaining reusable object-oriented systems.
Multi-Threaded Object-Oriented MPI-Based Message Passing Interface: The ARCH Library presents ARCH, a library built as an extension to MPI. ARCH relies on a small set of programming abstractions that allow the writing of well-structured multi-threaded parallel codes according to the object-oriented programming style. ARCH is written in C++. The book describes the built-in classes and illustrates their use through several template application cases in several fields of interest: Distributed Algorithms (global completion detection, distributed process serialization), Parallel Combinatorial Optimization (the A* procedure), and Parallel Image Processing (segmentation by region growing). It shows how new application-level distributed data types, such as a distributed tree and a distributed graph, can be derived from the built-in classes. A feature of interest to readers is that both the library and the application codes used for illustration purposes are available via the Internet. The material can be downloaded for installation and personal parallel code development on the reader's computer system. ARCH can be run on Unix/Linux as well as Windows NT-based platforms. Current installations include the IBM-SP2, the CRAY-T3E, the Intel Paragon, and PC networks under Linux or Windows NT. Multi-Threaded Object-Oriented MPI-Based Message Passing Interface: The ARCH Library is aimed at scientists who need to implement parallel/distributed algorithms requiring complicated local and/or distributed control structures. It can also benefit parallel/distributed program developers who wish to write codes in the object-oriented style. The author has been using ARCH for several years as a medium to teach parallel and network programming. Teachers can employ the library for the same purpose while students can use it for training. Although ARCH has been used so far in an academic environment, it will be an effective tool for professionals as well. Multi-Threaded Object-Oriented MPI-Based Message Passing Interface: The ARCH Library is suitable as a secondary text for a graduate-level course on Data Communications and Networks, Programming Languages, Algorithms and Computational Theory and Distributed Computing, and as a reference for researchers and practitioners in industry.
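ARCH's own class library is not reproduced in this blurb, but the programming style it supports can be suggested with a small hedged sketch in plain Python: each "process" is a thread with a private mailbox, and send/recv calls pass messages between them. All names and structure here are illustrative assumptions, not ARCH's API.

    import threading, queue

    class Process(threading.Thread):
        """A toy 'process' with a private mailbox; send/recv mimic
        MPI-style point-to-point calls."""
        def __init__(self, rank, mailboxes):
            super().__init__()
            self.rank, self.mailboxes = rank, mailboxes

        def send(self, dest, msg):
            self.mailboxes[dest].put((self.rank, msg))

        def recv(self):
            return self.mailboxes[self.rank].get()

    class PingPong(Process):
        def run(self):
            if self.rank == 0:
                self.send(1, "ping")
                src, msg = self.recv()
                print("rank 0 got", repr(msg), "from rank", src)
            else:
                src, msg = self.recv()
                self.send(src, "pong")

    mailboxes = [queue.Queue(), queue.Queue()]
    workers = [PingPong(r, mailboxes) for r in (0, 1)]
    for w in workers: w.start()
    for w in workers: w.join()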
Hardware description languages (HDLs) such as VHDL and Verilog have found their way into almost every aspect of the design of digital hardware systems. Since their inception they have gradually proved to be an essential part of modern design methodologies and design automation tools, ever exceeding their original goals of being description and simulation languages. Their use for automatic synthesis, formal proof, and testing are good examples. So far, HDLs have mainly dealt with digital systems. However, integrated systems designed today require more and more analog parts such as A/D and D/A converters, phase-locked loops, and current mirrors. The verification of the complete system therefore calls for the use of a single language. Using VHDL or Verilog to handle analog descriptions is possible, as shown in this book, but the real power comes from true mixed-signal HDLs that integrate discrete and continuous semantics into a unified framework. Analog HDLs (AHDLs) are considered here to be a subset of mixed-signal HDLs, as they intend to provide the same level of features as HDLs do but with a scope limited to analog systems, possibly with limited support for discrete semantics. Analog and Mixed-Signal Hardware Description Languages covers several aspects related to analog and mixed-signal hardware description languages, including:

* the use of a digital HDL for the description and the simulation of analog systems;
* the emergence of extensions of existing standard HDLs that provide true analog and mixed-signal HDLs;
* the use of analog and mixed-signal HDLs for the development of behavioral models of analog (electronic) building blocks (operational amplifier, PLL) and for the design of microsystems that do not only involve electronic parts;
* the use of a front-end tool that eases the description task with the help of a graphical paradigm, yet generates AHDL descriptions automatically.

Analog and Mixed-Signal Hardware Description Languages is the first book to show how to use these new hardware description languages in the design of electronic components and systems. It is necessary reading for researchers and designers working in electronic design.
Perspectives on Software Requirements presents perspectives on several current approaches to software requirements. Each chapter addresses a specific problem, with the authors summarizing their experiences and results in producing well-fitting, traceable requirements. Chapters highlighting familiar issues with recent results and experiences are accompanied by chapters describing well-tuned new methods for specific domains.
The implementation of object-oriented languages has been an active topic of research since the 1960s when the first Simula compiler was written. The topic received renewed interest in the early 1980s with the growing popularity of object-oriented programming languages such as C++ and Smalltalk, and got another boost with the advent of Java. Polymorphic calls are at the heart of object-oriented languages, and even the first implementation of Simula-67 contained their classic implementation via virtual function tables. In fact, virtual function tables predate even Simula: for example, Ivan Sutherland's Sketchpad drawing editor employed very similar structures in 1960. Similarly, during the 1970s and 1980s the implementers of Smalltalk systems spent considerable efforts on implementing polymorphic calls for this dynamically typed language where virtual function tables could not be used. Given this long history of research into the implementation of polymorphic calls, and the relatively mature standing it achieved over time, why, one might ask, should there be a new book in this field? The answer is simple. Both software and hardware have changed considerably in recent years, to the point where many assumptions underlying the original work in this field are no longer true. In particular, virtual function tables are no longer sufficient to implement polymorphic calls even for statically typed languages; for example, Java's interface calls cannot be implemented this way. Furthermore, today's processors are deeply pipelined and can execute instructions out of order, making it difficult to predict the execution time of even simple code sequences.
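The "classic implementation" mentioned above is easy to picture. Here is a minimal sketch in Python (a model of the mechanism, not production dispatch code): each class owns a table of function pointers, and a polymorphic call site simply indexes the receiver's table. The final comment notes why Java interface calls outgrow this scheme.

    def area_circle(self): return 3.14159 * self["r"] ** 2
    def area_square(self): return self["side"] ** 2

    # One table per class; a call site knows only the slot index.
    VTABLES = {"Circle": [area_circle], "Square": [area_square]}

    def make(cls, **fields):
        obj = dict(fields)
        obj["__vtable__"] = VTABLES[cls]   # every object points at its class's table
        return obj

    def call(obj, slot, *args):
        return obj["__vtable__"][slot](obj, *args)   # constant-time indexed dispatch

    shapes = [make("Circle", r=1.0), make("Square", side=2.0)]
    print([call(s, 0) for s in shapes])    # same call site, two different methods

    # A Java interface call breaks this scheme: a class may implement many
    # interfaces, so no single fixed slot numbering exists at compile time.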
Estimation of Distribution Algorithms: A New Tool for Evolutionary Computation is devoted to a new paradigm for evolutionary computation, named estimation of distribution algorithms (EDAs). This new class of algorithms generalizes genetic algorithms by replacing the crossover and mutation operators with learning and sampling from the probability distribution of the best individuals of the population at each iteration of the algorithm. Working in such a way, the relationships between the variables involved in the problem domain are explicitly and effectively captured and exploited. This text constitutes the first compilation and review of the techniques and applications of this new tool for performing evolutionary computation. Estimation of Distribution Algorithms: A New Tool for Evolutionary Computation is clearly divided into three parts. Part I is dedicated to the foundations of EDAs. In this part, after introducing some probabilistic graphical models - Bayesian and Gaussian networks - a review of existing EDA approaches is presented, as well as some new methods based on more flexible probabilistic graphical models. A mathematical modeling of discrete EDAs is also presented. Part II covers several applications of EDAs in some classical optimization problems: the travelling salesman problem, the job scheduling problem, and the knapsack problem. EDAs are also applied to the optimization of some well-known combinatorial and continuous functions. Part III presents the application of EDAs to solve some problems that arise in the machine learning field: feature subset selection, feature weighting in K-NN classifiers, rule induction, partial abductive inference in Bayesian networks, partitional clustering, and the search for optimal weights in artificial neural networks. Estimation of Distribution Algorithms: A New Tool for Evolutionary Computation is a useful and interesting tool for researchers working in the field of evolutionary computation and for engineers who face real-world optimization problems. This book may also be used by graduate students and researchers in computer science. 'I urge those who are interested in EDAs to study this well-crafted book today.' (David E. Goldberg, University of Illinois at Urbana-Champaign)
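The mechanism the blurb describes, replacing crossover and mutation by estimating and sampling a distribution, can be shown in a few lines. This is a hedged sketch of the simplest univariate case (UMDA-style, on the OneMax toy problem); the models covered in the book are far richer, including Bayesian and Gaussian networks.

    import random

    N, POP, TOP, GENS = 20, 100, 20, 30
    p = [0.5] * N                                  # one Bernoulli parameter per bit

    def fitness(x):
        return sum(x)                              # OneMax: count the ones

    for g in range(GENS):
        # Sample a population from the current distribution ...
        pop = [[int(random.random() < pi) for pi in p] for _ in range(POP)]
        # ... select the best individuals ...
        best = sorted(pop, key=fitness, reverse=True)[:TOP]
        # ... and re-estimate the distribution from them (no crossover, no mutation).
        p = [sum(x[i] for x in best) / TOP for i in range(N)]

    print(max(fitness(x) for x in pop), "out of", N)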
This book comprises the refereed proceedings of the International Conferences, ASEA and DRBC 2012, held in conjunction with GST 2012 on Jeju Island, Korea, in November/December 2012. The papers presented were carefully reviewed and selected from numerous submissions and focus on the various aspects of advanced software engineering and its applications, and disaster recovery and business continuity.
The Art of Assembly Language Programming Using PIC® Technology thoroughly covers assembly language as used in programming the PIC® Microcontroller (MCU). Using the minimal instruction set characteristic of most PIC® products, the author elaborates on the nuances of how to execute loops. Fundamental design practices are presented based on Orr's Structured Systems Development, using four logical control structures. These control structures are presented in flowcharting, Warnier-Orr® diagrams, state diagrams, pseudocode, and an extended example using SysML®. The basic math instructions Add and Subtract are presented, along with a cursory presentation of advanced math routines provided as proven Microchip® utility Application Notes. Appendices are provided for completeness, especially for the advanced reader, including several instruction sets, ASCII character sets, decimal-binary-hexadecimal conversion tables, and elaboration of ten 'best practices'. Two datasheets (one complete datasheet on the 10F20x series and one partial datasheet on the 16F88x series) are also provided in the appendices to serve as an important reference, enabling the new embedded programmer to develop familiarity with the format of datasheets and the skills needed to assess the product datasheet for proper selection of a microcontroller family for any specific project. The Art of Assembly Language Programming Using PIC® Technology is written for an audience with a broad variety of skill levels, ranging from the absolute beginner completely new to embedded control to the embedded C programmer new to assembly language. With this book, you will be guided through the following areas:

* symbols and terminology used by programmers and engineers in microcontroller applications;
* programming using assembly language through examples;
* familiarity with design and development practices;
* basics of mathematical knowledge in hexadecimal;
* resources for advanced mathematical functions;
* approaches to locate resources.
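The decimal-binary-hexadecimal conversions tabulated in the appendices can be checked against any language with base conversions; a quick Python illustration (the book itself works in PIC® assembly):

    n = 0x2F                           # hexadecimal literal
    print(n)                           # 47 (decimal)
    print(bin(n))                      # 0b101111 (binary)
    print(hex(47), format(47, "08b"))  # 0x2f 00101111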
Time is ubiquitous in information systems. Almost every enterprise faces the problem of its data becoming out of date. However, such data is often valuable, so it should be archived and some means to access it should be provided. Also, some data may be inherently historical, e.g., medical, cadastral, or judicial records. Temporal databases provide a uniform and systematic way of dealing with historical data. Many languages have been proposed for temporal databases, among others temporal logic. Temporal logic combines abstract, formal semantics with amenability to efficient implementation. This chapter shows how temporal logic can be used in temporal database applications. Rather than presenting new results, we report on recent developments and survey the field in a systematic way using a unified formal framework [GHR94; Cho94]. The handbook [GHR94] is a comprehensive reference on the mathematical foundations of temporal logic. In this chapter we study how temporal logic is used as a query and integrity constraint language. Consequently, model-theoretic notions, particularly formula satisfaction, are of primary interest. Axiomatic systems and proof methods for temporal logic [GHR94] have so far found relatively few applications in the context of information systems. Moreover, one needs to bear in mind that for the standard linearly-ordered time domains temporal logic is not recursively axiomatizable [GHR94], so recursive axiomatizations are by necessity incomplete.
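As a concrete (and invented, purely illustrative) instance of temporal logic as an integrity constraint language: a past-time constraint such as "once hired, an employee must remain employed" can be checked by scanning the database history, modeled here in Python as a sequence of fact sets.

    history = [
        {("hired", "ann"), ("employed", "ann")},
        {("employed", "ann")},
        set(),                         # ann dropped from 'employed': violation
    ]

    def violated(history, trigger, obligation):
        """True if some state at or after the first occurrence of
        `trigger` fails to satisfy `obligation`."""
        triggered = False
        for state in history:
            if trigger in state:
                triggered = True
            if triggered and obligation not in state:
                return True
        return False

    print(violated(history, ("hired", "ann"), ("employed", "ann")))   # True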
An open process of restandardization, conducted by the IEEE, has led to the definition of the new VHDL standard. The changes make VHDL safer, more portable, and more powerful. VHDL also becomes bigger and more complete. The canonical simulator of VHDL is enriched by new mechanisms, the predefined environment is more complete, and the syntax is more regular and flexible. Discrepancies and known bugs of VHDL'87 have been fixed. However, the new VHDL'92 is compatible with VHDL'87, with some minor exceptions. This book presents the new VHDL'92 for the VHDL designer. New features are explained and classified. Examples are provided; each new feature is given a rationale, and its impact on design methodology and performance is analysed. Where appropriate, pitfalls and traps are explained. The VHDL designer will quickly be able to find the feature needed, evaluate the benefits it brings, and modify previous VHDL'87 code to make it more efficient, more portable, and more flexible. VHDL'92 is the essential update for all VHDL designers and managers involved in electronic design.
This book is a minor revision of the thesis submitted in August 1996; no major changes have been made. However, I would like to take this opportunity to mention that since the thesis was written, discoveries have been made which would allow a substantial simplification and strengthening of the results in Chapters 3 and 6. In particular, it is now possible to model sums correctly in the category I as well as in GBP, which means that the definability results of Chapter 6 can be stated and proved at the intensional level, making them simpler and much closer in spirit to the original proofs of Abramsky, Jagadeesan, Malacaria, Hyland, Ong and Nickau [10,61,79]. This also leads quite straightforwardly to an understanding of call-by-value languages. Details of these improvements can be found in [14,73]. It is also worth mentioning that progress has been made on some of the topics suggested for future research in Chapter 7. In particular, fully abstract models have been found for various kinds of languages with local variables [8,13-16], and a fully complete games model of the polymorphic language System F has been constructed by Hughes [59]. Guy McCusker, February 1998. Acknowledgements: First of all, I must thank my supervisor, Samson Abramsky. It was he who first introduced me to game semantics and suggested avenues of research in the area; this book would certainly not exist were it not for him.
Introduction, or Why I wrote this book. In the fall of 1997 a dedicated troff user e-mailed me the macros he used to typeset his books. I took one look inside his file and thought, "I can do this; it's just code." As an experiment I spent a week and wrote a C program and troff macros which formatted and typeset a membership directory for a scholarly society with approximately 2,000 members. When I was done, I could enter two commands, and my program and troff would convert raw membership data into 200 pages of PostScript in 35 seconds. Previously, it had taken me several days to prepare camera-ready copy for the directory using a word processor. For completeness I sat down and tried to write TeX macros for the typesetting. I failed. Although ninety-five percent of my macros worked, I was unable to prepare the columns the project required. As my frustration grew, I began this book (mentally, in my head) as an answer to the question, "Why is TeX so hard to learn?" Why use TeX? Lest you accuse me of the old horse and cart problem, I should address the question, "Why use TeX at all?" before I explain why TeX is hard. I use TeX for the following reasons: it is stable, fast, free, and it uses ASCII. Of course, the most important reason is: TeX does a fantastic job. By stable, I mean it is not likely to change in the next 10 years (much less the next one or two), and it is free of bugs. Both of these are important.
A comprehensive first course in Scheme, covering all of its major features: abstraction, functional programming, data types, recursion, and semantic programming. Although the primary goal is to teach students to program in Scheme, the book will be suitable for anyone taking a general programming principles course. Each chapter is divided into three sections: core, appendix, and problems. Most essential topics are covered in the core section, but it is assumed that most students will read the appendices and solve most of the problems, all of which require short Scheme procedures. As well as providing a thorough grounding in Scheme, the author discusses different programming paradigms in depth. An important theme throughout is that of "meta-programming", providing an insight into topics such as type-checking and overloading which might otherwise be missed.
Database programming is the process of developing data-intensive applications which demand access to large amounts of structured, persistent data. The primary tool required for implementing such applications is a database programming language, namely a formal language which is specialized in the definition and manipulation of relevant large-scale data. As such, a database programming language is expected to provide high-level data modeling capabilities as well as a variety of constructs which facilitate the handling of the specified data. In this perspective, the aim of this book is: (i) to present the recent advances in database technology from the viewpoint of the novel database paradigms proposed for the development of advanced, non-standard, data-intensive applications; (ii) to focus specifically on the relational approach, with considerable emphasis on the extensions proposed in the last decade; and (iii) to describe the extended relational database language Algres, which is primarily the outcome of research work conducted by the authors in cooperation with a large number of other colleagues and students. Furthermore, in order to put the concepts presented in the book into practice, the reader is invited to experiment with the Algres system, a free copy of which can be requested from Kluwer Academic Publishers, or directly from the authors. Depending on the specific interest and background of the reader, the book can serve either: (1) to overview recent trends in databases, (2) to introduce in more detail the concepts and theory of the nested relational model, or (3) to present a complete advanced relational language which can be freely used for experimental purposes within academic and research frameworks.
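The nested relational model mentioned in point (2) allows relation-valued attributes; operators such as NEST and UNNEST move between nested and flat (1NF) form. A hedged Python sketch of UNNEST (the schema is invented for illustration, not taken from Algres):

    depts = [
        {"dept": "R&D",   "staff": [{"name": "Ana"}, {"name": "Luc"}]},
        {"dept": "Sales", "staff": [{"name": "Kim"}]},
    ]

    def unnest(relation, attr):
        """Flatten a relation-valued attribute: one output tuple per inner tuple."""
        for tup in relation:
            for inner in tup[attr]:
                flat = {k: v for k, v in tup.items() if k != attr}
                flat.update(inner)
                yield flat

    print(list(unnest(depts, "staff")))
    # [{'dept': 'R&D', 'name': 'Ana'}, {'dept': 'R&D', 'name': 'Luc'},
    #  {'dept': 'Sales', 'name': 'Kim'}]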
Software is difficult to develop, maintain, and reuse. Two factors that contribute to this difficulty are the lack of modular design and good program documentation. The first makes software changes more difficult to implement. The second makes programs more difficult to understand and to maintain. Formal Specification Techniques for Engineering Modular C Programs describes a novel approach to promoting program modularity. The book presents a formal specification language that promotes software modularity through the use of abstract data types, even though the underlying programming language may not have such support. This language is structured to allow useful information to be extracted from a specification, which is then used to perform consistency checks between the specification and its implementation. Formal Specification Techniques for Engineering Modular C Programs also describes a specification-driven, software re-engineering process model for improving existing programs. The aim of this process is to make existing programs easier to maintain and reuse while keeping their essential functionalities unchanged. Audience: Suitable as a secondary text for graduate level courses in software engineering, and as a reference for researchers and practitioners in industry.
FIELD has been a remarkably successful research project. The ideas first exhibited in the environment now form the basis for most of the current generation of programming environments, including Hewlett-Packard's Softbench, DEC's FUSE, Sun's Tooltalk, Lucid's Energize, and SGI's Codevision. FIELD pioneered the notion of broadcast messaging as a basis for tool integration. Moreover, many of the other tool concepts introduced in FIELD have made their way into these environments. Thus in discussing the FIELD environment, this book actually explains the inner workings of today's programming environments. The book will be valuable for those interested in the development of programming tools and environments, as well as serious users of programming environments. It will also be of interest to anyone undertaking a large software project, both by introducing the software tools needed to work on such a project and by demonstrating the concepts of message-based integration which can be applied to a variety of domains.
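Broadcast messaging is simple to picture: tools register patterns with a central message server and receive callbacks for every matching message any tool broadcasts. A hedged sketch in Python (plain prefix matching stands in for FIELD's richer pattern language; the message strings are invented):

    class MessageServer:
        def __init__(self):
            self.subscriptions = []            # (pattern-prefix, callback) pairs

        def register(self, prefix, callback):
            self.subscriptions.append((prefix, callback))

        def broadcast(self, message):
            # One message may wake several tools: that is the integration model.
            for prefix, callback in self.subscriptions:
                if message.startswith(prefix):
                    callback(message)

    bus = MessageServer()
    bus.register("DEBUG STOP", lambda m: print("editor highlights line:", m))
    bus.register("DEBUG",      lambda m: print("call-graph tool updates:", m))
    bus.broadcast("DEBUG STOP at foo.c:42")    # both subscribers react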
Quality of Communication-Based Systems presents the research results of students of the Graduiertenkolleg 'Communication-Based Systems' to an international community. To stimulate the scientific discussion, renowned experts have been invited to give their views on the following research areas:

* formal specification and mathematical foundations of distributed systems using process algebra, graph transformations, process calculi, and temporal logics;
* performance evaluation, dependability modelling, and analysis of real-time systems with different kinds of timed Petri nets;
* specification and analysis of communication protocols;
* reliability, security, and dependability in distributed systems;
* object orientation in distributed systems architecture;
* software development and concepts for distributed applications;
* computer network architecture and management;
* language concepts for distributed systems.
Developing correct and efficient software is far more complex for parallel and distributed systems than it is for sequential processors. Some of the reasons for this added complexity are: the lack of a universally acceptable parallel and distributed programming paradigm, the criticality of achieving high performance, and the difficulty of writing correct parallel and distributed programs. These factors collectively influence the current state of efforts to develop tools for parallel and distributed software. Tools and Environments for Parallel and Distributed Systems addresses these issues by describing working tools and environments, and gives a solid overview of some of the fundamental research being done worldwide. Topics covered in this collection are: mainstream program development tools; performance prediction tools and studies; debugging tools and research; and nontraditional tools. Audience: Suitable as a secondary text for graduate-level courses in software engineering and parallel and distributed systems, and as a reference for researchers and practitioners in industry.
Scientific Data Analysis using Jython Scripting and Java presents practical approaches for data analysis using Java scripting based on Jython, a Java implementation of the Python language. The chapters essentially cover all aspects of data analysis, from arrays and histograms to clustering analysis, curve fitting, metadata and neural networks. A comprehensive coverage of data visualisation tools implemented in Java is also included. Written by the primary developer of the jHepWork data-analysis framework, the book provides a reliable and complete reference source laying the foundation for data-analysis applications using Java scripting. More than 250 code snippets (of around 10-20 lines each) written in Jython and Java, plus several real-life examples help the reader develop a genuine feeling for data analysis techniques and their programming implementation. This is the first data-analysis and data-mining book which is completely based on the Jython language, and opens doors to scripting using a fully multi-platform and multi-threaded approach. Graduate students and researchers will benefit from the information presented in this book.
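The flavor of those snippets is easy to suggest in plain Python (the jHepWork classes themselves are not shown here; this is an illustrative stand-in): fill a histogram, then fit a line by least squares.

    import math, random

    data = [random.gauss(0.0, 1.0) for _ in range(1000)]

    # Histogram: 10 bins over [-4, 4), bin width 0.8
    bins = [0] * 10
    for x in data:
        i = math.floor((x + 4.0) / 0.8)
        if 0 <= i < 10:
            bins[i] += 1
    print(bins)

    # Least-squares slope through the points (i, bins[i])
    n = len(bins)
    sx, sy = sum(range(n)), sum(bins)
    sxx = sum(i * i for i in range(n))
    sxy = sum(i * b for i, b in enumerate(bins))
    print("fitted slope:", (n * sxy - sx * sy) / (n * sxx - sx * sx))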
Reversible grammar allows computational models to be built that are equally well suited for the analysis and generation of natural language utterances. This task can be viewed from very different perspectives by theoretical and computational linguists, and computer scientists. The papers in this volume present a broad range of approaches to reversible, bi-directional, and non-directional grammar systems that have emerged in recent years. This is also the first collection entirely devoted to the problems of reversibility in natural language processing. Most papers collected in this volume are derived from presentations at a workshop held at the University of California at Berkeley in the summer of 1991 organised under the auspices of the Association for Computational Linguistics. This book will be a valuable reference to researchers in linguistics and computer science with interests in computational linguistics, natural language processing, and machine translation, as well as in practical aspects of computability.
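Reversibility can be demonstrated with a toy: a single declarative rule set that drives both generation and analysis. The grammar below is invented for illustration, and the enumeration-based parser only works because this toy language is finite.

    RULES = {
        "S":  [("NP", "VP")],
        "NP": [("alice",), ("bob",)],
        "VP": [("greets", "NP")],
    }

    def generate(sym):
        """Enumerate every word sequence derivable from sym."""
        if sym not in RULES:                     # terminal symbol
            yield [sym]
            return
        for rhs in RULES[sym]:
            seqs = [[]]
            for part in rhs:
                seqs = [s + t for s in seqs for t in generate(part)]
            yield from seqs

    def parse(sym, words):
        """Analysis via the very same rules, run in the other direction."""
        return words in list(generate(sym))

    print(list(generate("S")))                      # all four sentences
    print(parse("S", ["bob", "greets", "alice"]))   # True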
This book constitutes the refereed post-proceedings of the 9th IFIP International Conference on Network and Parallel Computing, NPC 2012, held in Gwangju, Korea, in September 2012. The 38 papers presented were carefully reviewed and selected from 136 submissions. The papers are organized in the following topical sections: algorithms, scheduling, analysis, and data mining; network architecture and protocol design; network security; parallel, distributed, and virtualization techniques; performance modeling, prediction, and tuning; resource management; ubiquitous communications and networks; and web, communication, and cloud computing. In addition, a total of 37 papers selected from five satellite workshops (ATIMCN, ATSME, Cloud&Grid, DATICS, and UMAS 2012) are included.
Preface. In nature, real-time systems have been evolving for some hundred million years. Animal nervous systems have the task of issuing control commands to the active organs in response to the messages received from the environment. Conditioned reflexes, for example, play an important role here. Perhaps the emergence of man can be placed roughly at the time when his gradually developing brain produced thoughts whose significance reached, in a forward-planning way, beyond the situation immediately at hand. This eventually led, among other things, to today's scientist, who builds his theories and systems on the basis of lengthy deliberation. The development of computers essentially took the opposite path. At first they served only to execute "rigid" programs, such as the first program-controlled computing machine Z3, which the undersigned was able to demonstrate in 1941. There followed, among other things, a special-purpose device for wing measurement, which can be regarded as the first process computer. About forty dial gauges working as analog-to-digital converters were read by the automatic computer and processed as variables within a program. But this, too, still happened in a rigid sequence. True process control, today also known as real-time computing, requires reacting to constantly changing situations.
After a slow and somewhat tentative beginning, machine vision systems are now finding widespread use in industry. So far, there have been four clearly discernible phases in their development, based upon the types of images processed and how that processing is performed: (1) binary (two-level) images, processed in software; (2) grey-scale images, processed in software; (3) binary or grey-scale images processed in fast, special-purpose hardware; (4) coloured/multi-spectral images. Third-generation vision systems are now commonplace, although a large number of binary and software-based grey-scale processing systems are still being sold. At the moment, colour image processing is commercially much less significant than the other three, and this situation may well remain for some time, since many industrial artifacts are nearly monochrome and the use of colour increases the cost of the equipment significantly. A great deal of colour image processing is a straightforward extension of standard grey-scale methods. Industrial applications of machine vision systems can also be subdivided, this time into two main areas, which have largely retained distinct identities: (i) Automated Visual Inspection (AVI) and (ii) Robot Vision (RV). This book is about a fifth generation of industrial vision systems, in which this distinction based on applications is blurred, and in which the processing is much smarter (i.e. more "intelligent") than in the other four generations.
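The step from generation (2) back to generation (1) is just thresholding; a minimal Python sketch with an invented 3x3 grey-scale image:

    grey = [
        [ 12,  40, 200],
        [ 35, 180, 220],
        [ 10,  25,  30],
    ]
    THRESHOLD = 128
    # Binary (two-level) image: object pixels vs background
    binary = [[1 if px >= THRESHOLD else 0 for px in row] for row in grey]
    for row in binary:
        print(row)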