This book constitutes the revised selected papers of the 9th International Symposium on Formal Aspects of Component Software, FACS 2012, held in Mountain View, CA, USA, in September 2012. The 16 full papers presented were carefully reviewed and selected from 40 submissions. They cover topics such as formal models for software components and their interaction; formal aspects of services, service-oriented architectures, business processes, and cloud computing; design and verification methods for software components and services; composition and deployment: models, calculi, languages; formal methods and modeling languages for components and services; model-based and GUI-based testing of components and services; models for QoS and other extra-functional properties (e.g., trust, compliance, security) of components and services; components for real-time, safety-critical, secure, and/or embedded systems; industrial or experience reports and case studies; update and reconfiguration of component and service architectures; component systems evolution and maintenance; autonomic components and self-managed applications; and formal and rigorous approaches to software adaptation and self-adaptive systems.
Parallel processing is seen today as the means to improve the power of computing facilities by breaking the Von Neumann bottleneck of conventional sequential computer architectures. By defining appropriate parallel computation models, definite advantages can be obtained. Parallel processing is at the center of European research in the field of information processing systems, so the CEC funded the ESPRIT Supernode project to develop a low-cost, high-performance multiprocessor machine. The result of this project is a modular, reconfigurable architecture based on INMOS transputers: T.Node. This machine can be considered a research, industrial and commercial success. The CEC has decided to continue to encourage manufacturers as well as researchers and end-users of transputers by funding other projects in this field. This book presents course papers of the Eurocourse given at the Joint Research Centre in ISPRA (Italy) from the 4th to the 8th of November 1991. First we present an overview of various trends in the design of parallel architectures, and especially of the T.Node with its software development environments, new distributed system aspects, and new hardware extensions based on the INMOS T9000 processor. In a second part, we review some real case applications in the fields of image synthesis, image processing, signal processing, terrain modeling, particle physics simulation, and enhanced parallel and distributed numerical methods on the T.Node.
This book constitutes the refereed proceedings of the International Symposium on Logical Foundations of Computer Science, LFCS 2013, held in San Diego, CA, USA in January 2013. The volume presents 29 revised refereed papers carefully selected by the program committee. The scope of the Symposium is broad and includes constructive mathematics and type theory; logic, automata and automatic structures; computability and randomness; logical foundations of programming; logical aspects of computational complexity; logic programming and constraints; automated deduction and interactive theorem proving; logical methods in protocol and program verification; logical methods in program specification and extraction; domain theory logic; logical foundations of database theory; equational logic and term rewriting; lambda and combinatory calculi; categorical logic and topological semantics; linear logic; epistemic and temporal logics; intelligent and multiple agent system logics; logics of proof and justification; nonmonotonic reasoning; logic in game theory and social software; logic of hybrid systems; distributed system logics; mathematical fuzzy logic; system design logics; and other logics in computer science.
Behavioral Specifications of Businesses and Systems deals with the reading, writing and understanding of specifications. The papers presented in this book describe useful and sometimes elegant concepts, good practices (in programming and in specifications), and solid underlying theory that is of interest and importance to those who deal with increased complexity of business and systems. Most concepts have been successfully used in actual industrial projects, while others are from the forefront of research. Authors include practitioners, business thinkers, academics and applied mathematicians. These seemingly different papers address different aspects of a single problem - taming complexity. Behavioral Specifications of Businesses and Systems emphasizes simplicity and elegance in specifications without concentrating on particular methodologies, languages or tools. It shows how to handle complexity, and, specifically, how to succeed in understanding and specifying businesses and systems based upon precise and abstract concepts. It promotes reuse of such concepts, and of constructs based on them, without taking reuse for granted. Behavioral Specifications of Businesses and Systems is the second volume of papers based on a series of workshops held alongside ACM's annual conference on Object-Oriented Programming Systems Languages and Applications (OOPSLA) and European Conference on Object-Oriented Programming (ECOOP). The first volume, Object-Oriented Behavioral Specifications, edited by Haim Kilov and William Harvey, was published by Kluwer Academic Publishers in 1996.
The VLISP project showed how to produce a comprehensively verified implementation for a programming language, namely Scheme [4, 15]. Some of the major elements in this verification were: * The proof was based on the Clinger-Rees denotational semantics of Scheme given in [15]. Our goal was to produce a "warts-and-all" verification of a real language. With very few exceptions, we constrained ourselves to use the semantic specification as published. The verification was intended to be rigorous, but not completely formal, much in the style of ordinary mathematical discourse. Our goal was to verify the algorithms and data types used in the implementation, not their embodiment in code. See Section 2 for a more complete discussion of these issues. Our decision to be faithful to the published semantic specification led to the most difficult portions of the proofs; these are discussed in [13, Sections 2.3-2.4]. * Our implementation was based on the Scheme48 implementation of Kelsey and Rees [17]. This implementation translates Scheme into an intermediate-level "byte code" language, which is interpreted by a virtual machine. The virtual machine is written in a subset of Scheme called PreScheme. The implementation is sufficiently complete and efficient to allow it to bootstrap itself. We believe that this is the first verified language implementation with these properties.
This tutorial volume includes revised and extended lecture notes of six long tutorials, five short tutorials, and one peer-reviewed participant contribution held at the 4th International Summer School on Generative and Transformational Techniques in Software Engineering, GTTSE 2011. The school presents the state of the art in software language engineering and generative and transformational techniques in software engineering with coverage of foundations, methods, tools, and case studies.
I love virtual machines (VMs) and I have done for a long time. If that makes me "sad" or an "anorak," so be it. I love them because they are so much fun, as well as being so useful. They have an element of original sin (writing assembly programs and being in control of an entire machine), while still being able to claim that one is being a respectable member of the community (being structured, modular, high-level, object-oriented, and so on). They also allow one to design machines of one's own, unencumbered by the restrictions of a particular physical processor (at least, until one starts optimising it for some processor or other). I have been building virtual machines, on and off, since 1980 or thereabouts. It has always been something of a hobby for me; it has also turned out to be a technique of great power and applicability. I hope to continue working on them, perhaps on some of the ideas outlined in the last chapter (I certainly want to do some more work with register-based VMs and concurrency). I originally wanted to write the book from a purely semantic viewpoint.
TOOLS Eastern Europe 2002 was the third annual conference on the technology of object-oriented languages and systems. It was held in Eastern Europe, more specifically in Sofia, Bulgaria, from March 13 to 15. In my capacity as program chairman, I could count on the support of the Programming Technology Lab of the Vrije Universiteit Brussel to set up the technical program for this conference. We managed to assemble a first-class international program committee composed of the following researchers: * Mehmet Aksit (Technische Hogeschool Twente, Netherlands) * Jan Bosch (Universiteit Groningen, Netherlands) * Gilad Bracha (Sun Microsystems, USA) * Shigeru Chiba (Tokyo Institute of Technology, Japan) * Pierre Cointe (Ecole des Mines de Nantes, France) * Serge Demeyer (Universitaire Instelling Antwerpen, Belgium) * Pavel Hruby (Navision, Denmark) * Mehdi Jazayeri (Technische Universität Wien, Austria) * Eric Jul (University of Copenhagen, Denmark) * Gerti Kappel (University of Linz, Austria) * Boris Magnusson (University of Lund, Sweden) * Daniela Mehandjiiska-Stavreva (Bond University, Australia) * Tom Mens (Vrije Universiteit Brussel, Belgium) * Christine Mingins (Monash University, Australia) * Ana Moreira (Universidade Nova de Lisboa, Portugal) * Oscar Nierstrasz (Universität Bern, Switzerland) * Walter Olthoff (DFKI, Germany) * Igor Pottosin (A. P. Ershov Institute of Informatics Systems, Russia) * Atanas Radenski (Winston-Salem State University, USA) * Markku Sakkinen (University of Jyväskylä, Finland) * Bran Selic (Rational, Canada) * Andrey Terehov (St.
Business Component-Based Software Engineering, an edited volume, aims to complement other reputable books on CBSE by stressing how components are built for large-scale applications, within dedicated development processes and for easy and direct combination. The book emphasizes these three facets and offers a complete overview of recent progress. The projects and work explained herein will prompt graduate students, academics, software engineers, project managers and developers to adopt and apply new component development methods gained from and validated by the authors. The authors of Business Component-Based Software Engineering are academics and professionals, experts in the field, who introduce the state of the art on CBSE from their shared experience of working on the same projects. Business Component-Based Software Engineering is designed to meet the needs of practitioners and researchers in industry, and graduate-level students in Computer Science and Engineering.
The formal study of program behavior has become an essential ingredient in guiding the design of new computer architectures. Accurate characterization of applications leads to efficient design of high-performing architectures. Quantitative and analytical characterization of workloads is important to understand and exploit the interesting features of workloads. This book includes ten chapters on various aspects of workload characterization. File caching characteristics of the industry-standard web-serving benchmark SPECweb99 are presented by Keller et al. in Chapter 1, while value locality of SPECjvm98 benchmarks is characterized by Rychlik et al. in Chapter 2. SPECjvm98 benchmarks are visited again in Chapter 3, where Tao et al. study the operating system activity in Java programs. In Chapter 4, KleinOsowski et al. describe how the SPEC CPU2000 benchmark suite may be adapted for computer architecture research and present the small, representative input data sets they created to reduce simulation time without compromising accuracy. Their research has been recognized by the Standard Performance Evaluation Corporation (SPEC) and is listed on the official SPEC website, http://www.spec.org/osg/cpu2000/research/umn/. The main contribution of Chapter 5 is the proposal of a new measure called the locality surface to characterize locality of reference in programs. Sorenson et al. describe how a three-dimensional surface can be used to represent both the spatial and temporal locality of programs. In Chapter 6, Thornock et al.
Practical Performance Modeling: Application of the MOSEL Language introduces the new and powerful performance and reliability modeling language MOSEL (MOdeling, Specification and Evaluation Language), developed at the University of Erlangen, Germany. MOSEL facilitates the performance and reliability modeling of a computer, communication, manufacturing or workflow management system in a very intuitive and simple way. The core of MOSEL consists of constructs to specify the possible states and state transitions of the system under consideration. This specification is very compact and easy to understand. With additional constructs, the interesting performance or reliability measures and graphical representations can be specified. With some experience, it is possible to write down the MOSEL description of a system immediately, simply by knowing the behavior of the system under study. There are no restrictions, unlike models using, for example, queueing networks, Petri nets or fault trees. MOSEL fulfills all the requirements for a universal modeling language. It is high level, system-oriented, and usable. It is open and can be integrated with many tools. By providing compilers, which translate descriptions specified in MOSEL into the tool-specific languages, all previously implemented tools with their different methods and algorithms (including simulation) can be used. Practical Performance Modeling: Application of the MOSEL Language provides an easy to understand but nevertheless complete introduction to system modeling using MOSEL and illustrates how easily MOSEL can be used for modeling real-life examples from the fields of computer, communication, and manufacturing systems. Practical Performance Modeling: Application of the MOSEL Language will be of interest to professionals and students in the fields of performance and reliability modeling in computer science, communication, and manufacturing. It is also well suited as a textbook for university courses covering performance and reliability modeling with practical applications.
Real-time computing systems are vital to a wide range of applications. For example, they are used in the control of nuclear reactors and automated manufacturing facilities, in controlling and tracking air traffic, and in communication systems. In recent years, real-time systems have also grown larger and become more critical. For instance, advanced aircraft such as the space shuttle must depend heavily on computer systems [Carlow 84]. The centralized control of manufacturing facilities and assembly plants operated by robots are other examples at the heart of which lie embedded real-time systems. Military defense systems deployed in the air, on the ocean surface, on land and underwater have also been increasingly relying upon real-time systems for monitoring and operational safety purposes, and for retaliatory and containment measures. In telecommunications and in multi-media applications, real-time characteristics are essential to maintain the integrity of transmitted data, audio and video signals. Many of these systems control, monitor or perform critical operations, and must respond quickly to emergency events in a wide range of embedded applications. They are therefore required to process tasks with stringent timing requirements and must perform these tasks in a way that guarantees these timing requirements are met. Real-time scheduling algorithms attempt to ensure that system timing behavior meets its specifications, but typically assume that tasks do not share logical or physical resources. Since resource sharing cannot be eliminated, synchronization primitives must be used to ensure that resource consistency constraints are not violated.
Updated for C11 Write powerful C programs...without becoming a technical expert! This book is the fastest way to get comfortable with C, one incredibly clear and easy step at a time. You'll learn all the basics: how to organize programs, store and display data, work with variables, operators, I/O, pointers, arrays, functions, and much more. C programming has never been this simple! Who knew how simple C programming could be? This is today's best beginner's guide to writing C programs-and to learning skills you can use with practically any language. Its simple, practical instructions will help you start creating useful, reliable C code, from games to mobile apps. Plus, it's fully updated for the new C11 standard and today's free, open source tools! Here's a small sample of what you'll learn: * Discover free C programming tools for Windows, OS X, or Linux * Understand the parts of a C program and how they fit together * Generate output and display it on the screen * Interact with users and respond to their input * Make the most of variables by using assignments and expressions * Control programs by testing data and using logical operators * Save time and effort by using loops and other techniques * Build powerful data-entry routines with simple built-in functions * Manipulate text with strings * Store information, so it's easy to access and use * Manage your data with arrays, pointers, and data structures * Use functions to make programs easier to write and maintain * Let C handle all your program's math for you * Handle your computer's memory as efficiently as possible * Make programs more powerful with preprocessing directives
A broad-ranging survey of our current understanding of visual languages and their theoretical foundations. Its main focus is the definition, specification, and structural analysis of visual languages by grammars, logic, and algebraic methods and the use of these techniques in visual language implementation. Researchers in formal language theory, HCI, artificial intelligence, and computational linguistics will all find this an invaluable guide to the current state of research in the field.
Cooperating Heterogeneous Systems provides an in-depth introduction to the issues and techniques surrounding the integration and control of diverse and independent software components. Organizations increasingly rely upon diverse computer systems to perform a variety of knowledge-based tasks. This presents technical issues of interoperability and integration, as well as philosophical issues of how cooperation and interaction between computational entities is to be realized. Cooperating systems are systems that work together towards a common end. The concepts of cooperation must be realized in technically sound system architectures, having a uniform meta-layer between knowledge sources and the rest of the system. The layer consists of a family of interpreters, one for each knowledge source, and meta-knowledge. A system architecture to integrate and control diverse knowledge sources is presented. The architecture is based on the meta-level properties of the logic programming language Prolog. An implementation of the architecture is described, a Framework for Logic Programming Systems with Distributed Execution (FLiPSiDE). Knowledge-based systems play an important role in any up-to-date arsenal of decision support tools. The tremendous growth of computer communications infrastructure has made distributed computing a viable option, and often a necessity in geographically distributed organizations. It has become clear that to take knowledge-based systems to their next useful level, it is necessary to get independent knowledge-based systems to work together, much as we put together ad hoc work groups in our organizations to tackle complex problems. The book is for scientists and software engineers who have experience in knowledge-based systems and/or logic programming and seek a hands-on introduction to cooperating systems. Researchers investigating autonomous agents, distributed computation, and cooperating systems will find fresh ideas and new perspectives on well-established approaches to control, organization, and cooperation.
The representation of uncertainty is a central issue in Artificial Intelligence (AI) and is being addressed in many different ways. Each approach has its proponents, and each has had its detractors. However, there is now an increasing move towards the belief that an eclectic approach is required to represent and reason under the many facets of uncertainty. We believe that the time is ripe for a wide-ranging, yet accessible, survey of the main formalisms. In this book, we offer a broad perspective on uncertainty and approaches to managing uncertainty. Rather than provide a daunting mass of technical detail, we have focused on the foundations and intuitions behind the various schools. The aim has been to present in one volume an overview of the major issues and decisions to be made in representing uncertain knowledge. We identify the central role of managing uncertainty in AI and Expert Systems, and provide a comprehensive introduction to the different aspects of uncertainty. We then describe the rationales, advantages and limitations of the major approaches that have been taken, using illustrative examples. The book ends with a review of the lessons learned and current research directions in the field. The intended readership will include researchers and practitioners involved in the design and implementation of Decision Support Systems, Expert Systems, other Knowledge-Based Systems and in Cognitive Science.
More than ever, FDL is the place for researchers, developers, industry designers, academia, and EDA tool companies to present and learn about the latest scientific achievements, practical applications and user experiences in the domain of specification and design languages. FDL covers the modeling and design methods, and their latest supporting tools, for complex embedded systems, systems on chip, and heterogeneous systems. FDL 2009 is the twelfth in a series of events that were held all over Europe, in selected locations renowned for their universities and research institutions as well as the importance of their industrial environment in computer science and microelectronics. In 2009, FDL was organized in Sophia Antipolis, in the attractive south of France, together with the DASIP (Design and Architectures for Signal and Image Processing) Conference and the SAME (Sophia Antipolis MicroElectronics) Forum. All submitted papers were carefully reviewed to build a program with 27 full and 10 short contributions. From these, the Program Committee selected a shorter list, based on the evaluations of the reviewers and the originality and relevance of the work presented at the Forum. The revised, and sometimes extended, versions of these contributions constitute the chapters of this volume. Advances in Design Methods from Modeling Languages for Embedded Systems and SoCs presents extensions to standard specification and description languages, as well as new language-based design techniques and methodologies, to solve the challenges raised by mixed-signal and multi-processor systems on a chip. It is intended as a reference for researchers and lecturers, as well as a state-of-the-art milestone for designers and CAD developers.
This uniquely authoritative and comprehensive handbook is the first to cover the vast field of formal languages, as well as its traditional and most recent applications to such diverse areas as linguistics, developmental biology, computer graphics, cryptology, molecular genetics, and programming languages. No other work comes even close to the scope of this one. The editors are extremely well-known theoretical computer scientists, and each individual topic is presented by the leading authorities in the particular field. The maturity of the field makes it possible to include a historical perspective in many presentations. The work is divided into three volumes, which may be purchased as a set.
In Logic Programming, as in many other areas, Theory is often best tested by Application, and attempted Application frequently necessitates advances in Theory, so both theoretical and practical work is essential for effective progress. This is clearly evident in the following papers presented to the second UK Logic Programming Conference, which was sponsored by the United Kingdom branch of the Association for Logic Programming and convened at Bristol University in March 1990. This book contains 13 papers from that conference grouped under four headings, among them theory supporting practice and practice motivating theory. In this first group of papers, difficulties experienced in the practical application of Prolog and in debugging Prolog programs have motivated work on extensions to the language and its development environment. Program development advances are represented by two papers on debugging and one on a development methodology for CLP programs. On the theoretical side, a Pure(r) logic language is proposed, as well as extensions to make logic more effective for integrity checking in deductive databases. Applications: the next group contains three papers. The first describes the use of Prolog to develop a Control Engineering workStation (CES). The second investigates the use of a logic programming based KBMS for developing a prototype Financial Management Information System. In the last, it is shown how a subset of Prolog can provide a vehicle for the animation of Discrete Mathematics.
Object-Z is an object-oriented extension of the formal specification language Z. It adds to Z the notions of classes and objects, inheritance and polymorphism. By extending Z's semantic basis, it enables the specification of systems as collections of independent objects in which self and mutual referencing are possible. The Object-Z Specification Language presents a comprehensive description of Object-Z including discussions of semantic issues, definitions of all language constructs, type rules and other rules of usage, specification guidelines, and a full concrete syntax. It will enable you to confidently construct Object-Z specifications and is intended as a reference manual to keep by your side as you use and learn to use Object-Z. The Object-Z Specification Language is suitable as a textbook or as a secondary text for a graduate-level course, and as a reference for researchers and practitioners in industry.
DSSSL (Document Style Semantics and Specification Language) is an ISO standard (ISO/IEC 10179:1996) published in 1996. DSSSL is a standard of the SGML family (Standard Generalized Markup Language, ISO 8879:1986), whose aim is to establish a processing model for SGML documents. For a good understanding of the SGML standard, many books exist, including the Author's Guide [Bryan 1988] and The SGML Handbook [Goldfarb 1990]. A DSSSL document is an SGML document, written with the same rules that guide any SGML document. The structure of a DSSSL document is explained in Chapter 2. DSSSL is based, in part, on Scheme, a standard functional programming language. The DSSSL subset of Scheme, along with the procedures supported by DSSSL, is explained in Chapter 3. The DSSSL standard starts with the supposition of a pre-existing SGML document and offers a series of processes that can be performed on it: * Groves: the first process that is performed on an SGML document in DSSSL is always the analysis of the document and the creation of a grove. The DSSSL standard shares many common characteristics with another standard of the SGML family, HyTime (ISO/IEC 10744). These standards were developed in parallel, and their developers designed a common data model, the grove, that would support the processing needs of each standard.
Variational Object-Oriented Programming Beyond Classes and Inheritance presents an approach for improving the standard object-oriented programming model. The proposal is aimed at supporting a larger range of incremental behavior variations and thus promises to be more effective in mastering the complexity of today's software. The material presented in this book is interesting to both beginners and students or professionals with an advanced knowledge of object-oriented programming: * The first part of the book can be used as supplementary material for students and professionals being introduced to object-oriented programming. It provides them with a very concise description of the main concepts of object-oriented programming, which are presented from a conceptual point of view rather than related to the features of a particular object-oriented programming language. The description of the main concepts is a synthesis of considerations from several leading works in data abstraction and object-oriented technology. Parts of the book are currently used as supplementary material for teaching a graduate course on object-oriented design. * The book provides experienced programmers with a conceptual view of the relationship between object-oriented programming, data abstraction, and previous programming models that promotes a deep understanding of the essence of object-oriented programming. * The book presents a synthesis of both the main achievements and the main shortcomings of object-oriented programming with respect to supporting incremental programming and promoting software reuse. It illustrates the behavior variations that can be performed incrementally and those that are not supported properly; the workarounds currently used for dealing with the latter case are described. * Recent developments from ongoing research in object-oriented programming are presented, showing that the problems they deal with can actually be traced to some form of context-dependent behavior. The developments considered include design patterns, subject-oriented programming, adaptive programming, reflection, open implementations, and aspect-oriented programming. * Advanced students interested in language design are not only provided with a comprehensive informal description of the new model, but also with a formal model and the description of a prototype implementation of RONDO embedded into the Smalltalk-80 environment. This can serve as a basis for experimenting with new concepts or with modifications of the proposed model. * The last chapter of the book is particularly beneficial to the practitioners of object technology, since it deals with issues in maintaining reusable object-oriented systems.
Software Visualization: From Theory to Practice was initially selected as a special volume for "The Annals of Software Engineering (ANSE) Journal," which has been discontinued. This special edited volume is the first to discuss software visualization from the perspective of software engineering. It is a collection of 14 chapters on software visualization, covering topics from theory to practical systems. The chapters are divided into four parts: Visual Formalisms, Human Factors, Architectural Visualization, and Visualization in Practice. They cover a comprehensive range of software visualization topics.
Software Visualization: From Theory to Practice is designed to meet the needs of both an academic and a professional audience composed of researchers and software developers. This book is also suitable for senior undergraduate and graduate students in software engineering and computer science, as a secondary text or a reference.
Perspectives On Software Requirements presents perspectives on several current approaches to software requirements. Each chapter addresses a specific problem where the authors summarize their experiences and results to produce well-fit and traceable requirements. Chapters highlight familiar issues with recent results and experiences, which are accompanied by chapters describing well-tuned new methods for specific domains.
The implementation of object-oriented languages has been an active topic of research since the 1960s, when the first Simula compiler was written. The topic received renewed interest in the early 1980s with the growing popularity of object-oriented programming languages such as C++ and Smalltalk, and got another boost with the advent of Java. Polymorphic calls are at the heart of object-oriented languages, and even the first implementation of Simula-67 contained their classic implementation via virtual function tables. In fact, virtual function tables predate even Simula; for example, Ivan Sutherland's Sketchpad drawing editor employed very similar structures in 1960. Similarly, during the 1970s and 1980s the implementers of Smalltalk systems spent considerable effort on implementing polymorphic calls for this dynamically typed language, where virtual function tables could not be used. Given this long history of research into the implementation of polymorphic calls, and the relatively mature standing it achieved over time, why, one might ask, should there be a new book in this field? The answer is simple. Both software and hardware have changed considerably in recent years, to the point where many assumptions underlying the original work in this field are no longer true. In particular, virtual function tables are no longer sufficient to implement polymorphic calls even for statically typed languages; for example, Java's interface calls cannot be implemented this way. Furthermore, today's processors are deeply pipelined and can execute instructions out of order, making it difficult to predict the execution time of even simple code sequences.
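As a reminder of the mechanism this description builds on, here is a minimal C++ sketch of a classic polymorphic call of the kind compilers typically implement through per-class virtual function tables; the Shape, Circle, Square and report names are purely illustrative and are not taken from the book.

```cpp
#include <iostream>

// Classic polymorphic call: the compiler typically implements Shape::area()
// via a per-class virtual function table (vtable), and each call site
// indirects through the object's hidden vtable pointer.
struct Shape {
    virtual double area() const = 0;   // one slot in the vtable
    virtual ~Shape() = default;
};

struct Circle : Shape {
    double r;
    explicit Circle(double r) : r(r) {}
    double area() const override { return 3.141592653589793 * r * r; }
};

struct Square : Shape {
    double s;
    explicit Square(double s) : s(s) {}
    double area() const override { return s * s; }
};

// The static type here is Shape&, but the call dispatches at run time to
// Circle::area or Square::area depending on the object's dynamic type.
double report(const Shape& sh) { return sh.area(); }

int main() {
    Circle c{1.0};
    Square q{2.0};
    std::cout << report(c) << "\n";   // prints ~3.14159
    std::cout << report(q) << "\n";   // prints 4
}
```

As the description above notes, this single fixed-slot scheme does not carry over directly to Java interface calls or to dynamically typed Smalltalk, which is part of what motivates the newer dispatch techniques the book surveys.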