This book addresses the open problem of engineering normative open systems using the multi-agent paradigm. Normative open systems are systems in which heterogeneous and autonomous entities and institutions coexist in a complex social and legal framework that can evolve to address the different, and often conflicting, objectives of the many stakeholders involved. The book presents a software engineering approach that covers both the analysis and design of these kinds of systems and deals with the open issues in the area: ROMAS (Regulated Open Multi-Agent Systems) defines a specific multi-agent architecture, meta-model, methodology and CASE tool. The CASE tool is based on Model-Driven technology and integrates graphical design with formal verification of selected properties of these systems by means of model-checking techniques. Tables are used to give readers insight into the most important requirements for designing normative open multi-agent systems, and the book provides a detailed and easy-to-understand description of the ROMAS approach and the advantages of using it. The method is illustrated with case studies, presented with illustrations of the developments, through which the reader can build a comprehensive understanding of how to apply ROMAS to a given problem. Reading this book will help readers to understand the increasing demand for normative open systems and their development requirements; to understand how multi-agent systems approaches can be used in the development of systems of this kind; to learn an easy-to-use and complete engineering method for large-scale and complex normative systems; and to recognize how Model-Driven technology can be used to integrate the analysis, design, verification and implementation of multi-agent systems.
This book highlights the current challenges for engineers involved in product development and the changes in procedure that these challenges make necessary. Methods for systematically analyzing the requirements for safety and security mechanisms are described with examples of how they are implemented in software and hardware, and it is discussed how their effectiveness can be demonstrated in terms of functional and design safety. Given today's new E-mobility and automated driving approaches, new challenges are arising and further issues concerning "Road Vehicle Safety" and "Road Traffic Safety" have to be resolved. To address the growing complexity of vehicle functions, as well as the increasing need to accommodate interdisciplinary project teams, previous development approaches now have to be reconsidered, and systems engineering approaches and proven management systems need to be supplemented or wholly redefined. The book presents a continuous system development process, starting with the basic requirements of quality management and continuing until the release of a vehicle and its components for road use. Attention is paid to the necessary definition of the respective development item, the threat, hazard and risk analysis, and safety concepts and their relation to architecture development, while the book also addresses aspects of product realization in mechanics, electronics and software, as well as the subsequent testing, verification, integration and validation phases. In November 2011, requirements for the Functional Safety (FuSa) of road vehicles were first published in ISO 26262. The processes and methods described here are intended to show developers how vehicle systems can be implemented according to ISO 26262, so that their compliance with the relevant standards can be demonstrated as part of a safety case, including audits, reviews and assessments.
This unique textbook/reference presents unified coverage of bioinformatics topics relating to both biological sequences and biological networks, providing an in-depth analysis of cutting-edge distributed algorithms, as well as of relevant sequential algorithms. In addition to introducing the latest algorithms in this area, more than fifteen new distributed algorithms are also proposed. Topics and features: reviews a range of open challenges in biological sequences and networks; describes in detail both sequential and parallel/distributed algorithms for each problem; suggests approaches for distributed algorithms as possible extensions to sequential algorithms, when the distributed algorithms for the topic are scarce; proposes a number of new distributed algorithms in each chapter, to serve as potential starting points for further research; concludes each chapter with self-test exercises, a summary of the key points, a comparison of the algorithms described, and a literature review.
This unique text/reference reviews the key principles and techniques in conceptual modelling which are of relevance to specialists in the field of cultural heritage. Information modelling tasks are a vital aspect of work and study in such disciplines as archaeology, anthropology, history, and architecture. Yet the concepts and methods behind information modelling are rarely covered by the training in cultural heritage-related fields. With the increasing popularity of the digital humanities, and the rapidly growing need to manage large and complex datasets, the importance of information modelling in cultural heritage is greater than ever before. To address this need, this book serves in the place of a course on software engineering, assuming no previous knowledge of the field. Topics and features: presents a general philosophical introduction to conceptual modelling; introduces the basics of conceptual modelling, using the ConML language as an infrastructure; reviews advanced modelling techniques relating to issues of vagueness, temporality and subjectivity, in addition to such topics as metainformation and feature redefinition; proposes an ontology for cultural heritage supported by the Cultural Heritage Abstract Reference Model (CHARM), to enable the easy construction of conceptual models; describes various usage scenarios and applications of cultural heritage modelling, offering practical tips on how to use different techniques to solve real-world problems. This interdisciplinary work is an essential primer for tutors and students (at both undergraduate and graduate level) in any area related to cultural heritage, including archaeology, anthropology, art, history, architecture, or literature. Cultural heritage managers, researchers, and professionals will also find this to be a valuable reference, as will anyone involved in database design, data management, or the conceptualization of cultural heritage in general. Dr. Cesar Gonzalez-Perez is a Staff Scientist at the Institute of Heritage Sciences (Incipit), within the Spanish National Research Council (CSIC), Santiago de Compostela, Spain.
Network Science is the emerging field concerned with the study of large, realistic networks. This interdisciplinary endeavor, focusing on the patterns of interactions that arise between individual components of natural and engineered systems, has been applied to data sets from activities as diverse as high-throughput biological experiments, online trading information, smart-meter utility supplies, and pervasive telecommunications and surveillance technologies. This unique text/reference provides a fascinating insight into the state of the art in network science, highlighting the commonality across very different areas of application and the ways in which each area can be advanced by injecting ideas and techniques from another. The book includes contributions from an international selection of experts, providing viewpoints from a broad range of disciplines. It emphasizes networks that arise in nature (such as food webs, protein interactions, gene expression, and neural connections) and in technology (such as finance, airline transport, urban development and global trade). Topics and Features: begins with a clear overview chapter to introduce this interdisciplinary field; discusses the classic network science of fixed connectivity structures, including empirical studies, mathematical models and computational algorithms; examines time-dependent processes that take place over networks, covering topics such as synchronisation, and message passing algorithms; investigates time-evolving networks, such as the World Wide Web and shifts in topological properties (connectivity, spectrum, percolation); explores applications of complex networks in the physical and engineering sciences, looking ahead to new developments in the field. Researchers and professionals from disciplines as varied as computer science, mathematics, engineering, physics, chemistry, biology, ecology, neuroscience, epidemiology, and the social sciences will all benefit from this topical and broad overview of current activities and grand challenges in the unfolding field of network science.
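As a taste of the kind of computation such chapters survey, here is a brief sketch (not drawn from the book) using the Python networkx library on a synthetic random graph; the model and its parameters are chosen purely for illustration:

```python
import networkx as nx
import numpy as np

# A classic random-graph model of the kind studied in work on fixed connectivity structures.
G = nx.erdos_renyi_graph(n=500, p=0.02, seed=1)

# Connectivity / percolation-style question: how large is the largest connected component?
largest = max(nx.connected_components(G), key=len)
print("nodes in largest component:", len(largest))

# Degree distribution, the most basic topological property.
degrees = [d for _, d in G.degree()]
print("mean degree:", np.mean(degrees))

# Spectrum of the adjacency matrix, another way to characterize network structure.
eigenvalues = np.linalg.eigvalsh(nx.to_numpy_array(G))
print("largest adjacency eigenvalue:", eigenvalues[-1].round(2))
```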
For years, Jack Flanagan has buried himself in the little town of Friendship, New York. Alcohol is a convenient way to banish the ghosts of the past, but it can't fill the void of loneliness. A serendipitous twist of fate has Jack dog-sitting Darla, an orphaned Golden Retriever, and he soon realizes the true nature of friendship. Jack and Darla form a close bond as they struggle to find inner peace over their individual losses. Yet the farmhouse where Jack is staying is anything but peaceful-it's Norman Rockwell on the outside and Salvador Dali within, as Jack continually fights the bottle's lure. His relationship with Kate, a spunky middle-aged waitress, forces Jack to confront his failed marriage, especially when Kate reveals secrets of her own. But it is the impish Darla who brings laughter at the most dismal of times and touches the hearts of those around her. Through Darla, Jack rethinks his life and realizes that it's never too late to change.
Multiple criteria decision aid (MCDA) methods are illustrated in this book through theoretical and computational techniques utilizing Python. Existing methods are presented in detail with a step-by-step learning approach. Theoretical background is given for TOPSIS, VIKOR, PROMETHEE, SIR, AHP, goal programming, and their variations. Comprehensive numerical examples are also discussed for each method in conjunction with easy-to-follow Python code. Extensions to multiple criteria decision making algorithms such as fuzzy number theory and group decision making are introduced and implemented through Python as well. Readers will learn how to implement and use each method based on the problem, the available data, the stakeholders involved, and the various requirements needed. Focusing on the practical aspects of the multiple criteria decision making methodologies, this book is designed for researchers, practitioners and advanced graduate students in the applied mathematics, information systems, operations research and business administration disciplines, as well as other engineers and scientists oriented in interdisciplinary research.

"Readers will greatly benefit from this book by learning and applying various MCDM/A methods." (Adiel Teixeira de Almeida, CDSID-Center for Decision System and Information Development, Universidade Federal de Pernambuco, Recife, Brazil)

"Promoting the development and application of multicriteria decision aid is essential to ensure more ethical and sustainable decisions. This book is a great contribution to this objective. It is a perfect blend of theory and practice, providing potential users and researchers with the theoretical bases of some of the best-known methods as well as with the computing tools needed to practice, to compare and to put these methods to use." (Jean-Pierre Brans, Vrije Universiteit Brussel, Brussels, Belgium)

"This book is intended for researchers, practitioners and students alike in decision support who wish to familiarize themselves quickly and efficiently with multicriteria decision aiding algorithms. The proposed approach is original, as it presents a selection of methods from the theory to the practical implementation in Python, including a detailed example. This will certainly facilitate the learning of these techniques, and contribute to their effective dissemination in applications." (Patrick Meyer, IMT Atlantique, Lab-STICC, Univ. Bretagne Loire, Brest, France)
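For a flavour of the kind of implementation the book walks through, below is a minimal TOPSIS sketch in Python; it is not code from the book, and the decision matrix, weights and criterion directions are made up for illustration:

```python
import numpy as np

def topsis(matrix, weights, benefit):
    """Rank alternatives with TOPSIS.
    matrix: alternatives x criteria; weights: sum to 1;
    benefit: True where larger values are better."""
    X = np.asarray(matrix, dtype=float)
    w = np.asarray(weights, dtype=float)
    # 1. Vector-normalize each criterion column, then apply the weights.
    V = w * X / np.linalg.norm(X, axis=0)
    # 2. Ideal and anti-ideal points, respecting each criterion's direction.
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
    anti  = np.where(benefit, V.min(axis=0), V.max(axis=0))
    # 3. Euclidean distances to both reference points.
    d_plus  = np.linalg.norm(V - ideal, axis=1)
    d_minus = np.linalg.norm(V - anti, axis=1)
    # 4. Relative closeness: 1 is best, 0 is worst.
    return d_minus / (d_plus + d_minus)

# Hypothetical data: 3 alternatives scored on cost, quality and delivery time.
scores = topsis(matrix=[[250, 7, 5], [200, 6, 8], [300, 9, 4]],
                weights=[0.4, 0.4, 0.2],
                benefit=np.array([False, True, False]))
print(scores.round(3), "-> best alternative index:", int(scores.argmax()))
```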
The Internet has become the major form of map delivery. The current presentation of maps is based on the use of online services. This session examines developments related to online methods of map delivery, particularly Application Programmer Interfaces (APIs) and MapServices in general, including Google Maps API and similar services. Map mashups have had a major impact on how spatial information is presented. The advantage of using a major online mapping site is that the maps represent a common and recognizable representation of the world. Overlaying features on top of these maps provides a frame of reference for the map user. A particular advantage for thematic mapping is the ability to spatially reference thematic data.
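To illustrate the mashup idea of overlaying thematic data on a recognizable base map, here is a small illustrative sketch in Python; it is not taken from the text (which discusses the Google Maps API) and instead uses the folium library, which wraps Leaflet and OpenStreetMap tiles, with made-up readings:

```python
import folium  # pip install folium; renders Leaflet maps from Python

# Base map from a common online tile provider gives the familiar frame of reference.
m = folium.Map(location=[40.0, -100.0], zoom_start=4, tiles="OpenStreetMap")

# Hypothetical thematic data: (place, latitude, longitude, value) tuples to overlay.
readings = [("Denver", 39.74, -104.99, 62),
            ("Chicago", 41.88, -87.63, 85),
            ("Austin", 30.27, -97.74, 47)]

# Overlay proportional symbols on top of the base map: a simple thematic mashup.
for name, lat, lon, value in readings:
    folium.CircleMarker(location=[lat, lon],
                        radius=value / 5,
                        popup=f"{name}: {value}",
                        fill=True).add_to(m)

m.save("thematic_mashup.html")  # open in a browser to view the result
```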
This book provides a snapshot of the current state of the art in the fields of mobile and wireless technology, security and applications. As the proceedings of the 2nd International Conference on Mobile and Wireless Technology (ICMWT2015), it represents the outcome of a unique platform for researchers and practitioners from academia and industry, including those working on data management and mobile security, to share cutting-edge developments in the field of mobile and wireless science and technology. The contributions presented here describe the latest academic and industrial research from the international mobile and wireless community. The scope covers four major topical areas: mobile and wireless networks and applications; security in mobile and wireless technology; mobile data management and applications; and mobile software. The book will be a valuable reference for current researchers in academia and industry, and a useful resource for graduate-level students working on mobile and wireless technology.
The book 'BiLBIQ: A biologically inspired Robot with walking and rolling locomotion' deals with transferring a locomotion behavior observed in the biological archetype Cebrennus villosus to a robot prototype whose structural design had to be developed. The biological model is investigated as far as possible and compared with other evolutionary solutions within the framework of nature's inventions. Current achievements in robotics are examined and evaluated for their relevance to the robot prototype in question, and an overview of the state of the art in actuation ensures that the most suitable available hardware is chosen for the project. By constantly keeping in view the goal of achieving two fundamentally different modes of locomotion with one and the same structure, a robot design is developed and constructed that takes hardware constraints into account. The development of a special leg structure that must resemble and replace body elements of the biological archetype is a particular challenge. The result is a robot prototype that is able to both walk and roll - inspired by the spider Cebrennus villosus.
This book deals with the problem of finding suitable languages that can represent specific classes of Petri nets, the most studied and widely accepted model for distributed systems. Hence, the contribution of this book amounts to the alphabetization of some classes of distributed systems. The book also suggests the need for a generalization of Turing computability theory. It is important for graduate students and researchers engaged with the concurrent semantics of distributed communicating systems. The author assumes some prior knowledge of formal languages and theoretical computer science.
Grids, P2P and Services Computing, the 12th volume of the CoreGRID series, is based on the CoreGRID ERCIM Working Group Workshop on Grids, P2P and Service Computing, held in conjunction with EuroPar 2009. The workshop took place on August 24th, 2009 in Delft, The Netherlands. Grids, P2P and Services Computing, an edited volume with contributions from well-established researchers worldwide, focuses on solving research challenges for Grid and P2P technologies. Topics of interest include: Service Level Agreement, Data & Knowledge Management, Scheduling, Trust and Security, Network Monitoring and more. Grids are a crucial enabling technology for scientific and industrial development, and this book also includes new challenges related to service-oriented infrastructures. Grids, P2P and Services Computing is designed for a professional audience of researchers and practitioners within the Grid community and industry. This volume is also suitable for advanced-level students in computer science.
Genetic programming (GP) is a popular heuristic methodology of program synthesis with origins in evolutionary computation. In this generate-and-test approach, candidate programs are iteratively produced and evaluated. The latter involves running programs on tests, where they exhibit complex behaviors reflected in changes of variables, registers, or memory. That behavior not only ultimately determines program output, but may also reveal its 'hidden qualities' and important characteristics of the considered synthesis problem. However, conventional GP is oblivious to most of that information and usually cares only about the number of tests passed by a program. This 'evaluation bottleneck' leaves the search algorithm underinformed about the actual and potential qualities of candidate programs. This book proposes behavioral program synthesis, a conceptual framework that opens GP to detailed information on program behavior in order to make program synthesis more efficient. Several existing and novel mechanisms subscribing to that perspective to varying extents are presented and discussed, including implicit fitness sharing, semantic GP, co-solvability, trace convergence analysis, pattern-guided program synthesis, and behavioral archives of subprograms. The framework involves several concepts that are new to GP, including the execution record, the combined trace, and the search driver, a generalization of the objective function. Empirical evidence gathered in several presented experiments clearly demonstrates the usefulness of the behavioral approach. The book also contains an extensive discussion of the implications of the behavioral perspective for program synthesis and beyond.
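To make the contrast concrete, the following short Python sketch (not from the book) compares the conventional "number of tests passed" fitness with implicit fitness sharing, one of the mechanisms listed above, where credit for a test is divided among all programs in the population that pass it; the pass/fail behaviors are invented for the example:

```python
# Each candidate program is summarized by its behavior on a suite of tests:
# True means the program passes that test. Data here is made up for illustration.
population = {
    "p1": [True,  True,  False, False],
    "p2": [True,  True,  True,  False],
    "p3": [True,  False, False, False],
}

# Conventional GP fitness: just the number of tests passed.
conventional = {name: sum(passed) for name, passed in population.items()}

# Implicit fitness sharing: a test passed by few programs is worth more,
# so programs with rare skills keep a niche even if they pass fewer tests overall.
num_tests = len(next(iter(population.values())))
passers = [sum(b[t] for b in population.values()) for t in range(num_tests)]
shared = {
    name: sum(1.0 / passers[t] for t in range(num_tests) if passed[t])
    for name, passed in population.items()
}

print(conventional)  # {'p1': 2, 'p2': 3, 'p3': 1}
print(shared)        # p2 gets full credit for the test that only it passes
```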
The first course in software engineering is the most critical. Education must start from an understanding of the heart of software development, from familiar ground that is common to all software development endeavors. This book is an in-depth introduction to software engineering that uses a systematic, universal kernel to teach the essential elements of all software engineering methods. This kernel, Essence, is a vocabulary for defining methods and practices. Essence was envisioned and originally created by Ivar Jacobson and his colleagues, developed by Software Engineering Method and Theory (SEMAT) and approved by The Object Management Group (OMG) as a standard in 2014. Essence is a practice-independent framework for thinking and reasoning about the practices we have and the practices we need. Essence establishes a shared and standard understanding of what is at the heart of software development. Essence is agnostic to any particular method, lifecycle independent, programming language independent, concise, scalable, extensible, and formally specified. Essence frees the practices from their method prisons. The first part of the book describes Essence: the essential elements to work with, the essential things to do and the essential competencies you need when developing software. The other three parts describe increasingly advanced use cases of Essence. Using real but manageable examples, the book covers the fundamentals of Essence and the innovative use of serious games to support software engineering. It also explains how current practices such as user stories, use cases, Scrum, and micro-services can be described using Essence, and illustrates how their activities can be represented using the Essence notions of cards and checklists. The fourth part of the book offers a vision of how Essence can be scaled to support large, complex systems engineering. Essence is supported by an ecosystem developed and maintained by a community of experienced people worldwide. From this ecosystem, professors and students can select what they need and create their own way of working, thus learning how to create ONE way of working that matches the particular situation and needs.
This book describes new algorithms and ideas for making effective decisions under constraints, including applications in control engineering, manufacturing (how to optimally determine the production level), econometrics (how to better predict stock market behavior), and environmental science and geosciences (how to combine data of different types). It also describes general algorithms and ideas that can be used in other application areas. The book presents extended versions of selected papers from the annual International Workshops on Constraint Programming and Decision Making (CoProd'XX) from 2013 to 2016. These workshops, held in the US (El Paso, Texas) and in Europe (Wurzburg, Germany, and Uppsala, Sweden), have attracted researchers and practitioners from all over the world. It is of interest to practitioners who benefit from the new techniques, to researchers who want to extend the ideas from these papers to new application areas and/or further improve the corresponding algorithms, and to graduate students who want to learn more - in short, to anyone who wants to make more effective decisions under constraints.
The information infrastructure - comprising computers, embedded devices, networks and software systems - is vital to operations in every sector: information technology, telecommunications, energy, banking and finance, transportation systems, chemicals, agriculture and food, defense industrial base, public health and health care, national monuments and icons, drinking water and water treatment systems, commercial facilities, dams, emergency services, commercial nuclear reactors, materials and waste, postal and shipping, and government facilities. Global business and industry, governments, indeed society itself, cannot function if major components of the critical information infrastructure are degraded, disabled or destroyed. This book, Critical Infrastructure Protection III, is the third volume in the annual series produced by IFIP Working Group 11.10 on Critical Infrastructure Protection, an active international community of scientists, engineers, practitioners and policy makers dedicated to advancing research, development and implementation efforts related to critical infrastructure protection. The book presents original research results and innovative applications in the area of infrastructure protection. Also, it highlights the importance of weaving science, technology and policy in crafting sophisticated, yet practical, solutions that will help secure information, computer and network assets in the various critical infrastructure sectors. This volume contains seventeen edited papers from the Third Annual IFIP Working Group 11.10 International Conference on Critical Infrastructure Protection, held at Dartmouth College, Hanover, New Hampshire, March 23-25, 2009. The papers were refereed by members of IFIP Working Group 11.10 and other internationally-recognized experts in critical infrastructure protection.
To deal with the flexible architectures and evolving functionalities of complex modern systems, the agent metaphor and agent-based computing are often the most appropriate software design approach. As a result, a broad range of special-purpose design processes has been developed in the last several years to tackle the challenges of these specific application domains. In this context, in early 2012 the IEEE-FIPA Design Process Documentation Template SC0097B was defined, which facilitates the representation of design processes and method fragments through the use of standardized templates, thus supporting the creation of easily sharable repositories and facilitating the composition of new design processes. Following this standardization approach, this book gathers the documentation of some of the best-known agent-oriented design processes. After an introductory section, describing the goal of the book and the existing IEEE FIPA standard for design process documentation, thirteen processes (including the widely known OpenUP, the de facto standard in object-oriented software engineering) are documented by their original creators or other well-known scientists working in the field. As a result, this is the first work to adopt a standard, unified descriptive approach for documenting different processes, making it much easier to study the individual processes, to rigorously compare them, and to apply them in industrial projects. While there are a few books on the market describing the individual agent-oriented design processes, none of them presents all the processes, let alone in the same format. With this handbook, for the first time, researchers as well as professional software developers looking for an overview as well as for detailed and standardized descriptions of design processes will find a comprehensive presentation of the most important agent-oriented design processes, which will be an invaluable resource when developing solutions in various application areas.
A principal aim of computer graphics is to generate images that look as real as photographs. Realistic computer graphics imagery has however proven to be quite challenging to produce, since the appearance of materials arises from complicated physical processes that are difficult to analytically model and simulate, and image-based modeling of real material samples is often impractical due to the high-dimensional space of appearance data that needs to be acquired. This book presents a general framework based on the inherent coherency in the appearance data of materials to make image-based appearance modeling more tractable. We observe that this coherence manifests itself as low-dimensional structure in the appearance data, and by identifying this structure we can take advantage of it to simplify the major processes in the appearance modeling pipeline. This framework consists of two key components, namely the coherence structure and the accompanying reconstruction method to fully recover the low-dimensional appearance data from sparse measurements. Our investigation of appearance coherency has led to three major forms of low-dimensional coherence structure and three types of coherency-based reconstruction upon which our framework is built. This coherence-based approach can be comprehensively applied to all the major elements of image-based appearance modeling, from data acquisition of real material samples to user-assisted modeling from a photograph, from synthesis of volumes to editing of material properties, and from efficient rendering algorithms to physical fabrication of objects. In this book we present several techniques built on this coherency framework to handle various appearance modeling tasks both for surface reflections and subsurface scattering, the two primary physical components that generate material appearance. We believe that coherency-based appearance modeling will make it easier and more feasible for practitioners to bring computer graphics imagery to life. This book is aimed towards readers with an interest in computer graphics. In particular, researchers, practitioners and students will benefit from this book by learning about the underlying coherence in appearance structure and how it can be utilized to improve appearance modeling. The specific techniques presented in our manuscript can be of value to anyone who wishes to elevate the realism of their computer graphics imagery. For understanding this book, an elementary background in computer graphics is assumed, such as from an introductory college course or from practical experience with computer graphics.
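As a rough, generic illustration of exploiting low-dimensional structure to recover appearance-like data from sparse measurements, the following Python sketch performs a standard iterative low-rank completion; it is not one of the book's specific coherence-based methods, and the synthetic data merely stands in for dense reflectance measurements:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "appearance" data with low-dimensional (rank-2) structure.
U = rng.normal(size=(200, 2))
V = rng.normal(size=(2, 60))
full = U @ V

# Keep only a sparse 30% subset of the measurements.
observed = rng.random(full.shape) < 0.3
sparse = np.where(observed, full, 0.0)

# Simple iterative completion (hard-impute style): alternate between a rank-2
# SVD approximation and re-imposing the observed measurements.
est = sparse.copy()
for _ in range(200):
    u, s, vt = np.linalg.svd(est, full_matrices=False)
    low_rank = (u[:, :2] * s[:2]) @ vt[:2]
    est = np.where(observed, sparse, low_rank)

err = np.linalg.norm(est - full) / np.linalg.norm(full)
print(f"relative reconstruction error: {err:.3f}")  # small if the low-rank structure is recovered
```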
Developing a software system with an acceptable level of reliability and quality within the available time frame and budget is a challenging objective. This objective can be achieved to some extent through early prediction of the number of faults present in the software, which reduces the cost of development by providing an opportunity to make corrections early in the development process. The book presents an early software reliability prediction model that helps to improve the reliability of software systems by monitoring it in each development phase, i.e. from the requirements phase to the testing phase. Different approaches to tackling this challenging issue are discussed in the book. An important approach presented here is a model to classify modules into two categories: (a) fault-prone and (b) not fault-prone. The methods presented for assessing the expected number of faults present in the software, assessing the expected number of faults present at the end of each phase, and classifying software modules into the fault-prone or not fault-prone category are easy to understand, develop and use for any practitioner. Practitioners can expect to gain more information about their development process and product reliability, which can help to optimize the resources used.
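As a simple illustration of the fault-prone/not fault-prone classification idea (not the book's actual model), the following Python sketch trains a classifier on made-up module metrics using scikit-learn:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Made-up module metrics: [lines of code, cyclomatic complexity, number of changes].
X = np.array([[120,  4,  2],
              [850, 22, 15],
              [300,  9,  5],
              [1200, 31, 20],
              [90,   3,  1],
              [640, 18, 11]])
# Historical labels: 1 = faults were later found in the module, 0 = no faults.
y = np.array([0, 1, 0, 1, 0, 1])

# Fit a simple classifier on data from past releases.
clf = make_pipeline(StandardScaler(), LogisticRegression()).fit(X, y)

# Flag modules of the next release that look fault-prone,
# so that testing effort can be focused on them early.
new_modules = np.array([[700, 20, 9], [150, 5, 2]])
print(clf.predict(new_modules))        # e.g. [1 0]
print(clf.predict_proba(new_modules))  # class probabilities per module
```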
In his rich and varied career as a mathematician, computer scientist, and educator, Jacob T. Schwartz wrote seminal works in analysis, mathematical economics, programming languages, algorithmics, and computational geometry. In this volume of essays, his friends, students, and collaborators at the Courant Institute of Mathematical Sciences present recent results in some of the fields that Schwartz explored: quantum theory, the theory and practice of programming, program correctness and decision procedures, dextrous manipulation in Robotics, motion planning, and genomics. In addition to presenting recent results in these fields, these essays illuminate the astonishingly productive trajectory of a brilliant and original scientist and thinker.
"Computer and Information Sciences" is a unique and comprehensive review of advanced technology and research in the field of Information Technology. It provides an up to date snapshot of research in Europe and the Far East (Hong Kong, Japan and China) in the most active areas of information technology, including Computer Vision, Data Engineering, Web Engineering, Internet Technologies, Bio-Informatics and System Performance Evaluation Methodologies.
Software Life Cycle Models. Object-Oriented Concepts and Modeling. Formal Specification and Verification. Design Methodologies and Specifications. Programming and Coding. Programming Tools. Declarative Programming. Automatic Program Synthesis and Reuse. Program Verification and Testing. Software Maintenance. Advanced Programming Environments. Other Selected Topics. Index.
Web developers and page authors who use JavaServer Pages (JSP) know that it is much easier and more efficient to implement web pages without reinventing the wheel each time. In order to shave valuable time from their development schedules, those who work with JSP have created, debugged, and used custom tags - a set of programmable actions that provide dynamic behavior to static pages - paving the way towards a more common, standard approach to using Java technology for web development. The biggest boost to this effort, however, has only recently arrived in the form of a standard set of tag libraries, known as the JSTL, which now provides a wide range of functionality and gives web page authors a much more simplified approach to implementing dynamic, Java-based web sites.
Collaboration among individuals - from users to developers - is central to modern software engineering. It takes many forms: joint activity to solve common problems, negotiation to resolve conflicts, creation of shared definitions, and both social and technical perspectives impacting all software development activity. The difficulties of collaboration are also well documented. The grand challenge is not only to ensure that developers in a team deliver effectively as individuals, but that the whole team delivers more than just the sum of its parts. The editors of this book have assembled an impressive selection of authors, who have contributed to an authoritative body of work tackling a wide range of issues in the field of collaborative software engineering. The resulting volume is divided into four parts, preceded by a general editorial chapter providing a more detailed review of the domain of collaborative software engineering. Part 1 is on "Characterizing Collaborative Software Engineering," Part 2 examines various "Tools and Techniques," Part 3 addresses organizational issues, and finally Part 4 contains four examples of "Emerging Issues in Collaborative Software Engineering." As a result, this book delivers a comprehensive state-of-the-art overview and empirical results for researchers in academia and industry in areas like software process management, empirical software engineering, and global software development. Practitioners working in this area will also appreciate the detailed descriptions and reports which can often be used as guidelines to improve their daily work.
William J. Karnowski is a construction worker by day and poet by night. His spirit is married to the earth. He worked as a laborer, a mason tender, finisher, gandydancer, therapy aide, boat builder, ironworker, draftsman, and now owns a construction company with his brother Dave. "I thought to myself, 'Self, if the geese can go south, then, why can't we?' It never did take me very long to make a decision, especially if it involved a motorcycle." Bill has traveled the length of the Oregon Trail, the Santa Fe Trail, and to the Great Smokies and back in the sports car that he built. He built his house, makes his furniture, and writes poetry on his farm at Laclede, Kansas. "I find it is satisfying to get my hands and brain involved in everything I do." "Check it out. I twist a few tails along the way."