This book contains selected papers from the International Conference on Extreme Learning Machine 2015, which was held in Hangzhou, China, December 15-17, 2015. The conference brought together researchers and engineers to share and exchange R&D experience on both theoretical studies and practical applications of the Extreme Learning Machine (ELM) technique and brain learning. The book covers theories, algorithms and applications of ELM, giving readers a glimpse of the most recent advances in the field.
This book presents advances and innovations in grouping genetic algorithms, enriched with new and unique heuristic optimization techniques. These algorithms are specially designed for solving industrial grouping problems where system entities are to be partitioned or clustered into efficient groups according to a set of guiding decision criteria. Examples of such problems are: vehicle routing problems, team formation problems, timetabling problems, assembly line balancing, group maintenance planning, modular design, and task assignment. A wide range of industrial grouping problems, drawn from diverse fields such as logistics, supply chain management, project management, manufacturing systems, engineering design and healthcare, are presented. Typical complex industrial grouping problems, with multiple decision criteria and constraints, are clearly described using illustrative diagrams and formulations. The problems are mapped into a common group structure that can conveniently be used as an input scheme to specific variants of grouping genetic algorithms. Unique heuristic grouping techniques are developed to handle grouping problems efficiently and effectively. Illustrative examples and computational results are presented in tables and graphs to demonstrate the efficiency and effectiveness of the algorithms. Researchers, decision analysts, software developers, and graduate students from various disciplines will find this in-depth reader-friendly exposition of advances and applications of grouping genetic algorithms an interesting, informative and valuable resource.
This book addresses agent-based computing, concentrating in particular on evolutionary multi-agent systems (EMAS), which have been developed since 1996 at the AGH University of Science and Technology in Cracow, Poland. It provides the relevant background information on and a detailed description of this computing paradigm, along with key experimental results. Readers will benefit from the insightful discussion, which primarily concerns the efficient implementation of computing frameworks for developing EMAS and similar computing systems, as well as a detailed formal model. Theoretical deliberations demonstrating that computing with EMAS always helps to find the optimal solution are also included, rounding out the coverage.
This edited volume on applications based on computational intelligence algorithms includes work presented at the International Conference on Computational Intelligence, Communications, and Business Analytics (CICBA 2017). It provides the latest research findings on the significance of computational intelligence and related application areas, and introduces various computation platforms involving evolutionary algorithms, fuzzy logic, swarm intelligence, artificial neural networks and several other tools for solving real-world problems. It also discusses tools that are hybrids of more than one solution framework, highlighting theoretical aspects as well as various real-world applications.
Problem solving is an essential part of every scientific discipline. It has two components: (1) problem identification and formulation, and (2) the solution to the formulated problem. One can solve a problem on its own using ad hoc techniques or by following techniques that have produced efficient solutions to similar problems. This requires an understanding of various algorithm design techniques, how and when to use them to formulate solutions, and the context appropriate for each of them. Algorithms: Design Techniques and Analysis advocates the study of algorithm design by presenting the most useful techniques and illustrating them with numerous examples, emphasizing design techniques in problem solving rather than algorithms topics like searching and sorting. Algorithmic analysis in connection with example algorithms is explored in detail. Each technique or strategy is covered in its own chapter through numerous examples of problems and their algorithms. Readers will be equipped with the problem solving tools needed in advanced courses or research in science and engineering.
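To make the idea of a design technique concrete, here is a minimal sketch of one classic divide-and-conquer algorithm, merge sort, written in Python. This is an illustrative example only, not material drawn from the book.

    # Divide and conquer: split the input, solve each half recursively,
    # then combine the two sorted halves.
    def merge_sort(items):
        if len(items) <= 1:                   # base case: already sorted
            return items
        mid = len(items) // 2
        left = merge_sort(items[:mid])        # solve the subproblems
        right = merge_sort(items[mid:])
        merged, i, j = [], 0, 0               # combine step: merge
        while i < len(left) and j < len(right):
            if left[i] <= right[j]:
                merged.append(left[i]); i += 1
            else:
                merged.append(right[j]); j += 1
        return merged + left[i:] + right[j:]

    print(merge_sort([5, 2, 9, 1, 5, 6]))     # [1, 2, 5, 5, 6, 9]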
This timely text/reference presents a comprehensive review of the workflow scheduling algorithms and approaches that are rapidly becoming essential for a range of software applications, due to their ability to efficiently leverage diverse and distributed cloud resources. Particular emphasis is placed on how workflow-based automation in software-defined cloud centers and hybrid IT systems can significantly enhance resource utilization and optimize energy efficiency. Topics and features: describes dynamic workflow and task scheduling techniques that work across multiple (on-premise and off-premise) clouds; presents simulation-based case studies, and details of real-time test bed-based implementations; offers analyses and comparisons of a broad selection of static and dynamic workflow algorithms; examines the considerations for the main parameters in projects limited by budget and time constraints; covers workflow management systems, workflow modeling and simulation techniques, and machine learning approaches for predictive workflow analytics. This must-read work provides invaluable practical insights from three subject matter experts in the cloud paradigm, which will empower IT practitioners and industry professionals in their daily assignments. Researchers and students interested in next-generation software-defined cloud environments will also greatly benefit from the material in the book.
This book discusses efficient prediction techniques for the current state-of-the-art High Efficiency Video Coding (HEVC) standard, focusing on the compression of a wide range of video signals, such as 3D video, Light Fields and natural images. The authors begin with a review of the state-of-the-art predictive coding methods and compression technologies for both 2D and 3D multimedia contents, which provides a good starting point for new researchers in the field of image and video compression. New prediction techniques that go beyond the standardized compression technologies are then presented and discussed. In the context of 3D video, the authors describe a new predictive algorithm for the compression of depth maps, which combines intra-directional prediction with flexible block partitioning and linear residue fitting. New approaches are described for the compression of Light Field and still images, which enforce sparsity constraints on linear models. The Locally Linear Embedding-based prediction method is investigated for compression of Light Field images based on the HEVC technology. A new linear prediction method using sparse constraints is also described, enabling improved coding performance of the HEVC standard, particularly for images with complex textures based on repeated structures. Finally, the authors present a new, generalized intra-prediction framework for the HEVC standard, which unifies the directional prediction methods used in the current video compression standards with linear prediction methods using sparse constraints. Experimental results for the compression of natural images are provided, demonstrating the advantage of the unified prediction framework over the traditional directional prediction modes used in the HEVC standard.
This treatise presents an integrated perspective on the interplay of set theory and graph theory, providing an extensive selection of examples that highlight how methods from one theory can be used to better solve problems originating in the other. Features: explores the interrelationships between sets and graphs and their applications to finite combinatorics; introduces the fundamental graph-theoretical notions from the standpoint of both set theory and dyadic logic, and presents a discussion on set universes; explains how sets can conveniently model graphs, discussing set graphs and set-theoretic representations of claw-free graphs; investigates when it is convenient to represent sets by graphs, covering counting and encoding problems, the random generation of sets, and the analysis of infinite sets; presents excerpts of formal proofs concerning graphs, whose correctness was verified by means of an automated proof-assistant; contains numerous exercises, examples, definitions, problems and insight panels.
This volume collects contributions written by different experts in honor of Prof. Jaime Munoz Masque. It covers a wide variety of research topics, from differential geometry to algebra, but particularly focuses on the geometric formulation of variational calculus; geometric mechanics and field theories; symmetries and conservation laws of differential equations, and pseudo-Riemannian geometry of homogeneous spaces. It also discusses algebraic applications to cryptography and number theory. It offers state-of-the-art contributions in the context of current research trends. The final result is a challenging panoramic view of connecting problems that initially appear distant.
In this work we review the main techniques for enumeration algorithms and show four examples of enumeration algorithms that can be applied to efficiently deal with some biological problems modelled using biological networks: enumerating central and peripheral nodes of a network, enumerating stories, enumerating paths or cycles, and enumerating bubbles. Notice that the corresponding computational problems we define are of more general interest and our results hold in the case of arbitrary graphs. Enumerating all the most and least central vertices in a network according to their eccentricity is an example of an enumeration problem whose solutions are polynomially many and can be listed in polynomial time, very often in linear or almost linear time in practice. Enumerating stories, i.e. all maximal directed acyclic subgraphs of a graph G whose sources and targets belong to a predefined subset of the vertices, is on the other hand an example of an enumeration problem with an exponential number of solutions, which can be solved using a non-trivial brute-force approach. Given a metabolic network, each individual story should explain how some interesting metabolites are derived from some others through a chain of reactions, by keeping all alternative pathways between sources and targets. Enumerating cycles or paths in an undirected graph, such as a protein-protein interaction undirected network, is an example of an enumeration problem in which all the solutions can be listed through an optimal algorithm, i.e. the time required to list all the solutions is dominated by the time to read the graph plus the time required to print all of them. By extending this result to directed graphs, it would be possible to deal more efficiently with feedback loop and signed path analysis in signed or interaction directed graphs, such as gene regulatory networks. Finally, enumerating mouths or bubbles with a source s in a directed graph, that is, enumerating all pairs of vertex-disjoint directed paths between the source s and all possible targets, is an example of an enumeration problem in which all the solutions can be listed through a linear delay algorithm, meaning that the delay between any two consecutive solutions is linear, by turning the problem into a constrained cycle enumeration problem. Such patterns, in a de Bruijn graph representation of the reads obtained by sequencing, are related to polymorphisms in DNA- or RNA-seq data.
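As a concrete illustration of the first of these problems, the sketch below enumerates the most and least central vertices of a small unweighted, connected graph by computing every vertex's eccentricity with one breadth-first search per vertex. This naive all-pairs approach is for illustration only; the almost-linear-time behaviour mentioned above comes from smarter algorithms that prune most of the searches.

    # Eccentricity of a vertex = distance to the farthest vertex from it.
    # Central vertices attain the minimum eccentricity (the radius);
    # peripheral vertices attain the maximum (the diameter).
    from collections import deque

    def eccentricities(graph):          # graph: {vertex: set of neighbours}
        ecc = {}
        for source in graph:
            dist = {source: 0}
            queue = deque([source])
            while queue:                # plain BFS from source
                u = queue.popleft()
                for v in graph[u]:
                    if v not in dist:
                        dist[v] = dist[u] + 1
                        queue.append(v)
            ecc[source] = max(dist.values())
        return ecc

    g = {'a': {'b'}, 'b': {'a', 'c'}, 'c': {'b', 'd'}, 'd': {'c'}}
    ecc = eccentricities(g)
    radius, diameter = min(ecc.values()), max(ecc.values())
    print([v for v, e in ecc.items() if e == radius])    # most central: ['b', 'c']
    print([v for v, e in ecc.items() if e == diameter])  # peripheral: ['a', 'd']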
This book explains the most prominent and some promising new, general techniques that combine metaheuristics with other optimization methods. A first introductory chapter reviews the basic principles of local search, prominent metaheuristics, and tree search, dynamic programming, mixed integer linear programming, and constraint programming for combinatorial optimization purposes. The chapters that follow present five generally applicable hybridization strategies, with exemplary case studies on selected problems: incomplete solution representations and decoders; problem instance reduction; large neighborhood search; parallel non-independent construction of solutions within metaheuristics; and hybridization based on complete solution archives. The authors are among the leading researchers in the hybridization of metaheuristics with other techniques for optimization, and their work reflects the broad shift to problem-oriented rather than algorithm-oriented approaches, enabling faster and more effective implementation in real-life applications. This hybridization is not restricted to different variants of metaheuristics but includes, for example, the combination of mathematical programming, dynamic programming, or constraint programming with metaheuristics, reflecting cross-fertilization in fields such as optimization, algorithmics, mathematical modeling, operations research, statistics, and simulation. The book is a valuable introduction and reference for researchers and graduate students in these domains.
Transactions are a concept related to the logical database as seen from the perspective of database application programmers: a transaction is a sequence of database actions that is to be executed as an atomic unit of work. The processing of transactions on databases is a well-established area with many of its foundations having already been laid in the late 1970s and early 1980s. The unique feature of this textbook is that it bridges the gap between the theory of transactions on the logical database and the implementation of the related actions on the underlying physical database. The authors relate the logical database, which is composed of a dynamically changing set of data items with unique keys, and the underlying physical database with a set of fixed-size data and index pages on disk. Their treatment of transaction processing builds on the "do-redo-undo" recovery paradigm, and all methods and algorithms presented are carefully designed to be compatible with this paradigm as well as with write-ahead logging, steal-and-no-force buffering, and fine-grained concurrency control. Chapters 1 to 6 address the basics needed to fully appreciate transaction processing on a centralized database system within the context of our transaction model, covering topics like ACID properties, database integrity, buffering, rollbacks, isolation, and the interplay of logical locks and physical latches. Chapters 7 and 8 present advanced features including deadlock-free algorithms for reading, inserting and deleting tuples, while the remaining chapters cover additional advanced topics extending on the preceding foundational chapters, including multi-granular locking, bulk actions, versioning, distributed updates, and write-intensive transactions. This book is primarily intended as a text for advanced undergraduate or graduate courses on database management in general or transaction processing in particular.
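To give a feel for the "do-redo-undo" paradigm and write-ahead logging that the book builds on, here is a deliberately tiny Python sketch: every update is logged before the in-memory "page" is changed, and recovery first redoes all logged updates and then undoes those of uncommitted transactions. This toy is only a conceptual aid under those assumptions, not the book's algorithms.

    # Toy write-ahead log: records are ('update', tid, key, old, new)
    # or ('commit', tid). The log is written before the data is changed.
    def do_update(db, log, tid, key, new):
        log.append(('update', tid, key, db.get(key), new))  # log first (WAL)
        db[key] = new                                       # then modify the page

    def recover(db, log):
        committed = {rec[1] for rec in log if rec[0] == 'commit'}
        for rec in log:                                     # redo pass, forward
            if rec[0] == 'update':
                _, tid, key, old, new = rec
                db[key] = new
        for rec in reversed(log):                           # undo pass, backward
            if rec[0] == 'update' and rec[1] not in committed:
                _, tid, key, old, new = rec
                db[key] = old                               # roll back losers

    db, log = {'x': 1}, []
    do_update(db, log, 't1', 'x', 2)
    log.append(('commit', 't1'))
    do_update(db, log, 't2', 'x', 3)    # t2 never commits: simulate a crash here
    recover(db, log)
    print(db)                           # {'x': 2}: t1 is redone, t2 undone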
This book presents two practical physical attacks. It shows how attackers can reveal the secret key of symmetric as well as asymmetric cryptographic algorithms based on these attacks, and presents countermeasures on the software and the hardware level that can help to prevent them in the future. Though their theory has been known for several years now, since neither attack has yet been successfully implemented in practice, they have generally not been considered a serious threat. In short, their physical attack complexity has been overestimated and the implied security threat has been underestimated. First, the book introduces the photonic side channel, which offers not only temporal resolution, but also the highest possible spatial resolution. Due to the high cost of its initial implementation, it has not been taken seriously. The work shows both simple and differential photonic side channel analyses. Then, it presents a fault attack against pairing-based cryptography. Due to the need for at least two independent precise faults in a single pairing computation, it has not been taken seriously either. Based on these two attacks, the book demonstrates that the assessment of physical attack complexity is error-prone, and as such cryptography should not rely on it. Cryptographic technologies have to be protected against all physical attacks, whether they have already been successfully implemented or not. The development of countermeasures does not require the successful execution of an attack but can already be carried out as soon as the principle of a side channel or a fault attack is sufficiently understood.
This book provides developers, engineers, researchers and students with detailed knowledge about the High Efficiency Video Coding (HEVC) standard. HEVC is the successor to the widely successful H.264/AVC video compression standard, and it provides around twice as much compression as H.264/AVC for the same level of quality. The applications for HEVC will not only cover the space of the well-known current uses and capabilities of digital video; they will also include the deployment of new services and the delivery of enhanced video quality, such as ultra-high-definition television (UHDTV) and video with higher dynamic range, wider range of representable color, and greater representation precision than what is typically found today. HEVC is the next major generation of video coding design: a flexible, reliable and robust solution that will support the next decade of video applications and ease the burden of video on worldwide network traffic. This book provides a detailed explanation of the various parts of the standard, insight into how it was developed, and in-depth discussion of algorithms and architectures for its implementation.
This book provides formal and informal definitions and taxonomies for self-aware computing systems, and explains how self-aware computing relates to many existing subfields of computer science, especially software engineering. It describes architectures and algorithms for self-aware systems as well as the benefits and pitfalls of self-awareness, and reviews much of the latest relevant research across a wide array of disciplines, including open research challenges. The chapters of this book are organized into five parts: Introduction, System Architectures, Methods and Algorithms, Applications and Case Studies, and Outlook. Part I offers an introduction that defines self-aware computing systems from multiple perspectives, and establishes a formal definition, a taxonomy and a set of reference scenarios that help to unify the remaining chapters. Next, Part II explores architectures for self-aware computing systems, such as generic concepts and notations that allow a wide range of self-aware system architectures to be described and compared with both isolated and interacting systems. It also reviews the current state of reference architectures, architectural frameworks, and languages for self-aware systems. Part III focuses on methods and algorithms for self-aware computing systems by addressing issues pertaining to system design, like modeling, synthesis and verification. It also examines topics such as adaptation, benchmarks and metrics. Part IV then presents applications and case studies in various domains including cloud computing, data centers, cyber-physical systems, and the degree to which self-aware computing approaches have been adopted within those domains. Lastly, Part V surveys open challenges and future research directions for self-aware computing systems. It can be used as a handbook for professionals and researchers working in areas related to self-aware computing, and can also serve as an advanced textbook for lecturers and postgraduate students studying subjects like advanced software engineering, autonomic computing, self-adaptive systems, and data-center resource management. Each chapter is largely self-contained, and offers plenty of references for anyone wishing to pursue the topic more deeply.
This book explores the future of cyber technologies and cyber operations which will influence advances in social media, cyber security, cyber physical systems, ethics, law, media, economics, infrastructure, military operations and other elements of societal interaction in the upcoming decades. It provides a review of future disruptive technologies and innovations in cyber security. It also serves as a resource for wargame planning and provides a strategic vision of the future direction of cyber operations. It informs military strategists about the future of cyber warfare. Written by leading experts in the field, chapters explore how future technical innovations vastly increase the interconnectivity of our physical and social systems and the growing need for resiliency in this vast and dynamic cyber infrastructure. The future of social media, autonomy, stateless finance, quantum information systems, the internet of things, the dark web, space satellite operations, and global network connectivity is explored along with the transformation of the legal and ethical considerations which surround them. The international challenges of cyber alliances, capabilities, and interoperability are compounded by the growing need for new laws, international oversight, and regulation, all of which inform cybersecurity studies. The authors have a multi-disciplinary scope arranged in a big-picture framework, allowing both deep exploration of important topics and high-level understanding of the subject. Evolution of Cyber Technologies and Operations to 2035 is an excellent reference for professionals and researchers working in security, government, the military, economics, law and related fields. Students will also find this book useful as a reference guide or secondary textbook.
With the growing popularity of "big data", the potential value of personal data has attracted more and more attention. Applications built on personal data can create tremendous social and economic benefits. Meanwhile, they bring serious threats to individual privacy. The extensive collection, analysis and transaction of personal data make it difficult for an individual to keep their privacy safe. People now show more concern about privacy than ever before. How to strike a balance between the exploitation of personal information and the protection of individual privacy has become an urgent issue. In this book, the authors use methodologies from economics, especially game theory, to investigate solutions to the balance issue. They investigate the strategies of stakeholders involved in the use of personal data, and try to find the equilibrium. The book proposes a user-role based methodology to investigate the privacy issues in data mining, identifying four different types of users, i.e. four user roles, involved in data mining applications. For each user role, the authors discuss its privacy concerns and the strategies that it can adopt to solve the privacy problems. The book also proposes a simple game model to analyze the interactions among data provider, data collector and data miner. By solving the equilibria of the proposed game, readers can get useful guidance on how to deal with the trade-off between privacy and data utility. Moreover, to elaborate the analysis of the data collector's strategies, the authors propose a contract model and a multi-armed bandit model respectively. The authors discuss how the owners of data (e.g. an individual or a data miner) deal with the trade-off between privacy and utility in data mining. Specifically, they study users' strategies in collaborative filtering-based recommendation systems and distributed classification systems. They build game models to formulate the interactions among data owners, and propose learning algorithms to find the equilibria.
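The multi-armed bandit setting mentioned above can be made concrete with a small sketch: a data collector repeatedly chooses one of several "arms" (say, alternative offers made to data providers) and learns which one pays best on average. Epsilon-greedy is used here purely as an illustrative learning strategy and is an assumption, not necessarily the model developed in the book.

    import random

    # Epsilon-greedy bandit: mostly exploit the best-looking arm,
    # occasionally explore a random one.
    def epsilon_greedy(true_means, rounds=10000, eps=0.1):
        counts = [0] * len(true_means)       # pulls per arm
        values = [0.0] * len(true_means)     # running mean reward per arm
        for _ in range(rounds):
            if random.random() < eps:
                arm = random.randrange(len(true_means))                 # explore
            else:
                arm = max(range(len(true_means)), key=values.__getitem__)  # exploit
            reward = random.gauss(true_means[arm], 1.0)                 # noisy feedback
            counts[arm] += 1
            values[arm] += (reward - values[arm]) / counts[arm]         # incremental mean
        return values, counts

    values, counts = epsilon_greedy([0.2, 0.5, 0.9])
    print(max(range(len(values)), key=values.__getitem__))  # usually arm 2, the best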
This book presents a comprehensive study of the different tools and techniques available to perform network forensics. Various aspects of network forensics are reviewed, as well as related technologies and their limitations. This helps security practitioners and researchers better understand the problem, the current solution space, and the future research scope for detecting and investigating network intrusions efficiently. Forensic computing is rapidly gaining importance, since the amount of crime involving digital systems is steadily increasing. Furthermore, the area is still underdeveloped and poses many technical and legal challenges. The rapid development of the Internet over the past decade has facilitated an increase in the incidence of online attacks. Many factors embolden attackers: the speed with which an attack can be carried out, the anonymity the medium provides, the fact that digital information can be stolen without ever being removed, the increased availability of potential victims, and the global impact of the attacks. Forensic analysis is performed at two different levels: computer forensics and network forensics. Computer forensics deals with the collection and analysis of data from computer systems, networks, communication streams and storage media in a manner admissible in a court of law. Network forensics deals with the capture, recording or analysis of network events in order to discover evidential information about the source of security attacks that will stand up in a court of law. Network forensics is not another term for network security. It is an extended phase of network security, as the data for forensic analysis are collected from security products like firewalls and intrusion detection systems. The results of this data analysis are utilized for investigating the attacks. Network forensics generally refers to the collection and analysis of network data such as network traffic, firewall logs, IDS logs, etc. Technically, it is a member of the already existing and expanding field of digital forensics. Analogously, network forensics is defined as "The use of scientifically proven techniques to collect, fuse, identify, examine, correlate, analyze, and document digital evidence from multiple, actively processing and transmitting digital sources for the purpose of uncovering facts related to the planned intent, or measured success, of unauthorized activities meant to disrupt, corrupt, and/or compromise system components, as well as providing information to assist in response to or recovery from these activities." Network forensics plays a significant role in the security of today's organizations. On the one hand, it helps organizations learn the details of external attacks, ensuring that similar future attacks are thwarted. Additionally, network forensics is essential for investigating insiders' abuses, which constitute the second costliest type of attack within organizations. Finally, law enforcement requires network forensics for crimes in which a computer or digital system is either the target of a crime or used as a tool in carrying out a crime. Network security protects the system against attack, while network forensics focuses on recording evidence of the attack. Network security products are generalized and look for possible harmful behaviors; this monitoring is a continuous process, performed around the clock.
Network forensics, however, involves post-mortem investigation of an attack and is initiated only after a crime has been reported. Many tools assist in capturing the data transferred over networks, so that an attack or the malicious intent behind an intrusion can be investigated. Similarly, various network forensic frameworks have been proposed in the literature.
With the proliferation of Software-as-a-Service (SaaS) offerings, it is becoming increasingly important for individual SaaS providers to operate their services at a low cost. This book investigates SaaS from the perspective of the provider and shows how operational costs can be reduced by using "multi-tenancy", a technique for consolidating a large number of customers onto a small number of servers. Specifically, the book addresses multi-tenancy at the database level, focusing on in-memory column databases, which are the backbone of many important new enterprise applications. For efficiently implementing multi-tenancy in a farm of databases, two fundamental challenges must be addressed: (i) workload modeling and (ii) data placement. The first involves estimating the (shared) resource consumption for multi-tenancy on a single in-memory database server. The second consists of assigning tenants to servers in a way that minimizes the number of required servers (and thus costs) based on the assumed workload model. This step also entails replicating tenants for performance and high availability. This book presents novel solutions to both problems.
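The data-placement step described above is essentially a bin-packing problem: assign tenants with estimated loads to as few fixed-capacity servers as possible. The sketch below uses the classic first-fit-decreasing heuristic as a stand-in; the book's own placement algorithms, workload model and replication logic are more sophisticated, so treat this purely as an illustration.

    # First-fit decreasing: place the heaviest tenants first, each on the
    # first server with room; open a new server only when none fits.
    def first_fit_decreasing(tenant_loads, capacity):
        servers = []                                  # each server: list of loads
        for load in sorted(tenant_loads, reverse=True):
            for server in servers:
                if sum(server) + load <= capacity:
                    server.append(load)
                    break
            else:
                servers.append([load])                # no server fits: add one
        return servers

    placement = first_fit_decreasing([0.5, 0.7, 0.2, 0.4, 0.1, 0.6], capacity=1.0)
    print(len(placement), placement)  # 3 servers: [[0.7, 0.2, 0.1], [0.6, 0.4], [0.5]]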
From finance to artificial intelligence, genetic algorithms are a powerful tool with a wide array of applications. But you don't need an exotic new language or framework to get started; you can learn about genetic algorithms in a language you're already familiar with. Join us for an in-depth look at the algorithms, techniques, and methods that go into writing a genetic algorithm. From introductory problems to real-world applications, you'll learn the underlying principles of problem solving using genetic algorithms. Evolutionary algorithms are a unique and often overlooked subset of machine learning and artificial intelligence. Because of this, most of the available resources are outdated or too academic in nature, and none of them are made with Elixir programmers in mind. Start from the ground up with genetic algorithms in a language you are familiar with. Discover the power of genetic algorithms through simple solutions to challenging problems. Use Elixir features to write genetic algorithms that are concise and idiomatic. Learn the complete life cycle of solving a problem using genetic algorithms. Understand the different techniques and fine-tuning required to solve a wide array of problems. Plan, test, analyze, and visualize your genetic algorithms with real-world applications. Open your eyes to a unique and powerful field - without having to learn a new language or framework. What You Need: You'll need macOS, Windows, or a Linux distribution, with an up-to-date Elixir installation.
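The genetic algorithm life cycle the book teaches (initialize, evaluate, select, crossover, mutate, repeat) can be sketched in a few lines. The example below maximizes the number of 1-bits in a bitstring; it is written in Python for illustration here, whereas the book itself works in Elixir.

    import random

    # One-max: evolve a bitstring toward all ones.
    def evolve(pop_size=100, length=32, generations=200, mutation_rate=0.01):
        fitness = sum                                      # number of 1-bits
        pop = [[random.randint(0, 1) for _ in range(length)]
               for _ in range(pop_size)]
        for _ in range(generations):
            pop.sort(key=fitness, reverse=True)
            parents = pop[:pop_size // 2]                  # truncation selection
            children = []
            while len(parents) + len(children) < pop_size:
                a, b = random.sample(parents, 2)
                cut = random.randrange(1, length)          # single-point crossover
                child = [bit ^ 1 if random.random() < mutation_rate else bit
                         for bit in a[:cut] + b[cut:]]     # bit-flip mutation
                children.append(child)
            pop = parents + children
        return max(pop, key=fitness)

    best = evolve()
    print(sum(best), "of 32 bits set")    # typically all 32 after 200 generations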
This book presents essential studies and applications in the context of sliding mode control, highlighting the latest findings from interdisciplinary theoretical studies, ranging from computational algorithm development to representative applications. Readers will learn how to easily tailor the techniques to accommodate their ad hoc applications. To make the content as accessible as possible, the book employs a clear route in each paper, moving from background to motivation, to quantitative development (equations), and lastly to case studies/illustrations/tutorials (simulations, experiences, curves, tables, etc.). Though primarily intended for graduate students, professors and researchers from related fields, the book will also benefit engineers and scientists from industry.
Evolutionary algorithms constitute a class of well-known algorithms, designed on the basis of the Darwinian theory of evolution and the Mendelian theory of heredity. They are partly based on random and partly on deterministic principles. Due to this nature, it is challenging to predict and control their performance in solving complex nonlinear problems. Recently, the study of evolutionary dynamics has focused not only on traditional investigations but also on understanding and analyzing new principles, with the intention of controlling and utilizing their properties and performance toward more effective real-world applications. This book, based on the authors' many years of intensive research, proposes novel ideas for advancing evolutionary dynamics towards new phenomena, spanning many new topics, even the dynamics of equivalent social networks. In fact, it includes more advanced complex networks and incorporates them with CMLs (coupled map lattices), which are usually used for the simulation and analysis of spatiotemporal complex systems, based on the observation that, just as chaos in a CML can be controlled, so can evolutionary dynamics. All the chapter authors are, to the best of our knowledge, originators of the ideas mentioned above and researchers on evolutionary algorithms, chaotic dynamics and complex networks, whose work will benefit readers interested in modern scientific research on related subjects.
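For readers unfamiliar with coupled map lattices, the sketch below shows the standard textbook construction: a ring of logistic maps, each diffusively coupled to its two neighbours. This generic CML is offered only to make the concept concrete; it is not code from the book.

    import random

    # One step of a diffusively coupled ring of logistic maps.
    def cml_step(x, r=4.0, eps=0.3):
        f = [r * xi * (1.0 - xi) for xi in x]        # local logistic dynamics
        n = len(x)
        return [(1 - eps) * f[i]                     # keep most of the local value
                + (eps / 2) * (f[(i - 1) % n] + f[(i + 1) % n])  # mix in neighbours
                for i in range(n)]

    x = [random.random() for _ in range(10)]         # random initial lattice
    for _ in range(100):
        x = cml_step(x)
    print([round(v, 3) for v in x])                  # a spatiotemporal pattern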
This book presents physical-layer security as a promising paradigm for achieving the information-theoretic secrecy required for wireless networks. It explains how wireless networks are extremely vulnerable to eavesdropping attacks and discusses a range of security techniques including information-theoretic security, artificial noise aided security, security-oriented beamforming, and diversity assisted security approaches. It also provides an overview of the cooperative relaying methods for wireless networks such as orthogonal relaying, non-orthogonal relaying, and relay selection. Chapters explore the relay-selection designs for improving wireless secrecy against eavesdropping in time-varying fading environments and a joint relay and jammer selection for wireless physical-layer security, where a relay is used to assist the transmission from the source to destination and a friendly jammer is employed to transmit an artificial noise for confusing the eavesdropper. Additionally, the security-reliability tradeoff (SRT) is mathematically characterized for wireless communications and two main relay-selection schemes, the single-relay and multi-relay selection, are devised for the wireless SRT improvement. In the single-relay selection, only the single best relay is chosen for assisting the wireless transmission, while the multi-relay selection invokes multiple relays for simultaneously forwarding the source transmission to the destination. Physical-Layer Security for Cooperative Relay Networks is designed for researchers and professionals working with networking or wireless security. Advanced-level students interested in networks, wireless, or privacy will also find this book a useful resource.
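Single-relay selection, as described above, can be illustrated with a small sketch: among the candidate relays, pick the one whose instantaneous secrecy rate (the rate toward the destination minus the rate leaked to the eavesdropper) is largest. The exponential channel-gain draws below are a placeholder assumption standing in for a real fading model.

    import math, random

    # Secrecy rate of one relay: capacity to destination minus capacity
    # to the eavesdropper, floored at zero.
    def secrecy_rate(snr_dest, snr_eve):
        return max(0.0, math.log2(1 + snr_dest) - math.log2(1 + snr_eve))

    relays = [{'id': i,
               'snr_dest': random.expovariate(1 / 10.0),  # relay -> destination
               'snr_eve': random.expovariate(1 / 2.0)}    # relay -> eavesdropper
              for i in range(4)]

    best = max(relays, key=lambda r: secrecy_rate(r['snr_dest'], r['snr_eve']))
    print('selected relay', best['id'])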
In this monograph we introduce and examine four new temporal logic formalisms that can be used as specification languages for the automated verification of the reliability of hardware and software designs with respect to a desired behavior. The work is organized in two parts. In the first part, two logics for computations, graded computation tree logic and computation tree logic with minimal model quantifiers, are discussed. These have proved to be useful in describing correct executions of monolithic closed systems. The second part focuses on logics for strategies, strategy logic and memoryful alternating-time temporal logic, which have been successfully applied to formalize several properties of interactive plays in multi-entity systems modeled as multi-agent games.
You may like...
Bristol Short Story Prize Anthology, v… by Valerie O'Riordan, Ian Madden, … (Paperback): R285 (Discovery Miles 2 850)
Visualizing Information Using SVG and… by Vladimir Geroimenko, Chaomei Chen (Hardcover): R4,056 (Discovery Miles 40 560)
Web Services - Concepts, Methodologies… by Information Reso Management Association (Hardcover): R8,957 (Discovery Miles 89 570)