This book, entitled Advances in Spatial Data Handling, is a compendium of papers resulting from the International Symposium on Spatial Data Handling (SDH), held in Ottawa, Canada, July 9-12, 2002. The SDH conference series has been organised as one of the main activities of the International Geographical Union (IGU) since it was first started in Zurich in 1984. In the late 1990s the IGU Commission of Geographic Information Systems was discontinued, and a study group was formed to succeed it in 1997. Much like the IGU Commission, the objectives of the Study Group are to create a network of people and research centres addressing geographical information science and to facilitate the exchange of information. The International Symposium on Spatial Data Handling, the most important activity of the IGU Study Group, has, throughout its 18-year history, been regarded as one of the foremost GIS conferences in the world.
High Performance Computing Systems and Applications contains a selection of fully refereed papers presented at the 14th International Conference on High Performance Computing Systems and Applications held in Victoria, Canada, in June 2000. This book presents the latest research in HPC systems and applications, including distributed systems and architecture, numerical methods and simulation, network algorithms and protocols, computer architecture, distributed memory, and parallel algorithms. It also covers such topics as applications in astrophysics and space physics, cluster computing, numerical simulations for fluid dynamics, electromagnetics and crystal growth, networks and the Grid, and biology and Monte Carlo techniques. High Performance Computing Systems and Applications is suitable as a secondary text for graduate-level courses, and as a reference for researchers and practitioners in industry.
Data mining deals with finding patterns in data that are, by user definition, interesting and valid. It is an interdisciplinary area involving databases, machine learning, pattern recognition, statistics, visualization and others. Independently, data mining and decision support are well-developed research areas, but until now there has been no systematic attempt to integrate them. Data Mining and Decision Support: Integration and Collaboration, written by leading researchers in the field, presents a conceptual framework, plus the methods and tools for integrating the two disciplines and for applying this technology to business problems in a collaborative setting.
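As a concrete illustration of the kind of pattern data mining looks for, here is a minimal frequent-itemset sketch in Python; the transactions, the support threshold, and the function name are hypothetical examples, not methods from the book.

```python
from itertools import combinations
from collections import Counter

# Hypothetical toy transactions; in practice these come from a database.
transactions = [
    {"bread", "milk"},
    {"bread", "butter", "milk"},
    {"beer", "bread"},
    {"butter", "milk"},
]

def frequent_pairs(transactions, min_support=0.5):
    """Return item pairs whose support (fraction of transactions
    containing the pair) meets the user-defined threshold."""
    counts = Counter()
    for t in transactions:
        for pair in combinations(sorted(t), 2):
            counts[pair] += 1
    n = len(transactions)
    return {pair: c / n for pair, c in counts.items() if c / n >= min_support}

print(frequent_pairs(transactions))  # {('bread', 'milk'): 0.5, ('butter', 'milk'): 0.5}
```

Whether a surviving pattern is "interesting" is then a user-level judgment layered on top of such validity thresholds.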
The Second International Workshop on Cooperative Internet Computing (CIC2002) brought together researchers, academics, and industry practitioners who are involved and interested in the development of advanced and emerging cooperative computing technologies. Cooperative computing is an important computing paradigm that enables different parties to work together towards a predefined non-trivial goal. It encompasses important technological areas such as computer-supported cooperative work, workflow, computer-assisted design and concurrent programming. As technologies continue to advance and evolve, there is an increasing need to research and develop new classes of middleware and applications that leverage the combined benefits of the Internet and the Web to provide users and programmers with a highly interactive and robust cooperative computing environment. It is the aim of this forum to promote close interactions and exchange of ideas among researchers, academics and practitioners on state-of-the-art research in all of these exciting areas. We have partnered with Kluwer Academic Press this year to bring you a book compilation of the papers that were presented at the CIC2002 workshop. The importance of the research area is reflected both in the quality and quantity of the submitted papers, where each paper was reviewed by at least three PC members. As a result, we were able to accept only 14 papers for full presentation at the workshop, while having to reject several excellent papers due to the limitations of the program schedule.
This book constitutes the refereed proceedings of the 11th International Conference on Cryptology and Network Security, CANS 2012, held in Darmstadt, Germany, in December 2012. The 22 revised full papers presented were carefully reviewed and selected from 99 submissions. The papers are organized in topical sections on cryptanalysis; network security; cryptographic protocols; encryption; and s-box theory.
The last few years have seen a great increase in the amount of data available to scientists, yet many of the techniques used to analyse this data cannot cope with such large datasets. Therefore, strategies need to be employed as a pre-processing step to reduce the number of objects or measurements whilst retaining important information. Spectral dimensionality reduction is one such tool for the data processing pipeline. Numerous algorithms and improvements have been proposed for the purpose of performing spectral dimensionality reduction, yet there is still no gold standard technique. This book provides a survey and reference aimed at advanced undergraduate and postgraduate students as well as researchers, scientists, and engineers in a wide range of disciplines. Dimensionality reduction has proven useful in a wide range of problem domains, and so this book will be applicable to anyone with a solid grounding in statistics and computer science seeking to apply spectral dimensionality reduction to their work.
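By way of illustration, here is a minimal numpy sketch of one classical spectral method, Laplacian eigenmaps; the neighbourhood size, the unnormalised Laplacian, and the function name are illustrative choices, not a prescription from the book.

```python
import numpy as np

def laplacian_eigenmaps(X, n_neighbors=10, n_components=2):
    """Minimal Laplacian-eigenmaps sketch: build a k-nearest-neighbour
    graph, form the graph Laplacian L = D - W, and embed each point
    using the eigenvectors of the smallest non-zero eigenvalues."""
    n = X.shape[0]
    # Pairwise squared Euclidean distances.
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.zeros((n, n))
    for i in range(n):
        for j in np.argsort(d2[i])[1:n_neighbors + 1]:  # skip self
            W[i, j] = W[j, i] = 1.0            # symmetric 0/1 affinities
    L = np.diag(W.sum(1)) - W                  # unnormalised Laplacian
    vals, vecs = np.linalg.eigh(L)
    return vecs[:, 1:n_components + 1]         # drop the constant eigenvector

# Toy usage: 100 points in 5-D reduced to 2-D.
Y = laplacian_eigenmaps(np.random.rand(100, 5))
print(Y.shape)  # (100, 2)
```

The eigendecomposition of a graph matrix is the step all spectral methods share; they differ mainly in how the graph and its weights are built.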
This book constitutes the thoroughly refereed post-conference proceedings of the 10th European Workshop, EuroPKI 2013, held in Egham, UK, in September 2013. The 11 revised full papers presented together with 1 invited talk were carefully selected from 20 submissions. The papers are organized in topical sections such as authorization and delegation, certificates management, cross certification, interoperability, key management, legal issues, long-time archiving, time stamping, trust management, trusted computing, ubiquitous scenarios and Web services security.
Spyware and Adware introduces detailed, organized, technical information exclusively on spyware and adware, including defensive techniques. This book not only brings together current sources of information on spyware and adware but also looks at the future direction of this field. Spyware and Adware is a reference book designed for researchers and professors in computer science, as well as a secondary text for advanced-level students. This book is also suitable for practitioners in industry.
This book illustrates how models of complex systems are built up and provides indispensable mathematical tools for studying their dynamics. This second edition includes more recent research results and many new and improved worked-out examples and exercises.
This book is a result of ISD'99, the Eighth International Conference on Information Systems Development - Methods and Tools, Theory, and Practice, held August 11-13, 1999 in Boise, Idaho, USA. The purpose of this conference was to address the issues facing academia and industry when specifying, developing, managing, and improving information systems. ISD'99 consisted not only of the technical program represented in these Proceedings, but also of plenary sessions on product support and content management systems for the Internet environment, a workshop on a new paradigm for successful acquisition of information systems, and a panel discussion on current pedagogical issues in systems analysis and design. The selection of papers for ISD'99 was carried out by the International Program Committee. Papers presented during the conference and printed in this volume have been selected from submissions after a formal double-blind reviewing process and have been revised by their authors based on the recommendations of reviewers. Papers were judged according to their originality, relevance, and presentation quality. All papers were judged purely on their own merits, independently of other submissions. We would like to thank the authors of papers accepted for ISD'99, who all made gallant efforts to provide us with electronic copies of their manuscripts conforming to common guidelines. We thank them for thoughtfully responding to reviewers' comments and carefully preparing their final contributions. We thank Daryl Jones, provost of Boise State University, and William Lathen, dean, College of Business and Economics, for their support and encouragement.
This book constitutes the refereed proceedings of the 8th Asia Information Retrieval Societies Conference, AIRS 2012, held in Tianjin, China, in December 2012. The 22 full papers and 26 poster presentations included in this volume were carefully reviewed and selected from 77 submissions. They are organized in topical sections named: IR models; evaluation and user studies; NLP for IR; machine learning and data mining; social media; IR applications; multimedia IR and indexing; collaborative and federated search; and the poster session.
Variational Object-Oriented Programming Beyond Classes and Inheritance presents an approach for improving the standard object-oriented programming model. The proposal is aimed at supporting a larger range of incremental behavior variations and thus promises to be more effective in mastering the complexity of today's software. The material presented in this book is of interest both to beginners and to students or professionals with an advanced knowledge of object-oriented programming:
* The first part of the book can be used as supplementary material for students and professionals being introduced to object-oriented programming. It provides them with a very concise description of the main concepts of object-oriented programming, which are presented from a conceptual point of view rather than related to the features of a particular object-oriented programming language. The description of the main concepts is a synthesis of considerations from several leading works in data abstraction and object-oriented technology. Parts of the book are currently used as supplementary material for teaching a graduate course on object-oriented design.
* The book provides experienced programmers with a conceptual view of the relationship between object-oriented programming, data abstraction, and previous programming models that promotes a deep understanding of the essence of object-oriented programming.
* The book presents a synthesis of both the main achievements and the main shortcomings of object-oriented programming with respect to supporting incremental programming and promoting software reuse. It illustrates the behavior variations that can be performed incrementally and those that are not supported properly; the workarounds currently used for dealing with the latter case are described.
* Recent developments from ongoing research in object-oriented programming are presented, showing that the problems they deal with can actually be traced to some form of context-dependent behavior. The developments considered include design patterns, subject-oriented programming, adaptive programming, reflection, open implementations, and aspect-oriented programming.
* Advanced students interested in language design are not only provided with a comprehensive informal description of the new model, but also with a formal model and the description of a prototype implementation of RONDO embedded into the Smalltalk-80 environment. This can serve as a basis for experimenting with new concepts or with modifications of the proposed model.
* The last chapter of the book is particularly beneficial to the practitioners of object technology, since it deals with issues in maintaining reusable object-oriented systems.
The purpose of this monograph is to provide the mathematically literate reader with an accessible introduction to the theory of quantum computing algorithms, one component of a fascinating and rapidly developing area which involves topics from physics, mathematics, and computer science. The author briefly describes the historical context of quantum computing and provides the motivation, notation, and assumptions appropriate for quantum statics, a non-dynamical, finite dimensional model of quantum mechanics. This model is then used to define and illustrate quantum logic gates and representative subroutines required for quantum algorithms. A discussion of the basic algorithms of Simon and of Deutsch and Jozsa sets the stage for the presentation of Grover's search algorithm and Shor's factoring algorithm, key algorithms which crystallized interest in the practicality of quantum computers. A group theoretic abstraction of Shor's algorithms completes the discussion of algorithms. The last third of the book briefly elaborates the need for error-correction capabilities and then traces the theory of quantum error-correcting codes from the earliest examples to an abstract formulation in Hilbert space. This text is a good self-contained introductory resource for newcomers to the field of quantum computing algorithms, as well as a useful self-study guide for the more specialized scientist, mathematician, graduate student, or engineer. Readers interested in following the ongoing developments of quantum algorithms will benefit particularly from this presentation of the notation and basic theory.
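To make the flavour of these algorithms concrete, the following toy statevector simulation of Grover's search (an illustration, not material from the book) shows the oracle and inversion-about-the-mean steps and the familiar roughly (pi/4)*sqrt(N) iteration count.

```python
import numpy as np

def grover_search(n_qubits, marked):
    """Toy statevector simulation of Grover's search: amplify the
    amplitude of one marked basis state among N = 2**n_qubits."""
    N = 2 ** n_qubits
    state = np.full(N, 1 / np.sqrt(N))         # uniform superposition
    iterations = int(np.floor(np.pi / 4 * np.sqrt(N)))
    for _ in range(iterations):
        state[marked] *= -1                    # oracle: flip marked amplitude
        state = 2 * state.mean() - state       # diffusion: inversion about the mean
    return np.argmax(state ** 2), state[marked] ** 2

index, prob = grover_search(6, marked=42)
print(index, round(prob, 3))   # finds 42 with probability close to 1
```

A classical search over N unsorted items needs on the order of N probes, so the quadratic saving here is the whole point of the algorithm.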
A foreword is usually prepared by someone who knows the author or who knows enough to provide additional insight on the purpose of the work. When asked to write this foreword, I had no problem with what I wanted to say about the work or the author. I did, however, wonder why people read a foreword. It is probably of value to know the background of the writer of a book; it is probably also of value to know the background of the individual who is commenting on the work. I consider myself a good friend of the author, and when I was asked to write a few words I felt honored to provide my view of Ray Prasad, his expertise, and the contribution that he has made to our industry. This book is about the industry, its technology, and its struggle to learn and compete in a global market bursting with new ideas to satisfy a voracious appetite for new and innovative electronic products. I had the good fortune to be there at the beginning (or almost) and have witnessed the growth and excitement in the opportunities and challenges afforded the electronic industries' engineering and manufacturing talents. In a few years my involvement will span half a century.
The ability to store, manage, and give access to the huge quantity of data collected by astronomical observatories is one of the major challenges of modern astronomy. At the same time, the growing complexity of data systems implies a change of concepts: the scientist has to manipulate data as well as information. Recent developments of the World Wide Web bring interesting answers to these problems. The book presents a wide selection of databases, archives, data centers, and information systems. Clear and up-to-date descriptions are included, together with their scientific context and motivations. Audience: This volume provides an essential tool for astronomers, librarians, data specialists and computer engineers.
This book constitutes the refereed proceedings of the 12th International Conference on Cryptology in India, INDOCRYPT 2011, held in Chennai, India, in December 2011. The 22 revised full papers presented together with the abstracts of 3 invited talks and 3 tutorials were carefully reviewed and selected from 127 submissions. The papers are organized in topical sections on side-channel attacks, secret-key cryptography, hash functions, pairings, and protocols.
Mining Very Large Databases with Parallel Processing addresses the problem of large-scale data mining. It is an interdisciplinary text, describing advances in the integration of three computer science areas, namely `intelligent' (machine learning-based) data mining techniques, relational databases and parallel processing. The basic idea is to use concepts and techniques of the latter two areas - particularly parallel processing - to speed up and scale up data mining algorithms. The book is divided into three parts. The first part presents a comprehensive review of intelligent data mining techniques such as rule induction, instance-based learning, neural networks and genetic algorithms. Likewise, the second part presents a comprehensive review of parallel processing and parallel databases. Each of these parts includes an overview of commercially-available, state-of-the-art tools. The third part deals with the application of parallel processing to data mining. The emphasis is on finding generic, cost-effective solutions for realistic data volumes. Two parallel computational environments are discussed, the first excluding the use of commercial-strength DBMS, and the second using parallel DBMS servers. It is assumed that the reader has knowledge roughly equivalent to a first degree (BSc) in the exact sciences, so that (s)he is reasonably familiar with basic concepts of statistics and computer science. The primary audience for Mining Very Large Databases with Parallel Processing is industry data miners and practitioners in general, who would like to apply intelligent data mining techniques to large amounts of data. The book will also be of interest to academic researchers and postgraduate students, particularly database researchers, interested in advanced, intelligent database applications, and artificial intelligence researchers interested in industrial, real-world applications of machine learning.
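A minimal sketch of the general idea - parallelising the data-scanning phase of mining and merging partial results - might look like the following Python fragment; the partitioned data and the two-phase count/merge scheme are illustrative assumptions, not the book's specific algorithms.

```python
from multiprocessing import Pool
from collections import Counter

# Hypothetical partitioned transaction data; a parallel DBMS would
# supply such partitions directly.
partitions = [
    [{"a", "b"}, {"a", "c"}],
    [{"a", "b", "c"}, {"b", "c"}],
    [{"a"}, {"a", "b"}],
]

def count_items(partition):
    """Local phase: count item occurrences within one data partition."""
    counts = Counter()
    for transaction in partition:
        counts.update(transaction)
    return counts

if __name__ == "__main__":
    with Pool() as pool:                       # one worker process per core by default
        local_counts = pool.map(count_items, partitions)
    total = sum(local_counts, Counter())       # global merge phase
    print(total.most_common())                 # e.g. [('a', 5), ('b', 4), ('c', 3)]
```

Because each worker only ever scans its own partition, the expensive pass over the data scales with the number of processors, which is exactly the speed-up/scale-up argument the book develops.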
This book is about Granular Computing (GC) - an emerging conceptual and computing paradigm of information processing. As the name suggests, GC concerns the processing of complex information entities - information granules. In essence, information granules arise in the process of abstraction of data and derivation of knowledge from information. Information granules are everywhere. We commonly use granules of time (seconds, months, years). We granulate images; millions of pixels manipulated individually by computers appear to us as granules representing physical objects. In natural language, we operate on the basis of word-granules that become crucial entities used to realize interaction and communication between humans. Intuitively, we sense that information granules are at the heart of all our perceptual activities. In the past, several formal frameworks and tools, geared for processing specific information granules, have been proposed. Interval analysis, rough sets, and fuzzy sets have all played an important role in knowledge representation and processing. Subsequently, information granulation and information granules arose in numerous application domains. Well-known ideas of rule-based systems dwell inherently on information granules. Qualitative modeling, being one of the leading threads of AI, operates on a level of information granules. Multi-tier architectures and hierarchical systems (such as those encountered in control engineering), planning and scheduling systems all exploit information granularity. We also utilize information granules when it comes to functionality granulation, reusability of information and efficient ways of developing underlying information infrastructures.
Database and Application Security XV provides a forum for original research results, practical experiences, and innovative ideas in database and application security. With the rapid growth of large databases and the application systems that manage them, security issues have become a primary concern in business, industry, government and society. These concerns are compounded by the expanding use of the Internet and wireless communication technologies. This volume covers a wide variety of topics related to security and privacy of information in systems and applications, including:
* Access control models;
* Role and constraint-based access control;
* Distributed systems;
* Information warfare and intrusion detection;
* Relational databases;
* Implementation issues;
* Multilevel systems;
* New application areas including XML.
Database and Application Security XV contains papers, keynote addresses, and panel discussions from the Fifteenth Annual Working Conference on Database and Application Security, organized by the International Federation for Information Processing (IFIP) Working Group 11.3 and held July 15-18, 2001 in Niagara on the Lake, Ontario, Canada.
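For readers unfamiliar with the access-control vocabulary in the list above, a minimal role-based access control (RBAC) check can be sketched as follows; the roles, users, and permissions are purely hypothetical illustrations.

```python
# Minimal role-based access control (RBAC) sketch: permissions attach to
# roles, and users acquire permissions only through role membership.
ROLE_PERMISSIONS = {
    "analyst": {"read"},
    "dba":     {"read", "write", "grant"},
}
USER_ROLES = {"alice": {"dba"}, "bob": {"analyst"}}

def is_authorised(user, permission):
    """A user is authorised if any of their roles carries the permission."""
    return any(permission in ROLE_PERMISSIONS.get(role, set())
               for role in USER_ROLES.get(user, set()))

print(is_authorised("bob", "write"))    # False: analysts may only read
print(is_authorised("alice", "grant"))  # True: dba role carries grant
```

Constraint-based variants layer extra rules (such as mutually exclusive roles) on top of this same membership check.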
This book comprises the refereed proceedings of two international conferences, the International Conference on Green and Smart Technology, GST 2012, and the International Conference on Sensor and Its Applications, SIA 2012, held in Jeju Island, Korea, in November/December 2012. The papers presented were carefully reviewed and selected from numerous submissions and focus on the various aspects of green and smart technology with sensor applications.
This monograph is a slightly revised version of my PhD thesis [86], completed in the Department of Computer Science at the University of Edinburgh in June 1988, with an additional chapter summarising more recent developments. Some of the material has appeared in the form of papers [50,88]. The underlying theme of the monograph is the study of two classical problems: counting the elements of a finite set of combinatorial structures, and generating them uniformly at random. In their exact form, these problems appear to be intractable for many important structures, so interest has focused on finding efficient randomised algorithms that solve them approximately, with a small probability of error. For most natural structures the two problems are intimately connected at this level of approximation, so it is natural to study them together. At the heart of the monograph is a single algorithmic paradigm: simulate a Markov chain whose states are combinatorial structures and which converges to a known probability distribution over them. This technique has applications not only in combinatorial counting and generation, but also in several other areas such as statistical physics and combinatorial optimisation. The efficiency of the technique in any application depends crucially on the rate of convergence of the Markov chain.
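A minimal illustration of the paradigm (not taken from the monograph) is the standard single-site chain for sampling the independent sets of a graph approximately uniformly at random; the graph, step count, and function name below are toy choices.

```python
import random

# Toy graph given as adjacency sets; we sample its independent sets.
graph = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}

def mcmc_independent_set(graph, steps=10_000, seed=0):
    """Single-site Markov chain whose stationary distribution is uniform
    over the independent sets of the graph: pick a vertex at random and
    toggle its membership whenever the result is still independent."""
    rng = random.Random(seed)
    current = set()                            # the empty set is independent
    for _ in range(steps):
        v = rng.choice(list(graph))
        if v in current:
            current.remove(v)                  # removal always stays independent
        elif not (graph[v] & current):
            current.add(v)                     # add only if no neighbour is present
        # otherwise the chain holds still (self-loop), which keeps it aperiodic
    return current

print(mcmc_independent_set(graph))             # one (approximately) uniform sample
```

The transition probabilities between neighbouring states are symmetric, which is why the chain converges to the uniform distribution; how fast it converges is precisely the mixing-rate question the monograph studies.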
Since their invention in the late seventies, public key cryptosystems have become an indispensable asset in establishing private and secure electronic communication, and this need, given the tremendous growth of the Internet, is likely to continue growing. Elliptic curve cryptosystems represent the state of the art for such systems. Elliptic Curves and Their Applications to Cryptography: An Introduction provides a comprehensive and self-contained introduction to elliptic curves and how they are employed to secure public key cryptosystems. Even though the elegant mathematical theory underlying cryptosystems is considerably more involved than for other systems, this text requires the reader to have only an elementary knowledge of basic algebra. The text nevertheless leads to problems at the forefront of current research, featuring chapters on point counting algorithms and security issues. The adopted unifying approach treats with equal care elliptic curves over fields of even characteristic, which are especially suited for hardware implementations, and curves over fields of odd characteristic, which have traditionally received more attention. Elliptic Curves and Their Applications: An Introduction has been used successfully for teaching advanced undergraduate courses. It will be of greatest interest to mathematicians, computer scientists, and engineers who are curious about elliptic curve cryptography in practice, without losing the beauty of the underlying mathematics.
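For readers curious what the underlying arithmetic looks like, here is a toy sketch of point addition and scalar multiplication on a curve over a small prime field; the curve parameters are illustrative and far too small for real security.

```python
# Elliptic-curve arithmetic over GF(p) for y^2 = x^3 + a*x + b (mod p).
# Toy parameters only; real curves use primes of 256 bits or more.
p, a, b = 97, 2, 3

def ec_add(P, Q):
    """Add two curve points (None represents the point at infinity)."""
    if P is None: return Q
    if Q is None: return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return None                                      # P + (-P) = infinity
    if P == Q:
        s = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p   # tangent slope
    else:
        s = (y2 - y1) * pow(x2 - x1, -1, p) % p          # chord slope
    x3 = (s * s - x1 - x2) % p
    return x3, (s * (x1 - x3) - y1) % p

def ec_mul(k, P):
    """Scalar multiplication by double-and-add - the core ECC operation."""
    R = None
    while k:
        if k & 1: R = ec_add(R, P)
        P, k = ec_add(P, P), k >> 1
    return R

G = (3, 6)            # on the curve: 6^2 = 36 = 27 + 6 + 3 (mod 97)
print(ec_mul(5, G))   # 5G, another point on the curve
```

The security of elliptic curve cryptosystems rests on the difficulty of inverting ec_mul, that is, recovering k from k*G, the elliptic curve discrete logarithm problem.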
Early and accurate fault detection and diagnosis for modern chemical plants can minimise downtime, increase the safety of plant operations, and reduce manufacturing costs. The process-monitoring techniques that have been most effective in practice are based on models constructed almost entirely from process data. The goal of the book is to present the theoretical background and practical techniques for data-driven process monitoring. Process-monitoring techniques presented include: principal component analysis; Fisher discriminant analysis; partial least squares; canonical variate analysis.
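A minimal sketch of the first listed technique, PCA-based monitoring with the Hotelling T^2 statistic, might look as follows; the simulated data, component count, and function name are illustrative assumptions rather than the book's worked examples.

```python
import numpy as np

def pca_monitor(X_train, X_new, n_components=2):
    """Minimal PCA-based monitoring sketch: fit principal components on
    normal operating data, then score new samples with the Hotelling
    T^2 statistic; unusually large values suggest a fault."""
    mu, sigma = X_train.mean(0), X_train.std(0)
    Z = (X_train - mu) / sigma                 # standardise training data
    cov = np.cov(Z, rowvar=False)
    vals, vecs = np.linalg.eigh(cov)
    idx = np.argsort(vals)[::-1][:n_components]
    P, lam = vecs[:, idx], vals[idx]           # loadings and component variances
    scores = ((X_new - mu) / sigma) @ P        # project new data onto the model
    return (scores ** 2 / lam).sum(1)          # Hotelling T^2 per sample

# Toy usage with simulated data; real work would use plant measurements.
rng = np.random.default_rng(0)
normal = rng.normal(size=(500, 4))             # normal operating data
faulty = rng.normal(loc=3.0, size=(5, 4))      # shifted operating point
print(pca_monitor(normal, faulty))             # noticeably large T^2 values
```

In practice a control limit is placed on T^2 (and usually on a residual statistic as well), and an alarm is raised when new samples exceed it.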
Exploration of Visual Data presents the latest research efforts in the area of content-based exploration of image and video data. The main objective is to bridge the semantic gap between high-level concepts in the human mind and low-level features extractable by machines. The two key issues emphasized are "content-awareness" and "user-in-the-loop". The authors provide a comprehensive review of algorithms for visual feature extraction based on color, texture, shape, and structure, and techniques for incorporating such information to aid browsing, exploration, search, and streaming of image and video data. They also discuss issues related to the mixed use of textual and low-level visual features to facilitate more effective access to multimedia data. Exploration of Visual Data provides state-of-the-art materials on the topics of content-based description of visual data, content-based low-bitrate video streaming, and the latest asymmetric and nonlinear relevance feedback algorithms, which to date are unpublished.
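As a toy illustration of the kind of low-level colour feature reviewed in such work, the following sketch computes a quantised joint RGB histogram and compares two images by histogram intersection; all names and parameters here are illustrative, not the authors' algorithms.

```python
import numpy as np

def colour_histogram(image, bins=8):
    """Minimal colour-feature sketch: quantise each RGB channel into a
    few bins and return the normalised joint histogram, a classic
    low-level descriptor for content-based image retrieval."""
    quantised = (image // (256 // bins)).reshape(-1, 3)
    hist = np.zeros((bins, bins, bins))
    for r, g, b in quantised:
        hist[r, g, b] += 1
    return (hist / hist.sum()).ravel()         # 512-dim feature vector for bins=8

# Toy usage on random "images"; similarity = histogram intersection.
img1 = np.random.randint(0, 256, (32, 32, 3))
img2 = np.random.randint(0, 256, (32, 32, 3))
h1, h2 = colour_histogram(img1), colour_histogram(img2)
print(np.minimum(h1, h2).sum())                # 1.0 means identical histograms
```

The semantic gap the book addresses is precisely that two images can have similar histograms yet depict entirely different concepts, which is why relevance feedback from the user in the loop matters.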
This book constitutes the refereed proceedings of the International Conference on Security Technology, SecTech 2012, the International Conference on Control and Automation, CA 2012, and CES-CUBE 2012, the International Conference on Circuits, Control, Communication, Electricity, Electronics, Energy, System, Signal and Simulation; all held in conjunction with GST 2012 on Jeju Island, Korea, in November/December 2012. The papers presented were carefully reviewed and selected from numerous submissions and focus on the various aspects of security technology; control and automation; and circuits, control, communication, electricity, electronics, energy, system, signal and simulation.
You may like...
Comprehensive Metaheuristics… by S. Ali Mirjalili, Amir Hossein Gandomi (Paperback, R3,956)
Computational Intelligence for Machine… by Rajshree Srivastava, Pradeep Kumar Mallick, … (Hardcover, R3,875)