This study, written in the context of its first publication in 1970, discusses and documents the invasion of privacy by the corporation and the social institution in the search for efficiency in information processing. Discussing areas such as the impact of the computer on administration, privacy and the storage of information, the authors assess the technical and social feasibility of constructing integrated data banks to cover the details of populations. The book was hugely influential in terms of both scholarship and legislation, and the years following its publication saw the introduction of the Data Protection Act of 1984, which was then consolidated by the Act of 1998. The topics under discussion remain of great concern to the public in our increasingly web-based world, ensuring the continued relevance of this title to academics and students with an interest in data protection and public privacy.
The need for efficient content-based image retrieval has increased tremendously in areas such as biomedicine, the military, commerce, education, and Web image classification and searching. In the biomedical domain, content-based image retrieval can be used in patient digital libraries, clinical diagnosis, searching of 2-D electrophoresis gels, and pathology slides. Integrated Region-Based Image Retrieval presents a wavelet-based approach for feature extraction, combined with integrated region matching. An image in the database, or a portion of an image, is represented by a set of regions, roughly corresponding to objects, which are characterized by color, texture, shape, and location. A measure for the overall similarity between images is developed as a region-matching scheme that integrates properties of all the regions in the images. The advantage of this "soft matching" is that it makes the metric robust to poor segmentation, an important property that previous research had not achieved. Integrated Region-Based Image Retrieval demonstrates an experimental image retrieval system called SIMPLIcity (Semantics-sensitive Integrated Matching for Picture LIbraries). This system validates these methods on various image databases, showing that they perform much better and much faster than existing ones. The system is exceptionally robust to image alterations such as intensity variation, sharpness variation, intentional distortions, cropping, shifting, and rotation. These features are extremely important for biomedical image databases, since the visual features in a query image are rarely exactly the same as those in the images in the database. Integrated Region-Based Image Retrieval is an excellent reference for researchers in the fields of image retrieval, multimedia, computer vision and image processing.
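As an illustration of the soft matching idea described above, here is a minimal Python sketch of an integrated region-matching distance. It is not the book's SIMPLIcity implementation; the greedy credit-spreading scheme, the feature layout, and the toy weights are simplifying assumptions made for exposition.

```python
import numpy as np

def region_distance(r1, r2):
    """Euclidean distance between two region feature vectors
    (e.g. colour, texture, shape, location descriptors)."""
    return float(np.linalg.norm(r1 - r2))

def irm_similarity(regions_a, weights_a, regions_b, weights_b):
    """Soft region-matching distance between two images.

    Each image is a set of region feature vectors with area weights
    summing to 1.  Instead of forcing a one-to-one region assignment,
    significance credit is spread greedily across the most similar
    region pairs, which keeps the measure tolerant of over- or
    under-segmentation.
    """
    wa, wb = list(weights_a), list(weights_b)
    # All pairwise distances, most similar pairs first.
    pairs = sorted(
        ((region_distance(ra, rb), i, j)
         for i, ra in enumerate(regions_a)
         for j, rb in enumerate(regions_b)),
        key=lambda t: t[0],
    )
    total = 0.0
    for d, i, j in pairs:
        s = min(wa[i], wb[j])   # credit still available to this pair
        if s <= 0.0:
            continue
        total += s * d          # weighted contribution to the distance
        wa[i] -= s
        wb[j] -= s
    return total                # smaller means more similar

# Two toy "images": three regions vs. two regions, weights sum to 1.
img_a = [np.array([0.2, 0.5]), np.array([0.8, 0.1]), np.array([0.4, 0.4])]
img_b = [np.array([0.25, 0.5]), np.array([0.7, 0.2])]
print(irm_similarity(img_a, [0.5, 0.3, 0.2], img_b, [0.6, 0.4]))
```

Because every region can share credit with several regions on the other side, a region that was split by poor segmentation still contributes sensibly to the overall score.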
Covering some of the most cutting-edge research on the delivery and retrieval of interactive multimedia content, this volume of specially chosen contributions provides the most updated perspective on one of the hottest contemporary topics. The material represents extended versions of papers presented at the 11th International Workshop on Image Analysis for Multimedia Interactive Services, a vital international forum on this fast-moving field. Logically organized in discrete sections that approach the subject from its various angles, the content deals in turn with content analysis, motion and activity analysis, high-level descriptors and video retrieval, 3-D and multi-view, and multimedia delivery. The chapters cover the finest detail of emerging techniques such as the use of high-level audio information in improving scene segmentation and the use of subjective logic for forensic visual surveillance. On content delivery, the book examines both images and video, focusing on key subjects including an efficient pre-fetching strategy for JPEG 2000 image sequences. Further contributions look at new methodologies for simultaneous block reconstruction and provide a trellis-based algorithm for faster motion-vector decision making.
Multimedia Mining: A Highway to Intelligent Multimedia Documents brings together experts in digital media content analysis, state-of-the-art data mining and knowledge discovery in multimedia database systems, knowledge engineers and domain experts from diverse applied disciplines. Multimedia documents are ubiquitous and often required, if not essential, in many applications today. This phenomenon has made multimedia collections widespread and extremely large. There are tools for managing and searching within these collections, but the need for tools to extract hidden useful knowledge embedded within multimedia objects is becoming pressing and central for many decision-making applications. The tools needed today are tools for discovering relationships between objects or segments within multimedia document components, such as classifying images based on their content, extracting patterns in sound, categorizing speech and music, and recognizing and tracking objects in video streams.
This book examines the techniques and applications involved in the Web Mining, Web Personalization and Recommendation and Web Community Analysis domains, including a detailed presentation of the principles, developed algorithms, and systems of the research in these areas. The applications of web mining, and the issue of how to incorporate web mining into web personalization and recommendation systems are also reviewed. Additionally, the volume explores web community mining and analysis to find the structural, organizational and temporal developments of web communities and reveal the societal sense of individuals or communities. The volume will benefit both academic and industry communities interested in the techniques and applications of web search, web data management, web mining and web knowledge discovery, as well as web community and social network analysis.
Also in: The Kluwer International Series on Asian Studies in Computer and Information Science, Volume 2
The volume "Fuzziness in Database Management Systems" is a highly informative, well-organized and up-to-date collection of contributions authored by many of the leading experts in its field. Among the contributors are the editors, Professors Patrick Bose and Janusz Kacprzyk, both of whom are known internationally. The book is like a movie with an all-star cast. The issue of fuzziness in database management systems has a long history. It begins in 1968 and 1971, when I spent my sabbatical leaves at the IBM Research Laboratory in San Jose, California, as a visiting scholar. During these periods I was associated with Dr. E.F. Codd, the father of relational models of database systems, and came in contact with the developers ofiBMs System Rand SQL. These associations and contacts at a time when the methodology of relational models of data was in its formative stages, made me aware of the basic importance of such models and the desirability of extending them to fuzzy database systems and fuzzy query languages. This perception was reflected in my 1973 ffiM report which led to the paper on the concept of a linguistic variable and later to the paper on the meaning representation language PRUF (Possibilistic Relational Universal Fuzzy). More directly related to database issues during that period were the theses of my students V. Tahani, J. Yang, A. Bolour, M. Shen and R. Sheng, and many subsequent reports by both graduate and undergraduate students at Berkeley.
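To make the notion of a fuzzy query concrete, the following small Python sketch evaluates the soft condition "salary is high" against a toy relation. The membership function and the data are invented for illustration and are not taken from the book.

```python
def high_salary(salary, low=40_000, high=90_000):
    """Membership degree of 'salary is high' in [0, 1]:
    0 below `low`, 1 above `high`, linear in between."""
    if salary <= low:
        return 0.0
    if salary >= high:
        return 1.0
    return (salary - low) / (high - low)

employees = [
    {"name": "Ava", "salary": 95_000},
    {"name": "Ben", "salary": 62_000},
    {"name": "Cy",  "salary": 38_000},
]

# A fuzzy SELECT: every tuple is returned with a degree of matching
# rather than a hard yes/no, here thresholded at 0.5.
answers = sorted(
    ((high_salary(e["salary"]), e["name"]) for e in employees),
    reverse=True,
)
for degree, name in answers:
    if degree >= 0.5:
        print(f"{name}: matches 'salary is high' to degree {degree:.2f}")
```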
Cryptography, secret writing, is enjoying a scientific renaissance following the seminal discovery in 1977 of public-key cryptography and its applications in computers and communications. This book gives a broad overview of public-key cryptography - its essence and advantages, various public-key cryptosystems, and protocols - as well as a comprehensive introduction to classical cryptography and cryptanalysis. The second edition has been revised and enlarged, especially in its treatment of cryptographic protocols. From a review of the first edition: "This is a comprehensive review ... there can be no doubt that this will be accepted as a standard text. At the same time, it is clearly and entertainingly written ... and can certainly stand alone." Alex M. Andrew, Kybernetes, March 1992
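For readers new to the subject, the essence of public-key cryptography that the book surveys can be shown with a toy RSA example in Python. The primes here are far too small to be secure and no padding scheme is used, so this is purely illustrative.

```python
# Toy RSA with tiny primes -- illustration only, nothing like a
# secure key size, and no padding scheme.
p, q = 61, 53
n = p * q                          # public modulus
phi = (p - 1) * (q - 1)            # Euler's totient of n
e = 17                             # public exponent, coprime to phi
d = pow(e, -1, phi)                # private exponent: e*d = 1 (mod phi)

message = 42                       # any integer < n
ciphertext = pow(message, e, n)    # encrypt with the *public* key (e, n)
recovered = pow(ciphertext, d, n)  # decrypt with the *private* key (d, n)
assert recovered == message
```

The asymmetry is the point: anyone holding (e, n) can encrypt, but only the holder of d can decrypt.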
Real-time computer systems are very often subject to dependability requirements because of their application areas. Fly-by-wire airplane control systems, control of power plants, industrial process control systems and others are required to continue functioning despite faults. Fault-tolerance and real-time requirements thus constitute a natural combination in process control applications. Systematic fault-tolerance is based on redundancy, which is used to mask failures of individual components. The problem of replica determinism is thereby to ensure that replicated components show consistent behavior in the absence of faults. It might seem trivial that, given an identical sequence of inputs, replicated computer systems will produce consistent outputs. Unfortunately, this is not the case. The problem of replica non-determinism, and the presentation of its possible solutions, is the subject of Fault-Tolerant Real-Time Systems: The Problem of Replica Determinism. The field of automotive electronics is an important application area of fault-tolerant real-time systems. Systems like anti-lock braking, engine control, active suspension or vehicle dynamics control have demanding real-time and fault-tolerance requirements. These requirements have to be met even in the presence of very limited resources, since cost is extremely important. Because of these interesting properties, Fault-Tolerant Real-Time Systems gives an introduction to the application area of automotive electronics. The requirements of automotive electronics are discussed in the remainder of the work and are used as a benchmark to evaluate solutions to the problem of replica determinism.
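A tiny Python simulation illustrates how replica non-determinism can arise even without faults: three replicas observe the "same" sensor signal with slightly different timing jitter and can reach different threshold decisions. The numbers and the scenario are invented for illustration.

```python
import random

def replica_decision(sensor_value, read_jitter):
    """One replica reads a noisy sensor at a slightly different instant
    and decides whether to raise an alarm.  The inputs are 'the same'
    signal, yet timing jitter alone can flip the decision near the
    threshold -- the essence of replica non-determinism."""
    observed = sensor_value + read_jitter
    return observed >= 100.0        # threshold decision

random.seed(1)
signal = 99.9                       # true value sits right at the threshold
decisions = [replica_decision(signal, random.uniform(-0.5, 0.5))
             for _ in range(3)]
print(decisions)                    # replicas may disagree without any fault
# A replica-deterministic scheme would instead agree on one observed
# value (e.g. exchange readings and vote) *before* applying the threshold.
```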
Data Mining Methods for Knowledge Discovery provides an introduction to the data mining methods that are frequently used in the process of knowledge discovery. This book first elaborates on the fundamentals of each of the data mining methods: rough sets, Bayesian analysis, fuzzy sets, genetic algorithms, machine learning, neural networks, and preprocessing techniques. The book then goes on to thoroughly discuss these methods in the setting of the overall process of knowledge discovery. Numerous illustrative examples and experimental findings are also included. Each chapter comes with an extensive bibliography. Data Mining Methods for Knowledge Discovery is intended for senior undergraduate and graduate students, as well as a broad audience of professionals in computer and information sciences, medical informatics, and business information systems.
Earth date, August 11, 1997. "Beam me up, Scotty!" "We cannot do it! This is not Star Trek's Enterprise. This is early-years Earth." True, this is not yet the era of Star Trek: we cannot beam Captain James T. Kirk or Captain Jean-Luc Picard or an apple or anything else anywhere. What we can do, though, is beam information about Kirk or Picard or an apple or an insurance agent. We can beam a record of a patient, the status of an engine, a weather report. We can beam this information anywhere: to mobile workers, to field engineers, to a truck loading apples, to ships crossing the oceans, to web surfers. We have reached a point where the promise of information access anywhere and anytime is close to realization. The enabling technology, wireless networks, exists; what remains to be achieved is providing the infrastructure and the software to support the promise. Universal access and management of information has been one of the driving forces in the evolution of computer technology. Central computing gave the ability to perform large and complex computations and advanced information manipulation. Advances in networking connected computers together and led to distributed computing. Web technology and the Internet went even further to provide hyper-linked information access and global computing. However, restricting access stations to physical locations limits the boundary of the vision.
Handbook of Economic Expectations discusses the state of the art in the collection, study and use of expectations data in economics, including the modelling of expectations formation and updating, as well as open questions and directions for future research. The book spans a broad range of fields, approaches and applications that use data on subjective expectations, allowing progress on fundamental questions around the formation and updating of expectations by economic agents and their information sets. The information included will help us study heterogeneity and potential biases in expectations and analyze their impact on behavior and decision-making under uncertainty.
Very little has been written to address the emerging trends in social software and technology. With these technologies and applications being relatively new and evolving rapidly, research is wide open in these fields. Social Software and Web 2.0 Technology Trends fills this critical research need, providing an overview of the current state of Web 2.0 technologies and their impact on organizations and educational institutions. Written for academicians and practicing managers, this estimable book presents business applications as well as implementations for institutions of higher education with numerous examples of how these technologies are currently being used. Delivering authoritative insights to a rapidly evolving domain of technology application, this book is an invaluable resource for both academic libraries and for classroom instruction.
Multivariate data analysis is a central tool whenever several variables need to be considered at the same time. The present book explains a powerful and versatile way to analyse data tables, suitable also for researchers without formal training in statistics. This method for extracting useful information from data is demonstrated for various types of quality assessment, ranging from human quality perception via industrial quality monitoring to health quality and its molecular basis. The book is written with ISO certified businesses and laboratories in mind, to enhance Total Quality Management (TQM). As yet there are no clear guidelines for realistic data analysis of quality in complex systems; this volume bridges the gap.
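The blurb does not commit to a single technique, but principal component analysis is a representative bilinear method for extracting structure from such data tables. The following Python sketch, on invented quality measurements, shows the scores/loadings decomposition.

```python
import numpy as np

# A small data table: rows = samples (e.g. product batches),
# columns = measured quality variables.  Values are invented.
X = np.array([
    [4.0, 2.1, 0.60, 7.2],
    [4.4, 2.3, 0.65, 7.0],
    [3.1, 1.4, 0.40, 8.1],
    [5.0, 2.8, 0.71, 6.5],
    [3.3, 1.6, 0.45, 7.9],
])

Xc = X - X.mean(axis=0)              # centre each variable
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = U * s                       # sample coordinates ("scores")
loadings = Vt                        # variable directions ("loadings")
explained = s**2 / np.sum(s**2)      # variance captured per component
print(f"first component explains {explained[0]:.0%} of the variance")
```

A plot of the first two score columns is often enough to see which samples cluster together, while the loadings show which measured variables drive the separation.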
The advances of live cell video imaging and high-throughput technologies for functional and chemical genomics provide unprecedented opportunities to understand how biological processes work in subcellular and multicellular systems. The interdisciplinary research field of Video Bioinformatics is defined by Bir Bhanu as the automated processing, analysis, understanding, data mining, visualization, and query-based retrieval/storage of biological spatiotemporal events/data and knowledge extracted from dynamic images and microscopic videos. Video bioinformatics attempts to provide a deeper understanding of continuous and dynamic life processes. Genome sequences alone lack spatial and temporal information, and video imaging of specific molecules and their spatiotemporal interactions, using a range of imaging methods, is essential to understand how genomes create cells, how cells constitute organisms, and how errant cells cause disease. The book examines interdisciplinary research issues and challenges with examples that deal with organismal dynamics, intercellular and tissue dynamics, intracellular dynamics, protein movement, cell signaling, and software and databases for video bioinformatics.
Topics and features:
* Covers a set of biological problems, their significance, live-imaging experiments, theory and computational methods, quantifiable experimental results, and discussion of results
* Provides automated methods for analyzing mild traumatic brain injury over time, identifying injury dynamics after neonatal hypoxia-ischemia, and visualizing cortical tissue changes during seizure activity as examples of organismal dynamics
* Describes techniques for quantifying the dynamics of human embryonic stem cells, with examples of cell detection/segmentation, spreading and other dynamic behaviors which are important for characterizing stem cell health
* Examines and quantifies dynamic processes in plant and fungal systems, such as cell trafficking and growth of pollen tubes, in model systems such as Neurospora crassa and Arabidopsis
* Discusses the dynamics of intracellular molecules for DNA repair and the regulation of cofilin transport using video analysis
* Discusses software, system and database aspects of video bioinformatics by providing examples of 5D cell tracking by the FARSIGHT open source toolkit, a survey of available databases and software, biological processes for non-verbal communications, and identification and retrieval of moth images
This unique text will be of great interest to researchers and graduate students of Electrical Engineering, Computer Science, Bioengineering, Cell Biology, Toxicology, Genetics, Genomics, Bioinformatics, Computer Vision and Pattern Recognition, Medical Image Analysis, and Cell, Molecular and Developmental Biology. The large number of example applications will also appeal to application scientists and engineers.
Dr. Bir Bhanu is Distinguished Professor of Electrical & Computer Engineering, Interim Chair of the Department of Bioengineering, Cooperative Professor of Computer Science & Engineering and Mechanical Engineering, and the Director of the Center for Research in Intelligent Systems at the University of California, Riverside, California, USA. Dr. Prue Talbot is Professor of Cell Biology & Neuroscience and Director of the Stem Cell Center and Core at the University of California, Riverside, California, USA.
This volume contains the proceedings of two conferences held as part of the 21st IFIP World Computer Congress in Brisbane, Australia, 20-23 September 2010. The first part of the book presents the proceedings of DIPES 2010, the 7th IFIP Conference on Distributed and Parallel Embedded Systems. The conference, introduced in a separate preface by the Chairs, covers a range of topics from specification and design of embedded systems through to dependability and fault tolerance. The second part of the book contains the proceedings of BICC 2010, the 3rd IFIP Conference on Biologically-Inspired Collaborative Computing. The conference is concerned with emerging techniques from research areas such as organic computing, autonomic computing and self-adaptive systems, where inspiration for techniques derives from exhibited behaviour in nature and biology. Such techniques require the use of research developed by the DIPES community in supporting collaboration over multiple systems. We hope that the combination of the two proceedings will add value for the reader and advance our related work.
Semantics, Web services, and Web processes promise better re-use, universal interoperability and integration. Semantics has been recognized as the primary tool to address the challenges of a broad spectrum of heterogeneity and for improving automation through machine-understandable descriptions. Semantic Web Services, Processes and Applications brings contributions from researchers who study, explore and understand the semantic enabling of all phases of semantic Web processes. This encompasses design, annotation, discovery, choreography and composition. The book also presents fundamental capabilities and techniques associated with ontological modeling of services, annotation, matching and mapping, and reasoning. This is complemented by a discussion of applications in e-Government and bioinformatics. Special bulk rates are available for course adoption through the Publishing Editor.
In the chapters in Part I of this textbook the author introduces the fundamental ideas of artificial intelligence and computational intelligence. In Part II he explains key AI methods such as search, evolutionary computing, logic-based reasoning, knowledge representation, rule-based systems, pattern recognition, neural networks, and cognitive architectures. Finally, in Part III, he expands the context to discuss theories of intelligence in philosophy and psychology, key applications of AI systems, and the likely future of artificial intelligence. A key feature of the author's approach is the use of historical and biographical footnotes, stressing the multidisciplinary character of the field and its pioneers. The book is appropriate for advanced undergraduate and graduate courses in computer science, engineering, and other applied sciences, and the appendices offer short formal mathematical models and notes to support the reader.
Decision diagrams (DDs) are data structures for efficient (time/space) representations of large discrete functions. In addition to their wide application in engineering practice, DDs are now a standard part of many CAD systems for logic design and a basis for several signal processing algorithms. "Spectral Interpretation of Decision Diagrams" derives from attempts to classify and uniformly interpret DDs through spectral interpretation methods, relating them to different Fourier-series-like functional expressions for discrete functions and a group-theoretic approach to DD optimization. The book examines DDs found in the literature and in engineering practice and provides insights into the relationships between DDs and different polynomial or spectral expressions for the representation of discrete functions. In addition, it offers guidelines and criteria for selecting the most suitable representation in terms of space and time complexity. The work complements theory with numerous illustrative examples from practice. Moreover, the importance of DD representations to the verification and testing of arithmetic circuits is addressed, as well as problems related to various signal processing tasks.
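As a flavour of the "Fourier-series-like" expressions mentioned above, the Python sketch below computes the Walsh-Hadamard spectrum of a small Boolean function. The function and its encoding are chosen for illustration; the book's group-theoretic treatment is far more general.

```python
import numpy as np

def walsh_hadamard(values):
    """Fast Walsh-Hadamard transform of a length-2^n vector.
    Applied to a Boolean function's truth table (in +/-1 encoding), it
    yields the spectral coefficients that spectral interpretations of
    decision diagrams are built on."""
    a = np.array(values, dtype=float)
    h = 1
    while h < len(a):
        for i in range(0, len(a), 2 * h):
            for j in range(i, i + h):
                a[j], a[j + h] = a[j] + a[j + h], a[j] - a[j + h]
        h *= 2
    return a

# Truth table of f(x1, x2, x3) = x1 XOR x2 (inputs in binary order),
# encoded as (-1)^f so a single spectral coefficient carries everything.
truth = [0, 0, 1, 1, 1, 1, 0, 0]
signed = [(-1) ** v for v in truth]
print(walsh_hadamard(signed))   # one coefficient of 8, the rest zero
```

The concentrated spectrum of this XOR-like function is exactly the kind of structure a spectral representation exploits, while a plain truth table would treat all eight entries as unrelated.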
Information Systems and Data Compression presents a uniform approach and methodology for designing intelligent information systems. A framework for information concepts is introduced for various types of information systems such as communication systems, information storage systems and systems for simplifying structured information. The book introduces several new concepts and presents a novel interpretation of a wide range of topics in communications, information storage, and information compression. Numerous illustrations for designing information systems for compression of digital data and images are used throughout the book.
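The book's design framework is its own and is not reproduced here; as a concrete anchor for the compression theme, below is a minimal Huffman coder in Python, a standard building block for the kind of data compression the book illustrates. The input string is arbitrary.

```python
import heapq
from collections import Counter

def huffman_codes(text):
    """Build a prefix-free (Huffman) code for the symbols of `text`:
    repeatedly merge the two least frequent subtrees, then read each
    symbol's code off its path from the root."""
    heap = [[freq, i, sym]
            for i, (sym, freq) in enumerate(Counter(text).items())]
    heapq.heapify(heap)
    i = len(heap)
    while len(heap) > 1:
        lo = heapq.heappop(heap)
        hi = heapq.heappop(heap)
        heapq.heappush(heap, [lo[0] + hi[0], i, (lo, hi)])
        i += 1
    codes = {}
    def walk(node, prefix):
        payload = node[2]
        if isinstance(payload, tuple):      # internal node: recurse
            walk(payload[0], prefix + "0")
            walk(payload[1], prefix + "1")
        else:                               # leaf: record the code
            codes[payload] = prefix or "0"
    walk(heap[0], "")
    return codes

print(huffman_codes("abracadabra"))  # frequent symbols get shorter codes
```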
This thesis focuses primarily on how to carry out intelligent sensing and understand high-dimensional and low-quality visual information. After exploring the inherent structures of the visual data, it proposes a number of computational models covering an extensive range of mathematical topics, including compressive sensing, graph theory, probabilistic learning and information theory. These computational models are also applied to a number of real-world problems, including biometric recognition, stereo signal reconstruction, natural scene parsing, and SAR image processing.
This book presents a specific and unified approach to Knowledge Discovery and Data Mining, termed the Information Fuzzy Network (IFN) methodology. Data Mining (DM) is the science of modelling and generalizing common patterns from large sets of multi-type data. DM is part of KDD, the overall process of Knowledge Discovery in Databases. The accessibility and abundance of information today make this a topic of particular importance and need. The book has three main parts, complemented by appendices as well as software and project data that are accessible from the book's web site (http://www.eng.tau.ac.il/~maimon/ifn-kdg). Part I (Chapters 1-4) starts with the topic of KDD and DM in general and makes reference to other works in the field, especially those related to the information-theoretic approach. The remainder of the book presents our work, starting with the IFN theory and algorithms. Part II (Chapters 5-6) discusses the methodology of application and includes case studies. Then in Part III (Chapters 7-9) a comparative study is presented, concluding with some advanced methods and open problems. The IFN, being a generic methodology, applies to a variety of fields, such as manufacturing, finance, health care, medicine, insurance, and human resources. The appendices expand on the relevant theoretical background and present descriptions of sample projects (including detailed results).
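The IFN algorithms themselves are developed in the book; to give a flavour of the information-theoretic approach they build on, this Python sketch scores an attribute by its mutual information with a target on an invented toy table. The attribute names and data are hypothetical.

```python
import math
from collections import Counter

def mutual_information(xs, ys):
    """I(X;Y) in bits from paired samples -- the kind of score an
    information-theoretic method uses to pick the next attribute."""
    n = len(xs)
    px, py = Counter(xs), Counter(ys)
    pxy = Counter(zip(xs, ys))
    return sum(
        (c / n) * math.log2((c / n) / ((px[x] / n) * (py[y] / n)))
        for (x, y), c in pxy.items()
    )

# Invented example: attribute 'outlook' vs. target 'play'.
outlook = ["sun", "sun", "rain", "rain", "sun", "rain"]
play    = ["no",  "no",  "yes",  "yes",  "no",  "yes"]
print(f"I(outlook; play) = {mutual_information(outlook, play):.2f} bits")
```

Here 'outlook' fully determines 'play', so the score equals the full entropy of the target, one bit.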
Information and communication technology (ICT) is permeating all aspects of service management; in the public sector, ICT is improving the capacity of government agencies to provide a wide array of innovative services that benefit citizens. E-Government is emerging as a multidisciplinary field of research, based initially on empirical insights from practice. Efforts to theoretically anchor the field have opened perspectives from multiple research domains, as demonstrated in Practical Studies in E-Government. In this volume, the editors and contributors consider the evolution of the e-government field from both practical and research perspectives. Featuring in-depth case studies of initiatives in eight countries, the book deals with such technology-oriented issues as interoperability, prototyping, data quality, and advanced interfaces, and with such management-oriented issues as e-procurement, e-identification, election results verification, and information privacy. The book features best practices, tools for measuring and improving performance, and analytical methods for researchers.
This book assembles contributions from computer scientists and librarians that altogether encompass the complete range of tools, tasks and processes needed to successfully preserve the cultural heritage of the Web. It combines the librarian's application knowledge with the computer scientist's implementation knowledge, and serves as a standard introduction for everyone involved in keeping alive the immense amount of online information.
You may like...
* Big Data and Smart Service Systems - Xiwei Liu, Rangachari Anand, … (Hardcover)
* Data Analytics for Social Microblogging… - Soumi Dutta, Asit Kumar Das, … (Paperback) - R3,335 / Discovery Miles 33 350
* Mathematical Methods in Data Science - Jingli Ren, Haiyan Wang (Paperback) - R3,925 / Discovery Miles 39 250
* Fundamentals of Spatial Information… - Robert Laurini, Derek Thompson (Hardcover) - R1,451 / Discovery Miles 14 510
* Management Of Information Security - Michael Whitman, Herbert Mattord (Paperback)