The need for efficient content-based image retrieval has increased tremendously in areas such as biomedicine, military, commerce, education, and Web image classification and searching. In the biomedical domain, content-based image retrieval can be used in patient digital libraries, clinical diagnosis, searching of 2-D electrophoresis gels, and pathology slides. Integrated Region-Based Image Retrieval presents a wavelet-based approach for feature extraction, combined with integrated region matching. An image in the database, or a portion of an image, is represented by a set of regions, roughly corresponding to objects, which are characterized by color, texture, shape, and location. A measure for the overall similarity between images is developed as a region-matching scheme that integrates properties of all the regions in the images. The advantage of this "soft matching" is that it makes the metric robust to poor segmentation, an important problem that previous research had not solved. Integrated Region-Based Image Retrieval demonstrates an experimental image retrieval system called SIMPLIcity (Semantics-sensitive Integrated Matching for Picture LIbraries). This system validates these methods on various image databases, showing that they perform much better and much faster than existing ones. The system is exceptionally robust to image alterations such as intensity variation, sharpness variation, intentional distortions, cropping, shifting, and rotation. These features are extremely important for biomedical image databases, since the visual features in a query image are rarely exactly the same as those in the images in the database. Integrated Region-Based Image Retrieval is an excellent reference for researchers in the fields of image retrieval, multimedia, computer vision and image processing.
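As a rough illustration of the "soft matching" idea described above, the following sketch (hypothetical function name and data, not the SIMPLIcity implementation) spreads each region's significance weight over the regions of the other image, most similar pairs first, so that no single poorly segmented region dominates the overall distance.

```python
import numpy as np

def irm_distance(regions_a, weights_a, regions_b, weights_b):
    """Greedy 'soft matching' distance between two region sets.

    regions_*: (n, d) arrays of per-region feature vectors
    weights_*: per-region significance weights, each set summing to 1
    Every region's weight is spread over regions of the other image,
    most similar pairs first, so segmentation errors are tolerated.
    """
    wa = np.asarray(weights_a, dtype=float).copy()
    wb = np.asarray(weights_b, dtype=float).copy()
    # Pairwise Euclidean distances between region feature vectors.
    diff = regions_a[:, None, :] - regions_b[None, :, :]
    d = np.sqrt((diff ** 2).sum(axis=2))
    # Visit region pairs from most to least similar and allocate
    # the smaller remaining weight of the pair to that match.
    order = np.dstack(np.unravel_index(np.argsort(d, axis=None), d.shape))[0]
    total = 0.0
    for i, j in order:
        s = min(wa[i], wb[j])
        if s <= 0.0:
            continue
        total += s * d[i, j]
        wa[i] -= s
        wb[j] -= s
    return total

# Toy example: two region features per image A, three per image B.
a = np.array([[0.2, 0.3], [0.8, 0.1]]); wa = [0.6, 0.4]
b = np.array([[0.25, 0.28], [0.7, 0.2], [0.1, 0.9]]); wb = [0.5, 0.3, 0.2]
print(irm_distance(a, wa, b, wb))
```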
Covering some of the most cutting-edge research on the delivery and retrieval of interactive multimedia content, this volume of specially chosen contributions provides the most up-to-date perspective on one of the hottest contemporary topics. The material represents extended versions of papers presented at the 11th International Workshop on Image Analysis for Multimedia Interactive Services, a vital international forum on this fast-moving field. Logically organized in discrete sections that approach the subject from its various angles, the content deals in turn with content analysis, motion and activity analysis, high-level descriptors and video retrieval, 3-D and multi-view, and multimedia delivery. The chapters cover the finest detail of emerging techniques such as the use of high-level audio information in improving scene segmentation and the use of subjective logic for forensic visual surveillance. On content delivery, the book examines both images and video, focusing on key subjects including an efficient pre-fetching strategy for JPEG 2000 image sequences. Further contributions look at new methodologies for simultaneous block reconstruction and provide a trellis-based algorithm for faster motion-vector decision making.
Multimedia Mining: A Highway to Intelligent Multimedia Documents brings together experts in digital media content analysis, state-of-the-art data mining and knowledge discovery in multimedia database systems, as well as knowledge engineers and domain experts from diverse applied disciplines. Multimedia documents are ubiquitous and often required, if not essential, in many applications today. This phenomenon has made multimedia collections widespread and extremely large. There are tools for managing and searching within these collections, but the need for tools to extract hidden useful knowledge embedded within multimedia objects is becoming pressing and central for many decision-making applications. The tools needed today are tools for discovering relationships between objects or segments within multimedia document components, such as classifying images based on their content, extracting patterns in sound, categorizing speech and music, and recognizing and tracking objects in video streams.
Also in: The Kluwer International Series on Asian Studies in Computer and Information Science, Volume 2
The volume "Fuzziness in Database Management Systems" is a highly informative, well-organized and up-to-date collection of contributions authored by many of the leading experts in its field. Among the contributors are the editors, Professors Patrick Bose and Janusz Kacprzyk, both of whom are known internationally. The book is like a movie with an all-star cast. The issue of fuzziness in database management systems has a long history. It begins in 1968 and 1971, when I spent my sabbatical leaves at the IBM Research Laboratory in San Jose, California, as a visiting scholar. During these periods I was associated with Dr. E.F. Codd, the father of relational models of database systems, and came in contact with the developers ofiBMs System Rand SQL. These associations and contacts at a time when the methodology of relational models of data was in its formative stages, made me aware of the basic importance of such models and the desirability of extending them to fuzzy database systems and fuzzy query languages. This perception was reflected in my 1973 ffiM report which led to the paper on the concept of a linguistic variable and later to the paper on the meaning representation language PRUF (Possibilistic Relational Universal Fuzzy). More directly related to database issues during that period were the theses of my students V. Tahani, J. Yang, A. Bolour, M. Shen and R. Sheng, and many subsequent reports by both graduate and undergraduate students at Berkeley.
Real-time computer systems are very often subject to dependability requirements because of their application areas. Fly-by-wire airplane control systems, control of power plants, industrial process control systems and others are required to continue their function despite faults. Fault-tolerance and real-time requirements thus constitute a natural combination in process control applications. Systematic fault-tolerance is based on redundancy, which is used to mask failures of individual components. The problem of replica determinism is then to ensure that replicated components show consistent behavior in the absence of faults. It might seem trivial that, given an identical sequence of inputs, replicated computer systems will produce consistent outputs. Unfortunately, this is not the case. The problem of replica non-determinism and the presentation of its possible solutions is the subject of Fault-Tolerant Real-Time Systems: The Problem of Replica Determinism. The field of automotive electronics is an important application area of fault-tolerant real-time systems. Systems like anti-lock braking, engine control, active suspension or vehicle dynamics control have demanding real-time and fault-tolerance requirements. These requirements have to be met even in the presence of very limited resources, since cost is extremely important. Because of these interesting properties, Fault-Tolerant Real-Time Systems gives an introduction to the application area of automotive electronics. The requirements of automotive electronics are discussed in the remainder of the work and are used as a benchmark to evaluate solutions to the problem of replica determinism.
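A minimal sketch of why identical inputs are not enough (a hypothetical scenario, not taken from the book): two replicas apply the same timeout rule to the same message, but because each consults its own slightly drifted local clock, one accepts the message and the other rejects it, so their outputs diverge.

```python
# Hypothetical illustration of replica non-determinism: identical inputs,
# identical code, yet inconsistent outputs, because each replica enforces
# the deadline against its own local clock.

TIMEOUT_MS = 100

def accept(message_sent_ms, local_clock_ms):
    """A replica accepts a message only if it appears to meet the deadline."""
    return (local_clock_ms - message_sent_ms) <= TIMEOUT_MS

message_sent_ms = 0
replica_a_clock = 99   # replica A's clock reads just under the deadline
replica_b_clock = 101  # replica B's clock has drifted past the deadline

print(accept(message_sent_ms, replica_a_clock))  # True  -> A processes the message
print(accept(message_sent_ms, replica_b_clock))  # False -> B discards it: the replicas diverge
```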
Data Mining Methods for Knowledge Discovery provides an introduction to the data mining methods that are frequently used in the process of knowledge discovery. This book first elaborates on the fundamentals of each of the data mining methods: rough sets, Bayesian analysis, fuzzy sets, genetic algorithms, machine learning, neural networks, and preprocessing techniques. The book then goes on to thoroughly discuss these methods in the setting of the overall process of knowledge discovery. Numerous illustrative examples and experimental findings are also included. Each chapter comes with an extensive bibliography. Data Mining Methods for Knowledge Discovery is intended for senior undergraduate and graduate students, as well as a broad audience of professionals in computer and information sciences, medical informatics, and business information systems.
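As a small illustration of just one of the methods listed above, the following sketch (hypothetical data and function names, not the book's code) computes rough-set lower and upper approximations of a target set, using the equivalence classes induced by chosen attribute values.

```python
from collections import defaultdict

def approximations(objects, attributes, target):
    """Rough-set lower/upper approximations of `target`.

    objects:    dict mapping object id -> tuple of attribute values
    attributes: which attribute positions define indiscernibility
    target:     set of object ids to approximate
    """
    # Group objects that are indiscernible on the chosen attributes.
    blocks = defaultdict(set)
    for obj, values in objects.items():
        key = tuple(values[a] for a in attributes)
        blocks[key].add(obj)
    lower, upper = set(), set()
    for block in blocks.values():
        if block <= target:        # block lies entirely inside the target
            lower |= block
        if block & target:         # block overlaps the target
            upper |= block
    return lower, upper

# Hypothetical toy data: (age_group, credit) -> does the customer churn?
objects = {1: ("young", "low"), 2: ("young", "low"),
           3: ("old", "high"),  4: ("old", "high"), 5: ("old", "low")}
churned = {1, 3}
print(approximations(objects, attributes=[0, 1], target=churned))
# lower = set()          (no block is certainly churned)
# upper = {1, 2, 3, 4}   (blocks that might be churned)
```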
The process of developing predictive models includes many stages. Most resources focus on the modeling algorithms but neglect other critical aspects of the modeling process. This book describes techniques for finding the best representations of predictors for modeling and for finding the best subset of predictors for improving model performance. A variety of example data sets are used to illustrate the techniques, along with R programs for reproducing the results.
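The book's own examples are in R; purely as an illustration of the "best subset of predictors" idea, here is a minimal Python sketch (hypothetical data and helper name, not the book's code) of greedy forward selection driven by cross-validated performance of a plain linear model.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

def forward_select(X, y, max_features):
    """Greedy forward selection: repeatedly add the predictor that most
    improves cross-validated R^2 of a linear model, stopping when no
    candidate helps."""
    remaining, chosen = list(range(X.shape[1])), []
    best_score = -np.inf
    while remaining and len(chosen) < max_features:
        scores = []
        for j in remaining:
            cols = chosen + [j]
            score = cross_val_score(LinearRegression(), X[:, cols], y, cv=5).mean()
            scores.append((score, j))
        score, j = max(scores)
        if score <= best_score:      # no candidate improves the model: stop
            break
        best_score, chosen = score, chosen + [j]
        remaining.remove(j)
    return chosen, best_score

# Hypothetical data: only the first two columns actually drive the response.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 6))
y = 3 * X[:, 0] - 2 * X[:, 1] + rng.normal(scale=0.5, size=200)
print(forward_select(X, y, max_features=3))
```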
Earth date, August 11, 1997. "Beam me up, Scottie!" "We cannot do it! This is not Star Trek's Enterprise. This is early years Earth." True, this is not yet the era of Star Trek; we cannot beam Captain James T. Kirk or Captain Jean-Luc Picard or an apple or anything else anywhere. What we can do, though, is beam information about Kirk or Picard or an apple or an insurance agent. We can beam a record of a patient, the status of an engine, a weather report. We can beam this information anywhere, to mobile workers, to field engineers, to a truck loading apples, to ships crossing the oceans, to web surfers. We have reached a point where the promise of information access anywhere and anytime is close to realization. The enabling technology, wireless networks, exists; what remains to be achieved is providing the infrastructure and the software to support the promise. Universal access and management of information has been one of the driving forces in the evolution of computer technology. Central computing gave the ability to perform large and complex computations and advanced information manipulation. Advances in networking connected computers together and led to distributed computing. Web technology and the Internet went even further to provide hyper-linked information access and global computing. However, restricting access stations to fixed physical locations limits the boundary of this vision.
This volume contains the proceedings of two conferences held as part of the 21st IFIP World Computer Congress in Brisbane, Australia, 20-23 September 2010. The first part of the book presents the proceedings of DIPES 2010, the 7th IFIP Conference on Distributed and Parallel Embedded Systems. The conference, introduced in a separate preface by the Chairs, covers a range of topics from specification and design of embedded systems through to dependability and fault tolerance. The second part of the book contains the proceedings of BICC 2010, the 3rd IFIP Conference on Biologically-Inspired Collaborative Computing. The conference is concerned with emerging techniques from research areas such as organic computing, autonomic computing and self-adaptive systems, where inspiration for techniques derives from exhibited behaviour in nature and biology. Such techniques require the use of research developed by the DIPES community in supporting collaboration over multiple systems. We hope that the combination of the two proceedings will add value for the reader and advance our related work.
Decision diagrams (DDs) are data structures for efficient (time/space) representations of large discrete functions. In addition to their wide application in engineering practice, DDs are now a standard part of many CAD systems for logic design and a basis for several signal processing algorithms. "Spectral Interpretation of Decision Diagrams" derives from attempts to classify and uniformly interpret DDs through spectral interpretation methods, relating them to different Fourier-series-like functional expressions for discrete functions and a group-theoretic approach to DD optimization. The book examines DDs found in the literature and in engineering practice and provides insights into relationships between DDs and different polynomial or spectral expressions for the representation of discrete functions. In addition, it offers guidelines and criteria for selection of the most suitable representation in terms of space and time complexity. The work complements theory with numerous illustrative examples from practice. Moreover, the importance of DD representations to the verification and testing of arithmetic circuits is addressed, as well as problems related to various signal processing tasks.
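A minimal sketch of the basic decision-diagram idea (illustrative only, and not the spectral techniques the book develops): a reduced, ordered binary decision diagram is built by recursive cofactoring, with memoisation standing in for the unique table so that isomorphic sub-diagrams are shared and redundant tests are removed.

```python
from functools import lru_cache

def build_bdd(truth_table):
    """Build a reduced, shared decision diagram from a truth table.

    truth_table: tuple of 2**n output bits, with variable x0 as the most
    significant index. Nodes are 0, 1, or (var, low, high); memoisation
    on the cofactor plays the role of the unique table, so identical
    sub-diagrams become the same tuple.
    """
    @lru_cache(maxsize=None)
    def node(var, cofactor):
        if len(cofactor) == 1:
            return cofactor[0]                      # constant leaf 0 or 1
        half = len(cofactor) // 2
        low = node(var + 1, cofactor[:half])        # branch for var = 0
        high = node(var + 1, cofactor[half:])       # branch for var = 1
        return low if low == high else (var, low, high)   # reduction rule

    return node(0, tuple(truth_table))

# Example: 3-input majority function maj(x0, x1, x2).
table = tuple(int(bin(i).count("1") >= 2) for i in range(8))
print(build_bdd(table))   # a small shared graph instead of 8 table rows
```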
This book presents a specific and unified approach to Knowledge Discovery and Data Mining, termed IFN for Information Fuzzy Network methodology. Data Mining (DM) is the science of modelling and generalizing common patterns from large sets of multi-type data. DM is a part of KDD, the overall process of Knowledge Discovery in Databases. The accessibility and abundance of information today makes this a topic of particular importance and need. The book has three main parts, complemented by appendices as well as software and project data that are accessible from the book's web site (http://www.eng.tau.ac.il/~maimon/ifn-kdg). Part I (Chapters 1-4) starts with the topic of KDD and DM in general and makes reference to other works in the field, especially those related to the information-theoretic approach. The remainder of the book presents our work, starting with the IFN theory and algorithms. Part II (Chapters 5-6) discusses the methodology of application and includes case studies. Then in Part III (Chapters 7-9) a comparative study is presented, concluding with some advanced methods and open problems. The IFN, being a generic methodology, applies to a variety of fields, such as manufacturing, finance, health care, medicine, insurance, and human resources. The appendices expand on the relevant theoretical background and present descriptions of sample projects (including detailed results).
The book examines patterns of participation in human rights treaties. International relations theory is divided on what motivates states to participate in treaties, specifically human rights treaties. Instead of examining these specific motivations, the study examines patterns of participation. In doing so, it attempts to match theoretical expectations of state behavior with participation. The book provides significant evidence that multiple motivations lead states to participate in human rights treaties.
Information and communication technology (ICT) is permeating all aspects of service management; in the public sector, ICT is improving the capacity of government agencies to provide a wide array of innovative services that benefit citizens. E-Government is emerging as a multidisciplinary field of research based initially on empirical insights from practice. Efforts to theoretically anchor the field have opened perspectives from multiple research domains, as demonstrated in Practical Studies in E-Government. In this volume, the editors and contributors consider the evolution of the e-government field from both practical and research perspectives. Featuring in-depth case studies of initiatives in eight countries, the book deals with such technology-oriented issues as interoperability, prototyping, data quality, and advanced interfaces, and such management-oriented issues as e-procurement, e-identification, election results verification, and information privacy. The book features best practices, tools for measuring and improving performance, and analytical methods for researchers.
This book assembles contributions from computer scientists and librarians that altogether encompass the complete range of tools, tasks and processes needed to successfully preserve the cultural heritage of the Web. It combines the librarian's application knowledge with the computer scientist's implementation knowledge, and serves as a standard introduction for everyone involved in keeping alive the immense amount of online information.
This comprehensive book offers a full picture of the cutting-edge technologies in the area of "Multimedia Retrieval and Management". It addresses graduate students and scientists in electrical engineering and in computer science as well as system designers, engineers, programmers and other technical managers in the IT industries. The book provides a complete set of theories and technologies necessary for a profound introduction to the field. It includes multimedia low-level feature extraction and high-level semantic description, in addition to multimedia authentication and watermarking, and the most up-to-date MPEG-7 standard. A broad range of practical applications is covered, e.g., digital libraries, medical images, biometrics (human palm-print and face) for security, living plants data management and video-on-demand services.
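One of the low-level features mentioned above, a global colour histogram, can be sketched in a few lines (a generic illustration with hypothetical names and random data, not the book's code): the image is quantised into a joint colour histogram, and histogram intersection gives a simple retrieval score.

```python
import numpy as np

def color_histogram(image, bins_per_channel=4):
    """Quantise an RGB image (H x W x 3, values 0-255) into a joint colour
    histogram, normalised so that images of different sizes are comparable;
    the resulting vector can feed a retrieval index."""
    quantised = (image.astype(np.uint32) * bins_per_channel) // 256
    index = (quantised[..., 0] * bins_per_channel + quantised[..., 1]) \
            * bins_per_channel + quantised[..., 2]
    hist = np.bincount(index.ravel(), minlength=bins_per_channel ** 3)
    return hist / hist.sum()

def similarity(h1, h2):
    """Histogram intersection: 1.0 for identical colour distributions."""
    return np.minimum(h1, h2).sum()

# Hypothetical query against a single database image.
img_a = np.random.randint(0, 256, size=(64, 64, 3))
img_b = np.random.randint(0, 256, size=(48, 48, 3))
print(similarity(color_histogram(img_a), color_histogram(img_b)))
```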
Fundamentals of Information Systems contains articles from the 7th International Workshop on Foundations of Models and Languages for Data and Objects (FoMLaDO '98), which was held in Timmel, Germany. These articles capture various aspects of database and information systems theory: identification as a primitive of database models; deontic action programs; marked nulls in queries; topological canonization in spatial databases; complexity of search queries; complexity of Web queries; attribute grammars for structured document queries; hybrid multi-level concurrency control; efficient navigation in persistent object stores; formal semantics of UML; reengineering of object bases; and integrity dependence. Fundamentals of Information Systems serves as an excellent reference, providing insight into some of the most challenging research issues in the field.
Fuzzy Databases: Modeling, Design and Implementation focuses on some semantic aspects which have not been studied in previous works and extends the EER model with fuzzy capabilities. The resulting model is called the FuzzyEER model, and some of the studied extensions are: fuzzy attributes, fuzzy aggregations and different aspects of specializations, such as fuzzy degrees, fuzzy constraints, etc. All these fuzzy extensions offer greater expressiveness in conceptual design. This book, while providing a global and integrated view of fuzzy database constructions, serves as an introduction to fuzzy logic, fuzzy databases and fuzzy modeling in databases.
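A generic illustration of what a fuzzy attribute can look like (hypothetical data and helper, not the FuzzyEER notation itself): the stored value is a distribution of membership degrees over linguistic labels rather than one crisp value, and a query selects rows by degree.

```python
# Hypothetical example: the attribute "height" is stored as membership
# degrees over linguistic labels instead of a single crisp value.
employees = {
    "ana":   {"short": 0.0, "medium": 0.3, "tall": 0.9},
    "boris": {"short": 0.8, "medium": 0.4, "tall": 0.0},
}

def select(table, label, threshold):
    """Return rows whose membership degree for `label` meets the threshold."""
    return {name: dist[label] for name, dist in table.items()
            if dist[label] >= threshold}

print(select(employees, "tall", threshold=0.7))   # {'ana': 0.9}
```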
Data compression is now indispensable to products and services of many industries including computers, communications, healthcare, publishing and entertainment. This invaluable resource introduces this area to information system managers and others who need to understand how it is changing the world of digital systems. For those who know the technology well, it reveals what happens when data compression is used in real-world applications and provides guidance for future technology development.
This book introduces quantitative intertextuality, a new approach to the algorithmic study of information reuse in text, sound and images. Employing a variety of tools from machine learning, natural language processing, and computer vision, readers will learn to trace patterns of reuse across diverse sources for scholarly work and practical applications. The respective chapters share highly novel methodological insights in order to guide the reader through the basics of intertextuality. In Part 1, "Theory", the theoretical aspects of intertextuality are introduced, leading to a discussion of how they can be embodied by quantitative methods. In Part 2, "Practice", specific quantitative methods are described to establish a set of automated procedures for the practice of quantitative intertextuality. Each chapter in Part 2 begins with a general introduction to a major concept (e.g., lexical matching, sound matching, semantic matching), followed by a case study (e.g., detecting allusions to a popular television show in tweets, quantifying sound reuse in Romantic poetry, identifying influences in fan fiction by thematic matching), and finally the development of an algorithm that can be used to reveal parallels in the relevant contexts. Because this book is intended as a "gentle" introduction, the emphasis is often on simple yet effective algorithms for a given matching task. A set of exercises is included at the end of each chapter, giving readers the chance to explore more cutting-edge solutions and novel aspects of the material at hand. Additionally, the book's companion website includes software (R and C++ library code) and all of the source data for the examples in the book, as well as supplemental content (slides, high-resolution images, additional results) that may prove helpful for exploring the different facets of quantitative intertextuality that are presented in each chapter. Given its interdisciplinary nature, the book will appeal to a broad audience. From practitioners specializing in forensics to students of cultural studies, readers with diverse backgrounds (e.g., in the social sciences, natural language processing, or computer vision) will find valuable insights.
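The lexical-matching concept from Part 2 can be illustrated with a short sketch (hypothetical texts and function names, not the book's R/C++ code): shared word n-grams between two passages are a crude but effective signal of textual reuse.

```python
import re

def ngrams(text, n=3):
    """Lower-cased word n-grams of a text, ignoring punctuation."""
    words = re.findall(r"[a-z]+", text.lower())
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def lexical_overlap(a, b, n=3):
    """Jaccard similarity of the two texts' n-gram sets: a simple
    stand-in for the matching scores used in intertextual analysis."""
    ga, gb = ngrams(a, n), ngrams(b, n)
    return len(ga & gb) / len(ga | gb) if ga | gb else 0.0

source   = "to be or not to be that is the question"
allusion = "to be or not to be, asked the weary student"
print(lexical_overlap(source, allusion, n=3))   # nonzero: shared trigrams detected
```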
Databases and database systems have become an essential part of everyday life, such as in banking activities, online shopping, or reservations of airline tickets and hotels. These trends place more demands on the capabilities of future database systems, which need to evolve into decision-making systems based on data from multiple sources with varying reliability. In this book a model for the next generation of database systems is presented. It is demonstrated how to quantize favorable and unfavorable qualitative facts so that they can be stored and processed efficiently, and how to use the reliability of the contributing sources in decision making. The concept of a confidence index set (ciset) is introduced in order to mathematically model the above issues. A simple introduction to relational database systems is given, allowing anyone with no background in database theory to appreciate the further contents of this work, especially the extended relational operations and semantics of the ciset relational database model.
The IFIP World Computer Congress (WCC) is one of the most important conferences in the area of computer science at the worldwide level and it has a federated structure, which takes into account the rapidly growing and expanding interests in this area. Informatics is rapidly changing and becoming more and more connected to a number of human and social science disciplines. Human-computer interaction is now a mature and still dynamically evolving part of this area, which is represented in IFIP by the Technical Committee 13 on HCI. In this WCC edition it was interesting and useful to have again a Symposium on Human-Computer Interaction in order to present and discuss a number of contributions in this field. There has been increasing awareness among designers of interactive systems of the importance of designing for usability, but we are still far from having products that are really usable, and usability can mean different things depending on the application domain. We are all aware that too many users of current technology often feel frustrated because computer systems are not compatible with their abilities and needs in existing work practices. As designers of tomorrow's technology, we have the responsibility of creating computer artifacts that would permit better user experience with the various computing devices, so that users may enjoy more satisfying experiences with information and communications technologies.
Searching for Semantics: Data Mining, Reverse Engineering. Stefano Spaccapietra (Swiss Federal Institute of Technology, Lausanne, Switzerland) and Fred Maryanski (University of Connecticut, Storrs, CT, USA). Review and future directions: In the last few years, database semantics research has turned sharply from a highly theoretical domain to one with more focus on practical aspects. The DS-7 Working Conference held in October 1997 in Leysin, Switzerland, demonstrated the more pragmatic orientation of the current generation of leading researchers. The papers presented at the meeting emphasized two major areas: the discovery of semantics and semantic data modeling. The work in the latter category indicates that although object-oriented database management systems have emerged as commercially viable products, many fundamental modeling issues require further investigation. Today's object-oriented systems provide the capability to describe complex objects and include techniques for mapping from a relational database to objects. However, we must further explore the expression of information regarding the dimensions of time and space. Semantic models possess the richness to describe systems containing spatial and temporal data. The challenge of incorporating these features in a manner that promotes efficient manipulation by the subject specialist still requires extensive development.
Concurrency in Dependable Computing focuses on concurrency-related issues in the area of dependable computing. Failures of system components, be they hardware units or software modules, can be viewed as undesirable events occurring concurrently with a set of normal system events. Achieving dependability therefore is closely related to, and also benefits from, concurrency theory and formalisms. This beneficial relationship appears to manifest in three strands of work. Application-level structuring of concurrent activities: concepts such as atomic actions, conversations, exception handling, view synchrony, etc., are useful in structuring concurrent activities so as to facilitate attempts at coping with the effects of component failures. Replication-induced concurrency management: replication is a widely used technique for achieving reliability, and replica management essentially involves ensuring that replicas perceive concurrent events identically. Application of concurrency formalisms for dependability assurance: fault-tolerant algorithms are harder to verify than their fault-free counterparts because the impact of component faults at each state needs to be considered in addition to valid state transitions; CSP, Petri nets and CCS are useful tools to specify and verify fault-tolerant designs and protocols. Concurrency in Dependable Computing explores many significant issues in all three strands. To this end, it is composed as a collection of papers written by authors well known in their respective areas of research. To ensure quality, the papers were reviewed by a panel of at least three experts in the relevant area.
You may like...
Handbook of Logic in Computer Science… by S. Abramsky, Dov M. Gabbay, … (Hardcover), R11,864 (Discovery Miles 118 640)
Optical Properties of Phosphate and… by Ritesh L. Kohale, Vijay B Pawade, … (Paperback), R4,171 (Discovery Miles 41 710)
How to Read Bridges - A crash course… by Edward Denison, Ian Stewart (Paperback), R356 (Discovery Miles 3 560)