Fuzzy Database Modeling with XML aims to provide a single record of current research and practical applications in fuzzy databases. This volume is the outgrowth of research the author has conducted in recent years. Fuzzy Database Modeling with XML introduces state-of-the-art information to database research, while at the same time serving the information technology professional faced with a non-traditional application that defeats conventional approaches. The research on fuzzy conceptual models and fuzzy object-oriented databases is receiving increasing attention, in addition to fuzzy relational database models. With rapid advances in network and Internet techniques as well, databases have been applied under the environment of distributed information systems. It is essential in this case to integrate multiple fuzzy database systems. Since databases are commonly employed to store and manipulate XML data, additional requirements are necessary to model fuzzy information with XML; this book first models fuzzy information with XML and then maps the fuzzy XML model to fuzzy databases. Very few efforts at investigating these issues have thus far occurred. Fuzzy Database Modeling with XML is designed for a professional audience of researchers and practitioners in industry. This book is also suitable for graduate-level students in computer science.
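The idea of carrying fuzzy information in XML can be sketched in a few lines. One common encoding (a sketch only; the book's actual fuzzy XML model may differ, and the element and attribute names here are assumptions) attaches a membership degree in [0, 1] to each candidate value of an imprecise attribute:

```python
import xml.etree.ElementTree as ET

# Illustrative sketch: represent a fuzzy attribute as a possibility
# distribution, where each candidate value carries a membership degree.
# Element names ("person", "age", "poss") are assumptions, not the book's.
person = ET.Element("person")
age = ET.SubElement(person, "age")
for value, degree in [("young", "0.8"), ("middle-aged", "0.3")]:
    ET.SubElement(age, "poss", value=value, membership=degree)

xml_str = ET.tostring(person, encoding="unicode")
print(xml_str)
```

Mapping such a document to a fuzzy relational table would then turn each `poss` element into a tuple with an explicit membership-degree column.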
Relational databases hold data, right? They indeed do, but to think of a database as nothing more than a container for data is to miss out on the profound power that underlies relational technology. Use the expressive power of mathematics to precisely specify designs and business rules. Communicate effectively about design using the universal language of mathematics. Develop and write complex SQL statements with confidence. Avoid pitfalls and problems from common relational bugaboos such as null values and duplicate rows. The math that you learn in this book will put you above the level of understanding of most database professionals today. You'll better understand the technology and be able to apply it more effectively. You'll avoid data anomalies like redundancy and inconsistency. Understanding what's in this book will take your mastery of relational technology to heights you may not have thought possible.
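The NULL pitfall alluded to above is easy to demonstrate. The following sketch uses Python's built-in sqlite3 with a made-up table: under SQL's three-valued logic, a comparison with NULL is neither true nor false, so a row can vanish from both a predicate's result and its complement's:

```python
import sqlite3

# In-memory database; the table and rows are illustrative, not from the book.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE emp (name TEXT, bonus INTEGER)")
conn.executemany("INSERT INTO emp VALUES (?, ?)",
                 [("Ann", 100), ("Bob", None), ("Cid", 0)])

# Three-valued logic: "NULL > 50" and "NULL <= 50" both evaluate to
# UNKNOWN, so Bob appears in NEITHER result -- a classic relational bugaboo.
gt = conn.execute("SELECT name FROM emp WHERE bonus > 50").fetchall()
le = conn.execute("SELECT name FROM emp WHERE bonus <= 50").fetchall()
print(gt)  # [('Ann',)]
print(le)  # [('Cid',)]
```

The two queries look like an exhaustive case split, but together they return only two of the three rows; this is exactly the kind of surprise a precise mathematical reading of the relational model helps you anticipate.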
This book explains efficient solutions for segmenting the intensity levels of different types of multilevel images. The authors present hybrid soft computing techniques, which have advantages over conventional soft computing solutions as they incorporate data heterogeneity into the clustering/segmentation procedures. This is a useful introduction and reference for researchers and graduate students of computer science and electronics engineering, particularly in the domains of image processing and computational intelligence.
The area of similarity searching is a very hot topic for both research and commercial applications. Current data processing applications use data with considerably less structure and much less precise queries than traditional database systems. Examples are multimedia data like images or videos that offer query-by-example search, product catalogs that provide users with preference-based search, scientific data records from observations or experimental analyses such as biochemical and medical data, or XML documents that come from heterogeneous data sources on the Web or in intranets and thus do not exhibit a global schema. Such data can neither be ordered in a canonical manner nor meaningfully searched by precise database queries that would return exact matches. This novel situation is what has given rise to similarity searching, also referred to as content-based or similarity retrieval. The most general approach to similarity search, still allowing construction of index structures, is modeled in metric space. In this book, Prof. Zezula and his co-authors provide the first monograph on this topic, describing its theoretical background as well as the practical search tools of this innovative technology.
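The key property that makes metric-space indexing possible is the triangle inequality: distances to a precomputed pivot yield a lower bound on the distance to a query, so many objects can be discarded without an expensive distance computation. A toy pivot-based range query (the data, pivot choice, and metric are illustrative, not from the book):

```python
# Toy pivot-based metric range query. Edit distance serves as the metric;
# the string data and the pivot choice are made up for illustration.

def edit_distance(a, b):
    # Classic dynamic-programming Levenshtein distance.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1,
                           prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

data = ["cat", "cart", "chart", "dog", "dodge"]
pivot = "cat"
# Precompute distances to the pivot once, at "index build" time.
to_pivot = {x: edit_distance(pivot, x) for x in data}

def range_query(q, r):
    dqp = edit_distance(q, pivot)
    hits = []
    for x in data:
        # Triangle inequality gives |d(q,pivot) - d(pivot,x)| <= d(q,x),
        # so this object cannot be within radius r of the query.
        if abs(dqp - to_pivot[x]) > r:
            continue  # pruned without a distance computation
        if edit_distance(q, x) <= r:
            hits.append(x)
    return hits

print(range_query("cast", 1))  # ['cat', 'cart']
```

Real metric index structures (e.g., the M-tree family the book covers) generalize this single-pivot filtering to hierarchies of pivots.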
E-commerce systems involve a complex interaction between Web-based Internet-related software, application software and databases. It is clear that the success of e-commerce systems is going to depend not only on the technology of these systems but also on the quality of the underlying databases and supporting processes. Whilst databases have achieved considerable success in the wider marketplace, the main research effort has been on tools and techniques for high-volume but relatively simplistic record management. Modern advanced e-commerce systems require a paradigm shift to allow the meaningful representation and manipulation of complex business information on the Web and Internet. This requires the development of new methodologies, environments and tools that allow one to easily understand the underlying structure so as to facilitate access, manipulation and modification of such information. An essential characteristic for gaining understanding and interoperability is a clearly defined semantics for e-commerce systems and databases.
Data structures and algorithms are presented at the college level in a highly accessible format that presents material with one-page displays in a way that will appeal to both teachers and students. The thirteen chapters cover: Models of Computation, Lists, Induction and Recursion, Trees, Algorithm Design, Hashing, Heaps, Balanced Trees, Sets Over a Small Universe, Graphs, Strings, Discrete Fourier Transform, Parallel Computation. Key features: * Complicated concepts are expressed clearly in a single page with minimal notation and without the "clutter" of the syntax of a particular programming language; algorithms are presented with self-explanatory "pseudo-code." * Chapters 1-4 focus on elementary concepts, the exposition unfolding at a slower pace. Sample exercises with solutions are provided. Sections that may be skipped for an introductory course are starred. Requires only some basic mathematics background and some computer programming experience. * Chapters 5-13 progress at a faster pace. The material is suitable for undergraduates or first-year graduates who need only review Chapters 1-4. * This book may be used for a one-semester introductory course (based on Chapters 1-4 and portions of the chapters on algorithm design, hashing, and graph algorithms) and for a one-semester advanced course that starts at Chapter 5. A yearlong course may be based on the entire book. * Sorting, often perceived as rather technical, is not treated as a separate chapter, but is used in many examples (including bubble sort, merge sort, tree sort, heap sort, quick sort, and several parallel algorithms). Also, lower bounds on sorting by comparisons are included with the presentation of heaps in the context of lower bounds for comparison-based structures. * Chapter 13 on parallel models of computation is something of a mini-book itself, and a good way to end a course.
Although it is not clear what parallel architectures will prevail in the future, the idea is to further teach fundamental concepts in the design of algorithms by exploring classic models of parallel computation, including the PRAM, generic PRAM simulation, HC/CCC/Butterfly, the mesh, and parallel hardware area-time tradeoffs (with many examples). Apart from classroom use, this book serves as a good reference on the subject of data structures and algorithms. Its page-at-a-time format makes it easy to review material that the reader has studied in the past.
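In the spirit of the book's one-page, pseudo-code-style displays, here is merge sort, one of the sorting algorithms it uses as a running example (this Python rendering is a sketch, not the book's own listing):

```python
# Merge sort: recursively sort each half, then merge the sorted halves.
# Runs in O(n log n) comparisons, matching the comparison-sort lower bound.

def merge_sort(a):
    if len(a) <= 1:
        return a  # a list of 0 or 1 elements is already sorted
    mid = len(a) // 2
    left, right = merge_sort(a[:mid]), merge_sort(a[mid:])
    # Merge the two sorted halves in a single linear pass.
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    return out + left[i:] + right[j:]

print(merge_sort([5, 2, 8, 1, 9, 3]))  # [1, 2, 3, 5, 8, 9]
```

The n log n cost of the recursion is exactly the information-theoretic lower bound on comparison-based sorting that the book derives alongside heaps.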
Responsive Computer Systems: Steps Towards Fault-Tolerant Real-Time Systems provides an extensive treatment of the most important issues in the design of modern Responsive Computer Systems. It lays the groundwork for a more comprehensive model that allows critical design issues to be treated in ways that more traditional disciplines of computer research have inhibited. It breaks important ground in the development of a fruitful, modern perspective on computer systems as they are currently developing and as they may be expected to develop over the next decade. Audience: An interesting and important road map to some of the most important emerging issues in computing, suitable as a secondary text for graduate level courses on responsive computer systems and as a reference for industrial practitioners.
Information-Statistical Data Mining: Warehouse Integration with Examples of Oracle Basics is written to introduce basic concepts, advanced research techniques, and practical solutions of data warehousing and data mining for hosting large data sets and EDA. This book is unique because it is one of the few in the forefront that attempts to bridge statistics and information theory through a concept of patterns.
This book contains the papers presented and discussed at the conference that was held in May/June 1997, in Philadelphia, Pennsylvania, USA, and that was sponsored by Working Group 8.2 of the International Federation for Information Processing. IFIP established 8.2 as a group concerned with the interaction of information systems and the organization. Information Systems and Qualitative Research is essential reading for professionals and students working in information systems in a business environment, such as systems analysts, developers and designers, data administrators, and senior executives in all business areas that use information technology, as well as consultants in the fields of information systems, management, and quality management.
This book presents cutting edge research on the new ethical challenges posed by biomedical Big Data technologies and practices. 'Biomedical Big Data' refers to the analysis of aggregated, very large datasets to improve medical knowledge and clinical care. The book describes the ethical problems posed by aggregation of biomedical datasets and re-use/re-purposing of data, in areas such as privacy, consent, professionalism, power relationships, and ethical governance of Big Data platforms. Approaches and methods are discussed that can be used to address these problems to achieve the appropriate balance between the social goods of biomedical Big Data research and the safety and privacy of individuals. Seventeen original contributions analyse the ethical, social and related policy implications of the analysis and curation of biomedical Big Data, written by leading experts in the areas of biomedical research, medical and technology ethics, privacy, governance and data protection. The book advances our understanding of the ethical conundrums posed by biomedical Big Data, and shows how practitioners and policy-makers can address these issues going forward.
Researchers have come to rely on this thesaurus to locate precise terms from the controlled vocabulary used to index the ERIC database. This, the first print edition in more than 5 years, contains a total of 10,773 vocabulary terms with 206 descriptors and 210 use references that are new to this edition. A popular and widely used reference tool for sets of education-related terms established and updated by ERIC lexicographers to assist searchers in defining, narrowing, and broadening their search strategies. The Introduction to the "Thesaurus" contains helpful information about ERIC indexing rules, deleted and invalid descriptors, and useful parts of the descriptor entry, such as the date the term was added and the number of times it has been used.
Real-Time Systems in Mechatronic Applications brings together in one place important contributions and up-to-date research results in this fast moving area. Real-Time Systems in Mechatronic Applications serves as an excellent reference, providing insight into some of the most challenging research issues in the field.
With the explosive growth of multimedia applications, the ability to index/retrieve multimedia objects in an efficient way is challenging to both researchers and practitioners. A major data type stored and managed by these applications is the representation of two-dimensional (2D) objects. Objects contain many features (e.g., color, texture, and shape) that have meaningful semantics. Of those features, shape is an important one that conforms with the way human beings interpret and interact with real-world objects. The shape representation of objects can therefore be used for their indexing and retrieval and as a similarity measure. The object databases can be queried and searched for different purposes. For example, a CAD application for manufacturing industrial parts might aim to reduce the cost of building new industrial parts by searching for reusable existing parts in a database. In a trademark registry application, one might need to ensure that a newly registered trademark is sufficiently distinctive from existing marks by searching the database. Therefore, one of the important functionalities required by all these applications is the capability to find objects in a database that match a given object.
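The retrieval functionality described above reduces, at its core, to comparing numeric shape descriptors. A minimal sketch (the descriptor vectors, object names, and distance choice are all made up for illustration):

```python
import math

# Minimal sketch of feature-based shape retrieval: each database object is
# reduced to a numeric shape descriptor, and a query returns the closest
# object. Names and descriptor values below are hypothetical.

db = {
    "gear_v1": [0.91, 0.12, 0.33],
    "gear_v2": [0.89, 0.15, 0.31],
    "bracket": [0.10, 0.80, 0.55],
}

def euclidean(u, v):
    # Euclidean distance between two descriptor vectors.
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def best_match(query):
    # Smallest descriptor distance = most similar shape.
    return min(db, key=lambda name: euclidean(db[name], query))

# A query part nearly identical to gear_v1 should retrieve it.
print(best_match([0.90, 0.13, 0.33]))  # gear_v1
```

A CAD reuse search would rank candidates by this distance; a trademark check would instead flag any existing mark whose distance to the new mark falls below a distinctiveness threshold.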
The book provides a comprehensive investigation of the performance and problems of the TCP/IP protocol stack when data is transmitted over GSM, GPRS and UMTS. It gives an introduction to the protocols used for Internet access today, and also the Wireless Application Protocol (WAP). The basics of GSM, GPRS and UMTS are given, which are necessary for understanding the main topic, TCP performance over GSM, GPRS and UMTS. We describe at length the problems that TCP has when operating over a mobile radio link, and what has been proposed to remedy these problems. We derive the optimum TCP packet length for maximum data throughput on wireless networks, analytically and by simulation. Results on the throughput and various other parameters of TCP over mobile networks are given. This book gives valuable advice to network operators and application programmers on how to maximize data throughput, and on which protocols, transmission modes, and coding schemes to use and which to avoid.
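The optimum-packet-length trade-off mentioned above can be sketched numerically: larger packets amortize header overhead, but on a lossy radio link they are also more likely to be corrupted and retransmitted. The simple model and parameter values below (40-byte header, per-byte error probability of 10^-3) are illustrative assumptions, not the book's own derivation:

```python
# Numeric sketch of the packet-length trade-off on a lossy link.
# Model: efficiency(L) = (payload fraction) * (probability packet survives).
# HEADER and P_BYTE are assumed values for illustration only.

HEADER = 40      # bytes of TCP/IP header per packet (assumed)
P_BYTE = 1e-3    # probability a given byte is corrupted (assumed)

def efficiency(total_len):
    payload = total_len - HEADER
    # Useful fraction of the link, discounted by packet survival probability.
    return (payload / total_len) * (1 - P_BYTE) ** total_len

# Scan packet lengths to find the throughput-maximizing one under this model.
best = max(range(HEADER + 1, 2000), key=efficiency)
print(best)
```

Under these assumed numbers the optimum lands at roughly 220 bytes: short enough that most packets survive, long enough that the 40-byte header does not dominate. The book's analytical derivation accounts for the real link-layer details that this toy model omits.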
Traditionally, scientific fields have defined boundaries, and scientists work on research problems within those boundaries. However, from time to time those boundaries get shifted or blurred to evolve new fields. For instance, the original goal of computer vision was to understand a single image of a scene, by identifying objects, their structure, and spatial arrangements. This has been referred to as image understanding. Recently, computer vision has gradually been making the transition from understanding single images to analyzing image sequences, or video. Video understanding deals with understanding of video sequences, e.g., recognition of gestures, activities, facial expressions, etc. The main shift in the classic paradigm has been from the recognition of static objects in the scene to motion-based recognition of actions and events. Video understanding has overlapping research problems with other fields, therefore blurring the fixed boundaries. Computer graphics, image processing, and video databases have obvious overlap with computer vision. The main goal of computer graphics is to generate and animate realistic-looking images and videos. Researchers in computer graphics are increasingly employing techniques from computer vision to generate the synthetic imagery. A good example of this is image-based rendering and modeling techniques, in which geometry, appearance, and lighting are derived from real images using computer vision techniques. Here the shift is from synthesis to analysis followed by synthesis. Image processing has always overlapped with computer vision because they both inherently work directly with images.
Document Computing: Technologies for Managing Electronic Document Collections discusses the important aspects of document computing and recommends technologies and techniques for document management, with an emphasis on the processes that are appropriate when computers are used to create, access, and publish documents. This book includes descriptions of the nature of documents, their components and structure, and how they can be represented; examines how documents are used and controlled; explores the issues and factors affecting design and implementation of a document management strategy; and gives a detailed case study. The analysis and recommendations are grounded in the findings of the latest research. Document Computing: Technologies for Managing Electronic Document Collections brings together concepts, research, and practice from diverse areas including document computing, information retrieval, librarianship, records management, and business process re-engineering. It will be of value to anyone working in these areas, whether as a researcher, a developer, or a user. Document Computing: Technologies for Managing Electronic Document Collections can be used for graduate classes in document computing and related fields, by developers and integrators of document management systems and document management applications, and by anyone wishing to understand the processes of document management.
This book presents a unified collection of concepts, tools, and techniques that constitute the most important technology available today for the design and implementation of information systems. The framework adopted for this integration goal is the one offered by the relational model of data, its applications, and implementations in multiuser and distributed environments. The topics presented in the book include conceptual modeling of application environments using the relational model, formal properties of that model, and tools such as relational languages which go with it, techniques for the logical and physical design of relational database systems and their implementations. The book attempts to develop an integrated methodology for addressing all these issues on the basis of the relational approach and various research and practical developments related to that approach. This book is the only one available today that presents such an integration. The diversity of approaches to data models, to logical and physical database design, to database application programming, and to use and implementation of database systems calls for a common framework for all of them. It has become difficult to study modern database technology without such a unified approach to a diversity of results developed during the vigorous growth of the database area in recent years, let alone to teach a course on the subject.
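The relational model's core operations are small enough to sketch directly. Below, selection and projection are expressed over a relation modeled as a set of tuples (the relation and its data are made up for illustration; this is a sketch of the algebra, not code from the book):

```python
# Relational-algebra flavour in plain Python: a relation is a set of
# tuples, and queries are compositions of operators over such sets.
# The Employee relation and its rows are hypothetical.

Employee = {
    ("Ann", "Sales", 50000),
    ("Bob", "IT", 60000),
    ("Cid", "Sales", 55000),
}

def select(rel, pred):
    # sigma: keep only the tuples satisfying the predicate.
    return {t for t in rel if pred(t)}

def project(rel, *indices):
    # pi: keep only the listed columns; being a set, duplicates vanish,
    # just as in the pure relational model.
    return {tuple(t[i] for i in indices) for t in rel}

# "Names of employees in Sales" as a composition of the two operators.
sales_names = project(select(Employee, lambda t: t[1] == "Sales"), 0)
print(sorted(sales_names))  # [('Ann',), ('Cid',)]
```

SQL's SELECT/WHERE clauses correspond directly to this composition; the set semantics here is exactly the formal property that distinguishes the model from SQL's bag-of-rows behavior.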
Information Organization and Databases: Foundations of Data Organization provides recent developments of information organization technologies that have become crucial not only for data mining applications and information visualization, but also for treatment of semistructured data, spatio-temporal data and multimedia data that are not necessarily stored in conventional DBMSs. Information Organization and Databases: Foundations of Data Organization presents: semistructured data addressing XML, query languages and integrity constraints, focusing on advanced technologies for organizing web data for effective retrieval; multimedia database organization emphasizing video data organization and data structures for similarity retrieval; technologies for data mining and data warehousing; index organization and efficient query processing issues; spatial data access and indexing; organizing and retrieval of WWW and hypermedia. Information Organization and Databases: Foundations of Data Organization is a resource for database practitioners, database researchers, designers and administrators of multimedia information systems, and graduate-level students in the area of information retrieval and/or databases wishing to keep abreast of advances in the information organization technologies.
TRACK 1: Innovative Applications in the Public Sector The integration of multimedia-based applications and the information superhighway fundamentally concerns the creation of a communication technology to support the activities of people. Communication is a profoundly social activity involving interactions among groups or individuals, common standards of exchange, and national infrastructures to support telecommunications activities. The contributions of the invited speakers and others in this track begin to explore the social dimension of communication within the context of integrated information systems for the public sector. Interactions among businesses and households are described by Ralf Strauss through the development within a real community of a "wired city" with information and electronic services provided by the latest telecommunications technologies. A more specific type of interaction between teacher and student forms the basis of education. John Tiffin demonstrates how virtual classrooms can be used to augment the educational process. Carl Loeffler presents yet another perspective on interaction through the integration of A-life and agent technologies to investigate the dynamics of complex behaviors within networked simulation environments. Common standards for communication in the form of electronic documents or CSCW (Computer Supported Cooperative Work), according to Roland Traunmüller, provide enabling technologies for a paradigm shift in the management of organizations. As pointed out by William Olle, the impact of standardization work on the future of information technology depends critically upon the interoperability of software systems.
Recently, a new set of software development techniques has become available, collectively termed Aspect-Oriented Software Development (AOSD). This aims to support the modularization of systemic properties (also referred to as crosscutting concerns) and their subsequent composition with the other parts of a system. Rashid focuses on the use of Aspect-Oriented Programming (AOP) techniques to modularize otherwise broadly scoped features in database systems, such as the evolution or the versioning model, to improve their customizability, extensibility and maintainability. He shows how the use of AOP can transform the way we develop, use and maintain database systems. He also discusses how database systems can support AOP by providing a means for the storage and retrieval of aspects. "Aspect-Oriented Database Systems" shows the possible synergy between AOP and database systems, and is of particular interest to researchers, graduate students and software developers in database systems and applications.
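The idea of modularizing a crosscutting concern can be shown in miniature with a Python decorator: the "advice" (here, call tracing) is woven around database operations without touching their code. This is only an AOP-flavoured sketch with stand-in functions, not the AspectJ-style mechanisms the book actually discusses:

```python
import functools

# AOP in miniature: the crosscutting concern (tracing) lives in one place
# and is woven around operations via a decorator. The "database" function
# below is a hypothetical stand-in.

TRACE = []  # collected by the aspect, not by the business logic

def logged(fn):
    # The "aspect": before/after advice around every wrapped call.
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        TRACE.append(f"enter {fn.__name__}")
        result = fn(*args, **kwargs)
        TRACE.append(f"exit {fn.__name__}")
        return result
    return wrapper

@logged
def insert_record(record):
    # Business logic knows nothing about tracing.
    return f"stored {record}"

insert_record("row-1")
print(TRACE)  # ['enter insert_record', 'exit insert_record']
```

Without the aspect, every database operation would need its own copy of the tracing calls; that scattered duplication is precisely the "broadly scoped feature" AOP sets out to modularize.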
This volume contains a selection of papers that focus on the state-of-the-art in formal specification and verification of real-time computing systems. Preliminary versions of these papers were presented at a workshop on the foundations of real-time computing sponsored by the Office of Naval Research in October, 1990 in Washington, D.C. A companion volume by the title Foundations of Real-Time Computing: Scheduling and Resource Management complements this book by addressing many of the recently devised techniques and approaches for scheduling tasks and managing resources in real-time systems. Together, these two texts provide a comprehensive snapshot of current insights into the process of designing and building real-time computing systems on a scientific basis. The notion of real-time system has alternative interpretations, not all of which are intended usages in this collection of papers. Different communities of researchers variously use the term real-time to refer to either very fast computing, or immediate on-line data acquisition, or deadline-driven computing. This text is concerned with the formal specification and verification of computer software and systems whose correct performance is dependent on carefully orchestrated interactions with time, e.g., meeting deadlines and synchronizing with clocks. Such systems have been enabled for a rapidly increasing set of diverse end-uses by the unremitting advances in computing power per constant-dollar cost and per constant-unit-volume of space. End-use applications of real-time computers span a spectrum that includes transportation systems, robotics and manufacturing, aerospace and defense, industrial process control, and telecommunications.
The first textbook ever to cover multi-relational data mining and inductive logic programming, this book fully explores logical and relational learning. Ideal for graduate students and researchers, it also looks at statistical relational learning.
This book constitutes the refereed proceedings of the 10th IFIP TC 9 International Conference on Human Choice and Computers, HCC10 2012, held in Amsterdam, The Netherlands, in September 2012. The 37 revised full papers presented were carefully reviewed and selected for inclusion in the volume. The papers are organized in topical sections on national and international policies, sustainable and responsible innovation, ICT for peace and war, and citizens' involvement, citizens' rights and ICT.
This volume contains a selection of papers that focus on the state-of-the-art in real-time scheduling and resource management. Preliminary versions of these papers were presented at a workshop on the foundations of real-time computing sponsored by the Office of Naval Research in October, 1990 in Washington, D.C. A companion volume by the title Foundations of Real-Time Computing: Formal Specifications and Methods complements this book by addressing many of the most advanced approaches currently being investigated in the arena of formal specification and verification of real-time systems. Together, these two texts provide a comprehensive snapshot of current insights into the process of designing and building real-time computing systems on a scientific basis. Many of the papers in this book take care to define the notion of real-time system precisely, because it is often easy to misunderstand what is meant by that term. Different communities of researchers variously use the term real-time to refer to either very fast computing, or immediate on-line data acquisition, or deadline-driven computing. This text is concerned with the very difficult problems of scheduling tasks and resource management in computer systems whose performance is inextricably fused with the achievement of deadlines. Such systems have been enabled for a rapidly increasing set of diverse end-uses by the unremitting advances in computing power per constant-dollar cost and per constant-unit-volume of space. End-use applications of deadline-driven real-time computers span a spectrum that includes transportation systems, robotics and manufacturing, aerospace and defense, industrial process control, and telecommunications.
You may like...
Performance Evaluation and Benchmarking…
Raj Madhavan, Edward Tunstel, …
Hardcover
R4,201
Applications of Sliding Mode Control in…
Sundarapandian Vaidyanathan, Chang-Hua Lien
Hardcover