Information fusion is becoming a major requirement in data mining and knowledge discovery in databases. This book presents some recent fusion techniques that are currently in use in data mining, as well as data mining applications that use information fusion. The book focuses in particular on information fusion in preprocessing, model building, and information extraction, with various applications.
Information intermediation is the foundation stone of some of the most successful Internet companies, and is perhaps second only to the Internet infrastructure companies. On the heels of information integration and interoperability, this book on information brokering discusses the next step in information interoperability and integration. The emerging Internet economy based on burgeoning B2B and B2C trading will soon demand semantics-based information intermediation for its feasibility and success. B2B ventures are involved in the 'rationalization' of new vertical markets and the construction of domain-specific product catalogs. This book provides approaches for the re-use of existing vocabularies and domain ontologies as a basis for this rationalization, and provides a framework based on inter-ontology interoperation. Infrastructural trade-offs that identify optimizations in the performance and scalability of web sites will soon give way to information-based trade-offs as alternate rationalization schemes come into play and the necessity of interoperating across these schemes is realized. The intended readers of Information Brokering Across Heterogeneous Digital Data are researchers, software architects and CTOs, advanced product developers dealing with information intermediation issues in the context of e-commerce (B2B and B2C), information technology professionals in various vertical markets (e.g., geo-spatial information, medicine, automotive), and all librarians interested in information brokering.
It is over 20 years since the functional data model and functional programming languages were first introduced to the computing community. Although developed by separate research communities, recent work, presented in this book, suggests there is powerful synergy in their integration. As database technology emerges as central to yet more complex and demanding applications in areas such as bioinformatics, national security, criminal investigations, and advanced engineering, more sophisticated approaches, like those presented here, are needed. A tutorial introduction by the editors prepares the reader for the chapters that follow, written by leading researchers, including some of the early pioneers. They provide a comprehensive treatment showing how the functional approach provides for modeling, analysis, and optimization in databases, as well as data integration and interoperation in heterogeneous environments. Several chapters deal with mathematical results on the transformation of expressions, fundamental to the functional approach. The book also shows how the approach relates to the Internet and to current work on semistructured data, XML, and RDF. It presents a comprehensive view of the functional approach to data management, bringing together important material hitherto widely scattered, some new research, and a comprehensive set of references. It will serve as a valuable resource for researchers, faculty, and graduate students, as well as those in industry responsible for new systems development.
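The "transformation of expressions" mentioned above can be pictured with a small sketch: in the functional view, a query is a composition of pure functions over collections, and optimization is algebraic rewriting, for instance fusing a map with a filter into a single traversal. The sketch below is illustrative only, with invented data; it is not drawn from the book.

```python
# A minimal sketch of the functional view of queries: queries are
# compositions of pure functions, and optimization is expression rewriting.
# Illustrative only; names and data are hypothetical, not from the book.

employees = [
    {"name": "Ada", "dept": "R&D", "salary": 95000},
    {"name": "Grace", "dept": "R&D", "salary": 105000},
    {"name": "Alan", "dept": "Ops", "salary": 80000},
]

# Naive query: two passes over the data.
def query_naive(rows):
    in_rnd = filter(lambda r: r["dept"] == "R&D", rows)
    return list(map(lambda r: r["name"], in_rnd))

# Fused form produced by the rewrite rule
#   map(f) . filter(p)  ==  one traversal applying p then f
# -- same result, a single pass over the data.
def query_fused(rows):
    return [r["name"] for r in rows if r["dept"] == "R&D"]

assert query_naive(employees) == query_fused(employees) == ["Ada", "Grace"]
```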
This book proposes representations of multicast rate regions in wireless networks based on the mathematical concept of submodular functions, e.g., the submodular cut model and the polymatroid broadcast model. These models subsume and generalize the graph and hypergraph models. The submodular structure facilitates a dual decomposition approach to network utility maximization problems, which exploits the greedy algorithm for linear programming on submodular polyhedra. This approach yields computationally efficient characterizations of inner and outer bounds on the multicast capacity regions for various classes of wireless networks.
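The greedy algorithm referred to above is Edmonds' classical procedure for linear programming over a submodular polyhedron: visit elements in order of decreasing weight and assign each its marginal value under the set function. A minimal sketch follows, assuming the submodular function is given as a Python callable on frozensets; the function and weights shown are illustrative placeholders, and the wireless rate-region models themselves are beyond a short example.

```python
# Edmonds' greedy algorithm for maximizing a linear objective w.x over the
# polymatroid P(f) = {x >= 0 : x(S) <= f(S) for all S}, with f submodular
# and f(empty set) = 0. Sketch only; f and w are illustrative placeholders.

def greedy_polymatroid(elements, f, w):
    """Return the optimal vertex x of P(f) for nonnegative weights w."""
    order = sorted(elements, key=lambda e: w[e], reverse=True)
    x, prefix = {}, frozenset()
    for e in order:
        nxt = prefix | {e}
        x[e] = f(nxt) - f(prefix)   # marginal value of e given the prefix
        prefix = nxt
    return x

# Example: f(S) = min(|S|, 2) is submodular (a uniform matroid rank).
f = lambda S: min(len(S), 2)
w = {"a": 3.0, "b": 2.0, "c": 1.0}
print(greedy_polymatroid({"a", "b", "c"}, f, w))  # {'a': 1, 'b': 1, 'c': 0}
```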
This book presents important applications of soft computing and fuzziness to the growing field of web planning. It demonstrates a new method of using fuzzy numbers to model uncertain probabilities, shows how these can be used to model a fuzzy queuing system, and presents a method of modeling fuzzy queuing systems that employs fuzzy arrival rates and fuzzy service rates. All the computations needed to obtain the fuzzy numbers for system performance are described, starting from the one-server case and working up to more than three servers. A variety of optimization models are discussed, with applications to average response times, server utilization, server and queue costs, and phenomena identified with web sites such as "burstiness" and "long-tailed distributions".
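To make the flavour of these computations concrete: for a crisp single-server (M/M/1) queue, the mean response time is W = 1/(mu - lambda), which increases in the arrival rate and decreases in the service rate, so each alpha-cut of fuzzy rates maps directly to an interval for W. The sketch below is a generic alpha-cut/interval-arithmetic illustration with invented rates, not the book's own procedure.

```python
# Alpha-cut sketch of a fuzzy M/M/1 queue. For crisp rates the mean
# response time is W = 1/(mu - lambda); W is increasing in lambda and
# decreasing in mu, so interval endpoints map directly. Illustrative only.

def tri_alpha_cut(lo, mode, hi, alpha):
    """Alpha-cut [a, b] of a triangular fuzzy number (lo, mode, hi)."""
    return (lo + alpha * (mode - lo), hi - alpha * (hi - mode))

def response_time_cut(lam_cut, mu_cut):
    lam_lo, lam_hi = lam_cut
    mu_lo, mu_hi = mu_cut
    if mu_lo <= lam_hi:
        raise ValueError("unstable: need mu > lambda on the whole cut")
    return (1.0 / (mu_hi - lam_lo),   # smallest W: light load, fast server
            1.0 / (mu_lo - lam_hi))   # largest W: heavy load, slow server

# Fuzzy arrival rate ~ "about 3 per sec", fuzzy service rate ~ "about 5".
for alpha in (0.0, 0.5, 1.0):
    lam = tri_alpha_cut(2.5, 3.0, 3.5, alpha)
    mu = tri_alpha_cut(4.5, 5.0, 5.5, alpha)
    print(alpha, response_time_cut(lam, mu))
```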
Parallel and Distributed Information Systems brings together in one place important contributions and up-to-date research results in this fast-moving area, and serves as an excellent reference, providing insight into some of the most challenging research issues in the field.
What will business software look like in the future? And how will it be developed? This book presents the proceedings of the first international conference on Future Business Software, a new think tank discussing the trends in enterprise software, with speakers from Europe's most successful software companies and leading research institutions. The articles focus on two of the most prominent trends in the field: emergent software and agile development processes. "Emergent Software" is a new paradigm of software development that addresses the highly complex requirements of tomorrow's business software and aims at dynamically and flexibly combining a business software solution's different components in order to fulfill customers' needs with a minimum of effort. Agile development processes are the response of software technology to the implementation of diverse and rapidly changing software requirements. A major focus is on the minimization of project risks, e.g. through short, iterative development cycles, test-driven development, and an intensive culture of communication.
This informative text/reference presents a detailed review of the state of the art in industrial sensor and control networks. The book examines a broad range of applications, along with their design objectives and technical challenges. The coverage includes fieldbus technologies, wireless communication technologies, network architectures, and resource management and optimization for industrial networks. Discussions are also provided on industrial communication standards for both wired and wireless technologies, as well as for the Industrial Internet of Things (IIoT). Topics and features: describes the FlexRay, CAN, and Modbus fieldbus protocols for industrial control networks, as well as the MIL-STD-1553 standard; proposes a dual fieldbus approach, incorporating both CAN and Modbus fieldbus technologies, for a ship engine distributed control system; reviews a range of industrial wireless sensor network (IWSN) applications, from environmental sensing and condition monitoring to process automation; examines the wireless networking performance, design requirements, and technical limitations of IWSN applications; presents a survey of IWSN commercial solutions and service providers, and summarizes the emerging trends in this area; discusses the latest technologies and open challenges in realizing the vision of the IIoT, highlighting various applications of the IIoT in industrial domains; introduces a logistics paradigm for adopting IIoT technology on the Physical Internet. This unique work will be of great value to all researchers involved in industrial sensor and control networks, wireless networking, and the Internet of Things.
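As a concrete taste of the fieldbus material: Modbus RTU frames carry a CRC-16 checksum (reflected polynomial 0xA001) appended low byte first. The sketch below is independent of the book, and the example frame fields are illustrative.

```python
# Minimal sketch of the Modbus RTU CRC-16 (reflected polynomial 0xA001).
# The two CRC bytes are appended to the frame low byte first.

def modbus_crc16(frame: bytes) -> int:
    crc = 0xFFFF
    for byte in frame:
        crc ^= byte
        for _ in range(8):
            if crc & 1:
                crc = (crc >> 1) ^ 0xA001
            else:
                crc >>= 1
    return crc

# Example: read holding registers (function 0x03) from unit 1, starting
# at address 0x0000, quantity 2. Field values are illustrative.
pdu = bytes([0x01, 0x03, 0x00, 0x00, 0x00, 0x02])
crc = modbus_crc16(pdu)
adu = pdu + bytes([crc & 0xFF, crc >> 8])  # low byte first on the wire
print(adu.hex())
```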
This book comprises the refereed papers, together with the invited keynote papers, presented at the Second International Conference on Enterprise Information Systems. The conference was organised by the School of Computing at Staffordshire University, UK, and the Escola Superior de Tecnologia of Setubal, Portugal, in cooperation with the British Computer Society and the International Federation for Information Processing, Working Group 8.1. The purpose of this Second International Conference was to bring together researchers, engineers, and practitioners interested in the advances in and business applications of information systems. The papers demonstrate the vitality and vibrancy of the field of Enterprise Information Systems. The research papers included here were selected from among 143 submissions from 32 countries in the following four areas: Enterprise Database Applications, Artificial Intelligence Applications and Decision Support Systems, Systems Analysis and Specification, and Internet and Electronic Commerce. Every paper had at least two reviewers drawn from 10 countries. The papers included in this book were recommended by the reviewers. On behalf of the conference organising committee we would like to thank all the members of the Programme Committee for their work in reviewing and selecting the papers that appear in this volume. We would also like to thank all the authors who have submitted their papers to this conference, apologise to those authors whose papers we were unable to include, and wish them success next year.
This book celebrates Michael Stonebraker's accomplishments that led to his 2014 ACM A.M. Turing Award "for fundamental contributions to the concepts and practices underlying modern database systems." The book describes, for the broad computing community, the unique nature, significance, and impact of Mike's achievements in advancing modern database systems over more than forty years. Today, data is considered the world's most valuable resource, whether it is in the tens of millions of databases used to manage the world's businesses and governments, in the billions of databases in our smartphones and watches, or residing elsewhere, as yet unmanaged, awaiting the elusive next generation of database systems. Every one of those millions or billions of databases includes features that are celebrated by the 2014 Turing Award and are described in this book. Why should I care about databases? What is a database? What is data management? What is a database management system (DBMS)? These are just some of the questions that this book answers, in describing the development of data management through the achievements of Mike Stonebraker and his over 200 collaborators. In reading the stories in this book, you will discover core data management concepts that were developed over the two greatest eras (so far) of data management technology. The book is a collection of 36 stories written by Mike and 38 of his collaborators: 23 world-leading database researchers, 11 world-class systems engineers, and 4 business partners. If you are an aspiring researcher, engineer, or entrepreneur, you might read these stories to find the turning points in these careers, as practice for tilting at your own computer-science windmills and as a spur to your next step of innovation and achievement.
Recent years have seen an explosive growth in the use of new database applications such as CAD/CAM systems, spatial information systems, and multimedia information systems. The needs of these applications are far more complex than traditional business applications. They call for support of objects with complex data types, such as images and spatial objects, and for support of objects with wildly varying numbers of index terms, such as documents. Traditional indexing techniques such as the B-tree and its variants do not efficiently support these applications, and so new indexing mechanisms have been developed. As a result of the demand for database support for new applications, there has been a proliferation of new indexing techniques. The need for a book addressing indexing problems in advanced applications is evident. For practitioners and database and application developers, this book explains best practice, guiding the selection of appropriate indexes for each application. For researchers, this book provides a foundation for the development of new and more robust indexes. For newcomers, this book is an overview of the wide range of advanced indexing techniques. Indexing Techniques for Advanced Database Systems is suitable as a secondary text for a graduate level course on indexing techniques, and as a reference for researchers and practitioners in industry.
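To illustrate why one-dimensional B-tree ordering falls short for spatial data and what a spatial index does instead, here is a toy grid-style index: points hash into fixed-size cells, and a rectangle query visits only the overlapping cells. This is a generic illustration with invented points, not a technique taken from the book.

```python
# Toy grid index for 2D points: a B-tree orders keys along one axis only,
# whereas a spatial index prunes in both dimensions at once. Sketch only.
from collections import defaultdict
from math import floor

class GridIndex:
    def __init__(self, cell=10.0):
        self.cell = cell
        self.cells = defaultdict(list)  # (i, j) -> list of points

    def _key(self, x, y):
        return (floor(x / self.cell), floor(y / self.cell))

    def insert(self, x, y):
        self.cells[self._key(x, y)].append((x, y))

    def range_query(self, xmin, ymin, xmax, ymax):
        i0, j0 = self._key(xmin, ymin)
        i1, j1 = self._key(xmax, ymax)
        hits = []
        for i in range(i0, i1 + 1):          # visit only overlapping cells
            for j in range(j0, j1 + 1):
                for (x, y) in self.cells[(i, j)]:
                    if xmin <= x <= xmax and ymin <= y <= ymax:
                        hits.append((x, y))
        return hits

idx = GridIndex(cell=10.0)
for p in [(3, 4), (15, 22), (41, 7), (16, 25)]:
    idx.insert(*p)
print(idx.range_query(10, 20, 20, 30))  # [(15, 22), (16, 25)]
```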
Semistructured Database Design provides an essential reference for anyone interested in the effective management of semistructured data. Since many new and advanced web applications consume a huge amount of such data, there is a growing need to properly design efficient databases. This volume responds to that need by describing a semantically rich data model for semistructured data, called the Object-Relationship-Attribute model for Semistructured data (ORA-SS). Focusing on this new model, the book discusses problems and presents solutions for a number of topics, including schema extraction, the design of non-redundant storage organizations for semistructured data, and physical semistructured database design, among others. Semistructured Database Design presents researchers and professionals with the most complete and up-to-date research in this fast-growing field.
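The schema-extraction idea can be sketched in miniature: given a set of semistructured records, infer which attributes are required (present in every record) and which are optional. The records and attribute names below are hypothetical, and the ORA-SS model captures far richer structure than this.

```python
# Minimal sketch of schema extraction from semistructured records:
# attributes present in every record are required, the rest optional.
# Record contents are hypothetical; ORA-SS captures far more structure.

def extract_schema(records):
    all_attrs = set().union(*(r.keys() for r in records))
    required = set.intersection(*(set(r.keys()) for r in records))
    return {"required": sorted(required),
            "optional": sorted(all_attrs - required)}

books = [
    {"title": "A", "isbn": "111", "editor": "X"},
    {"title": "B", "isbn": "222"},
    {"title": "C", "isbn": "333", "review": "good"},
]
print(extract_schema(books))
# {'required': ['isbn', 'title'], 'optional': ['editor', 'review']}
```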
Advances in microelectronic technology have made massively parallel computing a reality and triggered an outburst of research activity in parallel processing architectures and algorithms. Distributed memory multiprocessors - parallel computers that consist of microprocessors connected in a regular topology - are increasingly being used to solve large problems in many application areas. In order to use these computers for a specific application, existing algorithms need to be restructured for the architecture and new algorithms developed. The performance of a computation on a distributed memory multiprocessor is affected by the node and communication architecture, the interconnection network topology, the I/O subsystem, and the parallel algorithm and communication protocols. Each of these parameters poses a complex problem in itself, and solutions require an understanding of the interactions among them. This book is based on the papers presented at the NATO Advanced Study Institute held at Bilkent University, Turkey, in July 1991. The book is organized in five parts: Parallel computing structures and communication, Parallel numerical algorithms, Parallel programming, Fault tolerance, and Applications and algorithms.
Today we are witnessing an exponential growth of information accumulated within universities, corporations, and government organizations. Autonomous repositories that store different types of digital data in multiple formats are becoming available for use on the fast-evolving global information systems infrastructure. More concretely, with the World Wide Web and related internetworking technologies, there has been an explosion in the types, availability, and volume of data accessible to a global information system. However, this information overload makes it nearly impossible for users to be aware of the locations, organization or structures, query languages, and semantics of the information in various repositories. Available browsing and navigation tools assist users in locating information resources on the Internet. However, there is a real need to complement current browsing and keyword-based techniques with concept-based approaches. An important next step should be to support queries that do not contain information describing location or manipulation of relevant resources. Ontology-Based Query Processing for Global Information Systems describes an initiative for enhancing query processing in a global information system. The following are some of the relevant features: Providing semantic descriptions of data repositories using ontologies; Dealing with different vocabularies so that users are not forced to use a common one; Defining a strategy that permits the incremental enrichment of answers by visiting new ontologies; Managing imprecise answers and estimations of the incurred loss of information. In summary, technologies such as information brokerage, domain ontologies, and estimation of imprecision in answers based on vocabulary heterogeneity have been synthesized with Internet computing, representing an advance in developing semantics-based information access on the Web. Theoretical results are complemented by the presentation of a prototype that implements the main ideas presented in this book. Ontology-Based Query Processing for Global Information Systems is suitable as a secondary text for a graduate-level course, and as a reference for researchers and practitioners in industry.
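One of the listed features, letting users keep their own vocabulary, can be pictured as synonym-based term translation between a user's ontology and a repository's vocabulary, with untranslatable terms reported as a crude proxy for loss of information. All term mappings below are invented for illustration and are not the book's actual method.

```python
# Sketch of vocabulary mediation: translate a user's query terms into a
# repository's vocabulary via inter-ontology synonym mappings, and report
# terms that cannot be translated (a crude stand-in for estimating "loss
# of information"). All mappings here are invented for illustration.

SYNONYMS = {
    "car": "automobile",
    "physician": "doctor",
    "aerial photo": "orthoimage",
}

def translate_query(user_terms, mapping=SYNONYMS):
    translated, untranslated = [], []
    for term in user_terms:
        if term in mapping:
            translated.append(mapping[term])
        else:
            untranslated.append(term)
    loss = len(untranslated) / len(user_terms) if user_terms else 0.0
    return translated, untranslated, loss

terms, missing, loss = translate_query(["car", "aerial photo", "hangar"])
print(terms)    # ['automobile', 'orthoimage']
print(missing)  # ['hangar'] -> the answer will be imprecise
print(f"estimated loss: {loss:.0%}")  # 33%
```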
Real-Time Systems Engineering and Applications is a well-structured collection of chapters pertaining to present and future developments in real-time systems engineering. After an overview of real-time processing, theoretical foundations are presented. The book then introduces useful modeling concepts and tools. This is followed by concentration on the more practical aspects of real-time engineering with a thorough overview of the present state of the art, both in hardware and software, including related concepts in robotics. Examples are given of novel real-time applications which illustrate the present state of the art. The book concludes with a focus on future developments, giving direction for new research activities and an educational curriculum covering the subject. This book can be used as a source for academic and industrial researchers as well as a textbook for computing and engineering courses covering the topic of real-time systems engineering.
This book provides a comprehensive set of optimization and prediction techniques for an enterprise information system. Readers with a background in operations research, system engineering, statistics, or data analytics can use this book as a reference to derive insight from data and use this knowledge as guidance for production management. The authors identify the key challenges in enterprise information management and present results that have emerged from leading-edge research in this domain. Coverage includes topics ranging from task scheduling and resource allocation, to workflow optimization, process time and status prediction, order admission policies optimization, and enterprise service-level performance analysis and prediction. With its emphasis on the above topics, this book provides an in-depth look at enterprise information management solutions that are needed for greater automation and reconfigurability-based fault tolerance, as well as to obtain data-driven recommendations for effective decision-making.
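As one concrete instance of the task-scheduling topic: on a single resource, sequencing jobs by shortest processing time (SPT) minimizes mean completion time, a classical scheduling result. The sketch below is a generic illustration with hypothetical job data, not the authors' method.

```python
# Shortest-processing-time-first on a single machine minimizes the mean
# completion (flow) time -- a classical scheduling result, shown here only
# to illustrate the topic; the job data are hypothetical.

def mean_completion_time(processing_times):
    t, total = 0, 0
    for p in processing_times:
        t += p          # job finishes at the running sum of times
        total += t
    return total / len(processing_times)

jobs = [7, 2, 5, 1]
fifo = mean_completion_time(jobs)          # arbitrary arrival order
spt = mean_completion_time(sorted(jobs))   # SPT order
print(f"FIFO: {fifo:.2f}, SPT: {spt:.2f}")  # SPT is never worse
```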
This is the first book on brain-computer interfaces (BCI) that aims to explain how BCIs can be used for artistic goals. Devices that measure changes in activity in various regions of the brain are available, and they make it possible to investigate how brain activity is related to experiencing and creating art. Brain activity can also be monitored in order to find out about the affective state of a performer or bystander and use this knowledge to create or adapt an interactive multi-sensorial (audio, visual, tactile) piece of art. Making use of the measured affective state is just one of the possible ways to use BCI for artistic expression. We can also stimulate brain activity. It can be evoked externally by exposing the brain to external events, whether visual, auditory, or tactile. Knowing about the stimuli and their effect on the brain makes it possible to translate such external stimuli into decisions and commands that help to design, implement, or adapt an artistic performance or interactive installation. Stimulating brain activity can also be done internally: brain activity can be voluntarily manipulated, and changes can be translated into computer commands to realize an artistic vision. The chapters in this book have been written by researchers in human-computer interaction, brain-computer interaction, neuroscience, psychology, and the social sciences, often in cooperation with artists using BCI in their work. It is the perfect book for those seeking to learn about brain-computer interfaces used for artistic applications.
This book takes a unique approach to information retrieval by laying down the foundations for a modern algebra of information retrieval based on lattice theory. All major retrieval methods developed so far are described in detail: Boolean, Vector Space, and probabilistic methods, but also Web retrieval algorithms like PageRank, HITS, and SALSA. The author shows that they can all be treated elegantly in a unified formal way, using lattice theory as the one basic concept. Further, he also demonstrates that the lattice-based approach to information retrieval allows us to formulate new retrieval methods. Sándor Dominich's presentation is characterized by an engineering-like approach, describing all methods and technologies with as much mathematics as needed for clarity and exactness. His readers in both computer science and mathematics will learn how one single concept can be used to understand the most important retrieval methods, to propose new ones, and also to gain new insights into retrieval modeling in general. Thus, his book is appropriate for researchers and graduate students, who will additionally benefit from the many exercises at the end of each chapter.
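Of the Web methods listed, PageRank is the easiest to show in miniature: repeated multiplication of the rank vector by a damped link matrix until it stabilizes. The sketch below is the standard power-iteration formulation on an invented toy graph, not the book's lattice-theoretic treatment.

```python
# Minimal PageRank by power iteration: r = d * M r + (1 - d)/n, iterated
# until convergence. Standard formulation; the toy graph is illustrative.

def pagerank(links, d=0.85, tol=1e-10):
    """links: dict mapping each node to the list of nodes it points to."""
    nodes = sorted(links)
    n = len(nodes)
    rank = {u: 1.0 / n for u in nodes}
    while True:
        new = {u: (1.0 - d) / n for u in nodes}
        for u in nodes:
            out = links[u]
            if out:
                share = d * rank[u] / len(out)
                for v in out:
                    new[v] += share
            else:  # dangling node: spread its rank uniformly
                for v in nodes:
                    new[v] += d * rank[u] / n
        if max(abs(new[u] - rank[u]) for u in nodes) < tol:
            return new
        rank = new

toy_web = {"a": ["b", "c"], "b": ["c"], "c": ["a"], "d": ["c"]}
print(pagerank(toy_web))  # 'c' ends up with the highest rank
```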
As the use of computerized information continues to proliferate, so does the need for a writing method suited to this new medium. In "Writing for the Computer Screen," Hillary Goodall and Susan Smith Reilly call attention to new forms of information display unique to computers. The authors draw upon years of professional experience in business and education to present practical computer display techniques. This book examines the shortfalls of using established forms of writing for the computer where information needed in a hurry can be buried in a cluttered screen. Such problems can be minimized if screen design is guided by the characteristics of the medium.
In two parts, the book focuses on materials science developments in the areas of (1) Materials Data and Informatics: materials data quality and infrastructure; materials databases; and materials data mining, image analysis, data-driven materials discovery, and data visualization; and (2) Materials for Tomorrow's Energy Infrastructure: pipeline, transport, and storage materials for future fuels (biofuels, hydrogen, natural gas, ethanol, etc.), and materials for renewable energy technologies. This book presents selected contributions of exceptional young postdoctoral scientists to the 4th WMRIF Workshop for Young Scientists, hosted by the National Institute of Standards and Technology at the NIST site in Boulder, Colorado, USA, September 8 to 10, 2014.
Metadata play a fundamental role in both digital libraries (DLs) and spatial data infrastructures (SDIs). Commonly defined as "structured data about data" or "data which describe attributes of a resource" or, more simply, "information about data," metadata are an essential requirement for locating and evaluating available data. Therefore, this book focuses on the study of different metadata aspects that contribute to a more efficient use of DLs and SDIs. The three main issues addressed are: the management of nested collections of resources, the interoperability between metadata schemas, and the integration of information retrieval techniques into the discovery services of geographic data catalogs (thereby helping to overcome metadata content heterogeneity).
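The schema-interoperability issue can be pictured as a crosswalk: a declarative field-to-field mapping from one metadata schema to another, with unmapped fields surfacing the heterogeneity. The field names below are a hypothetical Dublin-Core-flavoured example, not the book's actual schemas.

```python
# Sketch of a metadata crosswalk between two schemas: a declarative
# field-to-field mapping applied to each record. Field names are a
# hypothetical Dublin-Core-flavoured example, not the book's schemas.

CROSSWALK = {          # source field -> target (Dublin Core-ish) field
    "map_title": "title",
    "author": "creator",
    "pub_year": "date",
    "region": "coverage",
}

def convert(record, crosswalk=CROSSWALK):
    out, unmapped = {}, {}
    for field, value in record.items():
        if field in crosswalk:
            out[crosswalk[field]] = value
        else:
            unmapped[field] = value   # leftover fields flag heterogeneity
    return out, unmapped

rec = {"map_title": "Soil map (example)", "author": "Example Agency",
       "pub_year": "1998", "scale": "1:50000"}
print(convert(rec))  # 'scale' has no target field and is reported as such
```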
Modern enterprises rely on database management systems (DBMS) to collect, store, and manage corporate data, which is considered a strategic corporate resource. Recently, with the proliferation of personal computers and departmental computing, the trend has been towards the decentralization and distribution of the computing infrastructure, with autonomy and responsibility for data now residing at the departmental and workgroup level of the organization. Users want their data delivered to their desktops, allowing them to incorporate data into their personal databases, spreadsheets, word processing documents, and, most importantly, into their daily tasks and activities. They want to be able to share their information while retaining control over its access and distribution. There are also pressures from corporate leaders who wish to use information technology as a strategic resource in offering specialized value-added services to customers. Database technology is being used to manage the data associated with corporate processes and activities. Increasingly, the data being managed are not simply formatted tables in relational databases, but all types of objects, including unstructured text, images, audio, and video. Thus, database management providers are being asked to extend the capabilities of DBMS to include object-relational models as well as full object-oriented database management systems.
Organizing websites is highly dynamic and often chaotic, so it is crucial that web servers be able to manipulate URLs in order to cope with temporarily or permanently relocated resources, prevent attacks by automated worms, and control resource access. The Apache mod_rewrite module has long inspired fits of joy because it offers an unparalleled toolset for manipulating URLs. "The Definitive Guide to Apache mod_rewrite" guides you through the configuration and use of the module for a variety of purposes, including basic and conditional rewrites, access control, virtual host maintenance, and proxies. The book was authored by Rich Bowen, noted Apache expert and Apache Software Foundation member, and draws on his years of experience administering the Apache server and speaking and writing about it regularly.
Modern applications are both data and computationally intensive and require the storage and manipulation of voluminous traditional (alphanumeric) and nontraditional data sets (images, text, geometric objects, time-series). Examples of such emerging application domains are: Geographical Information Systems (GIS), Multimedia Information Systems, CAD/CAM, Time-Series Analysis, Medical Information Systems, On-Line Analytical Processing (OLAP), and Data Mining. These applications pose diverse requirements with respect to the information and the operations that need to be supported. From the database perspective, new techniques and tools therefore need to be developed to increase processing efficiency. This monograph explores the way spatial database management systems aim at supporting queries that involve the space characteristics of the underlying data, and discusses query processing techniques for nearest neighbor queries. It provides both basic concepts and state-of-the-art results in spatial databases and parallel processing research, and studies numerous applications of nearest neighbor queries.
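As a baseline for the nearest-neighbor queries the monograph studies: a linear-scan k-nearest-neighbor query over points, which every indexed method (such as R-tree branch-and-bound) exists to beat. This is a generic sketch with invented points, not drawn from the book.

```python
# Baseline k-nearest-neighbor query by linear scan; spatial access methods
# such as R-tree branch-and-bound avoid exactly this full scan. The points
# and the query location are illustrative.
import heapq

def knn(points, q, k):
    """Return the k points closest to q by squared Euclidean distance."""
    qx, qy = q
    return heapq.nsmallest(
        k, points, key=lambda p: (p[0] - qx) ** 2 + (p[1] - qy) ** 2)

pts = [(1, 1), (4, 4), (2, 3), (9, 0), (5, 5)]
print(knn(pts, q=(3, 3), k=2))  # [(2, 3), (4, 4)]
```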
Mining Spatio-Temporal Information Systems, an edited volume composed of chapters from leading experts in the field of spatio-temporal information systems (STIS), addresses the many issues involved in modeling, creating, querying, visualizing, and mining such systems. It is intended to bring together a coherent body of recent knowledge relating to STIS data modeling, design, and implementation, and to the use of STIS in knowledge discovery. In particular, the reader is exposed to the latest techniques for the practical design of STIS, essential for complex query processing.