Today we are witnessing an exponential growth of information accumulated within universities, corporations, and government organizations. Autonomous repositories that store different types of digital data in multiple formats are becoming available for use on the fast-evolving global information systems infrastructure. More concretely, with the World Wide Web and related internetworking technologies, there has been an explosion in the types, availability, and volume of data accessible to a global information system. However, this information overload makes it nearly impossible for users to be aware of the locations, organization or structure, query languages, and semantics of the information in various repositories. Available browsing and navigation tools assist users in locating information resources on the Internet. However, there is a real need to complement current browsing and keyword-based techniques with concept-based approaches. An important next step is to support queries that do not have to describe the location or manipulation of the relevant resources. Ontology-Based Query Processing for Global Information Systems describes an initiative for enhancing query processing in a global information system. Its relevant features include: providing semantic descriptions of data repositories using ontologies; dealing with different vocabularies, so that users are not forced to use a common one; defining a strategy that permits the incremental enrichment of answers by visiting new ontologies; and managing imprecise answers and estimations of the incurred loss of information. In summary, technologies such as information brokerage, domain ontologies, and estimation of imprecision in answers based on vocabulary heterogeneity have been synthesized with Internet computing, representing an advance in developing semantics-based information access on the Web. Theoretical results are complemented by the presentation of a prototype that implements the main ideas presented in this book. Ontology-Based Query Processing for Global Information Systems is suitable as a secondary text for a graduate-level course, and as a reference for researchers and practitioners in industry.
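To make the vocabulary-translation and loss-estimation ideas concrete, here is a minimal, hypothetical sketch (the ontology names, mappings, and loss figures are invented for illustration and are not taken from the book): a query's terms are translated into a target ontology's vocabulary, and each inexact translation contributes to an estimated loss of information.

```python
# Hypothetical sketch: inter-ontology term translation with loss estimates.
# Ontology names, mappings, and loss figures are invented for illustration.

# Each entry maps a source term to (target term, estimated loss in [0, 1]).
MAPPINGS = {
    ("univ-ont", "lib-ont"): {
        "journal-article": ("periodical-item", 0.15),  # broader term: precision lost
        "author": ("creator", 0.0),                    # exact synonym: lossless
    },
}

def translate_query(terms, source, target):
    """Translate query terms between ontologies; combine per-term losses."""
    mapping = MAPPINGS.get((source, target), {})
    translated, kept = [], 1.0
    for t in terms:
        new_term, loss = mapping.get(t, (None, 1.0))  # unmapped term: total loss
        if new_term is not None:
            translated.append(new_term)
        kept *= 1.0 - loss  # assume per-term losses combine independently
    return translated, 1.0 - kept

query, loss = translate_query(["journal-article", "author"], "univ-ont", "lib-ont")
print(query, f"estimated loss: {loss:.2f}")  # ['periodical-item', 'creator'] 0.15
```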
In recent years, as part of the increasing "informationization" of industry and the economy, enterprises have been accumulating vast amounts of detailed data such as high-frequency transaction data in financial markets and point-of-sale information on individual items in the retail sector. Similarly, vast amounts of data are now available on business networks based on inter-firm transactions and shareholdings. In the past, these types of information were studied only by economists and management scholars. More recently, however, researchers from other fields, such as physics, mathematics, and information sciences, have become interested in this kind of data and, based on novel empirical approaches to searching for regularities and "laws" akin to those in the natural sciences, have produced intriguing results. This book is the proceedings of the international conference THICCAPFA7 that was titled "New Approaches to the Analysis of Large-Scale Business and Economic Data," held in Tokyo, March 1-5, 2009. The letters THIC denote the Tokyo Tech (Tokyo Institute of Technology)-Hitotsubashi Interdisciplinary Conference. The conference series, titled APFA (Applications of Physics in Financial Analysis), focuses on the analysis of large-scale economic data. It has traditionally brought physicists and economists together to exchange viewpoints and experience (APFA1 in Dublin 1999, APFA2 in Liège 2000, APFA3 in London 2001, APFA4 in Warsaw 2003, APFA5 in Torino 2006, and APFA6 in Lisbon 2007). The aim of the conference is to establish fundamental analytical techniques and data collection methods, taking into account the results from a variety of academic disciplines.
This book takes a unique approach to information retrieval by laying down the foundations for a modern algebra of information retrieval based on lattice theory. All major retrieval methods developed so far are described in detail: Boolean, vector space, and probabilistic methods, but also Web retrieval algorithms like PageRank, HITS, and SALSA. The author shows that they can all be treated elegantly in a unified formal way, using lattice theory as the one basic concept. Further, he also demonstrates that the lattice-based approach to information retrieval allows us to formulate new retrieval methods. Sándor Dominich's presentation is characterized by an engineering-like approach, describing all methods and technologies with as much mathematics as needed for clarity and exactness. His readers in both computer science and mathematics will learn how one single concept can be used to understand the most important retrieval methods, to propose new ones, and also to gain new insights into retrieval modeling in general. Thus, his book is appropriate for researchers and graduate students, who will additionally benefit from the many exercises at the end of each chapter.
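Since the blurb names PageRank among the Web retrieval algorithms covered, a minimal power-iteration sketch may help fix ideas (the toy graph and damping factor are illustrative; the book's lattice-theoretic treatment is, of course, more general):

```python
# Minimal PageRank via power iteration on a toy link graph (illustrative only).
def pagerank(links, d=0.85, iters=50):
    """links: dict mapping each node to the list of nodes it links to."""
    nodes = list(links)
    n = len(nodes)
    rank = {u: 1.0 / n for u in nodes}
    for _ in range(iters):
        new = {u: (1.0 - d) / n for u in nodes}   # teleportation mass
        for u, outs in links.items():
            if outs:
                share = rank[u] / len(outs)        # split rank over out-links
                for v in outs:
                    new[v] += d * share
            else:                                  # dangling node: spread uniformly
                for v in nodes:
                    new[v] += d * rank[u] / n
        rank = new
    return rank

toy = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}
print(pagerank(toy))  # "c", linked by both others, accumulates the highest rank
```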
Advances in microelectronic technology have made massively parallel computing a reality and triggered an outburst of research activity in parallel processing architectures and algorithms. Distributed memory multiprocessors - parallel computers that consist of microprocessors connected in a regular topology - are increasingly being used to solve large problems in many application areas. In order to use these computers for a specific application, existing algorithms need to be restructured for the architecture and new algorithms developed. The performance of a computation on a distributed memory multiprocessor is affected by the node and communication architecture, the interconnection network topology, the I/O subsystem, and the parallel algorithm and communication protocols. Each of these parameters is a complex problem, and solutions require an understanding of the interactions among them. This book is based on the papers presented at the NATO Advanced Study Institute held at Bilkent University, Turkey, in July 1991. The book is organized in five parts: Parallel computing structures and communication, Parallel numerical algorithms, Parallel programming, Fault tolerance, and Applications and algorithms.
Semistructured Database Design provides an essential reference for anyone interested in the effective management of semistructured data. Since many new and advanced web applications consume a huge amount of such data, there is a growing need to properly design efficient databases. This volume responds to that need by describing a semantically rich data model for semistructured data, called the Object-Relationship-Attribute model for Semistructured data (ORA-SS). Focusing on this new model, the book discusses problems and presents solutions for a number of topics, including schema extraction, the design of non-redundant storage organizations for semistructured data, and physical semistructured database design, among others. Semistructured Database Design presents researchers and professionals with the most complete and up-to-date research in this fast-growing field.
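As a rough illustration of what "non-redundant storage organization" means for semistructured data, consider this invented supplier/part example (it is not taken from the book, and ORA-SS itself is a far richer model): nesting parts under suppliers duplicates part attributes, while factoring objects out and keeping only relationship attributes in the nesting avoids the redundancy.

```python
# Illustrative only: redundancy in a nested (semistructured) design.
# The supplier/part example is invented, not taken from the ORA-SS book.

# Nested design: part attributes (here, part names) are repeated under every
# supplier that stocks the part.
nested = [
    {"supplier": "s1", "parts": [{"no": "p1", "name": "bolt", "price": 5},
                                 {"no": "p2", "name": "nut",  "price": 3}]},
    {"supplier": "s2", "parts": [{"no": "p1", "name": "bolt", "price": 6}]},
]

# Non-redundant design: store each part object once; keep only the
# relationship attribute (price depends on supplier AND part) in the nesting.
parts = {"p1": {"name": "bolt"}, "p2": {"name": "nut"}}
supplies = [{"supplier": "s1", "part": "p1", "price": 5},
            {"supplier": "s1", "part": "p2", "price": 3},
            {"supplier": "s2", "part": "p1", "price": 6}]

def parts_of(supplier):
    """Reconstruct the nested view on demand from the factored design."""
    return [{"no": s["part"], **parts[s["part"]], "price": s["price"]}
            for s in supplies if s["supplier"] == supplier]

print(parts_of("s2"))  # [{'no': 'p1', 'name': 'bolt', 'price': 6}]
```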
Actuarial Principles: Lifetables and Mortality Models explores the core of actuarial science: the study of mortality and other risks, and their applications. Covering the CT4 and CT5 UK courses but applicable to a global audience, this work treats the mathematical and theoretical background of the subject lightly in order to focus on real-life practice. It offers a brief history of the field, explains why actuarial notation has become universal, and shows how the theory can be applied to many situations. Uniquely covering both life contingency risks and survival models, the text provides numerous exercises (and their solutions), along with complete, self-contained real-world assignments.
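As a taste of the subject matter, here is a minimal lifetable sketch using standard actuarial relationships (the mortality rates below are made up for illustration): the t-year survival probability tpx is the product of the one-year survival probabilities.

```python
# Minimal lifetable sketch: the mortality rates q_x are invented; only the
# standard actuarial relationships are assumed.
q = {60: 0.010, 61: 0.012, 62: 0.014, 63: 0.017}  # q_x: prob. of dying in [x, x+1)

def tpx(t, x):
    """t_p_x: probability that a life aged x survives t more years."""
    p = 1.0
    for k in range(t):
        p *= 1.0 - q[x + k]
    return p

def tqx(t, x):
    """t_q_x: probability that a life aged x dies within t years."""
    return 1.0 - tpx(t, x)

print(f"3p60 = {tpx(3, 60):.4f}")  # (1-0.010)(1-0.012)(1-0.014) = 0.9644
print(f"3q60 = {tqx(3, 60):.4f}")
```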
This book focuses on new research challenges in intelligent information filtering and retrieval. It collects invited chapters and extended research contributions from DART 2014 (the 8th International Workshop on Information Filtering and Retrieval), held in Pisa (Italy), on December 10, 2014, and co-hosted with the XIII AI*IA Symposium on Artificial Intelligence. The main focus of DART was to discuss and compare suitable novel solutions based on intelligent techniques and applied to real-world contexts. The chapters of this book present a comprehensive review of related works and the current state of the art. The contributions from both practitioners and researchers have been carefully reviewed by experts in the area, who also gave useful suggestions to improve the quality of the book.
Real-Time Systems Engineering and Applications is a well-structured collection of chapters pertaining to present and future developments in real-time systems engineering. After an overview of real-time processing, theoretical foundations are presented. The book then introduces useful modeling concepts and tools. This is followed by concentration on the more practical aspects of real-time engineering with a thorough overview of the present state of the art, both in hardware and software, including related concepts in robotics. Examples are given of novel real-time applications which illustrate the present state of the art. The book concludes with a focus on future developments, giving direction for new research activities and an educational curriculum covering the subject. This book can be used as a source for academic and industrial researchers as well as a textbook for computing and engineering courses covering the topic of real-time systems engineering.
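As one classic example of the theoretical foundations such a book covers (chosen here for illustration; the volume's own selection may differ), the Liu-Layland utilization bound gives a simple sufficient schedulability test for rate-monotonic scheduling of periodic tasks:

```python
# Liu-Layland test (illustrative): n periodic tasks under rate-monotonic
# scheduling are guaranteed schedulable if total CPU utilization does not
# exceed n * (2^(1/n) - 1). The test is sufficient, not necessary: a task
# set failing it may still be schedulable under an exact analysis.
def rm_schedulable(tasks):
    """tasks: list of (computation_time, period) pairs."""
    n = len(tasks)
    utilization = sum(c / p for c, p in tasks)
    return utilization <= n * (2 ** (1 / n) - 1)

print(rm_schedulable([(1, 4), (1, 5), (2, 10)]))  # U = 0.65 <= 0.7798 -> True
```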
This book provides a comprehensive set of optimization and prediction techniques for an enterprise information system. Readers with a background in operations research, system engineering, statistics, or data analytics can use this book as a reference to derive insight from data and use this knowledge as guidance for production management. The authors identify the key challenges in enterprise information management and present results that have emerged from leading-edge research in this domain. Coverage includes topics ranging from task scheduling and resource allocation, to workflow optimization, process time and status prediction, order admission policies optimization, and enterprise service-level performance analysis and prediction. With its emphasis on the above topics, this book provides an in-depth look at enterprise information management solutions that are needed for greater automation and reconfigurability-based fault tolerance, as well as to obtain data-driven recommendations for effective decision-making.
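As a small illustration of the kind of scheduling result involved (an invented example, not one of the book's case studies): on a single resource, sequencing tasks in shortest-processing-time (SPT) order provably minimizes mean flow time.

```python
# Illustrative single-machine scheduling sketch: SPT order minimizes the
# mean flow (completion) time. Job processing times are hypothetical.
def mean_flow_time(processing_times):
    """Mean completion time when jobs run in the given order."""
    t, total = 0, 0
    for p in processing_times:
        t += p          # this job finishes at the running total
        total += t
    return total / len(processing_times)

jobs = [7, 2, 5, 3]
print(mean_flow_time(jobs))          # arrival order: 11.75
print(mean_flow_time(sorted(jobs)))  # SPT order: 8.5, provably no worse
```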
Data mining, an interdisciplinary field combining methods from artificial intelligence, machine learning, statistics, and database systems, has grown tremendously over the last 20 years and produced core results for applications like business intelligence, spatio-temporal data analysis, bioinformatics, and stream data processing. The fifteen contributors to this volume are successful and well-known data mining scientists and professionals. Although by no means an exhaustive list, all of them have helped the field to gain the reputation and importance it enjoys today, through the many valuable contributions they have made. Mohamed Medhat Gaber has asked them (and many others) to write down their journeys through the data mining field, trying to answer the following questions:
1. What are your motives for conducting research in the data mining field?
2. Describe the milestones of your research in this field.
3. What are your notable success stories?
4. How did you learn from your failures?
5. Have you encountered unexpected results?
6. What are the current research issues and challenges in your area?
7. Describe your research tools and techniques.
8. How would you advise a young researcher to make an impact?
9. What do you predict for the next two years in your area?
10. What are your expectations in the long term?
In order to maintain the informal character of their contributions, they were given complete freedom in how to organize their answers. This narrative presentation style provides PhD students and novices eager to find their way to successful research in data mining with valuable insights into career planning. In addition, everyone else interested in the history of computer science may be surprised by the stunning successes and possible failures that computer science careers (still) have to offer.
As the use of computerized information continues to proliferate, so does the need for a writing method suited to this new medium. In "Writing for the Computer Screen," Hillary Goodall and Susan Smith Reilly call attention to new forms of information display unique to computers. The authors draw upon years of professional experience in business and education to present practical computer display techniques. This book examines the shortfalls of using established forms of writing for the computer where information needed in a hurry can be buried in a cluttered screen. Such problems can be minimized if screen design is guided by the characteristics of the medium.
Information security concerns the confidentiality, integrity, and availability of information processed by a computer system. With an emphasis on prevention, traditional information security research has focused little on the ability to survive successful attacks, which can seriously impair the integrity and availability of a system. Trusted Recovery and Defensive Information Warfare uses database trusted recovery as an example to illustrate the principles of trusted recovery in defensive information warfare. Traditional database recovery mechanisms do not address trusted recovery, except for complete rollbacks, which undo the work of benign transactions as well as malicious ones, and compensating transactions, whose utility depends on application semantics. Database trusted recovery faces a set of unique challenges. In particular, it is complicated mainly by (a) the presence of benign transactions that depend, directly or indirectly, on malicious transactions; and (b) the requirement of many mission-critical database applications that trusted recovery be done on the fly, without blocking the execution of new user transactions. The book proposes a new model and a set of innovative algorithms for database trusted recovery: both read-write-dependency-based and semantics-based algorithms, in both static and dynamic variants. These algorithms can typically save a great deal of work by innocent users and can satisfy a variety of attack recovery requirements of real-world database applications. Trusted Recovery and Defensive Information Warfare is suitable as a secondary text for a graduate-level course in computer science, and as a reference for researchers and practitioners in information security.
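To illustrate the dependency-based idea in the simplest terms, here is a hedged sketch (the dependency representation is invented for illustration; the book's actual algorithms are considerably more sophisticated): only transactions that depend, directly or indirectly, on a malicious transaction need to be undone, sparing the rest from a complete rollback.

```python
# Hedged sketch of read-write-dependency-based recovery (illustrative only).
# A transaction is "affected" if it is malicious or depends, transitively,
# on an affected transaction; only affected transactions need undoing.
def affected(deps, malicious):
    """deps: dict mapping each transaction to the set it read from."""
    bad = set(malicious)
    changed = True
    while changed:                      # propagate to a fixed point
        changed = False
        for t, sources in deps.items():
            if t not in bad and sources & bad:
                bad.add(t)
                changed = True
    return bad

deps = {"T1": set(), "T2": {"T1"}, "T3": {"T2"}, "T4": set()}
print(affected(deps, {"T1"}))  # {'T1', 'T2', 'T3'}; T4 is spared the rollback
```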
In two parts, the book focuses on materials science developments in the areas of:
1) Materials Data and Informatics: materials data quality and infrastructure; materials databases; materials data mining, image analysis, data-driven materials discovery, and data visualization.
2) Materials for Tomorrow's Energy Infrastructure: pipeline, transport, and storage materials for future fuels (biofuels, hydrogen, natural gas, ethanol, etc.); materials for renewable energy technologies.
The book presents selected contributions of exceptional young postdoctoral scientists to the 4th WMRIF Workshop for Young Scientists, hosted by the National Institute of Standards and Technology at the NIST site in Boulder, Colorado, USA, September 8-10, 2014.
In recent years, data mining has become a powerful tool for assisting society, across its various layers and individual elements, in obtaining intelligent information for making knowledgeable decisions. In the realm of knowledge discovery, data mining is becoming one of the most popular topics in information technology. Social and Political Implications of Data Mining: Knowledge Management in E-Government focuses on the data mining and knowledge management implications of online government. This significant reference book contains cases on the improvement of governance systems, the enhancement of security techniques, the upgrading of social service sectors, and, foremost, the empowerment of citizens and societies, making it a valuable asset to academicians, researchers, and practitioners.
As consumer costs for multimedia devices such as digital cameras and Web phones have decreased and diversity in the market has skyrocketed, the amount of digital information has grown considerably. Intelligent Multimedia Databases and Information Retrieval: Advancing Applications and Technologies details the latest information retrieval technologies and applications, the research surrounding the field, and the methodologies and design related to multimedia databases. Together with academic researchers and developers from both the information retrieval and artificial intelligence fields, this book details the issues and semantics of data retrieval, with contributions from around the globe. As the information and data held in multimedia databases continue to expand, the research and documentation surrounding them must keep pace as closely as possible, and this book provides an excellent resource for the latest developments.
Metadata play a fundamental role in both digital libraries (DLs) and spatial data infrastructures (SDIs). Commonly defined as "structured data about data," "data which describe attributes of a resource," or, more simply, "information about data," metadata are an essential requirement for locating and evaluating available data. This book therefore focuses on the study of different metadata aspects that contribute to a more efficient use of DLs and SDIs. The three main issues addressed are: the management of nested collections of resources, the interoperability between metadata schemas, and the integration of information retrieval techniques into the discovery services of geographic data catalogs (thereby helping to overcome metadata content heterogeneity).
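To illustrate schema interoperability in miniature (the local schema, record, and crosswalk below are invented; only the Dublin Core element names title, creator, and date are real), a crosswalk maps the fields of one metadata schema onto another:

```python
# Hypothetical metadata crosswalk: the local field names and the record are
# invented; title/creator/date are genuine Dublin Core element names.
CROSSWALK = {  # local field -> Dublin Core element
    "map_title": "title",
    "surveyor": "creator",
    "survey_date": "date",
}

def to_dublin_core(record):
    """Translate a local catalog record into Dublin Core-style metadata,
    dropping fields that have no known mapping."""
    return {CROSSWALK[k]: v for k, v in record.items() if k in CROSSWALK}

local = {"map_title": "Ebro Basin 1:50000", "surveyor": "IGN",
         "survey_date": "1998", "sheet_no": "42"}
print(to_dublin_core(local))  # sheet_no has no DC equivalent and is dropped
```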
The organization of websites is highly dynamic and often chaotic, so it is crucial that host web servers be able to manipulate URLs in order to cope with temporarily or permanently relocated resources, prevent attacks by automated worms, and control resource access. The Apache mod_rewrite module has long inspired fits of joy because it offers an unparalleled toolset for manipulating URLs. "The Definitive Guide to Apache mod_rewrite" guides you through configuration and use of the module for a variety of purposes, including basic and conditional rewrites, access control, virtual host maintenance, and proxies. The book was authored by Rich Bowen, noted Apache expert and Apache Software Foundation member, and draws on his years of experience administering Apache servers, as well as regularly speaking and writing about them.
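As a flavor of the material, here is a small hypothetical httpd configuration snippet (the paths and user-agent string are invented) showing two of the uses the book covers: redirecting a permanently relocated resource and blocking an automated worm:

```apache
# Hypothetical example: paths and user agent are invented for illustration.
RewriteEngine On

# Permanently relocated resource: redirect old URLs to the new location.
RewriteRule ^/old-docs/(.*)$ /docs/$1 [R=301,L]

# Deny requests from a misbehaving automated client.
RewriteCond %{HTTP_USER_AGENT} ^BadBot [NC]
RewriteRule ^ - [F]
```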
Modern applications are both data- and computationally intensive and require the storage and manipulation of voluminous traditional (alphanumeric) and nontraditional data sets (images, text, geometric objects, time series). Examples of such emerging application domains are: Geographical Information Systems (GIS), Multimedia Information Systems, CAD/CAM, Time-Series Analysis, Medical Information Systems, On-Line Analytical Processing (OLAP), and Data Mining. These applications pose diverse requirements with respect to the information and the operations that need to be supported. From the database perspective, new techniques and tools therefore need to be developed towards increased processing efficiency. This monograph explores the way spatial database management systems aim at supporting queries that involve the spatial characteristics of the underlying data, and discusses query processing techniques for nearest neighbor queries. It provides both basic concepts and state-of-the-art results in spatial databases and parallel processing research, and studies numerous applications of nearest neighbor queries.
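For readers new to the topic, here is the simplest possible nearest neighbor query, as a hedged sketch (the points are made up, and a linear scan stands in for the index-based techniques real spatial databases use, such as R-trees):

```python
# Minimal k-nearest-neighbor query over 2D points (illustrative only; spatial
# DBMSs use index structures like R-trees instead of a linear scan).
from math import dist  # Euclidean distance, Python 3.8+

points = [(2.0, 3.0), (5.0, 1.0), (9.0, 6.0), (4.0, 7.0)]  # made-up data

def nearest(q, pts, k=1):
    """Return the k points closest to the query point q."""
    return sorted(pts, key=lambda p: dist(p, q))[:k]

print(nearest((4.0, 4.0), points, k=2))  # [(2.0, 3.0), (4.0, 7.0)]
```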
Data Streams: Models and Algorithms primarily discusses issues related to the mining aspects of data streams. Recent progress in hardware technology makes it possible for organizations to store and record large streams of transactional data. Even simple daily transactions, such as using a credit card or phone, result in automated data storage, giving rise to the fairly new topic of data streams. This volume covers the mining aspects of data streams in a comprehensive style: each contributed chapter contains a survey of its topic, the key ideas in the field for that particular topic, and future research directions. Data Streams: Models and Algorithms is intended for a professional audience composed of researchers and practitioners in industry. This book is also appropriate for graduate-level students in computer science.
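The book surveys many stream-mining techniques; as a single illustrative primitive (chosen here for this sketch, not singled out by the book), reservoir sampling maintains a uniform random sample of an unbounded stream in constant memory:

```python
# Reservoir sampling (Algorithm R): keep a uniform random sample of size k
# from a stream of unknown length, using memory independent of that length.
import random

def reservoir_sample(stream, k):
    sample = []
    for i, item in enumerate(stream):
        if i < k:
            sample.append(item)          # fill the reservoir first
        else:
            j = random.randrange(i + 1)  # keep item with probability k/(i+1)
            if j < k:
                sample[j] = item
    return sample

print(reservoir_sample(range(1_000_000), 5))  # 5 uniformly chosen elements
```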
Mining Spatio-Temporal Information Systems, an edited volume composed of chapters from leading experts in the field of spatio-temporal information systems (STIS), addresses the many issues involved in supporting the modeling, creation, querying, visualization, and mining of such systems. The volume is intended to bring together a coherent body of recent knowledge relating to STIS data modeling, design, and implementation, and to the use of STIS in knowledge discovery. In particular, the reader is exposed to the latest techniques for the practical design of STIS, essential for complex query processing.
Modern enterprises rely on database management systems (DBMS) to collect, store, and manage corporate data, which is considered a strategic corporate resource. Recently, with the proliferation of personal computers and departmental computing, the trend has been towards the decentralization and distribution of the computing infrastructure, with autonomy and responsibility for data now residing at the departmental and workgroup level of the organization. Users want their data delivered to their desktops, allowing them to incorporate data into their personal databases, spreadsheets, word processing documents, and, most importantly, into their daily tasks and activities. They want to be able to share their information while retaining control over its access and distribution. There are also pressures from corporate leaders who wish to use information technology as a strategic resource in offering specialized value-added services to customers. Database technology is being used to manage the data associated with corporate processes and activities. Increasingly, the data being managed are not simply formatted tables in relational databases, but all types of objects, including unstructured text, images, audio, and video. Thus, database management providers are being asked to extend the capabilities of DBMS to include object-relational models as well as full object-oriented database management systems.
Foundations of Dependable Computing: System Implementation explores the system infrastructure needed to support the various paradigms of Paradigms for Dependable Applications. Approaches to implementing support mechanisms and to incorporating additional appropriate levels of fault detection and fault tolerance at the processor, network, and operating system level are presented. A primary concern at these levels is balancing cost and performance against coverage and overall dependability. As these chapters demonstrate, low-overhead, practical solutions are attainable and not necessarily incompatible with performance considerations. The section on innovative compiler support, in particular, demonstrates how the benefits of application specificity may be obtained while reducing hardware cost and run-time overhead. A companion to this volume (published by Kluwer), subtitled Models and Frameworks for Dependable Systems, presents two comprehensive frameworks for reasoning about system dependability, thereby establishing a context for understanding the roles played by specific approaches presented in this book's two companion volumes. It then explores the range of models and analysis methods necessary to design, validate, and analyze dependable systems. Another companion to this book (published by Kluwer), subtitled Paradigms for Dependable Applications, presents a variety of specific approaches to achieving dependability at the application level. Driven by the higher-level fault models of Models and Frameworks for Dependable Systems, and built on the lower-level abstractions implemented in this volume, these approaches demonstrate how dependability may be tuned to the requirements of an application, the fault environment, and the characteristics of the target platform. Three classes of paradigms are considered: protocol-based paradigms for distributed applications, algorithm-based paradigms for parallel applications, and approaches to exploiting application semantics in embedded real-time control systems.
Recently, IT has entered all important areas of society. Enterprises, individuals, and civilisations all depend on functioning, safe, and secure IT. Focus on IT security has previously been fractionalised, detailed, and often linked to non-business applications. The aim of this book is to address the current and future prospects of modern IT security and its functionality in business, trade, industry, health care, and government. The main topic areas covered include existing IT security tools and methodology for modern IT environments; laws, regulations, and ethics in IT security environments; current and future prospects in technology, infrastructures, technique, and methodology; and IT security in retrospective.
The primary aim of this book is to gather and collate articles that represent the best and latest thinking in the domain of technology transfer, from research, academia, and practice around the world. We envisage that the book will, as a result, represent an important source of knowledge in this domain for students (undergraduate and postgraduate), researchers, practitioners, and consultants, chiefly in the software engineering and IT industries, but also in management and other organisational and social disciplines. An important aspect of the book is the role that reflective practitioners (and not just academics) play. They are involved in the production and evaluation of contributions, as well as in the design and delivery of the conference events upon which, of course, the book is based.
Text Retrieval and Filtering: Analytical Models of Performance is the first book that addresses the problem of analytically computing the performance of retrieval and filtering systems. The book describes means by which retrieval may be studied analytically, allowing one to describe current performance, predict future performance, and understand why systems perform as they do. The focus is on retrieving and filtering natural language text, with material addressing retrieval performance for the simple case of queries with a single term, the more complex case of multiple terms (both with term independence and term dependence), and the use of grammatical information to improve performance. Unambiguous statements of the conditions under which one method or system will be more effective than another are developed. The book considers full sentences as well as phrases and individual words, and its last chapter explicitly addresses how grammatical constructs and methods may be studied in the context of retrieval or filtering system performance. The book builds toward solving this problem, although the material in earlier chapters is as useful to those addressing non-linguistic, statistical concerns as it is to linguists. Those interested in grammatical information are cautioned to examine the earlier chapters carefully, especially Chapters 7 and 8, which discuss purely statistical relationships between terms, before moving on to Chapter 10, which explicitly addresses linguistic issues. Text Retrieval and Filtering: Analytical Models of Performance is suitable as a secondary text for a graduate-level course on information retrieval or linguistics, and as a reference for researchers and practitioners in industry.
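To sketch what the term-independence assumption buys analytically (the collection size and term frequencies below are invented, not taken from the book): if terms occur independently, the expected size of a conjunctive query's result set is simply the product of the individual term frequencies scaled by the collection size.

```python
# Hedged sketch of the term-independence idea: numbers are invented.
# Under independence, the fraction of documents matching an AND query is
# the product of the individual term frequencies, which lets one predict
# result-set sizes analytically rather than empirically.
from math import prod  # Python 3.8+

N = 100_000  # hypothetical collection size
doc_freq = {"text": 0.20, "retrieval": 0.05, "filtering": 0.02}

def expected_matches(terms, n_docs, df):
    """Expected number of docs containing ALL terms, assuming independence."""
    return n_docs * prod(df[t] for t in terms)

print(expected_matches(["text", "retrieval"], N, doc_freq))               # 1000.0
print(expected_matches(["text", "retrieval", "filtering"], N, doc_freq))  # 20.0
```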
You may like...
Fundamentals of Spatial Information…
Robert Laurini, Derek Thompson
Hardcover
R1,451
Discovery Miles 14 510
Data Analytics for Social Microblogging…
Soumi Dutta, Asit Kumar Das, …
Paperback
R3,335
Discovery Miles 33 350
Machine Learning for Biometrics…
Partha Pratim Sarangi, Madhumita Panda, …
Paperback
R2,570
Discovery Miles 25 700
Management Of Information Security
Michael Whitman, Herbert Mattord
Paperback