The explosion of information technology has led to substantial growth of web-accessible linguistic data in terms of quantity, diversity and complexity. These resources become even more useful when interlinked with each other to generate network effects. The general trend of providing data online is thus accompanied by newly developing methodologies to interconnect linguistic data and metadata. This includes linguistic data collections, general-purpose knowledge bases (e.g., DBpedia, a machine-readable edition of Wikipedia), and repositories with specific information about languages, linguistic categories and phenomena. The Linked Data paradigm provides a framework for interoperability and access management, and thereby makes it possible to integrate information from such a diverse set of resources. The contributions assembled in this volume illustrate the breadth of applications of the Linked Data paradigm for representative types of language resources. They cover lexical-semantic resources, annotated corpora, typological databases as well as terminology and metadata repositories. The book includes representative applications from diverse fields, ranging from academic linguistics (e.g., typology and corpus linguistics) through applied linguistics (e.g., lexicography and translation studies) to technical applications (in computational linguistics, Natural Language Processing and information technology). This volume accompanies the Workshop on Linked Data in Linguistics 2012 (LDL-2012) in Frankfurt/M., Germany, organized by the Open Linguistics Working Group (OWLG) of the Open Knowledge Foundation (OKFN). It assembles contributions from the workshop participants and, beyond this, summarizes initial steps in the formation of a Linked Open Data cloud of linguistic resources, the Linguistic Linked Open Data cloud (LLOD).
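To make the interlinking idea concrete, here is a minimal Python sketch using the rdflib library; the lexicon namespace and the choice of rdfs:seeAlso as the linking property are illustrative assumptions, not taken from the LLOD cloud itself.

```python
# Minimal sketch: publishing a lexical entry as Linked Data with rdflib.
# The http://example.org/lexicon/ namespace is hypothetical.
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDFS

EX = Namespace("http://example.org/lexicon/")

g = Graph()
g.bind("ex", EX)

entry = EX["bank"]
g.add((entry, RDFS.label, Literal("bank", lang="en")))
# Interlink the entry with a general-purpose knowledge base (DBpedia):
g.add((entry, RDFS.seeAlso, URIRef("http://dbpedia.org/resource/Bank")))

print(g.serialize(format="turtle"))
```

Once resources are published this way, any other dataset can point at the same URIs, which is what generates the network effects the blurb mentions.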
This book provides a comprehensive set of optimization and prediction techniques for an enterprise information system. Readers with a background in operations research, systems engineering, statistics, or data analytics can use this book as a reference to derive insight from data and use this knowledge as guidance for production management. The authors identify the key challenges in enterprise information management and present results that have emerged from leading-edge research in this domain. Coverage includes topics ranging from task scheduling and resource allocation to workflow optimization, process time and status prediction, order admission policy optimization, and enterprise service-level performance analysis and prediction. With its emphasis on these topics, this book provides an in-depth look at enterprise information management solutions that are needed for greater automation and reconfigurability-based fault tolerance, as well as to obtain data-driven recommendations for effective decision-making.
Knowledge management (KM) is about managing the lifecycle of knowledge, which consists of creating, storing, sharing and applying knowledge. Two main approaches to KM are codification and personalization. The first focuses on capturing knowledge using technology, the latter on the process of socializing for sharing and creating knowledge. Social media are becoming very popular as individuals, and also organizations, learn how to use them. The primary applications of social media in a business context are marketing and recruitment. But there is also a huge potential for knowledge management in these organizations. For example, wikis can be used to collect organizational knowledge, and social networking tools can support the exchange of new ideas and innovation. The interesting part of social media is that, by using them, one immediately starts to generate content that can be useful for the organization. Hence, they naturally combine the codification and personalization approaches to KM. This book aims to provide an overview of new and innovative applications of social media and to report challenges that need to be solved. One example is the watering down of knowledge as a result of the use of organizational social media (Von Krogh, 2012).
Advances in microelectronic technology have made massively parallel computing a reality and triggered an outburst of research activity in parallel processing architectures and algorithms. Distributed memory multiprocessors - parallel computers that consist of microprocessors connected in a regular topology - are increasingly being used to solve large problems in many application areas. In order to use these computers for a specific application, existing algorithms need to be restructured for the architecture and new algorithms developed. The performance of a computation on a distributed memory multiprocessor is affected by the node and communication architecture, the interconnection network topology, the I/O subsystem, and the parallel algorithm and communication protocols. Each of these parameters is a complex problem in its own right, and solutions require an understanding of the interactions among them. This book is based on the papers presented at the NATO Advanced Study Institute held at Bilkent University, Turkey, in July 1991. The book is organized in five parts: Parallel computing structures and communication, Parallel numerical algorithms, Parallel programming, Fault tolerance, and Applications and algorithms.
Recent years have seen an explosive growth in the use of new database applications such as CAD/CAM systems, spatial information systems, and multimedia information systems. The needs of these applications are far more complex than traditional business applications. They call for support of objects with complex data types, such as images and spatial objects, and for support of objects with wildly varying numbers of index terms, such as documents. Traditional indexing techniques such as the B-tree and its variants do not efficiently support these applications, and so new indexing mechanisms have been developed. As a result of the demand for database support for new applications, there has been a proliferation of new indexing techniques. The need for a book addressing indexing problems in advanced applications is evident. For practitioners and database and application developers, this book explains best practice, guiding the selection of appropriate indexes for each application. For researchers, this book provides a foundation for the development of new and more robust indexes. For newcomers, this book is an overview of the wide range of advanced indexing techniques. Indexing Techniques for Advanced Database Systems is suitable as a secondary text for a graduate level course on indexing techniques, and as a reference for researchers and practitioners in industry.
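To illustrate why one-dimensional structures such as the B-tree fall short for spatial workloads, the following Python sketch answers a two-dimensional range query with a toy grid index. The cell size, point data, and overall design are invented for illustration; a real system would use an R-tree or a comparable structure from the book.

```python
# Toy spatial index: bucket points into grid cells so a 2-D range query only
# inspects the cells that overlap the query rectangle, not every point.
from collections import defaultdict

CELL = 10.0  # grid cell size (arbitrary choice for this example)

def build_grid(points):
    grid = defaultdict(list)
    for (x, y) in points:
        grid[(int(x // CELL), int(y // CELL))].append((x, y))
    return grid

def range_query(grid, xmin, ymin, xmax, ymax):
    hits = []
    # Visit only the cells intersecting the query rectangle.
    for cx in range(int(xmin // CELL), int(xmax // CELL) + 1):
        for cy in range(int(ymin // CELL), int(ymax // CELL) + 1):
            for (x, y) in grid.get((cx, cy), []):
                if xmin <= x <= xmax and ymin <= y <= ymax:
                    hits.append((x, y))
    return hits

pts = [(3.0, 4.0), (25.0, 12.0), (8.0, 9.5)]
print(range_query(build_grid(pts), 0, 0, 10, 10))  # -> [(3.0, 4.0), (8.0, 9.5)]
```

A B-tree can only order points along one dimension, so a rectangle query degenerates into a scan over one coordinate; multidimensional structures avoid exactly that.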
Today we are witnessing an exponential growth of information accumulated within universities, corporations, and government organizations. Autonomous repositories that store different types of digital data in multiple formats are becoming available for use on the fast-evolving global information systems infrastructure. More concretely, with the World Wide Web and related internetworking technologies, there has been an explosion in the types, availability, and volume of data accessible to a global information system. However, this information overload makes it nearly impossible for users to be aware of the locations, organization or structures, query languages, and semantics of the information in various repositories. Available browsing and navigation tools assist users in locating information resources on the Internet. However, there is a real need to complement current browsing and keyword-based techniques with concept-based approaches. An important next step should be to support queries that do not require the user to describe the location or manipulation of relevant resources. Ontology-Based Query Processing for Global Information Systems describes an initiative for enhancing query processing in a global information system. The following are some of the relevant features: providing semantic descriptions of data repositories using ontologies; dealing with different vocabularies so that users are not forced to use a common one; defining a strategy that permits the incremental enrichment of answers by visiting new ontologies; and managing imprecise answers and estimations of the incurred loss of information. In summary, technologies such as information brokerage, domain ontologies, and estimation of imprecision in answers based on vocabulary heterogeneity have been synthesized with Internet computing, representing an advance in developing semantics-based information access on the Web. Theoretical results are complemented by the presentation of a prototype that implements the main ideas presented in this book. Ontology-Based Query Processing for Global Information Systems is suitable as a secondary text for a graduate-level course, and as a reference for researchers and practitioners in industry.
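One of the features above, letting users keep their own vocabulary, can be sketched as simple query expansion through synonym sets; the terms, documents, and mapping below are hypothetical stand-ins for a full ontology.

```python
# Hedged sketch: answer a query posed in the user's vocabulary by expanding
# each term through an ontology of equivalent terms (all data invented).
SYNONYMS = {
    "car":   {"car", "automobile", "motorcar"},
    "lorry": {"lorry", "truck"},
}

DOCUMENTS = {
    1: {"automobile", "dealer"},
    2: {"truck", "route"},
    3: {"bicycle", "lane"},
}

def expand(term):
    """Replace a user term with every equivalent term the ontology knows."""
    return SYNONYMS.get(term, {term})

def query(term):
    terms = expand(term)
    return [doc for doc, words in DOCUMENTS.items() if words & terms]

print(query("car"))    # -> [1]  matches via "automobile"
print(query("lorry"))  # -> [2]  matches via "truck"
```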
In recent years, as part of the increasing "informationization" of industry and the economy, enterprises have been accumulating vast amounts of detailed data such as high-frequency transaction data in financial markets and point-of-sale information on individual items in the retail sector. Similarly, vast amounts of data are now available on business networks based on inter-firm transactions and shareholdings. In the past, these types of information were studied only by economists and management scholars. More recently, however, researchers from other fields, such as physics, mathematics, and information sciences, have become interested in this kind of data and, based on novel empirical approaches to searching for regularities and "laws" akin to those in the natural sciences, have produced intriguing results. This book is the proceedings of the international conference THICCAPFA7, titled "New Approaches to the Analysis of Large-Scale Business and Economic Data," held in Tokyo, March 1-5, 2009. The letters THIC denote the Tokyo Tech (Tokyo Institute of Technology)-Hitotsubashi Interdisciplinary Conference. The conference series, titled APFA (Applications of Physics in Financial Analysis), focuses on the analysis of large-scale economic data. It has traditionally brought physicists and economists together to exchange viewpoints and experience (APFA1 in Dublin 1999, APFA2 in Liège 2000, APFA3 in London 2001, APFA4 in Warsaw 2003, APFA5 in Torino 2006, and APFA6 in Lisbon 2007). The aim of the conference is to establish fundamental analytical techniques and data collection methods, taking into account the results from a variety of academic disciplines.
Real-Time Systems Engineering and Applications is a well-structured collection of chapters pertaining to present and future developments in real-time systems engineering. After an overview of real-time processing, theoretical foundations are presented. The book then introduces useful modeling concepts and tools. This is followed by concentration on the more practical aspects of real-time engineering with a thorough overview of the present state of the art, both in hardware and software, including related concepts in robotics. Examples are given of novel real-time applications which illustrate the present state of the art. The book concludes with a focus on future developments, giving direction for new research activities and an educational curriculum covering the subject. This book can be used as a source for academic and industrial researchers as well as a textbook for computing and engineering courses covering the topic of real-time systems engineering.
This book takes a unique approach to information retrieval by laying down the foundations for a modern algebra of information retrieval based on lattice theory. All major retrieval methods developed so far are described in detail (Boolean, Vector Space and probabilistic methods, but also Web retrieval algorithms like PageRank, HITS, and SALSA), and the author shows that they all can be treated elegantly in a unified formal way, using lattice theory as the one basic concept. Further, he also demonstrates that the lattice-based approach to information retrieval allows us to formulate new retrieval methods. Sándor Dominich's presentation is characterized by an engineering-like approach, describing all methods and technologies with as much mathematics as needed for clarity and exactness. His readers in both computer science and mathematics will learn how one single concept can be used to understand the most important retrieval methods, to propose new ones, and also to gain new insights into retrieval modeling in general. Thus, his book is appropriate for researchers and graduate students, who will additionally benefit from the many exercises at the end of each chapter.
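A toy example of the unifying idea: treating sets of documents as elements of a lattice, Boolean AND becomes the lattice meet (set intersection) and OR becomes the join (set union). The tiny inverted index below is invented for illustration and only hints at the algebra the book develops.

```python
# Boolean retrieval as operations in the powerset lattice of documents:
# each term maps to a set of document ids, and queries combine those sets.
INDEX = {
    "lattice":   {1, 2},
    "retrieval": {2, 3},
    "algebra":   {1, 3},
}

def meet(a, b):   # AND: greatest lower bound = intersection
    return a & b

def join(a, b):   # OR: least upper bound = union
    return a | b

# Query: (lattice AND retrieval) OR algebra
result = join(meet(INDEX["lattice"], INDEX["retrieval"]), INDEX["algebra"])
print(result)  # -> {1, 2, 3}
```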
Data mining, an interdisciplinary field combining methods from artificial intelligence, machine learning, statistics and database systems, has grown tremendously over the last 20 years and produced core results for applications like business intelligence, spatio-temporal data analysis, bioinformatics, and stream data processing. The fifteen contributors to this volume are successful and well-known data mining scientists and professionals. Although by no means an exhaustive list, all of them have helped the field to gain the reputation and importance it enjoys today, through the many valuable contributions they have made. Mohamed Medhat Gaber has asked them (and many others) to write down their journeys through the data mining field, trying to answer the following questions:

1. What are your motives for conducting research in the data mining field?
2. Describe the milestones of your research in this field.
3. What are your notable success stories?
4. How did you learn from your failures?
5. Have you encountered unexpected results?
6. What are the current research issues and challenges in your area?
7. Describe your research tools and techniques.
8. How would you advise a young researcher to make an impact?
9. What do you predict for the next two years in your area?
10. What are your expectations in the long term?

In order to maintain the informal character of their contributions, they were given complete freedom as to how to organize their answers. This narrative presentation style provides PhD students and novices who are eager to find their way to successful research in data mining with valuable insights into career planning. In addition, everyone else interested in the history of computer science may be surprised at the stunning successes and possible failures computer science careers (still) have to offer.
Semistructured Database Design provides an essential reference for anyone interested in the effective management of semistructured data. Since many new and advanced web applications consume a huge amount of such data, there is a growing need to properly design efficient databases. This volume responds to that need by describing a semantically rich data model for semistructured data, called the Object-Relationship-Attribute model for Semistructured data (ORA-SS). Focusing on this new model, the book discusses problems and presents solutions for a number of topics, including schema extraction, the design of non-redundant storage organizations for semistructured data, and physical semistructured database design, among others. Semistructured Database Design presents researchers and professionals with the most complete and up-to-date research in this fast-growing field.
The vision of ubiquitous computing and ambient intelligence describes a world of technology which is present anywhere, anytime in the form of smart, sensing devices that communicate with each other and provide personalized services. However, open interconnected systems are much more vulnerable to attacks and unauthorized data access. In the context of this threat, this book provides a comprehensive guide to security, privacy, and trust in data management.
In recent years, data mining has become a powerful tool that assists society, across its various layers and individual elements, in obtaining intelligent information for making knowledgeable decisions. In the realm of knowledge discovery, data mining is becoming one of the most popular topics in information technology. "Social and Political Implications of Data Mining: Knowledge Management in E-Government" focuses on the data mining and knowledge management implications that lie within online government. This significant reference book contains cases on the improvement of governance systems, the enhancement of security techniques, the upgrading of social service sectors, and, foremost, the empowerment of citizens and societies, making it a valuable asset for academicians, researchers, and practitioners.
As the use of computerized information continues to proliferate, so does the need for a writing method suited to this new medium. In "Writing for the Computer Screen," Hillary Goodall and Susan Smith Reilly call attention to new forms of information display unique to computers. The authors draw upon years of professional experience in business and education to present practical computer display techniques. This book examines the shortfalls of using established forms of writing for the computer where information needed in a hurry can be buried in a cluttered screen. Such problems can be minimized if screen design is guided by the characteristics of the medium.
This is the first book on brain-computer interfaces (BCI) that aims to explain how BCIs can be used for artistic goals. Devices that measure changes in brain activity in various regions of our brain are available and they make it possible to investigate how brain activity is related to experiencing and creating art. Brain activity can also be monitored in order to find out about the affective state of a performer or bystander and use this knowledge to create or adapt an interactive multi-sensorial (audio, visual, tactile) piece of art. Making use of the measured affective state is just one of the possible ways to use BCI for artistic expression. We can also stimulate brain activity. It can be evoked externally by exposing our brain to external events, whether they are visual, auditory, or tactile. Knowing about the stimuli and their effect on the brain makes it possible to translate such external stimuli into decisions and commands that help to design, implement, or adapt an artistic performance or interactive installation. Stimulating brain activity can also be done internally. Brain activity can be voluntarily manipulated and changes can be translated into computer commands to realize an artistic vision. The chapters in this book have been written by researchers in human-computer interaction, brain-computer interaction, neuroscience, psychology and social sciences, often in cooperation with artists using BCI in their work. It is the perfect book for those seeking to learn about brain-computer interfaces used for artistic applications.
The concept of a big data warehouse appeared in order to store moving data objects and temporal information. Moving objects are geometries that change their position and shape continuously over time. In order to support spatio-temporal data, a data model and an associated query language are needed for supporting moving objects. Emerging Perspectives in Big Data Warehousing is an essential research publication that explores current innovative activities focusing on the integration between data warehousing and data mining with an emphasis on the applicability to real-world problems. Featuring a wide range of topics such as index structures, ontology, and user behavior, this book is ideally designed for IT consultants, researchers, professionals, computer scientists, academicians, and managers.
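As a concrete reading of the moving-object notion, here is a minimal Python sketch that reconstructs a point's position at an arbitrary time by linear interpolation between timestamped observations; the class, the sampling scheme, and the data are assumptions made for illustration, not a model from the book.

```python
# A moving point: position is a function of time, reconstructed here by
# linear interpolation between (t, x, y) observations (invented data).
from bisect import bisect_right

class MovingPoint:
    def __init__(self, samples):
        # samples: list of (t, x, y) observations, sorted by t
        self.samples = samples
        self.times = [t for t, _, _ in samples]

    def position_at(self, t):
        i = bisect_right(self.times, t)
        if i == 0 or i == len(self.samples):
            raise ValueError("t outside observed interval")
        (t0, x0, y0), (t1, x1, y1) = self.samples[i - 1], self.samples[i]
        f = (t - t0) / (t1 - t0)  # fraction of the way between the samples
        return (x0 + f * (x1 - x0), y0 + f * (y1 - y0))

truck = MovingPoint([(0, 0.0, 0.0), (10, 100.0, 50.0)])
print(truck.position_at(5))  # -> (50.0, 25.0)
```

A spatio-temporal query language then asks questions like "where was this object at time t" or "which objects crossed this region," which is exactly what position_at makes answerable.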
Metadata play a fundamental role in both digital libraries (DLs) and spatial data infrastructures (SDIs). Commonly defined as "structured data about data," "data which describe attributes of a resource" or, more simply, "information about data," metadata are an essential requirement for locating and evaluating available data. Therefore, this book focuses on the study of different metadata aspects, which contribute to a more efficient use of DLs and SDIs. The three main issues addressed are: the management of nested collections of resources, the interoperability between metadata schemas, and the integration of information retrieval techniques into the discovery services of geographic data catalogs (helping in this way to avoid metadata content heterogeneity).
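Schema interoperability of the kind addressed here is often implemented as a "crosswalk" that maps records from one metadata schema onto another; the following Python sketch uses invented, loosely Dublin-Core-like field names and is only a minimal illustration of the idea.

```python
# Hedged sketch of metadata schema interoperability via a field crosswalk.
CROSSWALK = {            # source field -> target field (hypothetical names)
    "docTitle":  "title",
    "docAuthor": "creator",
    "issued":    "date",
}

def translate(record, crosswalk):
    """Rename the fields of a metadata record; drop fields with no mapping."""
    return {crosswalk[k]: v for k, v in record.items() if k in crosswalk}

src = {"docTitle": "Soil map (example)", "docAuthor": "IGN", "scale": "1:50000"}
print(translate(src, CROSSWALK))
# -> {'title': 'Soil map (example)', 'creator': 'IGN'}
```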
Data mining techniques are commonly used to extract meaningful information from the web, such as data from web documents, website usage logs, and hyperlinks. Building on this, modern organizations are focusing on running and improving their business methods and returns by using opinion mining. Extracting Knowledge From Opinion Mining is an essential resource that presents detailed information on web mining, business intelligence through opinion mining, and how to effectively use knowledge retrieved through mining operations. While highlighting relevant topics, including the differences between ontology-based opinion mining and feature-based opinion mining, this book is an ideal reference source for information technology professionals within research or business settings, graduate and post-graduate students, as well as scholars.
Information security concerns the confidentiality, integrity, and availability of information processed by a computer system. With an emphasis on prevention, traditional information security research has focused little on the ability to survive successful attacks, which can seriously impair the integrity and availability of a system. Trusted Recovery And Defensive Information Warfare uses database trusted recovery as an example to illustrate the principles of trusted recovery in defensive information warfare. Traditional database recovery mechanisms do not address trusted recovery, except for complete rollbacks, which undo the work of benign transactions as well as malicious ones, and compensating transactions, whose utility depends on application semantics. Database trusted recovery faces a set of unique challenges. In particular, trusted database recovery is complicated mainly by (a) the presence of benign transactions that depend, directly or indirectly, on malicious transactions; and (b) the requirement by many mission-critical database applications that trusted recovery should be done on the fly without blocking the execution of new user transactions. Trusted Recovery And Defensive Information Warfare proposes a new model and a set of innovative algorithms for database trusted recovery. Both read-write-dependency-based and semantics-based trusted recovery algorithms are proposed, in both static and dynamic variants. These algorithms can typically save a lot of work by innocent users and can satisfy a variety of attack recovery requirements of real-world database applications. Trusted Recovery And Defensive Information Warfare is suitable as a secondary text for a graduate-level course in computer science, and as a reference for researchers and practitioners in information security.
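The read-write dependency idea can be sketched in a few lines: given which transactions read from which, the set to undo is the malicious transaction plus everything that depends on it, directly or indirectly. The transaction history below is invented, and the fixed-point loop is only one simple way to compute that closure, not the book's algorithm.

```python
# Sketch: compute the undo set for dependency-based trusted recovery.
DEPENDS_ON = {           # txn -> set of txns it read from (invented history)
    "T2": {"T1"},
    "T3": {"T2"},
    "T4": set(),
}

def undo_set(malicious, depends_on):
    tainted = {malicious}
    changed = True
    while changed:       # propagate taint until a fixed point is reached
        changed = False
        for txn, deps in depends_on.items():
            if txn not in tainted and deps & tainted:
                tainted.add(txn)
                changed = True
    return tainted

print(undo_set("T1", DEPENDS_ON))  # -> {'T1', 'T2', 'T3'}; T4 is unaffected
```

Note how T4 survives: the point of trusted recovery, as the blurb says, is to save the work of innocent transactions instead of rolling everything back.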
In the era of big data, this book explores the new challenges of urban-rural planning and management from a practical perspective, based on a multidisciplinary project. The contributing researchers have completed projects that use big data, such as data obtained through cell phones, social network systems and smart cards, together with relevant data mining technologies, to investigate how such data can replace conventional survey data for urban planning support. The book showcases active researchers who share their experiences and ideas on human mobility, accessibility and recognition of places, and connectivity of transportation and urban structure, in order to provide effective analytic and forecasting tools for smart city planning and design solutions in China.
ICMCCA 2012 is the first International Conference on Multimedia Processing, Communication and Computing Applications, and the theme of the conference is 'Multimedia Processing and its Applications'. Multimedia processing has been an active research area contributing to many frontiers of today's science and technology. This book presents peer-reviewed quality papers on multimedia processing, which covers a very broad area of science and technology. The prime objective of the book is to familiarize readers with the latest scientific developments taking place in various fields of multimedia processing, which is widely used in many disciplines such as medical diagnosis, digital forensics, object recognition, image and video analysis, robotics, military and automotive industries, surveillance and security, and quality inspection. The book will help the research community gain insight into the overlapping work being carried out across the globe at medical hospitals and institutions, defense labs, forensic labs, academic institutions, IT companies and security and surveillance domains. It also discusses the latest state-of-the-art research problems and techniques and helps to encourage, motivate and introduce budding researchers to the larger domain of multimedia.
Mining Spatio-Temporal Information Systems, an edited volume composed of chapters from leading experts in the field of Spatio-Temporal Information Systems, addresses the many issues involved in supporting the modeling, creation, querying, visualizing and mining of such systems. It is intended to bring together a coherent body of recent knowledge relating to STIS data modeling, design, implementation and the role of STIS in knowledge discovery. In particular, the reader is exposed to the latest techniques for the practical design of STIS, essential for complex query processing.
This book gathers selected research papers presented at the AICTE-sponsored International Conference on IoT Inclusive Life (ICIIL 2019), which was organized by the Department of Computer Science and Engineering, National Institute of Technical Teachers Training and Research, Chandigarh, India, on December 19-20, 2019. In contributions by active researchers, the book presents innovative findings and important developments in IoT-related studies, making it a valuable resource for researchers, engineers, and industrial professionals around the globe.
You may like...
Internet of Things - Cases and Studies
Fausto Pedro Garcia Marquez, Benjamin Lev
Hardcover
R2,667
Discovery Miles 26 670
Sustainable Consumption, Production and…
Paul Nieuwenhuis, Daniel Newman, …
Hardcover
R2,660
Discovery Miles 26 600
Data Protection and Confidentiality in…
Commission of the European Communities. (CEC) DG for Energy
Hardcover
R2,434
Discovery Miles 24 340
Proceedings of International Conference…
C. Kiran Mai, B. V. Kiranmayee, …
Hardcover
R5,690
Discovery Miles 56 900
Handbook of Reinforcement Learning and…
Kyriakos G. Vamvoudakis, Yan Wan, …
Hardcover
R6,510
Discovery Miles 65 100