Requiring heterogeneous information systems to cooperate and communicate has become crucial, especially in application areas like e-business, Web-based mashups and the life sciences. Such cooperating systems have to automatically and efficiently match, exchange, transform and integrate large data sets from different sources and with different structures in order to enable seamless data exchange and transformation. The book edited by Bellahsene, Bonifati and Rahm provides an overview of the ways in which schema and ontology matching and mapping tools have addressed these requirements and points to the open technical challenges. The contributions from leading experts are structured into three parts: large-scale and knowledge-driven schema matching, quality-driven schema mapping and evolution, and evaluation and tuning of matching tasks. The authors describe the state of the art by discussing the latest achievements, such as more effective methods for matching data, verification of mapping transformations, adaptation to the context and size of matching and mapping tasks, mapping-driven schema evolution and merging, and mapping evaluation and tuning. The overall result is a coherent, comprehensive picture of the field. With this book, the editors introduce graduate students and advanced professionals to this exciting field. For researchers, it provides an up-to-date source of reference on schema and ontology matching, schema and ontology evolution, and schema merging.
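The core matching task these tools automate can be illustrated with a deliberately small sketch: pairing attributes of two schemas by normalized name similarity. This is only a toy baseline under assumed inputs; the schema attribute names and the 0.5 threshold are invented for the example, and real matchers combine many more heuristics.

```python
# Toy name-based schema matching (illustrative only).
# The schemas and the 0.5 threshold are hypothetical assumptions.
from difflib import SequenceMatcher

source = ["CustomerName", "Addr", "PhoneNo"]
target = ["client_name", "address", "phone_number"]

def similarity(a: str, b: str) -> float:
    # Normalize names (lowercase, drop separators) before comparing.
    norm = lambda s: s.lower().replace("_", "")
    return SequenceMatcher(None, norm(a), norm(b)).ratio()

# For each source attribute, keep the best target above the threshold.
for s in source:
    best = max(target, key=lambda t: similarity(s, t))
    if similarity(s, best) >= 0.5:
        print(f"{s} -> {best}  (similarity {similarity(s, best):.2f})")
```

A production matcher would combine such string similarity with structural, instance-based and knowledge-driven matchers, which is precisely the territory the book's first part surveys.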
This book constitutes the refereed proceedings of the 10th International Conference on Data Integration in the Life Sciences, DILS 2014, held in Lisbon, Portugal, in July 2014. The 9 revised full papers and the 5 short papers included in this volume were carefully reviewed and selected from 20 submissions. The papers cover a range of important topics such as data integration platforms and applications; biodiversity data management; ontologies and visualization; linked data and query processing.
DILS 2004 (Data Integration in the Life Sciences) is a new bioinformatics workshop focusing on topics related to data management and integration. It was motivated by the observation that new advances in life sciences, e.g., molecular biology, biodiversity, drug discovery and medical research, increasingly depend on bioinformatics methods to manage and analyze vast amounts of highly diverse data. Relevant data is typically distributed across many data sources on the Web and is often structured only to a limited extent. Despite new interoperability technologies such as XML and web services, integration of data is a highly difficult and still largely manual task, especially due to the high degree of semantic heterogeneity and varying data quality as well as specific application requirements. The call for papers attracted many submissions on the workshop topics. After a careful reviewing process the international program committee accepted 13 long and 2 short papers, which are included in this volume. They cover a wide spectrum of theoretical and practical issues including scientific/clinical workflows, ontologies, tools/systems, and integration techniques. DILS 2004 also featured two keynote presentations, by Dr. Thure Etzold (architect of the leading integration platform SRS, and president of Lion Bioscience, Cambridge, UK) and Prof. Dr. Svante Pääbo (Director, Max Planck Institute for Evolutionary Anthropology, Leipzig). The workshop took place during March 25-26, 2004, in Leipzig, Germany, and was organized by the Interdisciplinary Bioinformatics Center (IZBI) of the University of Leipzig.
The Extensible Markup Language (XML) is playing an increasingly important role in the exchange of a wide variety of data on the Web and elsewhere. The database community is interested in XML because it can be used to represent a variety of data formats originating in different kinds of data repositories while providing structure and the possibility to add type information. The theme of this symposium is the combination of database and XML technologies. Today, we see growing interest in using these technologies together for many Web-based and database-centric applications. XML is being used to publish data from database systems on the Web by providing input to content generators for Web pages, and database systems are increasingly being used to store and query XML data, often by handling queries issued over the Internet. As database systems increasingly start talking to each other over the Web, there is a fast-growing interest in using XML as the standard exchange format for distributed query processing. As a result, many relational database systems export data as XML documents, import data from XML documents, and provide query and update capabilities for XML data. In addition, so-called native XML database and integration systems are appearing on the database market, and it is claimed that they are especially tailored to store, maintain and easily access XML documents.
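As a toy illustration of the relational-to-XML export described above, the following sketch dumps the rows of an in-memory SQLite table as an XML document using only the Python standard library. The table, column and element names are invented for the example; real systems use built-in SQL/XML publishing functions rather than hand-rolled serialization.

```python
# Sketch: exporting relational rows as an XML document.
# Table and element names are invented for illustration.
import sqlite3
import xml.etree.ElementTree as ET

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE protein (id INTEGER, name TEXT)")
conn.executemany("INSERT INTO protein VALUES (?, ?)",
                 [(1, "insulin"), (2, "hemoglobin")])

root = ET.Element("proteins")
for pid, name in conn.execute("SELECT id, name FROM protein"):
    row = ET.SubElement(root, "protein", id=str(pid))
    ET.SubElement(row, "name").text = name

# Serialize the tree; a real exporter would stream large results.
print(ET.tostring(root, encoding="unicode"))
```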
This book constitutes the thoroughly refereed post-proceedings of the Web- and Database-Related Workshops held during the NetObjectDays international conference NODe 2002, in Erfurt, Germany, in October 2002. The 19 revised full papers presented together with 3 keynote papers were carefully selected during 2 rounds of reviewing and improvement. The papers are organized in topical sections on advanced Web services, UDDI extensions, description and classification of Web services, applications based on Web services, indexing and accessing, Web and XML databases, mobile devices and the Internet, and XML query languages.
This book offers a comprehensive and up-to-date presentation of the concepts and techniques for implementing database systems. Its starting point is a hierarchical architecture model: the layers of this model make it possible to describe in detail the system structure, the placement of the functions to be provided, and their interplay.
This book provides a comprehensive grounding in modern techniques of distributed and parallel data management, which form the foundation of modern information systems. Starting from a consideration of the architectural variants arising from distributed and parallel hardware infrastructures, it covers data distribution, query processing, and consistency control. For each of these areas, methods and techniques for classical distributed, parallel, and modern massively distributed or massively parallel architectures are presented and their properties discussed. In this way, the authors bridge the gap between classical methods and current developments in the cloud and big data fields.
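One of the data-distribution techniques such a book covers, hash partitioning, fits in a few lines. The sketch below assumes a four-node cluster and a handful of string keys, both invented for illustration; it uses a stable hash so that key placement is reproducible across runs.

```python
# Sketch of hash partitioning across database nodes.
# Node count and keys are illustrative assumptions.
from hashlib import md5

NODES = 4  # assumed cluster size

def partition(key: str) -> int:
    # md5 gives a stable hash, unlike Python's per-process
    # randomized built-in hash() for strings.
    return int(md5(key.encode()).hexdigest(), 16) % NODES

for key in ["alice", "bob", "carol", "dave"]:
    print(f"{key!r} -> node {partition(key)}")
```

The cost of repartitioning when NODES changes is one reason systems in the cloud and big data space often prefer consistent hashing over this simple modulo scheme.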
This book treats, in a systematic and practice-oriented way, the design and performance evaluation of concurrency control methods in multi-computer database systems. After a classification of multi-computer database systems, the most important concurrency control techniques for centralized and distributed database systems are presented and compared. The focus is then on suitable concurrency control concepts (such as locking schemes and optimistic protocols) for so-called DB-sharing systems, which embody a general multi-computer architecture for realizing high-performance transaction systems. For DB-sharing, coordinated solutions are given not only for the implementation of the concurrency control component but also for new requirements regarding buffer management, logging, recovery, and load control. For a quantitative and realistic performance analysis of six concurrency control protocols, a trace-driven simulation system was developed in which all essential components of a multi-computer database system are modeled in detail. The book gives a well-founded overview of the state of the art in concurrency control for centralized and multi-computer database systems and points out new techniques for realizing future high-performance transaction systems.
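To make the contrast between locking and optimistic protocols concrete, here is a minimal sketch of backward validation in optimistic concurrency control: a committing transaction aborts if its read set overlaps the write set of any transaction that committed during its lifetime. The transaction model is simplified far beyond the DB-sharing protocols the book analyzes, and all names are invented for the example.

```python
# Minimal backward-validation sketch for optimistic concurrency
# control (simplified illustration, not the book's protocols).
from dataclasses import dataclass, field

@dataclass
class Txn:
    read_set: set = field(default_factory=set)
    write_set: set = field(default_factory=set)

# Write sets of transactions that committed while the validating
# transaction was running.
overlapping_commits: list = []

def validate(txn: Txn) -> bool:
    return all(txn.read_set.isdisjoint(ws) for ws in overlapping_commits)

t1 = Txn(read_set={"x"}, write_set={"x"})
overlapping_commits.append(t1.write_set)        # t1 commits first

t2 = Txn(read_set={"x", "y"}, write_set={"y"})  # t2 read the "x" t1 changed
print(validate(t2))  # False -> t2 must abort and restart
```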