Welcome to Loot.co.za!
Showing 1 - 9 of 9 matches in All Departments
Motivation for the Book: This book describes a comprehensive methodology for service-oriented information systems planning, considered in particular in eGovernment initiatives. The methodology is based on the research results produced by the Italian project "eGovernment for Mediterranean Countries (eG4M)," funded by the Italian Ministry of University and Research from 2005 to 2008. The concept of service is at the center of the book, and the methodology focuses on the quality of services as a key factor for eGovernment initiatives. Since it is grounded in a project whose goal was to develop a methodology for eGovernment in Mediterranean countries, the methodology is called eG4M. Furthermore, eG4M aims to encompass the relationships between ICT technologies and the social contexts of service provision, organizational issues, and the juridical framework, looking at ICT technologies more as a means than as an end. eG4M satisfies a real need of constituencies and stakeholders involved in eGovernment projects, confirmed in the eG4M experimentations and in previous preliminary experiences in Italian Public Administrations. A structured process is needed that provides a clear perspective on the different facets that eGovernment initiatives usually have to address and that disciplines the complex set of decisions to be taken. The available approaches to eGovernment usually offer public managers and local authorities only one perspective on the domain of intervention, whether technological, organizational, legal, economic, or social.
Poor data quality can seriously hinder or damage the efficiency and effectiveness of organizations and businesses. The growing awareness of such repercussions has led to major public initiatives like the "Data Quality Act" in the USA and the "European 2003/98" directive of the European Parliament. Batini and Scannapieco present a comprehensive and systematic introduction to the wide set of issues related to data quality. They start with a detailed description of different data quality dimensions, like accuracy, completeness, and consistency, and their importance in different types of data, like federated data, web data, or time-dependent data, and in different data categories classified according to frequency of change, like stable, long-term, and frequently changing data. The book's extensive description of techniques and methodologies from core data quality research as well as from related fields like data mining, probability theory, statistical data analysis, and machine learning gives an excellent overview of the current state of the art. The presentation is completed by a short description and critical comparison of tools and practical methodologies, which will help readers to resolve their own quality problems. This book is an ideal combination of the soundness of theoretical foundations and the applicability of practical approaches. It is ideally suited for everyone - researchers, students, or professionals - interested in a comprehensive overview of data quality issues. In addition, it will serve as the basis for an introductory course or for self-study on this topic.
This book provides a systematic and comparative description of the vast number of research issues related to the quality of data and information. It does so by delivering a sound, integrated and comprehensive overview of the state of the art and future development of data and information quality in databases and information systems. To this end, it presents an extensive description of the techniques that constitute the core of data and information quality research, including record linkage (also called object identification), data integration, and error localization and correction, and it examines these techniques in a comprehensive and original methodological framework. Quality dimension definitions and adopted models are also analyzed in detail, and differences between the proposed solutions are highlighted and discussed. Furthermore, while systematically describing data and information quality as an autonomous research area, the book also covers paradigms and influences deriving from other areas, such as probability theory, statistical data analysis, data mining, knowledge representation, and machine learning. Last but not least, the book also highlights very practical solutions, such as methodologies, benchmarks for the most effective techniques, case studies, and examples. The book has been written primarily for researchers in the fields of databases and information management, or in the natural sciences, who are interested in investigating properties of data and information that have an impact on the quality of experiments, processes, and real life. The material presented is also sufficiently self-contained for master's or PhD-level courses, and it covers all the fundamentals and topics without the need for other textbooks. Data and information system administrators and practitioners, who deal with systems exposed to data-quality issues and therefore need a systematization of the field and practical methods in the area, will also benefit from the combination of concrete practical approaches with sound theoretical formalisms.
Cooperative Information Systems have emerged as a central concept in a variety of applications, projects, and systems in the new era of e-business. The conference at which the papers in this volume were presented was the ninth international conference on the topic of Cooperative Information Systems (CoopIS 2001), held in Trento, Italy, on September 5-7, 2001. Like the previous conferences, CoopIS 2001 was remarkably successful in bringing together representatives of many different fields, spanning the entire range of effective web-based Cooperative Information Systems, with interests ranging from industrial experience to original research concepts and results. The 29 papers collected here, out of the 79 that were submitted, demonstrate well the range of results achieved in several areas such as agent technologies, models and architectures, web systems, information integration, middleware technologies, and federated and multi-database systems. The papers themselves, however, do not convey the lively excitement of the conference itself, and the continuing spirit of cooperation and communication across disciplines that has been the hallmark of these conferences. We would especially like to thank our keynote speakers: Philip A. Bernstein (Microsoft Research, USA), Edward E. Cobb (BEA Systems, USA), and Maurizio Lenzerini (Università di Roma "La Sapienza", Italy) for providing a portrait of the best contemporary work in the field. We would also like to thank the many people who made CoopIS 2001 possible.
This book argues that "organizing" is a broader term than managing, as it entails understanding how people and machines interact with each other; how resources, data, and goods are exchanged in complex and intertwined value chains; and how lines of action and activities can be articulated using flexible protocols and often ad hoc processes in situated practices of use and production. The book presents a collection of research papers shedding new light on these phenomena and related practices from both academic and professional perspectives. Given the plurality of views that it offers, the book makes a relevant contribution to the understanding and appreciation of the complexity of the digital world at various levels of granularity. It focuses on how individuals, communities and the coopetitive societies of our new, global and hyperconnected world produce value and pursue their objectives and ideals in mutually dependent ways. The content of the book is based on a selection of the best papers - original double-blind peer-reviewed contributions - presented at the annual conference of the Italian chapter of the AIS, which was held in Milan, Italy, in October 2017.
Poor data quality can seriously hinder or damage the efficiency and effectiveness of organizations and businesses. The growing awareness of such repercussions has led to major public initiatives such as the enactment of the "Data Quality Act" in the United States and of Directive 2003/98 of the European Parliament. The authors present a complete and systematic introduction to the broad set of problems related to data quality. The book begins with a detailed description of various data quality dimensions, such as accuracy, completeness, and consistency, and discusses their importance in relation both to different types of data, such as federated data, data on the web, and time-dependent data, and to the different categories into which data can be classified. The thorough description of techniques and methodologies drawn not only from research in the area of data quality but also from related areas, such as data mining, probability theory, statistical data analysis, and machine learning, provides an excellent introduction to the current state of the art. The presentation is completed by a short description and a critical comparison of tools and practical methodologies, which will help readers solve their own quality problems. This book is the ideal combination of the soundness of theoretical foundations and the applicability of practical approaches. It is ideal for anyone - researchers, students, or professionals - interested in a comprehensive overview of the problems of data quality. It can also be used as a textbook for an introductory course on the subject, or for self-study.