Web mining has become a popular area of research, integrating the research areas of data mining and the World Wide Web. According to the taxonomy of Web mining, there are three sub-fields of Web-mining research: Web usage mining, Web content mining and Web structure mining. These three research fields cover most content and activities on the Web. With the rapid growth of the World Wide Web, Web mining has become a hot topic and is now part of the mainstream of Web research, such as Web information systems and Web intelligence. Among all of the possible applications in Web research, e-commerce and e-services have been identified as important domains for Web-mining techniques. Web-mining techniques also play an important role in e-commerce and e-services, proving to be useful tools for understanding how e-commerce and e-service Web sites and services are used, enabling the provision of better services for customers and users. Thus, this book focuses on Web-mining applications in e-commerce and e-services. Some chapters in this book are extended from papers presented at WMEE 2008 (the 2nd International Workshop for E-commerce and E-services). In addition, we also invited researchers well known in this area to contribute to this book. The chapters of this book are introduced as follows: In chapter 1, Peter I.
Introduction The International Federation for Information Processing (IFIP) is a non-profit umbrella organization for national societies working in the field of information processing. It was founded in 1960 under the auspices of UNESCO. It is organized into several technical committees. This book represents the proceedings of the 2008 conference of technical committee 8 (TC8), which covers the field of information systems. TC8 aims to promote and encourage the advancement of research and practice of concepts, methods, techniques and issues related to information systems in organisations. TC8 has established eight working groups covering the following areas: design and evaluation of information systems; the interaction of information systems and the organization; decision support systems; e-business information systems: multi-disciplinary research and practice; information systems in public administration; smart cards, technology, applications and methods; and enterprise information systems. Further details of the technical committee and its working groups can be found on our website (ifiptc8.dsi.uminho.pt). This conference was part of IFIP's World Computer Congress in Milan, Italy, which took place 7-10 September 2008. The occasion celebrated the 32nd anniversary of IFIP TC8. The call for papers invited researchers, educators, and practitioners to submit papers and panel proposals that advance concepts, methods, techniques, tools, issues, education, and practice of information systems in organizations. Thirty-one submissions were received.
This invaluable reference offers the most comprehensive introduction available to the concepts of multisensor data fusion. It introduces key algorithms, provides advice on their utilization, and raises issues associated with their implementation. With a diverse set of mathematical and heuristic techniques for combining data from multiple sources, the book shows how to implement a data fusion system, describes the process of algorithm selection, functional architectures, and requirements for ancillary software, and illustrates man-machine interface requirements and database issues.
Researchers in data management have recently recognized the importance of a new class of data-intensive applications that requires managing data streams, i.e., data composed of continuous, real-time sequences of items. Streaming applications pose new and interesting challenges for data management systems. Such application domains require queries to be evaluated continuously, as opposed to the one-time evaluation of a query in traditional applications. Streaming data sets grow continuously, and queries must be evaluated on such unbounded data sets. These, as well as other challenges, require a major rethink of almost all aspects of traditional database management systems to support streaming applications. Stream Data Management comprises eight invited chapters by researchers active in stream data management. The collected chapters provide an exposition of algorithms, languages, and systems proposed and implemented for managing streaming data. Stream Data Management is designed to appeal to researchers and practitioners already involved in stream data management, as well as to those starting out in this area. This book is also suitable for graduate students in computer science interested in learning about stream data management.
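The contrast between one-time and continuous query evaluation can be made concrete with a small sketch. The following Python snippet is only an illustration of the continuous-query idea described above, not code from the book; the window size and the readings are invented for the example.

```python
from collections import deque

def continuous_avg(stream, window=3):
    """Continuously re-evaluate an average over a sliding window,
    emitting a fresh result for every arriving item -- unlike a
    one-time query over a finite, stored table."""
    buf = deque(maxlen=window)
    for item in stream:
        buf.append(item)
        yield sum(buf) / len(buf)

# Illustrative source: readings arriving one at a time from an unbounded feed.
readings = [3.0, 4.5, 5.1, 4.9, 6.2, 5.8]
for result in continuous_avg(readings, window=3):
    print(result)
```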
Richard Chbeir, Youakim Badr, Ajith Abraham, and Aboul-Ella Hassanien. Abstract: As the Web continues to grow and evolve, more and more data are becoming available. In particular, multimedia and XML-based data are produced regularly and in increasing volumes in our daily digital activities, and their retrieval and access must be explored and studied in this emergent Web-based era. This book provides reviews of cutting-edge technologies and insights into various topics related to XML-based and multimedia information access and retrieval under the umbrella of Web Intelligence, reporting how organizations can gain competitive advantages by applying different emergent techniques in real-world scenarios. The primary target audience for the book includes researchers, scholars, postgraduate students and developers who are interested in advanced information retrieval on the Web and related issues. 1 Introduction: Over the last two decades, the Internet has changed our daily life by redefining the meanings and processes of business, commerce, marketing, finance, publishing, R. Chbeir, Université de Bourgogne, LE2I-UMR CNRS 5158, Fac. de Sciences Mirande, 21078 Dijon Cedex, France, e-mail: richard.chbeir@u-bourgogne.fr; Y. Badr, INSA de Lyon, Université de Lyon, Département Informatique, 7 avenue Jean Capelle, 69621 Villeurbanne CX, France, e-mail: youakim.badr@insa-lyon.fr; A. Abraham, Norwegian University of Science & Technology, Center for Quantifiable Quality of Service in Communication Systems, O. S. Bragstads plass 2E, 7491 Trondheim, Norway, e-mail: ajith.abraham@ieee.org; A.-E. Hassanien, Kuwait University, College of Business & Administration, Dept.
This book covers challenges and solutions in establishing Industry 4.0 standards for the Internet of Things. It proposes a clear view of the role of the Internet of Things in establishing such standards. Sensor design for industrial problems, the challenges faced, and their solutions are all addressed. The concept of the digital twin and the complexity of data analytics for predictive maintenance and fault prediction are also covered. The book is aimed at problems currently faced by industry, with the goals of cost efficiency and unmanned automation. It also concentrates on predictive maintenance and the prediction of failures. In addition, it includes design challenges and a survey of the literature.
Data engineering has grown rapidly in the past decade, leaving many software engineers, data scientists, and analysts looking for a comprehensive view of this practice. With this practical book, you will learn how to plan and build systems to serve the needs of your organization and customers by evaluating the best technologies available in the framework of the data engineering lifecycle. Authors Joe Reis and Matt Housley walk you through the data engineering lifecycle and show you how to stitch together a variety of cloud technologies to serve the needs of downstream data consumers. You will understand how to apply the concepts of data generation, ingestion, orchestration, transformation, storage, governance, and deployment that are critical in any data environment regardless of the underlying technology. This book will help you: assess data engineering problems using an end-to-end framework of best practices; cut through marketing hype when choosing data technologies, architecture, and processes; use the data engineering lifecycle to design and build a robust architecture; and incorporate data governance and security across the data engineering lifecycle.
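As a rough, hypothetical illustration of how a few of the lifecycle stages named above (ingestion, transformation, storage) fit together, here is a minimal Python sketch; the table name, sample records, and in-memory SQLite store are assumptions made for the example rather than anything prescribed by the book.

```python
import sqlite3

def ingest():
    """Ingestion: pull raw records from a source system (hard-coded here)."""
    return [{"user": "a", "amount": "10.5"}, {"user": "b", "amount": "3.2"}]

def transform(records):
    """Transformation: clean and type the raw records."""
    return [(r["user"], float(r["amount"])) for r in records]

def store(rows, db_path=":memory:"):
    """Storage: persist transformed rows for downstream consumers."""
    con = sqlite3.connect(db_path)
    con.execute("CREATE TABLE IF NOT EXISTS payments (user TEXT, amount REAL)")
    con.executemany("INSERT INTO payments VALUES (?, ?)", rows)
    con.commit()
    return con

con = store(transform(ingest()))
print(con.execute("SELECT user, SUM(amount) FROM payments GROUP BY user").fetchall())
```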
Details recent research in areas such as ontology design for information integration, metadata generation and management, and representation and management of distributed ontologies. Provides decision support on the use of novel technologies, information about potential problems, and guidelines for the successful application of existing technologies.
This book focuses on the basic theory and methods of multisensor data fusion state estimation and its applications. It consists of four parts with 12 chapters. In Part I, the basic framework and methods of multisensor optimal estimation and the basic concepts of Kalman filtering are briefly and systematically introduced. In Part II, data fusion state estimation algorithms in networked environments are introduced. Part III consists of three chapters, in which fusion estimation algorithms under event-triggered mechanisms are introduced. Part IV consists of two chapters, in which fusion estimation for systems with non-Gaussian, heavy-tailed noises is introduced. The book is primarily intended for researchers and engineers in the field of data fusion and state estimation. It will also benefit graduate and undergraduate students interested in target tracking, navigation, networked control, etc.
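As a minimal sketch of the fusion step that underlies Kalman-filter-based estimators, the Python snippet below combines two noisy measurements of the same quantity by inverse-variance weighting (the static special case of the Kalman measurement update); the sensor values and variances are illustrative assumptions, not material from the book.

```python
def fuse_two_sensors(x1, var1, x2, var2):
    """Inverse-variance (minimum-variance) fusion of two noisy measurements
    of the same quantity; the fused variance is never larger than either input."""
    fused_var = 1.0 / (1.0 / var1 + 1.0 / var2)
    fused_x = fused_var * (x1 / var1 + x2 / var2)
    return fused_x, fused_var

# Example: two range sensors observing the same target distance (metres).
x, v = fuse_two_sensors(10.2, 0.04, 9.8, 0.09)
print(f"fused estimate = {x:.3f} m, variance = {v:.4f}")
```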
DATA ENGINEERING: Mining, Information, and Intelligence describes applied research aimed at the task of collecting data and distilling useful information from that data. Most of the work presented emanates from research completed through collaborations between Acxiom Corporation and its academic research partners under the aegis of the Acxiom Laboratory for Applied Research (ALAR). Chapters are roughly ordered to follow the logical sequence of the transformation of data from raw input data streams to refined information. Four discrete sections cover Data Integration and Information Quality; Grid Computing; Data Mining; and Visualization. Additionally, there are exercises at the end of each chapter. The primary audience for this book is anyone interested in data engineering, whether in academia, market research firms, or business-intelligence companies. The volume is ideally suited for researchers, practitioners, and postgraduate students alike. With its focus on problems arising from industry rather than from a basic research perspective, combined with its intelligent organization, extensive references, and subject and author indices, it can serve academic, research, and industrial audiences.
Noisy data appears very naturally in applications where authentication is based on physical identifiers, such as human beings, or on physical structures, such as physical unclonable functions. This book examines how the presence of noise affects information security, describes how it can be dealt with and possibly used to generate an advantage over traditional approaches, and provides a self-contained overview of the techniques and applications of security based on noisy data. Security with Noisy Data thoroughly covers the theory of authentication based on noisy data and shows it in practice as a key tool for preventing counterfeiting. Part I discusses security primitives that allow noisy inputs, and Part II focuses on the practical applications of the methods discussed in the first part. Key features: contains algorithms to derive secure keys from noisy data, in particular from physical unclonable functions and biometrics, as well as the theory proving that those algorithms are secure; offers practical implementations of algorithms, including techniques that give insight into system security; includes an overview and detailed description of new applications made possible by using these new algorithms; discusses recent theoretical as well as application-oriented developments in the field, combining noisy data with cryptography; describes the foundations of the subject in a clear, accessible and reader-friendly style; presents the principles of key establishment and multiparty computation over noisy channels; provides a detailed overview of the building blocks of cryptography for noisy data and explains how these techniques can be applied (for example, to anti-counterfeiting and key storage); introduces privacy-protected biometric systems, analyzes the theoretical and practical properties of PUFs and discusses PUF-based systems; and addresses biometrics and physical unclonable functions extensively. This comprehensive introduction offers an excellent foundation to graduate students and researchers entering the field, and will also benefit professionals needing to expand their knowledge. Readers will gain a well-rounded and broad understanding of the topic through the insight it provides into both theory and practice. Pim Tuyls is a Principal Scientist at Philips Research and a Visiting Professor at the COSIC Department of the Katholieke Universiteit Leuven; Dr Boris Skoric and Dr Tom Kevenaar are research scientists at Philips Research Laboratories, Eindhoven.
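To make the key-derivation idea concrete, here is a toy Python sketch of the code-offset construction with a simple repetition code, in the spirit of the secure sketches and fuzzy extractors the book covers; the repetition factor, bit lengths, and function names are assumptions chosen for illustration, and a real system would use a stronger error-correcting code plus a privacy-amplification step.

```python
import secrets

REP = 5  # repetition factor: tolerates up to 2 flipped bits per key bit

def enroll(w_bits, key_bits):
    """Code-offset secure sketch: the public helper data is the XOR of the
    noisy reading with a repetition-code encoding of a random key."""
    assert len(w_bits) == len(key_bits) * REP
    codeword = [b for b in key_bits for _ in range(REP)]
    return [wi ^ ci for wi, ci in zip(w_bits, codeword)]

def reproduce(w_noisy, helper):
    """Recover the key from a noisy re-reading plus the public helper data,
    using majority-vote decoding of the repetition code."""
    offset = [wi ^ hi for wi, hi in zip(w_noisy, helper)]
    return [1 if sum(offset[i:i + REP]) > REP // 2 else 0
            for i in range(0, len(offset), REP)]

# Toy example: bind a 2-bit key to a 10-bit noisy identifier (e.g. a PUF response).
key = [1, 0]
w = [secrets.randbits(1) for _ in range(len(key) * REP)]
helper = enroll(w, key)
w_noisy = list(w)
w_noisy[3] ^= 1  # one bit flips when the identifier is read again
assert reproduce(w_noisy, helper) == key
print("recovered key:", reproduce(w_noisy, helper))
```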
The present volume gathers together the talks presented at the second colloquium on Future Professional Communication in Astronomy (FPCA II), held at Harvard University (Cambridge, MA) on 13-14 April 2010. This meeting provided a forum for editors, publishers, scientists, librarians and officers of learned societies to discuss the future of the field. The program included talks from leading researchers and practitioners and drew a crowd of approximately 50 attendees from 10 countries. These proceedings contain contributions based on the invited and contributed talks by leaders in the field, touching on a number of topics. Among them: the role of disciplinary repositories such as ADS and arXiv in astronomy and the physical sciences; the current status and future of Open Access publishing models and their impact on astronomy and astrophysics publishing; emerging trends in scientific article publishing, including semantic annotations, multimedia content, and links to data products hosted by astrophysics archives; novel approaches to the evaluation of facilities and projects based on bibliometric indicators; the impact of government mandates, privacy laws, and intellectual property rights on the evolving digital publishing environment in astronomy; and communicating astronomy to the public, including the experience of the International Year of Astronomy 2009.
Information Processing and Security Systems is a collection of forty papers that were originally presented at an international multi-conference on Advanced Computer Systems (ACS) and Computer Information Systems and Industrial Management Applications (CISIM) held in Elk, Poland. This volume describes the latest developments in advanced computer systems and their applications within artificial intelligence, biometrics and information technology security. The volume also includes contributions on computational methods, algorithms and applications, computational science, education and industrial management applications.
This book presents the proceedings of the Working Conference on the societal and organizational implications for information systems of social inclusion. The contributed papers explore technology design and use in organizations, and consider the processes that engender social exclusion along with the issues that derive from it. The conference, sponsored by the International Federation for Information Processing Working Group 8.2, was held in Limerick, Ireland, in July 2006.
The technique of data fusion has been used extensively in information retrieval due to the complexity and diversity of the tasks involved, such as web and social network search, legal search, enterprise search, and many others. This book presents both a theoretical and an empirical approach to data fusion. Several typical data fusion algorithms are discussed, analyzed and evaluated. A reader will find answers to the following questions, among others: What are the key factors that significantly affect the performance of data fusion algorithms? What conditions are favorable to data fusion algorithms? Which of CombSum and CombMNZ is better, and why? What is the rationale for using the linear combination method? How can the best fusion option be found under any given circumstances?
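As a concrete reference point for CombSum and CombMNZ, here is a small Python sketch using their standard definitions (the sum of a document's normalized scores across systems, and that sum multiplied by the number of systems that retrieved the document); the toy score lists are invented for the example and do not come from the book's experiments.

```python
def comb_sum(score_lists):
    """CombSum: sum each document's (normalized) scores across all systems."""
    fused = {}
    for scores in score_lists:
        for doc, s in scores.items():
            fused[doc] = fused.get(doc, 0.0) + s
    return fused

def comb_mnz(score_lists):
    """CombMNZ: CombSum multiplied by the number of systems that retrieved
    the document, rewarding agreement between systems."""
    fused = comb_sum(score_lists)
    hits = {}
    for scores in score_lists:
        for doc in scores:
            hits[doc] = hits.get(doc, 0) + 1
    return {doc: fused[doc] * hits[doc] for doc in fused}

# Toy normalized score lists from two retrieval systems.
run_a = {"d1": 0.9, "d2": 0.4}
run_b = {"d1": 0.7, "d3": 0.6}
print(sorted(comb_mnz([run_a, run_b]).items(), key=lambda kv: -kv[1]))
```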
Artificial Intelligence and Security in Computing Systems is a peer-reviewed conference volume focusing on three areas of practice and research progress in information technologies: - Methods of Artificial Intelligence presents methods and algorithms which are the basis for applications of artificial intelligence environments.
This book constitutes the thoroughly refereed post conference proceedings of the 6th IFIP WG 9.2, 9.6/11.7, 11.4, 11.6/PrimeLife International Summer School, held in Helsingborg, Sweden, in August 2010. The 27 revised papers were carefully selected from numerous submissions during two rounds of reviewing. They are organized in topical sections on terminology, privacy metrics, ethical, social, and legal aspects, data protection and identity management, eID cards and eID interoperability, emerging technologies, privacy for eGovernment and AAL applications, social networks and privacy, privacy policies, and usable privacy.
Blockchain technologies, as an emerging distributed architecture and computing paradigm, have accelerated the development and application of Cloud/GPU/Edge Computing, Artificial Intelligence, cyber-physical systems, social networking, crowdsourcing and crowdsensing, 5G, trust management, and finance. The popularity and rapid development of Blockchain bring many technical and regulatory challenges for research and academic communities. This book will feature contributions from experts on topics related to performance, benchmarking, durability, and robustness, as well as data gathering and management, algorithms, analytics techniques for transaction processing, and the implementation of applications.
Linking Government Data provides a practical approach to addressing common information management issues. The approaches taken are based on international standards of the World Wide Web Consortium. Linking Government Data gives both the costs and benefits of using linked data techniques with government data; describes how agencies can fulfill their missions with less cost; and recommends how intra-agency culture must change to allow public presentation of linked data. Case studies from early adopters of linked data approaches in international governments are presented in the last section of the book. Linking Government Data is designed as a professional book for those working in Semantic Web research and standards development, and for early adopters of Semantic Web standards and techniques. Enterprise architects, project managers and application developers in commercial, not-for-profit and government organizations concerned with scalability, flexibility and robustness of information management systems will also find this book valuable. Students focused on computer science and business management will also find value in this book.
The emergence of open access, web technology, and e-publishing has slowly transformed modern libraries into digital libraries. With this variety of technologies in use, cloud computing and virtualization technology have become an advantage for libraries, providing a single efficient system that saves money and time. Cloud Computing and Virtualization Technologies in Libraries highlights the concerns and limitations that need to be addressed in order to optimize the benefits of cloud computing for the virtualization of libraries. Focusing on the latest innovations and technological advancements, this book is essential for professionals, students, and researchers interested in cloud library management and development in different types of information environments.
Motivation for the Book This book aims to describe a comprehensive methodology for service-oriented information systems planning, considered in particular in eGovernment initiatives. The methodology is based on the research results produced by the Italian project "eGovernment for Mediterranean Countries (eG4M)," funded by the Italian Ministry of University and Research from 2005 to 2008. The concept of service is at the center of the book. The methodology is focused on quality of services as a key factor for eGovernment initiatives. Since its grounding is in a project whose goal has been to develop a methodology for eGovernment in Mediterranean countries, it is called eG4M. Furthermore, eG4M aims at encompassing the relationships existing between ICT technologies and the social contexts of service provision, organizational issues, and the juridical framework, looking at ICT technologies more as a means than an end. eG4M satisfies a real need of constituencies and stakeholders involved in eGovernment projects, confirmed in the eG4M experimentations and in previous preliminary experiences in the Italian Public Administrations. A structured process is needed that provides a clear perspective on the different facets that eGovernment initiatives usually have to address and that disciplines the complex set of decisions to be taken. The available approaches to eGovernment usually provide public managers and local authorities with only one perspective on the domain of intervention, either technological, organizational, legal, economic, or social.
This book includes 23 papers dealing with the impact of modern information and communication technologies that support a wide variety of communities: local communities, virtual communities, and communities of practice, such as knowledge communities and scientific communities. The volume is the result of the second multidisciplinary "Communities and Technologies Conference," a major event in this emerging research field. The various chapters discuss how communities are affected by technologies, and how an understanding of the way communities function can be used to improve information systems design. This state-of-the-art overview will be of interest to computer and information scientists, social scientists and practitioners alike.
This book proposes a novel approach to classification, discusses its myriad advantages, and outlines how such an approach to classification can best be pursued. It encourages a collaborative effort toward the detailed development of such a classification. This book is motivated by the increased importance of interdisciplinary scholarship in the academy, and the widely perceived shortcomings of existing knowledge organization schemes in serving interdisciplinary scholarship. It is designed for scholars of classification research, knowledge organization, the digital environment, and interdisciplinarity itself. The approach recommended blends a general classification with domain-specific classification practices. The book reaches a set of very strong conclusions: -Existing classification systems serve interdisciplinary research and teaching poorly. -A novel approach to classification, grounded in the phenomena studied rather than disciplines, would serve interdisciplinary scholarship much better. It would also have advantages for disciplinary scholarship. The productivity of scholarship would thus be increased. -This novel approach is entirely feasible. Various concerns that might be raised can each be addressed. The broad outlines of what a new classification would look like are developed. -This new approach might serve as a complement to or a substitute for existing classification systems. -Domain analysis can and should be employed in the pursuit of a general classification. This will be particularly important with respect to interdisciplinary domains. -Though the impetus for this novel approach comes from interdisciplinarity, it is also better suited to the needs of the Semantic Web, and a digital environment more generally. Though the primary focus of the book is on classification systems, most chapters also address how the analysis could be extended to thesauri and ontologies. The possibility of a universal thesaurus is explored. The classification proposed has many of the advantages sought in ontologies for the Semantic Web. The book is therefore of interest to scholars working in these areas as well.
There are many invaluable books available on data mining theory and applications. However, in compiling a volume titled DATA MINING: Foundations and Intelligent Paradigms: Volume 1: Clustering, Association and Classification, we wish to introduce some of the latest developments to a broad audience of both specialists and non-specialists in this field.
You may like...
Demystifying Graph Data Science - Graph… by Pethuru Raj, Abhishek Kumar, … (Hardcover)
Database Solutions - A step by step… by Thomas Connolly, Carolyn Begg (Paperback, R2,256)
Database Systems: The Complete Book… by Hector Garcia-Molina, Jeffrey Ullman, … (Paperback, R2,849)
Fundamentals of Spatial Information… by Robert Laurini, Derek Thompson (Hardcover, R1,539)
Database Principles - Fundamentals of… by Carlos Coronel, Keeley Crockett, … (Paperback)