The authors focus on the mathematical models and methods that support most data mining applications and solution techniques.
This book presents an overview of techniques for discovering high-utility patterns (patterns with a high importance) in data. It introduces the main types of high-utility patterns, as well as the theory and core algorithms for high-utility pattern mining, and describes recent advances, applications, open-source software, and research opportunities. It also discusses several types of discrete data, including customer transaction data and sequential data. The book consists of twelve chapters, seven of which are surveys presenting the main subfields of high-utility pattern mining, including itemset mining, sequential pattern mining, big data pattern mining, metaheuristic-based approaches, privacy-preserving pattern mining, and pattern visualization. The remaining five chapters describe key techniques and applications, such as discovering concise representations and regular patterns.
Despite its explosive growth over the last decade, the Web remains essentially a tool to allow humans to access information. Semantic Web technologies like RDF, OWL and other W3C standards aim to extend the Web's capability through increased availability of machine-processable information. Davies, Grobelnik and Mladenic have grouped contributions from renowned researchers into four parts: technology; integration aspects of knowledge management; knowledge discovery and human language technologies; and case studies. Together, they offer a concise vision of semantic knowledge management, ranging from knowledge acquisition to ontology management to knowledge integration, and their applications in domains such as telecommunications, social networks and legal information processing. This book is an excellent combination of fundamental research, tools and applications in Semantic Web technologies. It serves the fundamental interests of researchers and developers in this field in both academia and industry who need to track Web technology developments and to understand their business implications.
In today's market, emerging technologies continually assist common workplace practices as companies and organizations search for innovative ways to solve modern issues that arise. Prevalent applications, including the Internet of Things (IoT), big data, and cloud computing, all have noteworthy benefits, but issues remain when they are integrated into professional practice separately. Significant research is needed on converging these systems and leveraging each of their advantages in order to find solutions to real-time problems that still exist. Challenges and Opportunities for the Convergence of IoT, Big Data, and Cloud Computing is a pivotal reference source that provides vital research on the relation between these technologies and the impact they collectively have in solving real-world challenges. While highlighting topics such as cloud-based analytics, intelligent algorithms, and information security, this publication explores current issues that remain when attempting to implement these systems, as well as the specific applications IoT, big data, and cloud computing have in various professional sectors. This book is ideally designed for academicians, researchers, developers, computer scientists, IT professionals, practitioners, scholars, students, and engineers seeking research on the integration of emerging technologies to solve modern societal issues.
Content-based multimedia retrieval is a challenging research field with many unsolved problems. This monograph details concepts and algorithms for robust and efficient information retrieval of two different types of multimedia data: waveform-based music data and human motion data. It first examines several approaches in music information retrieval, in particular general strategies as well as efficient algorithms. The book then introduces a general and unified framework for motion analysis, retrieval, and classification, highlighting the design of suitable features, the notion of similarity used to compare data streams, and data organization.
Background: Information Retrieval (IR) has become, mainly as a result of the huge impact of the World Wide Web (WWW) and the CD-ROM industry, one of the most important theoretical and practical research topics in Information and Computer Science. Since the inception of its first theoretical roots about 40 years ago, IR has made a variety of practical, experimental and technological advances. It is usually defined as being concerned with the organisation, storage, retrieval and evaluation of information (stored in computer databases) that is likely to be relevant to users' information needs (expressed in queries). A huge number of articles published in specialised journals and at conferences (such as, for example, the Journal of the American Society for Information Science, Information Processing and Management, The Computer Journal, Information Retrieval, Journal of Documentation, ACM TOIS, ACM SIGIR Conferences, etc.) deal with many different aspects of IR. A number of books have also been written about IR, for example: van Rijsbergen, 1979; Salton and McGill, 1983; Korfhage, 1997; Kowalski, 1997; Baeza-Yates and Ribeiro-Neto, 1999; etc. IR is typically divided and presented in a structure (models, data structures, algorithms, indexing, evaluation, human-computer interaction, digital libraries, WWW-related aspects, and so on) that reflects its interdisciplinary nature. All theoretical and practical research in IR is ultimately based on a few basic models (or types) which have been elaborated over time. Every model has a formal (mathematical, algorithmic, logical) description of some sort, and these descriptions are scattered all over the literature.
This book shows C# developers how to use C# 2008 and ADO.NET 3.5 to develop database applications the way the best professionals do. After an introductory section, section 2 shows how to use data sources and datasets for Rapid Application Development and prototyping of Windows Forms applications. Section 3 shows how to build professional 3-layer applications that consist of presentation, business, and database classes. Section 4 shows how to use the new LINQ feature to work with data structures like datasets, SQL Server databases, and XML documents. And section 5 shows how to build database applications by using the new Entity Framework to map business objects to database objects. To ensure mastery, this book presents 23 complete database applications that demonstrate best programming practices. And it's all done in the distinctive Murach style that has been training professional developers for 35 years.
Temporal Information Systems in Medicine introduces the engineering of information systems for medically-related problems and applications. The chapters are organized into four parts: fundamentals, temporal reasoning & maintenance in medicine, time in clinical tasks, and the display of time-oriented clinical information. The chapters are self-contained, with pointers to other relevant chapters or sections in this book when necessary. Time is of central importance and is a key component of the engineering process for information systems. This book is designed as a secondary text or reference book for upper-undergraduate level students and graduate level students concentrating on computer science, biomedicine and engineering. Industry professionals and researchers working in health care management, information systems in medicine, medical informatics, database management and AI will also find this book a valuable asset.
With its effective techniques and theories drawn from various sources and fields, data science is playing a vital role in transportation research, including the consequences of the inevitable switch to electric vehicles. This fundamental insight provides a step towards the solution of this important challenge. Data Science and Simulation in Transportation Research highlights entirely new and detailed spatial-temporal micro-simulation methodologies for human mobility and the emerging dynamics of our society. Bringing together novel ideas grounded in big data from various data mining and transportation science sources, this book is an essential tool for professionals, students, and researchers in the fields of transportation research and data mining.
This book focuses on new and emerging data mining solutions that offer a greater level of transparency than existing solutions. Transparent data mining solutions with desirable properties (e.g. effective, fully automatic, scalable) are covered in the book. Experimental findings of transparent solutions are tailored to different domain experts, and experimental metrics for evaluating algorithmic transparency are presented. The book also discusses societal effects of black box vs. transparent approaches to data mining, as well as real-world use cases for these approaches. As algorithms increasingly support different aspects of modern life, a greater level of transparency is sorely needed, not least because discrimination and biases have to be avoided. With contributions from domain experts, this book provides an overview of an emerging area of data mining that has profound societal consequences, and provides the technical background for readers to contribute to the field or to put existing approaches to practical use.
In this book about a hundred papers are presented. These were selected from over 450 papers submitted to WCCE95. The papers are of high quality and cover many aspects of computers in education. Within the overall theme of "Liberating the learner" the papers cover the following main conference themes: Accreditation, Artificial Intelligence, Costing, Developing Countries, Distance Learning, Equity Issues, Evaluation (Formative and Summative), Flexible Learning, Implications, Informatics as Study Topic, Information Technology, Infrastructure, Integration, Knowledge as a Resource, Learner Centred Learning, Methodologies, National Policies, Resources, Social Issues, Software, Teacher Education, Tutoring, Visions. Also included are papers from the chairpersons of the six IFIP Working Groups on education (elementary/primary education, secondary education, university education, vocational education and training, research on educational applications and distance learning). In these papers the work in the groups is explained and a basis is given for the work of Professional Groups during the world conference. In the Professional Groups experts share their experience and expertise with other expert practitioners and contribute to a post-conference report which will determine future actions of IFIP with respect to education. J. David Tinsley, Tom J. van Weert, Editors. Acknowledgement: The editors wish to thank Deryn Watson of King's College London for organizing the paper reviewing process. The editors also wish to thank the School of Informatics, Faculty of Mathematics and Informatics of the Catholic University of Nijmegen for its support in the production of this document.
The present text aims at helping the reader to maximize the reuse of information. Topics covered include tools and services for creating simple, rich, and reusable knowledge representations, and strategies for integrating this knowledge into legacy systems. Reuse and integration are essential concepts that must be enforced to avoid duplicating effort and reinventing the wheel each time in the same field. This problem is investigated from different perspectives. In organizations, high volumes of data from different sources pose a major challenge to filtering out the information needed for effective decision making. The reader will be informed of the most recent advances in information reuse and integration.
Manufacturing and operations management paradigms are evolving toward more open and resilient spaces where innovation is driven not only by ever-changing customer needs but also by agile and fast-reacting networked structures. Flexibility, adaptability and responsiveness are properties that the next generation of systems must have in order to successfully support such new emerging trends. Customers are being attracted to be involved in co-innovation networks, as improved responsiveness and agility are expected from industry ecosystems. Renewed production systems need to be modeled, engineered and deployed in order to achieve cost-effective solutions. BASYS conferences have been developed and organized as a forum in which to share visions and research findings for innovative, sustainable and knowledge-based products-services and manufacturing models. Thus, the focus of BASYS is to discuss how human actors, emergent technologies and even organizations are integrated in order to redefine the way in which the value-creation process must be conceived and realized. BASYS 2010, which was held in Valencia, Spain, proposed new approaches in automation where synergies between people, systems and organizations need to be fully exploited in order to create high added-value products and services. This book contains the selection of the papers which were accepted for presentation at the BASYS 2010 conference, covering consolidated and emerging topics of the conference scope.
During the last decade, Knowledge Discovery and Management (KDM or, in French, EGC for Extraction et Gestion des connaissances) has been an intensive and fruitful research topic in the French-speaking scientific community. In 2003, this enthusiasm for KDM led to the foundation of a specific French-speaking association, called EGC, dedicated to supporting and promoting this topic. More precisely, KDM is concerned with the interface between knowledge and data such as, among other things, Data Mining, Knowledge Discovery, Business Intelligence, Knowledge Engineering and Semantic Web. The recent and novel research contributions collected in this book are extended and reworked versions of a selection of the best papers that were originally presented in French at the EGC 2010 Conference held in Tunis, Tunisia in January 2010. The volume is organized in three parts. Part I includes four chapters concerned with various aspects of Data Cube and Ontology-based representations. Part II is composed of four chapters concerned with Efficient Pattern Mining issues, while in Part III the last four chapters address Data Preprocessing and Information Retrieval.
Security of Data and Transaction Processing brings together in one place important contributions and up-to-date research results in this fast moving area. It serves as an excellent reference, providing insight into some of the most challenging research issues in the field.
Uncertainty Handling and Quality Assessment in Data Mining provides an introduction to the application of these concepts in Knowledge Discovery and Data Mining. It reviews the state-of-the-art in uncertainty handling and discusses a framework for unveiling and handling uncertainty. Coverage of quality assessment begins with an introduction to cluster analysis and a comparison of the methods and approaches that may be used. The techniques and algorithms involved in other essential data mining tasks, such as classification and extraction of association rules, are also discussed together with a review of the quality criteria and techniques for evaluating the data mining results. This book presents a general framework for assessing quality and handling uncertainty which is based on tested concepts and theories. This framework forms the basis of an implementation tool, 'Uminer' which is introduced to the reader for the first time. This tool supports the key data mining tasks while enhancing the traditional processes for handling uncertainty and assessing quality. Aimed at IT professionals involved with data mining and knowledge discovery, the work is supported with case studies from epidemiology and telecommunications that illustrate how the tool works in 'real world' data mining projects. The book would also be of interest to final year undergraduates or post-graduate students looking at: databases, algorithms, artificial intelligence and information systems particularly with regard to uncertainty and quality assessment.
The first Annual Working Conference of WG 11.4 of the International Federation for Information Processing (IFIP) focuses on various state-of-the-art concepts in the field of Network and Distributed Systems Security. Our society is rapidly evolving and irreversibly set on a course governed by electronic interactions. We have seen the birth of e-mail in the early seventies, and are now facing new challenging applications such as e-commerce, e-government, and more. The more our society relies on electronic forms of communication, the more the security of these communication networks is essential for its well-functioning. As a consequence, research on methods and techniques to improve network security is of paramount importance. This Working Conference brings together researchers and practitioners of various disciplines, organisations and countries, to discuss the latest developments in security protocols, secure software engineering, mobile agent security, e-commerce security and security for distributed computing. We are also pleased to have attracted two international speakers to present two case studies, one dealing with Belgium's intention to replace the identity card of its citizens by an electronic version, and the other discussing the implications of security certification in a multinational corporation. This Working Conference should also be considered as the kick-off activity of WG 11.4, the aims of which can be summarized as follows: to promote research on technical measures for securing computer networks, including both hardware- and software-based techniques; to promote dissemination of research results in the field of network security in real-life networks in industry, academia and administrative institutions; and to promote education in the application of security techniques, and to promote general awareness about security problems in the broad field of information technology. Researchers and practitioners who want to get involved in this Working Group are kindly requested to contact the chairman.
More information on the workings of WG 11.4 is available from the official IFIP website: http://www.ifip.at.org/. Finally, we wish to express our gratitude to all those who have contributed to this conference in one way or another. We are grateful to the international referee board who reviewed all the papers, and to the authors and invited speakers, whose contributions were essential to the success of the conference. We would also like to thank the participants, whose presence and interest, together with the changing imperatives of society, will prove a driving force for future conferences.
This book provides an overview of the theory and application of linear and nonlinear mixed-effects models in the analysis of grouped data, such as longitudinal data, repeated measures, and multilevel data. Over 170 figures are included in the book.
This book presents a new diagnostic information methodology to assess the quality of conversational telephone speech. For this, a conversation is separated into three individual conversational phases (listening, speaking, and interaction), and for each phase corresponding perceptual dimensions are identified. A new analytic test method allows gathering dimension ratings from non-expert test subjects in a direct way. The identification of the perceptual dimensions and the new test method are validated in two sophisticated conversational experiments. The dimension scores gathered with the new test method are used to determine the quality of each conversational phase, and the qualities of the three phases, in turn, are combined for overall conversational quality modeling. The conducted fundamental research forms the basis for the development of a preliminary new instrumental diagnostic conversational quality model. This multidimensional analysis of conversational telephone speech is a major landmark towards deeply analyzing conversational speech quality for diagnosis and optimization of telecommunication systems.
New state-of-the-art techniques for analyzing and managing Web data have emerged due to the need for dealing with huge amounts of data which are circulated on the Web. "Web Data Management Practices: Emerging Techniques and Technologies" provides a thorough understanding of major issues, current practices, and the main ideas in the field of Web data management, helping readers to identify current and emerging issues, as well as future trends in this area. The book presents a complete overview of important aspects related to Web data management practices, such as Web mining, Web data clustering, and others. It also covers an extensive range of topics, including related issues about Web mining, Web caching and replication, Web services, and the XML standard.
Learn how applying risk management to each stage of the software engineering model can help the entire development process run on time and on budget. This practical guide identifies the potential threats associated with software development, explains how to establish an effective risk management program, and details the six critical steps involved in applying the process. It also explores the pros and cons of software and organizational maturity, discusses various software metrics approaches you can use to measure software quality, and highlights procedures for implementing a successful metrics program.
This book investigates the powerful role of online intermediaries, which connect companies with their end customers, to facilitate joint product innovation. Especially in the healthcare context, such intermediaries deploy interactive online platforms to foster co-creation between engaged healthcare consumers and innovation-seeking healthcare companies. In three empirical studies, this book outlines the key characteristics of online intermediaries in healthcare, their distinct strategies, and the remaining challenges in the field. Readers will also be introduced to the stages companies go through in adopting such co-created solutions. As such, the work appeals for both its academic scope and practical reach.
The proliferation of digital computing devices and their use in communication has resulted in an increased demand for systems and algorithms capable of mining textual data. Thus, the development of techniques for mining unstructured, semi-structured, and fully-structured textual data has become increasingly important in both academia and industry. This second volume continues to survey the evolving field of text mining - the application of techniques of machine learning, in conjunction with natural language processing, information extraction and algebraic/mathematical approaches, to computational information retrieval. Numerous diverse issues are addressed, ranging from the development of new learning approaches to novel document clustering algorithms, collectively spanning several major topic areas in text mining. Features:
- Acts as an important benchmark in the development of current and future approaches to mining textual information
- Serves as an excellent companion text for courses in text and data mining, information retrieval and computational statistics
- Experts from academia and industry share their experiences in solving large-scale retrieval and classification problems
- Presents an overview of current methods and software for text mining
- Highlights open research questions in document categorization and clustering, and trend detection
- Describes new application problems in areas such as email surveillance and anomaly detection
Survey of Text Mining II offers a broad selection of state-of-the-art algorithms and software for text mining from both academic and industrial perspectives, to generate interest and insight into the state of the field. This book will be an indispensable resource for researchers, practitioners, and professionals involved in information retrieval, computational statistics, and data mining. Michael W. Berry is a professor in the Department of Electrical Engineering and Computer Science at the University of Tennessee, Knoxville. Malu Castellanos is a senior researcher at Hewlett-Packard Laboratories in Palo Alto, California.
Text mining applications have experienced tremendous advances because of web 2.0 and social networking applications. Recent advances in hardware and software technology have led to a number of unique scenarios where text mining algorithms are learned. Mining Text Data introduces an important niche in the text analytics field, and is an edited volume contributed by leading international researchers and practitioners focused on social networks & data mining. This book contains a wide swath of topics across social networks & data mining. Each chapter contains a comprehensive survey including the key research content on the topic, and the future directions of research in the field. There is a special focus on Text Embedded with Heterogeneous and Multimedia Data, which makes the mining process much more challenging. A number of methods have been designed, such as transfer learning and cross-lingual mining, for such cases. Mining Text Data simplifies the content, so that advanced-level students, practitioners and researchers in computer science can benefit from this book. Academic and corporate libraries, as well as ACM, IEEE, and Management Science professionals focused on information security, electronic commerce, databases, data mining, machine learning, and statistics are the primary buyers for this reference book.
You may like...
- China's Macroeconomic Outlook… (Center for Macroeconomic Research at Xia)
- Applied Shape Optimization for Fluids (Bijan Mohammadi, Olivier Pironneau)
- Topology Optimization - Theory, Methods… (Martin Philip Bendsoe, Ole Sigmund)
- Developments in Global Optimization (Immanuel M Bomze, Tibor Csendes, …)
- Metaheuristics for Data Clustering and… (Meera Ramadas, Ajith Abraham)
- Human Centric Visual Analysis with Deep… (Liang Lin, Dongyu Zhang, …)
- Interactive 3D Multimedia Content… (Wojciech Cellary, Krzysztof Walczak)