Pulling aside the curtain of 'Big Data' buzz, this book introduces C-suite and other non-technical senior leaders to the essentials of obtaining and maintaining accurate, reliable data, especially for decision-making purposes. Bad data begets bad decisions, and an understanding of data fundamentals - how data is generated, organized, stored, evaluated, and maintained - has never been more important when solving problems such as the pandemic-related supply chain crisis. This book addresses the data-related challenges that businesses face, answering questions such as: What are the characteristics of high-quality data? How do you get from bad data to good data? What procedures and practices ensure high-quality data? How do you know whether your data supports the decisions you need to make? This clear and valuable resource will appeal to C-suite executives and top-line managers across industries, as well as business analysts at all career stages and data analytics students.
Predictive analytics refers to making predictions about the future from historical data using machine learning and artificial intelligence techniques. This book provides the most recent advances in the field along with case studies and real-world examples. It discusses predictive modeling and analytics in reliability engineering and introduces current achievements and applications of artificial intelligence, data mining, and other techniques in supply chain management. It covers applications to reliability engineering practice, presents numerous examples to illustrate the theoretical results, and analyses case studies and real-world examples. The book is written for researchers and practitioners in the fields of system reliability, quality, supply chain management, and logistics management. Students taking courses in these areas will also find this book of interest.
In the digital age, modern society is exposed to high volumes of multimedia information. To make the most of this information, new and emerging methods of information retrieval and knowledge management are delivering higher efficiency and a deeper understanding of the data. The Handbook of Research on Biomimicry in Information Retrieval and Knowledge Management is a critical scholarly resource that examines bio-inspired classes of algorithms for solving computational problems. Featuring coverage on a broad range of topics such as big data analytics, bioinformatics, and black hole optimization, this book is geared towards academicians, practitioners, and researchers seeking current research on the use of biomimicry in information and knowledge management.
This proceedings volume highlights cutting-edge approaches to contemporary issues in strategic marketing and the integration of theory and practice. It focuses on strategic research and innovative activities in marketing that can be used in everyday operations. The contributions have been divided into eight sections, grouping emerging marketing technologies together in a close examination of practices, problems and trends. The first section examines management challenges which influence societies, cultures, networks, organizations, teams, and individuals. It emphasizes ways business processes foster innovation and facilitate management transitions from dominant structures to more evolutionary, developmental paradigms. The second section discusses the benefits of green marketing strategies and guidelines for their implementation. The following section pursues new perspectives on the role of location in marketing and its impact on consumer well-being. The next section explores the impacts of user-generated content (UGC) on marketing theories and practice, which is followed by a section identifying how market-based assets can contribute to a sustainable competitive advantage. The sixth section covers understanding consumer perception to make marketing decisions. The final sections promote the use of business informatics and modeling in marketing, and the development of integrated information management in ways that change how people use information to engage in knowledge-focused activities. The papers from the proceedings of the 6th International Conference on Strategic Innovative Marketing (IC-SIM 2017) have been written by scientists, researchers, practitioners and students who demonstrate a special orientation in strategic marketing, all of whom aspire to be ahead of the curve based on the pillars of innovation. This proceedings volume shares their recent contributions to the field and showcases their exchange of insights on strategic issues in the science of innovation marketing.
The World Wide Web and ubiquitous computing technologies have given a significant boost to the emerging field of XML database research. Open and Novel Issues in XML Database Applications: Future Directions and Advanced Technologies covers comprehensive issues and challenges discovered through leading international XML database research. A useful reference source for researchers, practitioners, and academicians, this book provides complete references to the latest research in XML technology.
We are living in a multilingual world, and the diversity of languages used to interact with information access systems has generated a wide variety of challenges to be addressed by computer and information scientists. The growing amount of non-English information accessible globally and the increased worldwide exposure of enterprises also necessitate the adaptation of Information Retrieval (IR) methods to new, multilingual settings. Peters, Braschler and Clough present a comprehensive description of the technologies involved in designing and developing systems for Multilingual Information Retrieval (MLIR). They provide readers with broad coverage of the various issues involved in creating systems to make digitally stored materials accessible regardless of the language(s) they are written in. Details on Cross-Language Information Retrieval (CLIR) are also covered, helping readers to understand how to develop retrieval systems that cross language boundaries. Their work is divided into six chapters and accompanies the reader step-by-step through the various stages involved in building, using and evaluating MLIR systems. The book concludes with some examples of recent applications that utilise MLIR technologies. Some of the techniques described have recently started to appear in commercial search systems, while others have the potential to be part of future incarnations. The book is intended for graduate students, scholars, and practitioners with a basic understanding of classical text retrieval methods. It offers guidelines and information on all aspects that need to be taken into consideration when building MLIR systems, while avoiding too many 'hands-on details' that could rapidly become obsolete. Thus it bridges the gap between the material covered by most of the classical IR textbooks and the novel requirements related to the acquisition and dissemination of information in whatever language it is stored.
This comprehensive and timely book, New Age Analytics: Transforming the Internet through Machine Learning, IoT, and Trust Modeling, explores the importance of tools and techniques used in machine learning, big data mining, and more. The book explains how advancements in the world of the web have been achieved and how the experiences of users can be analyzed. It looks at data gathering by various electronic means and explores techniques for the analysis and management of voluminous data, user responses, and more. This volume provides an abundance of valuable information for professionals and researchers working in the fields of business analytics, big data, social network data, computer science, analytical engineering, and forensic analysis. Moreover, the book provides insights and support from both practitioners and academia in order to highlight the most debated aspects in the field.
Today, the use of machine intelligence, expert systems, and analytical technologies combined with Big Data is the natural evolution of both disciplines. As a result, there is a pressing need for new and innovative algorithms to help us find effective and practical solutions for smart applications such as smart cities, IoT, healthcare, and cybersecurity. This book presents the latest advances in big data intelligence for smart applications. It explores several problems and their solutions regarding computational intelligence and big data for smart applications. It also discusses new models, practical solutions, and technological advances related to developing and transforming cities through machine intelligence and big data models and techniques. This book is helpful for students and researchers as well as practitioners.
A Tour of Data Science: Learn R and Python in Parallel covers the fundamentals of data science, including programming, statistics, optimization, and machine learning, in a single short book. It does not cover everything, but rather teaches the key concepts and topics in data science. It also covers two of the most popular programming languages used in data science, R and Python, in one source. Key features:
* Allows you to learn R and Python in parallel
* Covers statistics, programming, optimization and predictive modelling, and the popular data manipulation tools data.table and pandas
* Provides a concise and accessible presentation
* Includes machine learning algorithms implemented from scratch: linear regression, lasso, ridge, logistic regression, gradient boosting trees, etc.
The book appeals to data scientists, statisticians, quantitative analysts, and others who want to learn programming with R and Python from a data science perspective.
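As a taste of the "implemented from scratch" material listed above, the following minimal Python sketch fits ridge regression via its closed-form solution. It is illustrative only and not taken from the book; the synthetic data and the lambda value are assumptions.

```python
# Minimal sketch of one "from scratch" algorithm of the kind the blurb lists:
# ridge regression via its closed-form solution. Illustrative only -- the data
# and lambda value are assumptions, not material from the book.
import numpy as np

def ridge_regression(X, y, lam=1.0):
    """Fit ridge coefficients: beta = (X'X + lam*I)^-1 X'y (intercept omitted for brevity)."""
    n_features = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(n_features), X.T @ y)

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
true_beta = np.array([2.0, -1.0, 0.5])
y = X @ true_beta + rng.normal(scale=0.1, size=100)

print(ridge_regression(X, y, lam=0.5))  # coefficients close to [2.0, -1.0, 0.5]
```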
This book provides a comprehensive description of the novel coronavirus infection, its spread analysis, and the related challenges for effective combat and treatment. Along with a detailed discussion of the nature of COVID-19 transmission, a few other important aspects are highlighted, such as disease symptoms, clinical application of radiomics, image analysis, antibody treatments, risk analysis, drug discovery, emotion and sentiment analysis, virus infection, and fatality prediction. The main focus is on the different issues and future challenges of computational intelligence techniques in identifying solutions for COVID-19. The book sheds light on the reasons for the growing profusion and complexity of data in this sector. Finally, it points practitioners and researchers to further research challenges and directions related to COVID-19.
This book aims to increase the visibility of data science in the real world, which differs from what you learn from a typical textbook. Many aspects of day-to-day data science work are almost absent from the conventional statistics, machine learning, and data science curriculum. Yet these activities account for a considerable share of the time and effort of data professionals in the industry. Based on industry experience, this book outlines real-world scenarios and discusses pitfalls that data science practitioners should avoid. It also covers the big data cloud platform and the art of data science, such as soft skills. The authors use R as the primary tool and provide code for both R and Python. This book is for readers who want to explore possible career paths and eventually become data scientists. It comprehensively introduces various data science fields, soft and programming skills in data science projects, and potential career paths. Traditional data-related practitioners such as statisticians, business analysts, and data analysts will find this book helpful in expanding their skills for future data science careers. Undergraduate and graduate students from analytics-related areas will find this book beneficial for learning real-world data science applications. Non-mathematical readers will appreciate the reproducibility of the companion R and Python code. Key Features:
* It covers both technical and soft skills.
* It has a chapter dedicated to the big data cloud environment. For industry applications, the practice of data science is often in such an environment.
* It is hands-on. We provide the data and repeatable R and Python code in notebooks. Readers can repeat the analysis in the book using the data and code provided. We also suggest that readers modify the notebooks to perform analyses with their own data and problems, if possible. The best way to learn data science is to do it!
This book brings together enterprise modeling and software specification, providing a conceptual background and methodological guidelines for the design of enterprise information systems. The two corresponding disciplines (enterprise engineering and software engineering) are considered in a complementary way, which is how the widely recognized gap between domain experts and software engineers can be effectively addressed. The content is, on the one hand, based on a conceptual invariance (embracing concepts whose essence transcends the barriers between social and technical disciplines) while, on the other, the book features a modeling duality, bringing together social theories (which underlie enterprise engineering) and computing paradigms (which underlie software engineering). In addition, the proposed approach, as well as its guidelines and related notations, further fosters such enterprise-software modeling by facilitating model generation and transformation. Starting from unstructured business information, the modeling process progresses through the methodological construction of enterprise models and reaches as far as the corresponding derivation of software specifications. Finally, enterprise-software alignment is achieved in a component-based way that allows modeling constructs to be reused, further improving modeling effectiveness and efficiency. To ground the presented studies, a case study and illustrative examples are considered; these not only justify the idea of bringing together (in a component-based way) enterprise modeling and software specification but also demonstrate various strengths and limitations of the proposed modeling approach. The book was mainly written for researchers and graduate students in enterprise information systems, and also for professionals whose work involves the specification and realization of such systems. In addition, researchers and practitioners entering these fields will benefit from the blended view on enterprise modeling and software specification for the effective and efficient design of enterprise information systems.
This book focuses on environmental sustainability by employing elements of engineering and green computing through modern educational concepts and solutions. It visualizes the potential of artificial intelligence, enhanced by business activities and strategies for rapid implementation, in manufacturing and green technology. This book covers utilization of renewable resources and implementation of the latest energy-generation technologies. It discusses how to save natural resources from depletion and illustrates facilitation of green technology in industry through usage of advanced materials. The book also covers environmental sustainability and current trends in manufacturing. The book provides the basic concepts of green technology, along with the technology aspects, for researchers, faculty, and students.
- The book provides a unique overview of the NCBI resources (which are foundational to bioinformatics), including BLAST, and how to use them, making it a great introduction to bioinformatics and a great resource for those just starting in an industry lab.
- Whereas many bioinformatics books try to cover every aspect of the topic and easily confuse readers, this one is highly practical and focuses on key resources and tools and how to use them.
- The companion website contains tutorials, R and Python code, and instructor materials including slides, exercises, and problems for students.
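For readers curious what a programmatic NCBI BLAST query looks like, here is a minimal sketch using Biopython. Biopython, the toy sequence and the parameter choices are assumptions for illustration; the book's own tutorials may rely on the NCBI web interface or other tooling.

```python
# A minimal sketch of querying NCBI BLAST programmatically with Biopython.
# The toy sequence is a made-up example; the call submits the query to NCBI
# over the network and can take a while to return.
from Bio.Blast import NCBIWWW, NCBIXML

query = "AGCTAGCTAGGATCGATCGATCGTAGCTAGCTAACGT"  # hypothetical nucleotide sequence
handle = NCBIWWW.qblast("blastn", "nt", query)   # run blastn against the nt database
record = NCBIXML.read(handle)                    # parse the XML result

for alignment in record.alignments[:3]:          # report the top few hits
    best_hsp = alignment.hsps[0]
    print(alignment.title, best_hsp.expect)
```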
This edited book explores the unique risks, opportunities, challenges, and societal implications associated with big data developments within the field of finance. While the general use of big data has been the subject of frequent discussions, this book will take a more focused look at big data applications in the financial sector. With contributions from researchers, practitioners, and entrepreneurs involved at the forefront of big data in finance, the book discusses technological and business-inspired breakthroughs in the field. The contributions offer technical insights into the different applications presented and highlight how these new developments may impact and contribute to the evolution of the financial sector. Additionally, the book presents several case studies that examine practical applications of big data in finance. In exploring the readiness of financial institutions to adapt to new developments in the big data/artificial intelligence space and assessing different implementation strategies and policy solutions, the book will be of interest to academics, practitioners, and regulators who work in this field.
A comprehensive guide to automated statistical data cleaning. The production of clean data is a complex and time-consuming process that requires both technical know-how and statistical expertise. Statistical Data Cleaning brings together a wide range of techniques for cleaning textual, numeric or categorical data. This book examines technical data cleaning methods relating to data representation and data structure. A prominent role is given to statistical data validation, data cleaning based on predefined restrictions, and data cleaning strategy. Key features:
* Focuses on the automation of data cleaning methods, including both theory and applications written in R.
* Enables the reader to design data cleaning processes for either one-off analytical purposes or for setting up production systems that clean data on a regular basis.
* Explores statistical techniques for solving issues such as incompleteness, contradictions and outliers, as well as the integration of data cleaning components and quality monitoring.
* Supported by an accompanying website featuring data and R code.
This book enables data scientists and statistical analysts working with data to deepen their understanding of data cleaning as well as to upgrade their practical data cleaning skills. It can also be used as material for a course in data cleaning and analysis.
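To illustrate the kind of validation against predefined restrictions described above, here is a minimal sketch in Python with pandas. The book itself works in R; the column names, rules and data below are assumptions for illustration only.

```python
# A minimal pandas sketch of rule-based data validation and cleaning.
# Column names, rules and values are illustrative assumptions, not the book's examples.
import pandas as pd
import numpy as np

df = pd.DataFrame({
    "age": [25, -3, 47, 210, np.nan],
    "income": [32000, 54000, np.nan, 61000, 45000],
})

# Predefined restrictions (edit rules): age must lie in [0, 120], income must be non-negative.
rules = {
    "age": lambda s: s.between(0, 120),
    "income": lambda s: s >= 0,
}

# Flag violations and blank them out so they can be imputed in a later step.
for column, rule in rules.items():
    violations = ~rule(df[column]) & df[column].notna()
    print(f"{column}: {int(violations.sum())} rule violation(s)")
    df.loc[violations, column] = np.nan

print(df)
```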
This open access book attends to the co-creation of digital public services for ageing societies. Increasingly, public services are provided in digital form; their uptake, however, remains well below expectations. In particular, amongst older adults the need for public services is high, while at the same time the uptake of digital services is lower than the population average. One of the reasons is that many digital public services (or e-services) do not respond well to the life worlds, use contexts and use practices of their target audiences. This book argues that when older adults are involved in the process of identifying, conceptualising, and designing digital public services, these services become more relevant and meaningful. The book describes and compares three co-creation projects that were conducted in two European cities, Bremen and Zaragoza, as part of a larger EU-funded innovation project. The first part of the book traces the origins of co-creation to three distinct domains, in which co-creation has become an equally important approach with different understandings of what it is and entails: (1) the co-production of public services, (2) the co-design of information systems and (3) the civic use of open data. The second part of the book analyses how decisions about a co-creation project's governance structure, its scope of action, its choice of methods, its alignment with strategic policies and its embedding in existing public information infrastructures impact on the process and its results. The final part of the book identifies key challenges to co-creation and provides a more general assessment of what co-creation may achieve, where the most promising areas of application may be and where it probably does not match the contingent requirements of digital public services. Contributing to current discourses on digital citizenship in ageing societies and user-centric design, this book is useful for researchers and practitioners interested in co-creation, public sector innovation, open government, ageing and digital technologies, citizen engagement and civic participation in socio-technical innovation.
The book brings together the contributions of the 7th International Conference on Smart Learning Ecosystems and Regional Development (SLERD 2022), which aims at promoting reflection and discussion concerning R&D work, policies, case studies, and entrepreneurial experiences, with a special focus on understanding the relevance of smart learning ecosystems (e.g., schools, campuses, working places, informal learning contexts) for regional development and social innovation, and on how the effectiveness of the relation between citizens and smart ecosystems can be boosted. The forum has a special interest in understanding how technology-mediated instruments can foster citizens' engagement with learning ecosystems and territories, namely through innovative human-centric design and development models/techniques, education/training practices, informal social learning, innovative citizen-driven policies, technology-mediated experiences, and their impact. This set of concerns will contribute to fostering the social innovation sector, ICT and economic development, deployment strategies, and new policies for smarter, proactive citizens.
This book presents advances in security assurance for cyber-physical systems (CPS) and reports on new machine learning (ML) and artificial intelligence (AI) approaches and technologies developed by the research community and industry to address the challenges faced by this emerging field. Cyber-physical systems bridge the divide between cyber and physical-mechanical systems by seamlessly combining software systems, sensors, and actuators connected over computer networks. Through these sensors, data about the physical world can be captured and used for smart autonomous decision-making. This book introduces fundamental AI/ML principles and concepts applied in developing secure and trustworthy CPS, disseminates recent research and development efforts in this fascinating area, and presents relevant case studies, examples, and datasets. We believe that it is a valuable reference for students, instructors, researchers, industry practitioners, and staff of related government agencies.
Temporal databases have been an active research topic for at least fifteen years. During this time, several dozen temporal query languages have been proposed. Many within the temporal database research community perceived that the time had come to consolidate approaches to temporal data models and calculus based query languages, to achieve a consensus query language and associated data model upon which future research can be based. While there were many query language proposals, with a diversity of language and modeling constructs, common themes kept resurfacing. However, the community was quite fragmented, with each research project being based on a particular and different set of assumptions and approaches. Often these assumptions were not germane to the research per se, but were made simply because the research required a data model or query language with certain characteristics, with the particular one chosen rather arbitrarily. It would be better in such circumstances for research projects to choose the same language. Unfortunately, no existing language had attracted a following large enough to become the one of choice. In April, 1992 Richard Snodgrass circulated a white paper that proposed that a temporal extension to SQL be produced by the research community. Shortly thereafter, the temporal database community organized the "ARPA/NSF International Workshop on an Infrastructure for Temporal Databases," which was held in Arlington, TX, in June, 1993.
This book first provides a comprehensive review of state-of-the-art IoT technologies and applications in different industrial sectors and public services. The authors give in-depth analyses of fog computing architecture and key technologies that fulfill the challenging requirements of enabling computing services anywhere along the cloud-to-thing continuum. Further, in order to make IoT systems more intelligent and more efficient, a fog-enabled service architecture is proposed to address the latency requirements, bandwidth limitations, and computing power issues in realistic cross-domain application scenarios with limited a priori domain knowledge, i.e. physical laws, system statuses, operation principles and execution rules. Based on this fog-enabled architecture, a series of data-driven self-learning applications in different industrial sectors and public services are investigated and discussed, such as robot SLAM and formation control, wireless network self-optimization, intelligent transportation systems, smart homes and user behavior recognition. Finally, the advantages and future directions of fog-enabled intelligent IoT systems are summarized. The book:
* Provides a comprehensive review of state-of-the-art IoT technologies and applications in different industrial sectors and public services
* Presents a fog-enabled service architecture with detailed technical approaches for realistic cross-domain application scenarios with limited prior domain knowledge
* Outlines a series of data-driven self-learning applications (with new algorithms) in different industrial sectors and public services
This book introduces readers to both basic and advanced concepts in deep network models. It covers state-of-the-art deep architectures that many researchers are currently using to overcome the limitations of the traditional artificial neural networks. Various deep architecture models and their components are discussed in detail, and subsequently illustrated by algorithms and selected applications. In addition, the book explains in detail the transfer learning approach for faster training of deep models; the approach is also demonstrated on large volumes of fingerprint and face image datasets. In closing, it discusses the unique set of problems and challenges associated with these models.
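To illustrate the transfer learning approach mentioned above, here is a minimal sketch that freezes a pretrained backbone and trains only a small new classification head. The use of Keras/ResNet50 and the 10-class head are assumptions for illustration and are not taken from the book.

```python
# A minimal transfer-learning sketch: reuse a pretrained backbone and train only
# a new task-specific head. Framework and model choices are illustrative assumptions.
import tensorflow as tf

base = tf.keras.applications.ResNet50(weights="imagenet", include_top=False,
                                       input_shape=(224, 224, 3), pooling="avg")
base.trainable = False  # freeze pretrained weights for faster training

model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dense(10, activation="softmax"),  # new 10-class head (hypothetical task)
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
# model.fit(train_images, train_labels, epochs=5)  # then train on the target dataset
```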
The overall mission of this book is to provide a comprehensive understanding and coverage of the various theories and models used in IS research. Specifically, it aims to focus on the following key objectives:
* To describe the various theories and models applicable to studying IS/IT management issues.
* To outline and describe, for each of the various theories and models, independent and dependent constructs, reference discipline/originating area, originating author(s), seminal articles, level of analysis (i.e. firm, individual, industry) and links with other theories.
* To provide a critical review/meta-analysis of IS/IT management articles that have used a particular theory/model.
* To discuss how a theory can be used to better understand how information systems can be effectively deployed in today's digital world.
This book contributes to our understanding of a number of theories and models. The theoretical contribution of this book is that it analyzes and synthesizes the relevant literature in order to enhance knowledge of IS theories and models from various perspectives. To cater to the information needs of a diverse spectrum of readers, this book is structured into two volumes, with each volume further broken down into two sections. The first section of Volume 1 presents detailed descriptions of a set of theories centered around the IS lifecycle, including the Success Model, Technology Acceptance Model, User Resistance Theories, and four others. The second section of Volume 1 contains strategic and economic theories, including a Resource-Based View, Theory of Slack Resources, Portfolio Theory, Discrepancy Theory Models, and eleven others. The first section of Volume 2 concerns socio-psychological theories. These include Personal Construct Theory, Psychological Ownership, Transactive Memory, Language-Action Approach, and nine others. The second section of Volume 2 deals with methodological theories, including Critical Realism, Grounded Theory, Narrative Inquiry, Work System Method, and four others. Together, these theories provide a rich tapestry of knowledge around the use of theory in IS research. Since most of these theories are from contributing disciplines, they provide a window into the world of external thought leadership.
This volume is the second (II) of four under the main themes of Digitizing Agriculture and Information and Communication Technologies (ICT). The four volumes cover rapidly developing processes including Sensors (I), Data (II), Decision (III), and Actions (IV). The volumes relate to 'digital transformation' within agricultural production and provision systems, in the context of Smart Farming Technology and Knowledge-based Agriculture. Content spans broadly from data mining and visualization to big data analytics and decision making, alongside the sustainability aspects stemming from the digital transformation of farming. The four volumes comprise the outcome of the 12th EFITA Congress, also incorporating chapters that originated from select presentations of the Congress. The first part of this book (II) focuses on data technologies in relation to agriculture and presents three key points in data management, namely data collection, data fusion, and their uses in machine learning and artificial intelligence technologies. Part 2 is devoted to the integration of these technologies in agricultural production processes by presenting specific applications in the domain. Part 3 examines the added value of data management within the agricultural products value chain. The book provides an exceptional reference for those researching and working in or adjacent to agricultural production, including engineers in machine learning and AI, operations management, decision analysis, and information analysis, to name just a few. Specific advances covered in the volume:
* Big data management from heterogeneous sources
* Data mining within large data sets
* Data fusion and visualization
* IoT-based management systems
* Data knowledge management for converting data into valuable information
* Metadata and data standards for expanding knowledge through different data platforms
* AI-based image processing for agricultural systems
* Data-based agricultural business
* Machine learning applications in the agricultural products value chain
This handbook discusses challenges and limitations in existing solutions, and presents state-of-the-art advances from both academia and industry in big data analytics and digital forensics. The second chapter comprehensively reviews IoT security, privacy, and forensics literature, focusing on IoT and unmanned aerial vehicles (UAVs). The authors propose a deep learning-based approach to process the cloud's log data and mitigate enumeration attacks in the third chapter. The fourth chapter proposes a robust fuzzy learning model to protect IT-based infrastructure against advanced persistent threat (APT) campaigns. An advanced and fair clustering approach for industrial data, capable of training with huge volumes of data in close to linear time, is introduced in the fifth chapter, while the sixth chapter offers an adaptive deep learning model to detect cyberattacks targeting cyber-physical systems (CPS). The authors evaluate the performance of unsupervised machine learning for detecting cyberattacks against industrial control systems (ICS) in chapter 7, and the next chapter presents a robust fuzzy Bayesian approach for ICS cyber threat hunting. This handbook also evaluates the performance of supervised machine learning methods in identifying cyberattacks against CPS. The performance of a scalable clustering algorithm for CPS cyber threat hunting and the usefulness of machine learning algorithms for MacOS malware detection are respectively evaluated. This handbook continues with evaluating the performance of various machine learning techniques to detect Internet of Things malware. The authors demonstrate how MacOSX cyberattacks can be detected using state-of-the-art machine learning models. In order to identify credit card fraud, the fifteenth chapter introduces a hybrid model. In the sixteenth chapter, the editors propose a model that leverages natural language processing techniques for generating a mapping between APT-related reports and the cyber kill chain. A deep learning-based approach to detect ransomware is introduced, as well as a proposed clustering approach to detect IoT malware, in the last two chapters. This handbook primarily targets professionals and scientists working in Big Data, Digital Forensics, Machine Learning, Cyber Security, Cyber Threat Analytics and Cyber Threat Hunting as a reference book. Advanced-level students and researchers studying and working in Computer systems, Computer networks and Artificial intelligence will also find this reference useful.
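As a small illustration of the unsupervised attack detection evaluated in several of the chapters, here is a minimal sketch that flags anomalous records with an Isolation Forest. scikit-learn and the synthetic "traffic" data are assumptions for illustration, not the handbook's own experiments.

```python
# Minimal sketch of unsupervised anomaly detection of the kind used for
# cyberattack detection. The synthetic data and thresholds are illustrative assumptions.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
normal_traffic = rng.normal(loc=0.0, scale=1.0, size=(500, 4))  # benign readings
attack_traffic = rng.normal(loc=6.0, scale=1.0, size=(10, 4))   # anomalous readings
X = np.vstack([normal_traffic, attack_traffic])

detector = IsolationForest(contamination=0.02, random_state=0).fit(X)
labels = detector.predict(X)  # +1 = normal, -1 = anomaly
print("flagged as anomalous:", int((labels == -1).sum()))
```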
You may like...
Research Anthology on Blockchain…
Information Resources Management Association
Hardcover
R11,180
Discovery Miles 111 800
Blockchain Life - Making Sense of the…
Kary Oberbrunner, Lee Richter
Hardcover
CompTIA Data+ DA0-001 Exam Cram
Akhil Behl, Sivasubramanian
Digital product license key
Database Systems: The Complete Book…
Hector Garcia-Molina, Jeffrey Ullman, …
Paperback
R2,905
Discovery Miles 29 050
Role of 6g Wireless Networks in AI and…
Malaya Dutta Borah, Steven A. Wright, …
Hardcover
R7,081
Discovery Miles 70 810
Applied Big Data Analytics and Its Role…
Peng Zhao, Xin Wang, …
Hardcover
R7,586
Discovery Miles 75 860
Database Solutions - A step by step…
Thomas Connolly, Carolyn Begg
Paperback
R2,300
Discovery Miles 23 000