This book features a collection of high-quality research papers presented at the International Conference on Tourism, Technology & Systems (ICOTTS 2020), held at the University of Cartagena, in Cartagena de Indias, Colombia, from 29th to 31st October 2020. The book is divided into two volumes, and it covers the areas of technology in tourism and the tourist experience, generations and technology in tourism, digital marketing applied to tourism and travel, mobile technologies applied to sustainable tourism, information technologies in tourism, digital transformation of tourism business, e-tourism and tourism 2.0, big data and management for travel and tourism, geotagging and tourist mobility, smart destinations, robotics in tourism, and information systems and technologies.
Recent Advances in Chaotic Systems and Synchronization: From Theory to Real World Applications is a major reference for scientists and engineers interested in applying new computational and mathematical tools for solving complex problems related to modeling, analyzing and synchronizing chaotic systems. Furthermore, it offers an array of new, real-world applications in the field. Written by eminent scientists in the field of control theory and nonlinear systems from 19 countries (Cameroon, China, Ethiopia, France, Greece, India, Italy, Iran, Japan, Mexico, and more), this book covers the latest advances in chaos theory, along with the efficiency of novel synchronization approaches. Readers will find the fundamentals and algorithms related to the analysis and synchronization of chaotic systems, along with key applications, including electronic design, text and image encryption, and robot control and tracking.
This volume comprises the proceedings of ICITCS 2020. It aims to provide a snapshot of the latest issues encountered in IT convergence and security. The book explores how IT convergence and security is core to most current research, industrial and commercial activities. Topics covered in this volume include machine learning & deep learning, communication and signal processing, computer vision and applications, future network technology, artificial intelligence and robotics, software engineering and knowledge engineering, intelligent vehicular networking and applications, healthcare and wellness, web technology and applications, internet of things, and security & privacy. Through this volume, readers will gain an understanding of the current state-of-the-art information strategies and technologies in IT convergence and security. The book will be of use to researchers in academia, industry and other research institutes focusing on IT convergence and security.
This new volume explores a plethora of blockchain-based solutions for big data and IoT applications, looking at advances in real-world applications in several sectors, including higher education, cybersecurity, agriculture, business and management, healthcare and biomedical science, construction and project management, smart city development, and others. Chapters explore emerging technology to combat the ever-increasing threat of security to computer systems and offer new architectural solutions for problems encountered in data management and security. The chapters help to provide a high level of understanding of various blockchain algorithms along with the necessary tools and techniques. The novel architectural solutions in the deployment of blockchain presented here are the core of the book.
This book features a selection of extended papers presented at the 8th IFIP WG 12.6 International Workshop on Artificial Intelligence for Knowledge Management, AI4KM 2021, held virtually in Yokohama, Japan, in January 2021, in the framework of the International Joint Conference on Artificial Intelligence, IJCAI 2020. The 14 revised and extended papers, presented together with an invited talk, were carefully reviewed and selected for inclusion in this volume. They present new research and innovative aspects in the field of knowledge management and discuss methodological, technical and organizational aspects of artificial intelligence used for knowledge management.
This book presents methods and approaches used to identify the true author of a doubtful document or text excerpt. It provides a broad introduction to text categorization problems grounded in stylistic features, such as authorship attribution, profiling the psychological traits of an author, and detecting fake news. Specifically, machine learning models are presented in detail as valuable tools for verifying hypotheses or revealing significant patterns hidden in datasets. Stylometry is a multi-disciplinary field combining linguistics with both statistics and computer science. The content is divided into three parts. The first, which consists of the first three chapters, offers a general introduction to stylometry, its potential applications and limitations, and introduces the running example used to illustrate the concepts discussed throughout the remainder of the book. The four chapters of the second part are more devoted to computer science, with a focus on machine learning models; their main aim is to explain machine learning models for solving stylometric problems. Several general strategies used to identify, extract, select, and represent stylistic markers are explained. As deep learning represents an active field of research, information on neural network models and word embeddings applied to stylometry is provided, as well as a general introduction to the deep learning approach to solving stylometric questions. In turn, the third part illustrates the application of the previously discussed approaches in real cases: an authorship attribution problem, seeking to discover the secret hand behind the nom de plume Elena Ferrante, an Italian writer known worldwide for the My Brilliant Friend saga; author profiling, to determine whether a set of tweets was generated by a bot or a human being and, in the latter case, whether the author is a man or a woman; and an exploration of stylistic variations over time using US political speeches covering a period of ca. 230 years. A solutions-based approach is adopted throughout the book, and explanations are supported by examples written in R. To complement the main content and discussions of stylometric models and techniques, examples and datasets are freely available on the author's GitHub website.
This book examines the conflicts arising from the implementation of privacy principles enshrined in the GDPR, most particularly the "Right to be Forgotten", on a wide range of contemporary organizational processes, business practices, and emerging computing platforms and decentralized technologies. Among others, it studies two ground-breaking innovations of our distributed era: ubiquitous mobile computing and decentralized P2P networks such as the blockchain and the IPFS, and it explores their risks to privacy in relation to the principles stipulated by the GDPR. In that context, the book identifies major inconsistencies between these state-of-the-art technologies and the GDPR, and proposes efficient solutions to mitigate their conflicts while safeguarding privacy and data protection rights. Last but not least, it analyses the security and privacy challenges arising from the COVID-19 pandemic, during which digital technologies have been extensively utilized to surveil people's lives.
This book brings together the insights from three different areas, Information Seeking and Retrieval, Cognitive Psychology, and Behavioral Economics, and shows how this new interdisciplinary approach can advance our knowledge about users interacting with diverse search systems, especially their seemingly irrational decisions and anomalies that could not be predicted by most normative models. The first part "Foundation" of this book introduces the general notions and fundamentals of this new approach, as well as the main concepts, terminology and theories. The second part "Beyond Rational Agents" describes the systematic biases and cognitive limits confirmed by behavioral experiments of varying types and explains in detail how they contradict the assumptions and predictions of formal models in information retrieval (IR). The third part "Toward A Behavioral Economics Approach" first synthesizes the findings from existing preliminary research on bounded rationality and behavioral economics modeling in information seeking, retrieval, and recommender system communities. Then, it discusses the implications, open questions and methodological challenges of applying the behavioral economics framework to different sub-areas of IR research and practices, such as modeling users and search sessions, developing unbiased learning to rank and adaptive recommendations algorithms, implementing bias-aware intelligent task support, as well as extending the conceptualization and evaluation on IR fairness, accountability, transparency and ethics (FATE) with the knowledge regarding both human biases and algorithmic biases. This book introduces a behavioral economics framework to IR scientists seeking a new perspective on both fundamental and new emerging problems of IR as well as the development and evaluation of bias-aware intelligent information systems. 
It is especially intended for researchers working on IR and human-information interaction who want to learn about the potential offered by behavioral economics in their own research areas.
This book addresses major challenges faced by farmers and the technological solutions based on the Internet of Things (IoT). A major challenge in agriculture is cultivating and supplying high-quality produce at the best possible price. Currently, around 50% of global farm produce never reaches the end consumer due to wastage and suboptimal prices. The book presents solutions that reduce transport costs, improve the predictability of prices based on data analytics and current market conditions, and reduce the number of middle steps and agents between the farmer and the end consumer. It discusses the design of an IoT-based monitoring system to analyze crop environments and a method to improve the efficiency of decision-making by analyzing harvest statistics. Further, it explores climate-smart methods, known as smart agriculture, that have been adopted by a number of Indian farmers.
This book offers a self-contained guide to the theory and main applications of soft sets. It introduces readers to the basic concepts, the algebraic and topological structures, as well as hybrid structures, such as fuzzy soft sets and intuitionistic fuzzy sets. The last part of the book explores a range of interesting applications in the fields of decision-making, pattern recognition, and data science. All in all, the book provides graduate students and researchers in mathematics and various applied science fields with a comprehensive and timely reference guide to soft sets.
This book introduces Python scripting for geographic information science (GIS) workflow optimization using ArcGIS. It builds essential programming skills for automating GIS analysis. Over 200 sample Python scripts and 175 classroom-tested exercises reinforce the learning objectives. Readers will learn to:
* Write and run Python in the ArcGIS Python Window, the PythonWin IDE, and the PyScripter IDE
* Work with Python syntax and data types
* Call ArcToolbox tools, batch process GIS datasets, and manipulate map documents using the arcpy package
* Read and modify proprietary and ASCII text GIS data
* Parse HTML web pages and KML datasets
* Create web pages and fetch GIS data from web sources
* Build user interfaces with the native Python file dialog toolkit or the ArcGIS Script tools and PyToolboxes
Python for ArcGIS is designed as a primary textbook for advanced-level students in GIS. Researchers, government specialists and professionals working in GIS will also find this book useful as a reference.
This book presents a rich compilation of real-world cases on digitalization, the goal being to share first-hand insights from respected organizations and to make digitalization more tangible. As virtually every economic and societal sector is now being challenged by emerging technologies, the digital economy is a highly volatile, uncertain, complex and ambiguous place, one that holds substantial challenges and opportunities for established organizations. Against this backdrop, this book reports on best practices and lessons learned from organizations that have succeeded in overcoming the challenges and seizing the opportunities of the digital economy. It illustrates how twenty-one organizations have leveraged their capabilities to create disruptive innovations, to develop digital business models, and to digitally transform themselves. These cases stem from various industries (e.g. automotive, insurance, consulting, and public services) and countries, reflecting the many facets of digitalization. As all case descriptions follow a uniform schema, they are easily accessible, and provide insightful examples for practitioners as well as interesting cases for researchers, teachers and students.
"Digitalization is reshaping business on a global scale, and it is evident that organizations must transform to thrive in the digital economy. Digitalization Cases provides first-hand insights into the efforts of renowned companies. The presented actions, results, and lessons learned are a great inspiration for managers, students, and academics." - Anna Kopp, Head of IT Germany, Microsoft
"Understanding digitalization in all its facets requires knowledge about its opportunities and challenges in different contexts. Providing 21 cases from different companies all around the world, Digitalization Cases makes an important contribution toward the comprehensibility of digitalization - from a practical and a scientific point of view." - Dorothy Leidner, Ferguson Professor of Information Systems, Baylor University
"This book is a great source of inspiration and insight on how to drive digitalization. It shows easy-to-understand good practice examples which illustrate opportunities, and at the same time helps to learn what needs to be done to realize them. I consider this book a must-read for every practitioner who cares about digitalization." - Martin Petry, Chief Information Officer and Head of Business Excellence, Hilti
This book offers an overview of state-of-the-art econometric techniques, with a special emphasis on financial econometrics. There is a major need for such techniques, since the traditional way of designing mathematical models - based on researchers' insights - can no longer keep pace with the ever-increasing data flow. To catch up, many application areas have begun relying on data science, i.e., on techniques for extracting models from data, such as data mining, machine learning, and innovative statistics. In terms of capitalizing on data science, many application areas are way ahead of economics. To close this gap, the book provides examples of how data science techniques can be used in economics. Corresponding techniques range from almost traditional statistics to promising novel ideas such as quantum econometrics. Given its scope, the book will appeal to students and researchers interested in state-of-the-art developments, and to practitioners interested in using data science techniques.
This book reviews IoT-centric vulnerabilities from a multidimensional perspective by elaborating on IoT attack vectors, their impacts on well-known security objectives, attacks which exploit such vulnerabilities, coupled with their corresponding remediation methodologies. This book further highlights the severity of the IoT problem at large, through disclosing incidents of Internet-scale IoT exploitations, while putting forward a preliminary prototype and associated results to aid in the IoT mitigation objective. Moreover, this book summarizes and discloses findings, inferences, and open challenges to inspire future research addressing theoretical and empirical aspects related to the imperative topic of IoT security. At least 20 billion devices will be connected to the Internet in the next few years. Many of these devices transmit critical and sensitive system and personal data in real-time. Collectively known as "the Internet of Things" (IoT), this market represents a $267 billion per year industry. As valuable as this market is, security spending on the sector barely breaks 1%. Indeed, while IoT vendors continue to push more IoT devices to market, the security of these devices has often fallen in priority, making them easier to exploit. This drastically threatens the privacy of the consumers and the safety of mission-critical systems. This book is intended for cybersecurity researchers and advanced-level students in computer science. Developers and operators working in this field, who are eager to comprehend the vulnerabilities of the Internet of Things (IoT) paradigm and understand the severity of accompanied security issues will also be interested in this book.
Knowledge Discovery and Measures of Interest is a reference book for knowledge discovery researchers, practitioners, and students. The knowledge discovery researcher will find that the material provides a theoretical foundation for measures of interest in data mining applications where diversity measures are used to rank summaries generated from databases. The knowledge discovery practitioner will find solid empirical evidence on which to base decisions regarding the choice of measures in data mining applications. The knowledge discovery student in a senior undergraduate or graduate course in databases and data mining will find the book is a good introduction to the concepts and techniques of measures of interest. In Knowledge Discovery and Measures of Interest, we study two closely related steps in any knowledge discovery system: the generation of discovered knowledge; and the interpretation and evaluation of discovered knowledge. In the generation step, we study data summarization, where a single dataset can be generalized in many different ways and to many different levels of granularity according to domain generalization graphs. In the interpretation and evaluation step, we study diversity measures as heuristics for ranking the interestingness of the summaries generated. The objective of this work is to introduce and evaluate a technique for ranking the interestingness of discovered patterns in data. It consists of four primary goals: To introduce domain generalization graphs for describing and guiding the generation of summaries from databases. To introduce and evaluate serial and parallel algorithms that traverse the domain generalization space described by the domain generalization graphs. To introduce and evaluate diversity measures as heuristic measures of interestingness for ranking summaries generated from databases. To develop the preliminary foundation for a theory of interestingness within the context of ranking summaries generated from databases. 
Knowledge Discovery and Measures of Interest is suitable as a secondary text in a graduate level course and as a reference for researchers and practitioners in industry.
This book presents the latest research on the statistical analysis of functional, high-dimensional and other complex data, addressing methodological and computational aspects, as well as real-world applications. It covers topics like classification, confidence bands, density estimation, depth, diagnostic tests, dimension reduction, estimation on manifolds, high- and infinite-dimensional statistics, inference on functional data, networks, operatorial statistics, prediction, regression, robustness, sequential learning, small-ball probability, smoothing, spatial data, testing, and topological object data analysis, and includes applications in automobile engineering, criminology, drawing recognition, economics, environmetrics, medicine, mobile phone data, spectrometrics and urban environments. The book gathers selected, refereed contributions presented at the Fifth International Workshop on Functional and Operatorial Statistics (IWFOS) in Brno, Czech Republic. The workshop was originally to be held on June 24-26, 2020, but had to be postponed as a consequence of the COVID-19 pandemic. Initiated by the Working Group on Functional and Operatorial Statistics at the University of Toulouse in 2008, the IWFOS workshops provide a forum to discuss the latest trends and advances in functional statistics and related fields, and foster the exchange of ideas and international collaboration in the field.
This open access book summarizes the first two decades of the NII Testbeds and Community for Information access Research (NTCIR). NTCIR is a series of evaluation forums run by a global team of researchers and hosted by the National Institute of Informatics (NII), Japan. The book is unique in that it discusses not just what was done at NTCIR, but also how it was done and the impact it has achieved. For example, in some chapters the reader sees the early seeds of what eventually grew to be the search engines that provide access to content on the World Wide Web, today's smartphones that can tailor what they show to the needs of their owners, and the smart speakers that enrich our lives at home and on the move. We also get glimpses into how new search engines can be built for mathematical formulae, or for the digital record of a lived human life. Key to the success of the NTCIR endeavor was early recognition that information access research is an empirical discipline and that evaluation therefore lay at the core of the enterprise. Evaluation is thus at the heart of each chapter in this book. They show, for example, how the recognition that some documents are more important than others has shaped thinking about evaluation design. The thirty-three contributors to this volume speak for the many hundreds of researchers from dozens of countries around the world who together shaped NTCIR as organizers and participants. This book is suitable for researchers, practitioners, and students: anyone who wants to learn about past and present evaluation efforts in information retrieval, information access, and natural language processing, as well as those who want to participate in an evaluation task or even to design and organize one.
This reference text presents the usage of artificial intelligence in healthcare and discusses the challenges and solutions of using advanced techniques like wearable technologies and image processing in the sector. Features:
* Focuses on the use of artificial intelligence (AI) in healthcare, with issues, applications, and prospects
* Presents the application of artificial intelligence in medical imaging, including fractionalization of early lung tumour detection using a low-intricacy approach
* Discusses an artificial intelligence perspective on wearable technology
* Analyses cardiac dynamics and assessment of arrhythmia by classifying heartbeats using the electrocardiogram (ECG)
* Elaborates machine learning models for early diagnosis of depressive mental affliction
This book serves as a reference for students and researchers analyzing healthcare data. It can also be used by graduate and postgraduate students as an elective course.
This book presents the latest research in the fields of computational intelligence, ubiquitous computing models, communication intelligence, communication security, machine learning, informatics, mobile computing, cloud computing and big data analytics. The best selected papers, presented at the International Conference on Innovative Data Communication Technologies and Application (ICIDCA 2020), are included in the book. The book focuses on the theory, design, analysis, implementation and applications of distributed systems and networks.
This timely text/reference explores the business and technical issues involved in the management of information systems in the era of big data and beyond. Topics and features: presents review questions and discussion topics in each chapter for classroom group work and individual research assignments; discusses the potential use of a variety of big data tools and techniques in a business environment, explaining how these can fit within an information systems strategy; reviews existing theories and practices in information systems, and explores their continued relevance in the era of big data; describes the key technologies involved in information systems in general and big data in particular, placing these technologies in an historic context; suggests areas for further research in this fast moving domain; equips readers with an understanding of the important aspects of a data scientist's job; provides hands-on experience to further assist in the understanding of the technologies involved.
The world is awash with digital data from social networks, blogs, business, science and engineering. Data-intensive computing facilitates understanding of complex problems that must process massive amounts of data. Through the development of new classes of software, algorithms and hardware, data-intensive applications can provide timely and meaningful analytical results in response to exponentially growing data complexity and associated analysis requirements. This emerging area brings many challenges that are different from traditional high-performance computing. This reference for computing professionals and researchers describes the dimensions of the field, the key challenges, the state of the art and the characteristics of likely approaches that future data-intensive problems will require. Chapters cover general principles and methods for designing such systems and for managing and analyzing the big data sets of today that live in the cloud and describe example applications in bioinformatics and cybersecurity that illustrate these principles in practice.
The aim of this book is to provide some useful methods to improve spectrum sensing performance in a systematic way, and to point out an effective path for applying cognitive radio (CR) technology in wireless communications. The book gives a state-of-the-art survey and proposes some new cooperative spectrum sensing (CSS) methods that attempt to achieve better performance. For each CSS method, the main idea and corresponding algorithm design are elaborated in detail. This book covers the fundamental concepts and core technologies of CSS, especially its latest developments. Each chapter is presented in a self-sufficient and independent way so that readers can select the chapters of interest to them. The methodologies are described in detail so that readers can repeat the corresponding experiments easily. It will be a useful book for researchers, helping them understand the classifications of CSS, inspiring new ideas about novel CSS technology for CR, and presenting the current status of CSS. For engineers, it will be a good guidebook for developing practical CSS applications.
The concepts of telemedicine and e-healthcare have eased and improved the reachability of experienced doctors and medical staff for remote patients. A patient living in a remote village can connect directly to specialist doctors across the globe through his/her mobile phone using telemedicine systems and e-healthcare services. In pandemic situations like COVID-19, these online platforms helped society to get medical treatment from home without any physical movement. Technology is transforming human lives by playing an important role in the planning, designing, and development of intelligent systems for better service. This book presents a cross-disciplinary perspective on the concepts of machine learning, blockchain and IoT by congregating cutting-edge research and insights. It also identifies and discusses various advanced technologies, such as the Internet of Things (IoT), big data analytics, machine learning, artificial intelligence, cyber security, cloud computing, and sensors, that are vital to foster the development of smart healthcare and telemedicine systems by providing effective solutions to the medical challenges faced by humankind.
This book provides a comprehensive overview of how the course, content and outcome of policy making is affected by big data. It scrutinises the notion that big and open data makes policymaking a more rational process, in which policy makers are able to predict, assess and evaluate societal problems. It also examines how policy makers deal with big data, the problems and limitations they face, and how big data shapes policymaking on the ground. The book considers big data from various perspectives, not just the political, but also the technological, legal, institutional and ethical dimensions. The potential of big data use in the public sector is also assessed, as well as the risks and dangers this might pose. Through several extended case studies, it demonstrates the dynamics of big data and public policy. Offering a holistic approach to the study of big data, this book will appeal to students and scholars of public policy, public administration and data science, as well as those interested in governance and politics.
This book presents select proceedings of the International Conference on Artificial Intelligence and Data Engineering (AIDE 2020). Various topics covered in this book include deep learning, neural networks, machine learning, computational intelligence, cognitive computing, fuzzy logic, expert systems, brain-machine interfaces, ant colony optimization, natural language processing, bioinformatics and computational biology, cloud computing, machine vision and robotics, ambient intelligence, intelligent transportation, sensing and sensor networks, big data challenge, data science, high performance computing, data mining and knowledge discovery, and data privacy and security. The book will be a valuable reference for beginners, researchers, and professionals interested in artificial intelligence, robotics and data engineering.