Books > Computing & IT > Applications of computing > Databases
In this introductory textbook the author explains the key topics in cryptography. He takes a modern approach, where defining what is meant by "secure" is as important as creating something that achieves that goal, and security definitions are central to the discussion throughout. The author balances a largely non-rigorous style - many proofs are sketched only - with appropriate formality and depth. For example, he uses the terminology of groups and finite fields so that the reader can understand both the latest academic research and "real-world" documents such as application programming interface descriptions and cryptographic standards. The text employs colour to distinguish between public and private information, and all chapters include summaries and suggestions for further reading. This is a suitable textbook for advanced undergraduate and graduate students in computer science, mathematics and engineering, and for self-study by professionals in information security. While the appendix summarizes most of the basic algebra and notation required, it is assumed that the reader has a basic knowledge of discrete mathematics, probability, and elementary calculus.
This book provides a conceptual understanding of machine learning algorithms through supervised, unsupervised, and advanced learning techniques. The book consists of four parts: foundation, supervised learning, unsupervised learning, and advanced learning. The first part provides the fundamental materials, background, and simple machine learning algorithms as preparation for studying machine learning algorithms. The second and third parts provide an understanding of the supervised learning algorithms and the unsupervised learning algorithms as the core parts. The last part covers advanced machine learning algorithms: ensemble learning, semi-supervised learning, temporal learning, and reinforcement learning. Provides comprehensive coverage of both supervised and unsupervised learning algorithms; Outlines the computation paradigm for solving classification, regression, and clustering; Features essential techniques for building a new generation of machine learning.
"In what is certain to be a seminal work on metadata, John Horodyski masterfully affirms the value of metadata while providing practical examples of its role in our personal and professional lives. He does more than tell us that metadata matters-he vividly illustrates why it matters." -Patricia C. Franks, PhD, CA, CRM, IGP, CIGO, FAI, President, NAGARA, Professor Emerita, San Jose State University, USA If data is the language upon which our modern society will be built, then metadata will be its grammar, the construction of its meaning, the building for its content, and the ability to understand what data can be for us all. We are just starting to bring change into the management of the data that connects our experiences. Metadata Matters explains how metadata is the foundation of digital strategy. If digital assets are to be discovered, they want to be found. The path to good metadata design begins with the realization that digital assets need to be identified, organized, and made available for discovery. This book explains how metadata will help ensure that an organization is building the right system for the right users at the right time. Metadata matters and is the best chance for a return on investment on digital assets and is also a line of defense against lost opportunities. It matters to the digital experience of users. It helps organizations ensure that users can identify, discover, and experience their brands in the ways organizations intend. It is a necessary defense, which this book shows how to build.
People have a hard time communicating, and they also have a hard time finding business knowledge in their environment. With the sophistication of search technologies like Google, business people expect to get answers to questions about the business just as easily as they can run an internet search. The truth is that knowledge management is primitive today, largely because business metadata is poorly managed.
This book projects a futuristic scenario that is closer to reality than it has ever been before. To realize the full potential of IoT, it has to be combined with AI technologies. Predictive and advanced analysis can be performed on the data that is collected, discovered, and analyzed. In achieving this, compatibility, complexity, legal, and ethical issues arise from the automation of connected components and gadgets of widespread companies across the globe. While these are only a few examples of the issues, the authors' intention in editing this book is to present the concepts of integrating AI with IoT to the research community in a precise and clear manner. The authors' attempt is to provide novel advances and applications that address the challenge of continually discovering patterns for IoT, covering various aspects of applying AI techniques to make IoT solutions smarter. The only way to keep pace with the data generated by the IoT, and to acquire the concealed knowledge it contains, is to employ AI as the eventual catalyst for IoT. IoT together with AI is more than a trend; it will develop into a paradigm. The book helps researchers interested in this field to gain insight into different concepts and their importance for real-life applications. This has been done to make the edited book more flexible and to stimulate further interest in the topics. All of this motivated the authors toward integrating AI to achieve smarter IoT. The authors believe their effort makes this collection interesting and highly attractive to students pursuing pre-research, research, and even master's studies in multidisciplinary domains.
This book provides an overview of the topics of data, sovereignty, and governance with respect to data and online activities through a legal lens and from a cybersecurity perspective. The first chapter explores the concepts of data, ownership, and privacy with respect to digital media and content, before defining the intersection of sovereignty in law with application to data and digital media content. The authors then delve into the issue of digital governance, as well as theories and systems of governance at the state, national, and corporate/organizational levels. Chapter three turns to the complex area of jurisdictional conflict of laws and the related issues regarding digital activities in international law, both public and private. Additionally, the book discusses the many technical complexities which underlie the evolution and creation of new law and governance strategies and structures, including the socio-political, legal, and industrial technical complexities that apply in these areas. The fifth chapter is a comparative examination of the legal strategies currently being explored by a variety of nations. The book concludes with a discussion of emerging topics which either influence, or are influenced by, data sovereignty and digital governance, such as indigenous data sovereignty, digital human rights and self-determination, artificial intelligence, and global digital social responsibility. Cumulatively, this book provides the full spectrum of information, from the foundational principles underlying the described topics through to the larger, more complex, evolving issues which we can foresee ahead of us.
A statistical language model, or more simply a language model, is a probabilistic mechanism for generating text. Such a definition is general enough to include an endless variety of schemes. However, a distinction should be made between generative models, which can in principle be used to synthesize artificial text, and discriminative techniques to classify text into predefined categories. The first statistical language modeler was Claude Shannon. In exploring the application of his newly founded theory of information to human language, Shannon considered language as a statistical source, and measured how well simple n-gram models predicted or, equivalently, compressed natural text. To do this, he estimated the entropy of English through experiments with human subjects, and also estimated the cross-entropy of the n-gram models on natural text. The ability of language models to be quantitatively evaluated in this way is one of their important virtues. Of course, estimating the true entropy of language is an elusive goal, aiming at many moving targets, since language is so varied and evolves so quickly. Yet fifty years after Shannon's study, language models remain, by all measures, far from the Shannon entropy limit in terms of their predictive power. However, this has not kept them from being useful for a variety of text processing tasks, and moreover can be viewed as encouragement that there is still great room for improvement in statistical language modeling.
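To make the evaluation idea above concrete, here is a minimal sketch of measuring the cross-entropy of a language model on held-out text. It is illustrative only and not taken from the book: the toy corpus, the unigram model, and the add-one smoothing are all assumptions.

import math
from collections import Counter

def train_unigram(tokens, vocab):
    # Unigram model with add-one (Laplace) smoothing so unseen words get nonzero probability.
    counts = Counter(tokens)
    total = len(tokens) + len(vocab)
    return {w: (counts[w] + 1) / total for w in vocab}

def cross_entropy(model, tokens):
    # Average negative log2-probability per word: lower means better prediction, i.e. better compression.
    return -sum(math.log2(model[w]) for w in tokens) / len(tokens)

train = "the cat sat on the mat the dog sat on the rug".split()
test = "the dog sat on the mat".split()
vocab = set(train) | set(test)

model = train_unigram(train, vocab)
print(f"cross-entropy: {cross_entropy(model, test):.3f} bits per word")

Replacing the unigram counts with counts of word pairs or triples would give the kind of n-gram models Shannon experimented with; the evaluation loop stays the same.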
This book highlights future research directions and latent solutions for integrating AI and Blockchain into 6G networks, covering computation efficiency, algorithm robustness, hardware development, and energy management. It brings together leading researchers in academia and industry from diverse backgrounds to deliver to the technical community an outline of emerging technologies, advanced architectures, challenges, open issues, and future directions of 6G networks. The book is written for researchers, professionals, and students who want to learn about the integration of technologies such as AI and Blockchain into 6G networks and communications. It addresses topics such as consensus protocols, architecture, intelligent dynamic resource management, and security and privacy in 6G when integrating AI and Blockchain, as well as new real-time applications and further research opportunities.
Pattern recognition in data is a well-known classical problem that falls under the ambit of data analysis. As we need to handle different data, the nature of the patterns, their recognition, and the types of data analysis are bound to change. Since the number of data collection channels has increased and diversified in recent times, many real-world data mining tasks can easily acquire multiple databases from various sources. In these cases, data mining becomes more challenging for several essential reasons. We may encounter sensitive data originating from different sources that cannot be amalgamated. Even if we are allowed to place different data together, we are certainly not able to analyze them when the local identities of patterns are required to be retained. Thus, pattern recognition in multiple databases gives rise to a suite of new, challenging problems different from those encountered before. Association rule mining, global pattern discovery, and mining patterns of select items provide different pattern discovery techniques for multiple data sources. Some interesting item-based data analyses are also covered in this book. Interesting patterns, such as exceptional patterns, icebergs, and periodic patterns, have recently been reported. The book presents a thorough influence analysis between items in time-stamped databases. The recent research on mining multiple related databases is covered, while some previous contributions to the area are highlighted and contrasted with the most recent developments.
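As a minimal, hedged illustration of the association rule mining mentioned above (the toy transactions, items, and thresholds are assumptions for illustration, not examples from the book), the following Python sketch finds two-item rules that meet support and confidence thresholds.

from itertools import combinations

# Toy transaction database; the items and thresholds below are illustrative assumptions.
transactions = [
    {"bread", "milk"},
    {"bread", "butter", "milk"},
    {"bread", "butter"},
    {"milk", "butter"},
]
min_support, min_confidence = 0.5, 0.6

def support(itemset):
    # Fraction of transactions that contain every item in the itemset.
    return sum(itemset <= t for t in transactions) / len(transactions)

for a, b in combinations(sorted(set().union(*transactions)), 2):
    s = support({a, b})
    if s >= min_support:
        confidence = s / support({a})
        if confidence >= min_confidence:
            print(f"{a} -> {b}: support={s:.2f}, confidence={confidence:.2f}")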
Continuous Media Databases brings together in one place important contributions and up-to-date research results in this fast moving area. Continuous Media Databases serves as an excellent reference, providing insight into some of the most challenging research issues in the field.
This BriefBook is a much extended glossary or a much condensed handbook, depending on the way one looks at it. In encyclopedic format, it covers subjects in statistics, computing, analysis, and related fields, resulting in a book that is both an introduction and a reference for scientists and engineers, especially experimental physicists dealing with data analysis.
This book documents progress and presents a broad perspective on recent developments in database security. It also discusses in depth the current state of the art in research in the field. A number of topics are explored in detail, including current research in database security and the state of security controls in present commercial database systems. Database Security IX will be essential reading for advanced students working in the area of database security research and development and for industrial researchers in this technical area.
This book provides an overview of how comparable corpora can be used to overcome the lack of parallel resources when building machine translation systems for under-resourced languages and domains. It presents a wealth of methods and open tools for building comparable corpora from the Web, evaluating comparability and extracting parallel data that can be used for the machine translation task. It is divided into several sections, each covering a specific task such as building, processing, and using comparable corpora, focusing particularly on under-resourced language pairs and domains. The book is intended for anyone interested in data-driven machine translation for under-resourced languages and domains, especially for developers of machine translation systems, computational linguists and language workers. It offers a valuable resource for specialists and students in natural language processing, machine translation, corpus linguistics and computer-assisted translation, and promotes the broader use of comparable corpora in natural language processing and computational linguistics.
This book highlights new trends and challenges in research on agents and the new digital and knowledge economy. It includes papers on business process management, agent-based modeling and simulation, and anthropic-oriented computing that were originally presented at the 15th International KES Conference on Agents and Multi-Agent Systems: Technologies and Applications (KES-AMSTA 2021), held as a virtual conference on June 14-16, 2021. The respective papers cover topics such as software agents, multi-agent systems, agent modeling, mobile and cloud computing, big data analysis, business intelligence, artificial intelligence, social systems, computer embedded systems, and nature-inspired manufacturing, all of which contribute to the modern digital economy.
This open access book covers the use of data science, including advanced machine learning, big data analytics, Semantic Web technologies, natural language processing, social media analysis, time series analysis, among others, for applications in economics and finance. In addition, it shows some successful applications of advanced data science solutions used to extract new knowledge from data in order to improve economic forecasting models. The book starts with an introduction on the use of data science technologies in economics and finance and is followed by thirteen chapters showing success stories of the application of specific data science methodologies, touching on particular topics related to novel big data sources and technologies for economic analysis (e.g. social media and news); big data models leveraging on supervised/unsupervised (deep) machine learning; natural language processing to build economic and financial indicators; and forecasting and nowcasting of economic variables through time series analysis. This book is relevant to all stakeholders involved in digital and data-intensive research in economics and finance, helping them to understand the main opportunities and challenges, become familiar with the latest methodological findings, and learn how to use and evaluate the performances of novel tools and frameworks. It primarily targets data scientists and business analysts exploiting data science technologies, and it will also be a useful resource to research students in disciplines and courses related to these topics. Overall, readers will learn modern and effective data science solutions to create tangible innovations for economic and financial applications.
This book provides an overview of fake news detection, both through a variety of tutorial-style survey articles that capture advancements in the field from various facets and, in a somewhat unique direction, through expert perspectives from various disciplines. The approach is based on the idea that advancing the frontier on data science approaches for fake news is an interdisciplinary effort, and that perspectives from domain experts are crucial to shape the next generation of methods and tools. The fake news challenge cuts across a number of data science subfields such as graph analytics, mining of spatio-temporal data, information retrieval, natural language processing, computer vision and image processing, to name a few. This book presents a number of tutorial-style surveys that summarize a range of recent work in the field. In a unique feature, it includes perspective notes from experts in disciplines such as linguistics, anthropology, medicine and politics that will help to shape the next generation of data science research in fake news. The main target groups of this book are academic and industrial researchers working in the area of data science, with interests in devising and applying data science technologies for fake news detection. For young researchers such as PhD students, a review of data science work on fake news is provided, equipping them with enough know-how to start engaging in research within the area. For experienced researchers, the detailed descriptions of approaches will enable them to make informed choices in identifying promising directions for future research.
This book develops survey data analysis tools in Python, to create and analyze cross-tab tables and data visuals, weight data, perform hypothesis tests, and handle special survey questions such as Check-all-that-Apply. In addition, the basics of Bayesian data analysis and its Python implementation are presented. Since surveys are widely used as the primary method to collect data, and ultimately information, on attitudes, interests, and opinions of customers and constituents, these tools are vital for private or public sector policy decisions. As a compact volume, this book uses case studies to illustrate methods of analysis essential for those who work with survey data in either sector. It focuses on two overarching objectives: Demonstrate how to extract actionable, insightful, and useful information from survey data; and Introduce Python and Pandas for analyzing survey data.
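As a small, hedged illustration of the kind of weighted cross-tab analysis described above (the column names, responses, and weights are invented for illustration and are not taken from the book), the following Python/pandas sketch builds a weighted cross-tab and converts it to row percentages.

import pandas as pd

# Toy survey responses; the column names, answers, and weights are illustrative assumptions.
responses = pd.DataFrame({
    "region":    ["North", "North", "South", "South", "South"],
    "satisfied": ["Yes", "No", "Yes", "Yes", "No"],
    "weight":    [1.2, 0.8, 1.0, 1.1, 0.9],  # e.g. post-stratification weights
})

# Weighted cross-tab: sum the weights in each cell instead of counting respondents.
table = pd.crosstab(
    responses["region"], responses["satisfied"],
    values=responses["weight"], aggfunc="sum",
).fillna(0.0)

# Convert to row percentages so each region's answers sum to 100%.
row_pct = table.div(table.sum(axis=1), axis=0) * 100
print(row_pct.round(1))

Summing weights rather than counting rows is what lets the table reflect the weighted population estimates rather than the raw sample.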
This book presents the Proceedings of the International Conference on Intelligent Systems and Networks (ICISN 2021), held in Hanoi, Vietnam. It includes peer-reviewed, high-quality articles on intelligent systems and networks. It brings together professionals and researchers in the area and provides a platform for the exchange of ideas and for fostering future collaboration. The topics covered in this book include foundations of computer science; computational intelligence; language and speech processing; software engineering and software development methods; wireless communications and signal processing for communications; electronics, IoT and sensor systems, and embedded systems; etc.
This book constitutes the refereed post-conference proceedings of the Fifth IFIP TC 12 International Conference on Computational Intelligence in Data Science, ICCIDS 2022, held virtually, in March 2022. The 28 revised full papers presented were carefully reviewed and selected from 96 submissions. The papers cover topics such as computational intelligence for text analysis; computational intelligence for image and video analysis; blockchain and data science.
This book is a collection of representative and novel works in the field of data mining, knowledge discovery, clustering and classification. Discussing both theoretical and practical aspects of "Knowledge Discovery and Management" (KDM), it is intended for researchers interested in these fields, including PhD and MSc students, and researchers from public or private laboratories. The contributions included are extended and reworked versions of six of the best papers that were originally presented in French at the EGC'2016 conference held in Reims (France) in January 2016. This was the 16th edition of this successful conference, which takes place each year, and also featured workshops and other events with the aim of promoting exchanges between researchers and companies concerned with KDM and its applications in business, administration, industry and public organizations. For more details about the EGC society, please consult egc.asso.fr.
This volume offers the reader a systematic and thorough account of the branches of logic instrumental for computer science, data science and artificial intelligence. Addressed in it are propositional, predicate, modal, epistemic, dynamic and temporal logics, as well as many-valued logics and logics of concepts (rough logics) applicable in data science. It also offers a look into second-order logics and approximate logics of parts. The book concludes with appendices on set theory, algebraic structures, computability, complexity, MV-algebras and transition systems, automata and formal grammars. Through this composition of the text, the reader obtains a self-contained exposition that can serve as a textbook on logics and the relevant disciplines as well as a reference text.
This book provides practical information about web archives, offers inspiring examples for web archivists, raises new challenges, and shares recent research results about access methods to explore information from the past preserved by web archives. The book is structured in six parts. Part 1 advocates for the importance of web archives to preserve our collective memory in the digital era, demonstrates the problem of web ephemera and shows how web archiving activities have been trying to address this challenge. Part 2 then focuses on different strategies for selecting web content to be preserved and on the media types that different web archives host. It provides an overview of efforts to address the preservation of web content as well as smaller-scale but high-quality collections of social media or audiovisual content. Next, Part 3 presents examples of initiatives to improve access to archived web information and provides an overview of access mechanisms for web archives designed to be used by humans or automatically accessed by machines. Part 4 presents research use cases for web archives. It also discusses how to engage more researchers in exploiting web archives and provides inspiring research studies performed using the exploration of web archives. Subsequently, Part 5 demonstrates that web archives should become crucial infrastructures for modern connected societies. It makes the case for developing web archives as research infrastructures and presents several inspiring examples of added-value services built on web archives. Lastly, Part 6 reflects on the evolution of the web and the sustainability of web archiving activities. It debates the requirements and challenges for web archives if they are to assume the responsibility of being societal infrastructures that enable the preservation of memory. This book targets academics and advanced professionals in a broad range of research areas such as digital humanities, social sciences, history, media studies and information or computer science. It also aims to fill the need for a scholarly overview to support lecturers who would like to introduce web archiving into their courses by offering an initial reference for students.
This book discusses the various open issues of blockchain technology, such as the efficiency of blockchain in different domains of digital cryptocurrency, smart contracts, smart education systems, smart cities, cloud identity and access, safeguards for cybersecurity, and health care. For the first time in human history, people across the world can trust each other and transact over large peer-to-peer networks without any central authority. This proves that trust can be built not only by centralized institutions but also by protocols and cryptographic mechanisms. The potential and collaboration between organizations and individuals within peer networks make it possible to move to a global collaborative network without centralization. Blockchain is a complex social, economic and technological phenomenon. It calls into question what established terms of the modern world, such as currency, trust, economics and exchange, would mean. To make any sense of it, one needs to realize how much insight and potential it holds in context and in the way it is technically developed. Due to rapid changes in accessing documents through online transactions and transferring currency online, many previously used methods are proving insufficient and insecure for safe and hassle-free transactions. Nowadays, the world changes rapidly, and a transition is also seen in Business Process Management (BPM). Traditional Business Process Management has been well established over the last one to two decades, but its internal workflows are confined to a single organization; it does not manage workflow processes and information across organizations. When it does, it falls into the same trap: control transfers to a third party, a centralized server, which opens the door to data tampering and a single point of failure. To address these issues, this book highlights a number of unique problems and effective solutions that reflect the state of the art in blockchain technology. It explores new experiments and yields promising solutions to the current challenges of blockchain technology. The book is intended for researchers, academicians, faculty, scientists, blockchain specialists, and business management and software industry professionals, who will find it beneficial for their research work and for setting new ideas in the field of blockchain. It caters to research work in many fields of blockchain engineering, and it provides in-depth knowledge of the fields covered.
These are the proceedings of the Eleventh International Information Security Conference which was held in Cape Town, South Africa, May 1995. This conference addressed the information security requirements of the next decade and papers were presented covering a wide range of subjects including current industry expectations and current research aspects. The evolutionary development of information security as a professional and research discipline was discussed along with security in open distributed systems and security in groupware.
This book focuses on the combination of IoT and data science, in particular how methods, algorithms, and tools from data science can effectively support IoT. The authors show how data science methodologies, techniques and tools can translate data into information, enabling the effectiveness and usefulness of new services offered by IoT stakeholders. The authors posit that if IoT is indeed the infrastructure of the future, data science is the key that can lead to a significant improvement of human life. The book aims to present innovative IoT applications as well as ongoing research that exploit modern data science approaches. Readers are offered issues and challenges in a cross-disciplinary scenario that involves both the IoT and data science fields. The book features contributions from academics, researchers, and professionals from both fields.