Peer-to-peer (P2P) technology, or peer computing, is a paradigm that is viewed as a potential technology for redesigning distributed architectures and, consequently, distributed processing. Yet the scale and dynamism that characterize P2P systems demand that we reexamine traditional distributed technologies. A paradigm shift that includes self-reorganization, adaptation and resilience is called for. On the other hand, the increased computational power of such networks opens up completely new applications, such as digital content sharing, scientific computation, gaming, or collaborative work environments. In this book, Vu, Lupu and Ooi present the technical challenges posed by P2P systems, and the means that have been proposed to address them. They provide a thorough and comprehensive review of recent advances in routing and discovery methods; load balancing and replication techniques; security, accountability and anonymity, as well as trust and reputation schemes; programming models; and P2P systems and projects. Besides surveying existing methods and systems, they also compare and evaluate some of the more promising schemes. The need for such a book is evident: it provides a single source for practitioners, researchers and students on the state of the art. For practitioners, this book explains best practice, guiding selection of appropriate techniques for each application. For researchers, it provides a foundation for the development of new and more effective methods. For students, it offers an overview of the wide range of advanced techniques for realizing effective P2P systems, and it can easily be used as a text for an advanced course on peer-to-peer computing and technologies, or as a companion text for courses on various subjects, such as distributed systems, and grid and cluster computing.
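The routing and discovery methods surveyed in books like this typically include distributed hash tables. As a minimal illustration (not taken from the book itself), here is a Python sketch of consistent hashing, the idea behind Chord-style lookup; the peer names and the 16-bit ring size are invented for the example:

```python
import hashlib

def node_id(name, bits=16):
    """Hash a peer or key name onto the identifier ring."""
    h = int(hashlib.sha1(name.encode()).hexdigest(), 16)
    return h % (1 << bits)

class Ring:
    def __init__(self, node_names):
        # Peers sorted by their position on the ring.
        self.nodes = sorted((node_id(n), n) for n in node_names)

    def successor(self, key_name):
        """The peer responsible for a key: the first peer whose ID
        is >= the key's ID, wrapping around the ring."""
        k = node_id(key_name)
        for nid, name in self.nodes:
            if nid >= k:
                return name
        return self.nodes[0][1]  # wrap around to the first peer

ring = Ring(["peerA", "peerB", "peerC", "peerD"])
owner = ring.successor("some-file.txt")
```

The point of the construction is that adding or removing one peer only moves the keys adjacent to it on the ring, which is what makes lookup stable under the churn the blurb describes.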
This book summarizes recent inventions, provides guidelines and recommendations, and demonstrates many practical applications of homomorphic encryption. This collection of papers represents the combined wisdom of the community of leading experts on homomorphic encryption. In the past three years, a global community consisting of researchers in academia, industry, and government has been working closely to standardize homomorphic encryption. This is the first publication of whitepapers created by these experts that comprehensively describes the scientific inventions, presents a concrete security analysis, and broadly discusses applicable use scenarios and markets. This book also features a collection of privacy-preserving machine learning applications powered by homomorphic encryption, designed by groups of top graduate students worldwide at the Private AI Bootcamp hosted by Microsoft Research. The volume aims to connect non-expert readers with this important new cryptographic technology in an accessible and actionable way. Readers who have heard good things about homomorphic encryption but are not familiar with the details will find this book full of inspiration. Readers who have preconceived biases based on out-of-date knowledge will see the recent progress made by industrial and academic pioneers on optimizing and standardizing this technology. A clear picture of how homomorphic encryption works, how to use it to solve real-world problems, and how to efficiently strengthen privacy protection will naturally emerge.
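The core idea of "computing on ciphertexts" can be shown with a toy: unpadded RSA is multiplicatively homomorphic. This sketch is not from the book and uses deliberately tiny, insecure parameters; the standardized schemes the book covers (BGV, BFV, CKKS, etc.) are far more sophisticated, but the homomorphic property it demonstrates is the same in spirit:

```python
# Toy multiplicatively homomorphic scheme: unpadded RSA with
# textbook-sized primes. Insecure; for illustration only.
p, q = 61, 53
n = p * q                  # modulus 3233
phi = (p - 1) * (q - 1)
e = 17
d = pow(e, -1, phi)        # modular inverse (Python 3.8+)

enc = lambda m: pow(m, e, n)
dec = lambda c: pow(c, d, n)

# Multiply two plaintexts WITHOUT decrypting: the product of the
# ciphertexts decrypts to the product of the plaintexts.
c1, c2 = enc(6), enc(7)
product = dec((c1 * c2) % n)   # 6 * 7 = 42, computed on ciphertexts
```

Anyone holding only c1 and c2 can produce an encryption of the product; only the key holder learns the result.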
Statistics and hypothesis testing are routinely used in areas (such as linguistics) that are traditionally not mathematically intensive. In such fields, when faced with experimental data, many students and researchers tend to rely on commercial packages to carry out statistical data analysis, often without understanding the logic of the statistical tests they rely on. As a consequence, results are often misinterpreted, and users have difficulty in flexibly applying techniques relevant to their own research; they use whatever they happen to have learned. A simple solution is to teach the fundamental ideas of statistical hypothesis testing without using too much mathematics. This book provides a non-mathematical, simulation-based introduction to basic statistical concepts and encourages readers to try out the simulations themselves using the source code and data provided (the freely available programming language R is used throughout). Since the code presented in the text almost always requires the use of previously introduced programming constructs, diligent students also acquire basic programming abilities in R. The book is intended for advanced undergraduate and graduate students in any discipline, although the focus is on linguistics, psychology, and cognitive science. It is designed for self-instruction, but it can also be used as a textbook for a first course on statistics. Earlier versions of the book have been used in undergraduate and graduate courses in Europe and the US. "Vasishth and Broe have written an attractive introduction to the foundations of statistics. It is concise, surprisingly comprehensive, self-contained and yet quite accessible. Highly recommended." Harald Baayen, Professor of Linguistics, University of Alberta, Canada. "By using the text students not only learn to do the specific things outlined in the book, they also gain a skill set that empowers them to explore new areas that lie beyond the book's coverage."
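The simulation-based approach the blurb describes can be sketched in a few lines. The book itself uses R; this is our own Python version of the same idea, a permutation test that builds the null distribution by shuffling group labels, with invented data:

```python
import random

def permutation_test(a, b, n_sim=10_000, seed=0):
    """Two-sided permutation test for a difference in means,
    estimated by simulation rather than a closed-form distribution."""
    rng = random.Random(seed)
    observed = sum(a) / len(a) - sum(b) / len(b)
    pooled = a + b
    count = 0
    for _ in range(n_sim):
        rng.shuffle(pooled)                       # break the group labels
        diff = (sum(pooled[:len(a)]) / len(a)
                - sum(pooled[len(a):]) / len(b))
        if abs(diff) >= abs(observed):            # at least as extreme
            count += 1
    return count / n_sim                          # simulated p-value

# Invented example data: two small, clearly separated groups.
p_value = permutation_test([5.1, 4.9, 5.6, 5.2], [4.2, 4.0, 4.5, 4.1])
```

The p-value is simply the fraction of shuffled datasets at least as extreme as the observed one, which is exactly the logic the book teaches without formulas.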
Colin Phillips, Professor of Linguistics, University of Maryland, USA
First of all, I would like to congratulate Gabriella Pasi and Gloria Bordogna for the work they accomplished in preparing this new book in the series "Studies in Fuzziness and Soft Computing." "Recent Issues on the Management of Fuzziness in Databases" is undoubtedly a token of their long-lasting and active involvement in the area of Fuzzy Information Retrieval and Fuzzy Database Systems. This book is most welcome in the area of fuzzy databases, where books are not numerous, although the first works at the crossroads of fuzzy sets and databases were initiated about twenty years ago by L. Zadeh. Only five books have been published since 1995, when the first volume dedicated to fuzzy databases, published in the series "Studies in Fuzziness and Soft Computing" edited by J. Kacprzyk and myself, appeared. Going beyond books strictly speaking, let us also mention the existence of review papers that are part of a couple of handbooks related to fuzzy sets published since 1998. The area known as fuzzy databases covers a range of topics, among which: flexible queries addressed to regular databases; the extension of the notion of a functional dependency; data mining and fuzzy summarization; and querying databases containing imperfect attribute values represented by means of possibility distributions.
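The first topic in that list, flexible queries over a regular database, can be illustrated with a tiny sketch of our own (the data, the predicate "young", and its breakpoints are invented): rows are not simply selected or rejected, but ranked by their degree of membership in a fuzzy condition:

```python
def young(age):
    """Trapezoidal membership function for the fuzzy predicate
    "young": degree 1.0 up to age 25, falling linearly to 0 at 35."""
    if age <= 25:
        return 1.0
    if age >= 35:
        return 0.0
    return (35 - age) / 10

# A regular (crisp) table, queried with a fuzzy condition.
employees = [("ann", 23), ("bob", 30), ("carl", 40)]

# Instead of a boolean WHERE clause, rank rows by satisfaction degree.
answers = sorted(((young(a), name) for name, a in employees),
                 reverse=True)
# ann satisfies "young" to degree 1.0, bob to 0.5, carl to 0.0
```

Thresholding the degrees recovers an ordinary crisp query; keeping them gives the graded answers that fuzzy query languages return.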
Software that covertly monitors user actions, also known as spyware, has become a first-level security threat due to its ubiquity and the difficulty of detecting and removing it. This is especially so for video conferencing, thin-client computing and Internet cafes. CryptoGraphics: Exploiting Graphics Cards for Security explores the potential for implementing ciphers within GPUs, and describes the relevance of GPU-based encryption to the security of applications involving remote displays. As the processing power of GPUs has increased, research involving the use of GPUs for general-purpose computing has arisen. This work extends such research by considering the use of a GPU as a parallel processor for encrypting data. The authors evaluate the operations found in symmetric and asymmetric key ciphers to determine whether encryption can be programmed in existing GPUs. A detailed description of a GPU-based implementation of AES is provided. The feasibility of GPU-based encryption allows the authors to explore the use of a GPU as a trusted system component: unencrypted display data can be confined to the GPU to avoid exposing it to any malware running on the operating system.
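A reason GPUs suit encryption at all is data parallelism: in counter (CTR) mode, each keystream block depends only on the key and a counter, so blocks can be computed independently, and hence in parallel. This toy CPU sketch (ours, not the book's; it substitutes SHA-256 for AES as the block function) shows that per-block independence:

```python
import hashlib

def keystream_block(key: bytes, counter: int) -> bytes:
    """Toy CTR keystream: block i depends only on (key, i), so every
    block could be computed in parallel, e.g. one per GPU thread."""
    return hashlib.sha256(key + counter.to_bytes(16, "big")).digest()

def ctr_xor(key: bytes, data: bytes) -> bytes:
    """XOR data against the keystream, 32 bytes per block."""
    out = bytearray()
    for i in range(0, len(data), 32):          # independent blocks
        block = data[i:i + 32]
        ks = keystream_block(key, i // 32)
        out.extend(b ^ k for b, k in zip(block, ks))
    return bytes(out)

msg = b"display frame data"
ct = ctr_xor(b"secret-key", msg)               # encrypt
pt = ctr_xor(b"secret-key", ct)                # same operation decrypts
```

Because no block depends on another block's output, the loop body maps directly onto the thousands of parallel shader threads the book exploits.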
This book describes various methods and recent advances in predictive computing and information security. It highlights various predictive application scenarios to discuss these breakthroughs in real-world settings. Further, it addresses state-of-the-art techniques and the design, development and innovative use of technologies for enhancing predictive computing and information security. Coverage also includes frameworks for eTransportation and eHealth, security techniques, and algorithms for predictive computing and information security based on the Internet of Things and cloud computing. As such, the book offers a valuable resource for graduate students and researchers interested in exploring predictive modeling techniques and architectures to solve information security, privacy and protection issues in future communication.
This book includes high-quality papers presented at the Second International Conference on Data Science and Management (ICDSM 2021), organized by the Gandhi Institute for Education and Technology, Bhubaneswar, from 19 to 20 February 2021. It features research in which data science is used to facilitate the decision-making process in various application areas, and also covers a wide range of learning methods and their applications in a number of learning problems. The empirical studies, theoretical analyses and comparisons to psychological phenomena described contribute to the development of products to meet market demands.
Canadian Semantic Web is an edited volume based on the first Canadian Semantic Web Working Symposium, held in June 2006 in Quebec, Canada. It is the first edited volume on this subject. This volume includes, but is not limited to, the following popular topics: "Trust, Privacy, Security on the Semantic Web," "Semantic Grid and Semantic Grid Services" and "Semantic Web Mining."
This open access book provides a comprehensive view on data ecosystems and platform economics, from methodical and technological foundations up to reports from practical implementations and applications in various industries. To this end, the book is structured in four parts: Part I, "Foundations and Contexts," provides a general overview about building, running, and governing data spaces and an introduction to the IDS and GAIA-X projects. Part II, "Data Space Technologies," subsequently details various implementation aspects of IDS and GAIA-X, including, e.g., data usage control, the usage of blockchain technologies, or semantic data integration and interoperability. Next, Part III describes various "Use Cases and Data Ecosystems" from application areas such as agriculture, healthcare, industry, energy, and mobility. Part IV eventually offers an overview of several "Solutions and Applications," including, e.g., products and experiences from companies like Google, SAP, Huawei, T-Systems, Innopay and many more. Overall, the book provides professionals in industry with an encompassing overview of the technological and economic aspects of data spaces, based on the International Data Spaces and Gaia-X initiatives. It presents implementations and business cases and gives an outlook to future developments. In doing so, it aims at proliferating the vision of a social data market economy based on data spaces which embrace trust and data sovereignty.
Data analytics underpin our modern data-driven economy. This textbook explains the relevance of data analytics at the firm and industry levels, tracing the evolution and key components of the field, and showing how data analytics insights can be leveraged for business results. The first section of the text covers key topics such as data analytics tools, data mining, business intelligence, customer relationship management, and cybersecurity. The chapters then take an industry focus, exploring how data analytics can be used in particular settings to strengthen business decision-making. A range of sectors are examined, including financial services, accounting, marketing, sport, health care, retail, transport, and education. With industry case studies, clear definitions of terminology, and no background knowledge required, this text supports students in gaining a solid understanding of data analytics and its practical applications. PowerPoint slides, a test bank of questions, and an instructor's manual are also provided as online supplements. This will be a valuable text for undergraduate level courses in data analytics, data mining, business intelligence, and related areas.
This book focuses on the design of secure and efficient signature and signcryption schemes for vehicular ad-hoc networks (VANETs). We use methods such as public key infrastructure (PKI), identity-based cryptography (IDC), and certificateless cryptography (CLC) to design bilinear pairing and elliptic curve cryptography-based signature and signcryption schemes, and prove their security in the random oracle model. The signature schemes ensure the authenticity of the source and the integrity of a safety message, while the signcryption schemes ensure authentication and confidentiality of the safety message in a single logical step. The main benefit of this book is that it enables readers to study schemes that securely and efficiently process a message, or multiple messages, in vehicle-to-vehicle and vehicle-to-infrastructure communications. In addition, it can benefit researchers, engineers, and graduate students in the fields of security and privacy of VANETs, Internet of Vehicles security, wireless body area networks security, etc.
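The sign/verify flow behind such schemes can be sketched with a toy Schnorr signature. This is our own illustration, not a scheme from the book: the group parameters are tiny and the message is invented; real VANET schemes use elliptic curves and far larger parameters.

```python
import hashlib, random

# Toy Schnorr signature in a small prime-order subgroup:
# p = 2039, q = 1019 (q divides p - 1), g = 4 has order q mod p.
p, q, g = 2039, 1019, 4

def H(*parts):
    """Hash to an exponent in Z_q (random-oracle stand-in)."""
    h = hashlib.sha256("|".join(map(str, parts)).encode()).digest()
    return int.from_bytes(h, "big") % q

x = 7                 # private signing key
y = pow(g, x, p)      # public verification key

def sign(m, rng=random.Random(0)):
    k = rng.randrange(1, q)       # per-signature nonce
    r = pow(g, k, p)              # commitment
    e = H(r, m)                   # challenge
    s = (k + x * e) % q           # response
    return e, s

def verify(m, sig):
    e, s = sig
    # Recompute r: g^s * y^(-e) = g^(k + xe) * g^(-xe) = g^k
    r = (pow(g, s, p) * pow(y, -e, p)) % p
    return H(r, m) == e

sig = sign("brake ahead")         # a hypothetical safety message
```

Signcryption schemes fold an encryption step into this same flow so that authenticity and confidentiality cost one pass instead of sign-then-encrypt.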
This book presents the latest cutting-edge research, theoretical methods, and novel applications in the field of computational intelligence techniques and methods for combating fake news. Fake news is everywhere. Despite the efforts of major social network players such as Facebook and Twitter to fight disinformation, miracle cures and conspiracy theories continue to rain down on the net. Artificial intelligence can be a bulwark against the diversity of fake news on the Internet and social networks. This book discusses new models, practical solutions, and technological advances related to detecting and analyzing fake news based on computational intelligence models and techniques, to help decision-makers, managers, professionals, and researchers design new paradigms considering the unique opportunities associated with computational intelligence techniques. Further, the book helps readers understand computational intelligence techniques combating fake news in a systematic and straightforward way.
This book discusses the effective use of modern ICT solutions for business needs, including the efficient use of IT resources, decision support systems, business intelligence, data mining and advanced data processing algorithms, as well as the processing of large datasets (inter alia social networking such as Twitter and Facebook, etc.). The ability to generate, record and process qualitative and quantitative data, including in the area of big data, the Internet of Things (IoT) and cloud computing offers a real prospect of significant improvements for business, as well as the operation of a company within Industry 4.0. The book presents new ideas, approaches, solutions and algorithms in the area of knowledge representation, management and processing, quantitative and qualitative data processing (including sentiment analysis), problems of simulation performance, and the use of advanced signal processing to increase the speed of computation. The solutions presented are also aimed at the effective use of business process modeling and notation (BPMN), business process semantization and investment project portfolio selection. It is a valuable resource for researchers, data analysts, entrepreneurs and IT professionals alike, and the research findings presented make it possible to reduce costs, increase the accuracy of investment, optimize resources and streamline operations and marketing.
Data mining has become a popular research topic in recent years for the treatment of the "data rich and information poor" syndrome. Currently, application-oriented engineers are concerned only with their immediate problems, which results in an ad hoc method of problem solving. Researchers, on the other hand, lack an understanding of the practical issues of data mining for real-world problems and often concentrate on issues that are of no significance to practitioners. In this volume, we hope to remedy these problems by (1) presenting a theoretical foundation of data mining, and (2) providing important new directions for data mining research. A set of well-respected data mining theoreticians were invited to present their views on the fundamental science of data mining. We have also called on researchers with practical data mining experience to present new important data mining topics.
- Explains the basic concepts of Python and its role in machine learning
- Provides comprehensive coverage of feature engineering, including real-time case studies
- Presents structural patterns with reference to data science, statistics and analytics
- Includes machine learning-based structured exercises
- Covers the different algorithmic concepts of machine learning, including unsupervised, supervised and reinforcement learning
This book provides a comprehensive methodology to measure systemic risk in many of its facets and dimensions based on state-of-the-art risk assessment methods. Systemic risk has gained attention in the public eye since the collapse of Lehman Brothers in 2008. The bankruptcy of the fourth-biggest bank in the USA raised questions whether banks that are allowed to become "too big to fail" and "too systemic to fail" should carry higher capital surcharges on their size and systemic importance. The Global Financial Crisis of 2008-2009 was followed by the Sovereign Debt Crisis in the euro area that saw the first Eurozone government de facto defaulting on its debt and prompted actions at international level to stem further domino and cascade effects to other Eurozone governments and banks. Against this backdrop, a careful measurement of systemic risk is of utmost importance for the new capital regulation to be successful and for sovereign risk to remain in check. Most importantly, the book introduces a number of systemic fragility indicators for banks and sovereigns that can help to assess systemic risk and the impact of macroprudential and microprudential policies.
The one instruction set computer (OISC) is the ultimate reduced instruction set computer (RISC). In OISC, the instruction set consists of only one instruction, and then, by composition, all other necessary instructions are synthesized. This is an approach completely opposite to that of a complex instruction set computer (CISC), which incorporates complex instructions as microprograms within the processor. Computer Architecture: A Minimalist Perspective examines computer architecture, computability theory, and the history of computers from the perspective of one instruction set computing - a novel approach in which the computer supports only one, simple instruction. This bold, new paradigm offers significant promise in biological, chemical, optical, and molecular scale computers. It provides a comprehensive study of computer architecture using computability theory as a base.
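The idea that one instruction suffices can be made concrete with SUBLEQ ("subtract and branch if less than or equal to zero"), a commonly used OISC instruction. The interpreter and the tiny addition program below are our own sketch, not an example from the book:

```python
def subleq(mem, pc=0, max_steps=10_000):
    """Run a SUBLEQ program in place. Each instruction is three
    cells a, b, c: mem[b] -= mem[a]; if the result is <= 0, jump
    to c. A negative jump target halts the machine."""
    for _ in range(max_steps):
        a, b, c = mem[pc], mem[pc + 1], mem[pc + 2]
        mem[b] -= mem[a]                     # the single instruction
        pc = c if mem[b] <= 0 else pc + 3
        if pc < 0:
            return mem
    raise RuntimeError("step limit exceeded")

# Synthesized ADD: with scratch cell Z (initially 0),
# ADD A,B  =  SUBLEQ A,Z; SUBLEQ Z,B; SUBLEQ Z,Z (clear Z, halt).
mem = subleq([9, 11, 3,     # Z -= A        -> Z = -A
              11, 10, 6,    # B -= Z        -> B = B + A
              11, 11, -1,   # Z -= Z, halt  -> Z = 0
              5, 37, 0])    # data: A=5, B=37, Z=0
# mem[10] now holds 5 + 37 = 42
```

Every richer instruction (copy, add, jump) is built by composing this one operation, which is exactly the synthesis-by-composition the blurb describes.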
The book discusses how augmented intelligence can increase the efficiency and speed of diagnosis in healthcare organizations. The concept of augmented intelligence reflects the enhanced capabilities of human decision-making in clinical settings when augmented with computational systems and methods. It includes real-life case studies highlighting the impact of augmented intelligence in health care. The book offers a guided tour of computational intelligence algorithms, architecture design, and applications of learning in healthcare challenges. It presents a variety of techniques designed to represent, enhance, and empower multi-disciplinary and multi-institutional machine learning research in healthcare informatics. It also presents specific applications of augmented intelligence in health care, along with architectural models and framework-based augmented solutions.
The purpose of this book is to discuss, in depth, the current state of research and practice in database security, to enable readers to expand their knowledge. The book brings together contributions from experts in the field throughout the world. Database security is still a key topic in most businesses and in the public sector, having implications for the whole of society.
Knowledge management captures the right knowledge, delivers it to the right user, who in turn uses it to improve organizational or individual performance and increase effectiveness. "Strategies for Knowledge Management Success: Exploring Organizational Efficacy" collects and presents key research articles focused on identifying, defining, and measuring accomplishment in knowledge management. A significant collection of the latest international findings within the field, this book provides a strong reference for students, researchers, and practitioners involved with organizational knowledge management.
This book presents a state-of-the-art review of current perspectives in information systems security in view of the information society of the 21st century. It will be essential reading for information technology security specialists, computer professionals, EDP managers, EDP auditors, managers, researchers and students working on the subject.
This edited book covers ongoing research in both theory and practical applications of using deep learning for social media data. Social networking platforms are overwhelmed by different contents, and their huge amounts of data have enormous potential to influence business, politics, security, planning and other social aspects. Recently, deep learning techniques have had many successful applications in the AI field. The research presented in this book emerges from the conviction that there is still much progress to be made toward exploiting deep learning in the context of social media data analytics. It includes fifteen chapters, organized into four sections that report on original research in network structure analysis, social media text analysis, user behaviour analysis and social media security analysis. This work could serve as a good reference for researchers, as well as a compilation of innovative ideas and solutions for practitioners interested in applying deep learning techniques to social media data analytics.
In recent years, tremendous research has been devoted to the design of database systems for real-time applications, called real-time database systems (RTDBS), where transactions are associated with deadlines on their completion times, and some of the data objects in the database are associated with temporal constraints on their validity. Examples of important applications of RTDBS include stock trading systems, navigation systems and computer integrated manufacturing. Different transaction scheduling algorithms and concurrency control protocols have been proposed to satisfy both transaction timing constraints and data temporal constraints. Other design issues important to the performance of a RTDBS are buffer management, index accesses and I/O scheduling. Real-Time Database Systems: Architecture and Techniques summarizes important research results in this area, and serves as an excellent reference for practitioners, researchers and educators of real-time systems and database systems.
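One widely studied transaction scheduling policy in this area is earliest deadline first (EDF): always run the ready transaction whose deadline is nearest. A minimal sketch (ours; the transaction names and deadlines are invented) using a priority queue:

```python
import heapq

def edf_order(transactions):
    """Earliest-deadline-first ordering for ready transactions.
    transactions: list of (name, deadline) pairs.
    Returns the names in the order EDF would dispatch them."""
    heap = [(deadline, name) for name, deadline in transactions]
    heapq.heapify(heap)            # min-heap keyed on deadline
    return [heapq.heappop(heap)[1] for _ in range(len(heap))]

# Three hypothetical transactions with deadlines at t=50, 20, 35.
order = edf_order([("T1", 50), ("T2", 20), ("T3", 35)])
# -> ["T2", "T3", "T1"]
```

A full RTDBS scheduler layers concurrency control, data-validity checks, and abort-on-miss policies on top of this basic priority ordering.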
Educational Data Analytics (EDA) has been credited with significant benefits for enhancing on-demand personalized educational support of individual learners, as well as reflective course (re)design for achieving more authentic teaching, learning and assessment experiences integrated into real work-oriented tasks. This open access textbook is a tutorial for developing, practicing and self-assessing core competences on educational data analytics for digital teaching and learning. It combines theoretical knowledge on core issues related to collecting, analyzing, interpreting and using educational data, including ethics and privacy concerns. The textbook provides questions and teaching materials/learning activities as quizzes with multiple types of questions, added after each section and related to the topic studied or the video(s) referenced. These activities reproduce real-life contexts by using a suitable use-case scenario (storytelling), encouraging learners to link theory with practice; self-assessed assignments enable learners to apply their attained knowledge and acquired competences on EDL. By studying this book, you will: know where to locate useful educational data in different sources and understand their limitations; know the basics for managing educational data to make them useful, understand relevant methods, and be able to use relevant tools; know the basics for organising, analysing, interpreting and presenting learner-generated data within their learning context, understand relevant learning analytics methods, and be able to use relevant learning analytics tools; know the basics for analysing and interpreting educational data to facilitate educational decision making, including course and curricula design, understand relevant teaching analytics methods, and be able to use relevant teaching analytics tools; and understand issues related to educational data ethics and privacy.
This book is intended for school leaders and teachers engaged in blended (using the flipped classroom model) and online (during the COVID-19 crisis and beyond) teaching and learning; e-learning professionals (such as instructional designers and e-tutors) of online and blended courses; instructional technologists; and researchers, as well as undergraduate and postgraduate university students studying education, educational technology and relevant fields.
In this introductory textbook the author explains the key topics in cryptography. He takes a modern approach, where defining what is meant by "secure" is as important as creating something that achieves that goal, and security definitions are central to the discussion throughout. The author balances a largely non-rigorous style - many proofs are sketched only - with appropriate formality and depth. For example, he uses the terminology of groups and finite fields so that the reader can understand both the latest academic research and "real-world" documents such as application programming interface descriptions and cryptographic standards. The text employs colour to distinguish between public and private information, and all chapters include summaries and suggestions for further reading. This is a suitable textbook for advanced undergraduate and graduate students in computer science, mathematics and engineering, and for self-study by professionals in information security. While the appendix summarizes most of the basic algebra and notation required, it is assumed that the reader has a basic knowledge of discrete mathematics, probability, and elementary calculus.