Relational databases have been predominant for many years and are used throughout various industries. These systems face challenges with the size and variety of modern data, and NoSQL databases emerged in response. Joining the two database models opens room for crucial developments in the field of computer science. Bridging Relational and NoSQL Databases is an innovative source of academic content on the convergence process between databases and describes key features of the next database generation. Featuring coverage of a wide variety of topics and perspectives, such as the BASE approach, the CAP theorem, and hybrid and native solutions, this publication is ideally designed for professionals and researchers interested in the features and collaboration of relational and NoSQL databases.
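The convergence the blurb describes can be made concrete with a small sketch. This is a hypothetical illustration, not drawn from the book: the same customer record stored relationally via Python's sqlite3 and as a schema-flexible, NoSQL-style document; all table, field, and value names are invented.

```python
# A minimal sketch (not from the book): the same customer record modeled
# relationally with sqlite3 and as a NoSQL-style JSON document.
import json
import sqlite3

# Relational side: fixed schema, queried with SQL.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT, city TEXT)")
conn.execute("INSERT INTO customers VALUES (1, 'Ada', 'Cape Town')")
row = conn.execute("SELECT name, city FROM customers WHERE id = 1").fetchone()

# Document side: schema-flexible, nested fields allowed without a join.
doc = {"id": 1, "name": "Ada", "city": "Cape Town",
       "orders": [{"sku": "B-42", "qty": 2}]}

print(row)                        # ('Ada', 'Cape Town')
print(json.dumps(doc["orders"]))  # [{"sku": "B-42", "qty": 2}]
```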
For Database Systems and Database Design and Application courses offered at the junior, senior, and graduate levels in Computer Science departments. Written by well-known computer scientists, this introduction to database systems offers a comprehensive approach, focusing on database design, database use, and implementation of database applications and database management systems. The first half of the book provides in-depth coverage of databases from the point of view of the database designer, user, and application programmer. It covers the latest database standards SQL:1999, SQL/PSM, SQL/CLI, JDBC, ODL, and XML, with broader coverage of SQL than most other texts. The second half of the book provides in-depth coverage of databases from the point of view of the DBMS implementor. It focuses on storage structures, query processing, and transaction management. The book covers the main techniques in these areas with broader coverage of query optimization than most other texts, along with advanced topics including multidimensional and bitmap indexes, distributed transactions, and information integration techniques.
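As a hedged illustration of the call-level interface pattern the text covers through JDBC and SQL/CLI, here is the analogous flow in Python's DB-API; the table and data are invented for this sketch.

```python
# The call-level interface pattern (here via Python's DB-API rather than
# JDBC): connect, run a parameterized query, iterate over result rows.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE movies (title TEXT, year INTEGER)")
conn.executemany("INSERT INTO movies VALUES (?, ?)",
                 [("Metropolis", 1927), ("Alien", 1979)])

# Parameterized query: the driver binds the value, not string pasting.
for title, year in conn.execute(
        "SELECT title, year FROM movies WHERE year > ?", (1950,)):
    print(title, year)
```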
For any organization, analysis of performance and effectiveness through available data allows for informed decision making. Data envelopment analysis, or DEA, is a popular, effective method that can be used to measure productive efficiency in operations management assessment. Data Envelopment Analysis and Effective Performance Assessment addresses the myriad of practical uses and innovative developments of DEA. Emphasizing the importance of analyzing productivity by measuring inputs, goals, economic growth, and performance, this book covers a wide breadth of innovative knowledge. This book is essential reading for managers, business professionals, students of business and ICT, and computer engineers.
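For a rough sense of what DEA measures, here is a toy sketch rather than the full method: real DEA solves a linear program per decision-making unit to choose the most favorable weights, while this stand-in fixes the weights and normalizes a simple output/input ratio. All names and numbers are invented.

```python
# Toy efficiency scoring (not full DEA): fixed-weight output/input ratio
# per decision-making unit, normalized against the best performer.
units = {
    "branch_a": {"inputs": {"staff": 10, "budget": 50.0}, "outputs": {"loans": 300}},
    "branch_b": {"inputs": {"staff": 6,  "budget": 40.0}, "outputs": {"loans": 240}},
    "branch_c": {"inputs": {"staff": 12, "budget": 70.0}, "outputs": {"loans": 310}},
}
weights_in = {"staff": 1.0, "budget": 0.2}
weights_out = {"loans": 1.0}

def ratio(u):
    total_in = sum(weights_in[k] * v for k, v in u["inputs"].items())
    total_out = sum(weights_out[k] * v for k, v in u["outputs"].items())
    return total_out / total_in

scores = {name: ratio(u) for name, u in units.items()}
best = max(scores.values())
for name, s in sorted(scores.items()):
    print(f"{name}: relative efficiency {s / best:.2f}")
```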
Recent innovations have created significant developments in data storage and management. These new technologies now allow for greater security in databases and other applications. Decentralized Computing Using Block Chain Technologies and Smart Contracts: Emerging Research and Opportunities is a concise and informative source of academic research on the latest developments in block chain innovation and their application in contractual agreements. Highlighting pivotal discussions on topics such as cryptography, programming techniques, and decentralized computing, this book is an ideal publication for researchers, academics, professionals, students, and practitioners seeking content on utilizing block chains with smart contracts.
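To ground the vocabulary, here is a minimal, hypothetical hash-chain sketch, far simpler than any production blockchain or smart-contract platform: each block commits to its predecessor's hash, so altering history is detectable.

```python
# A minimal hash chain: each block stores the SHA-256 hash of the previous
# block, so any tampering breaks the chain on re-verification.
import hashlib
import json

def block_hash(block):
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

chain = [{"index": 0, "data": "genesis", "prev": "0" * 64}]
for i, data in enumerate(["pay 5 to A", "pay 3 to B"], start=1):
    chain.append({"index": i, "data": data, "prev": block_hash(chain[-1])})

# Verification: every block must reference the hash of the one before it.
ok = all(chain[i]["prev"] == block_hash(chain[i - 1]) for i in range(1, len(chain)))
print("chain valid:", ok)
```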
Digital libraries have been established worldwide to make information more readily available, and this innovation has changed the way information seekers interact with the data they are collecting. Faced with decentralized, heterogeneous sources, these users must become familiar with high-level search activities in order to sift through large amounts of data. Information Seeking Behavior and Challenges in Digital Libraries addresses the problems of usability and search optimization in digital libraries. With topics addressing all aspects of information-seeking activity, the research found in this book provides insight into library user experiences and human-computer interaction when searching online databases of all types. This book addresses the challenges faced by professionals in information management, librarians, developers, students of library science, and policy makers.
In recent decades, the industrial revolution has increased economic growth even as it became mired in global environmental issues such as climate change. Researchers emphasize the adoption of circular economy practices in global supply chains and businesses for better socio-environmental sustainability without compromising economic growth. Integrating blockchain technology into business practices could promote both the circular economy and global environmental sustainability. Integrating Blockchain Technology Into the Circular Economy discusses the technological advancements in circular economy practices, which provide better results for both economic growth and environmental sustainability. It provides relevant theoretical frameworks and the latest empirical research findings in the applications of blockchain technology. Covering topics such as big data analytics, financial market infrastructure, and sustainable performance, this book is an essential resource for managers, operations managers, executives, manufacturers, environmentalists, researchers, industry practitioners, students and educators of higher education, and academicians.
Data mapping in a data warehouse is the process of creating a link between the tables and attributes of two distinct data models (source and target). Data mapping is required at many stages of the data warehouse life cycle to help save processor overhead; every stage has its own unique requirements and challenges. Therefore, many data warehouse professionals want to learn data mapping in order to move from an ETL (extract, transform, load) developer role to a data modeler role. Data Mapping for Data Warehouse Design provides basic and advanced knowledge about business intelligence and data warehouse concepts, including real-life scenarios that apply the standard techniques to projects across various domains. After reading this book, readers will understand the importance of data mapping across the data warehouse life cycle.
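A data mapping can be pictured as a small table from target columns to source fields plus optional transforms. The sketch below is illustrative only, with invented field names; it is not an example from the book.

```python
# A minimal source-to-target mapping: each target column names its source
# field and an optional transform applied during the load.
source_row = {"cust_nm": " Ada Lovelace ", "dob": "1815-12-10", "cntry": "uk"}

mapping = {
    # target column: (source field, transform)
    "customer_name": ("cust_nm", str.strip),
    "birth_date":    ("dob",     None),
    "country_code":  ("cntry",   str.upper),
}

def apply_mapping(row, mapping):
    out = {}
    for target, (src, fn) in mapping.items():
        value = row[src]
        out[target] = fn(value) if fn else value
    return out

print(apply_mapping(source_row, mapping))
# {'customer_name': 'Ada Lovelace', 'birth_date': '1815-12-10', 'country_code': 'UK'}
```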
The world is witnessing the growth of a global movement facilitated by technology and social media. Fueled by information, this movement holds enormous potential to create more accountable, efficient, responsive, and effective governments and businesses, as well as to spur economic growth. Big Data Governance and Perspectives in Knowledge Management is a collection of innovative research on the methods and applications of applying robust processes around data and aligning organizations and skill sets around those processes. Highlighting a range of topics including data analytics, prediction analysis, and software development, this book is ideally designed for academicians, researchers, information science professionals, software developers, computer engineers, graduate-level computer science students, policymakers, and managers seeking current research on the convergence of big data and information governance as two major trends in information management.
Formative Assessment, Learning Data Analytics and Gamification: In ICT Education discusses the challenges associated with assessing student progress given the explosion of e-learning environments, such as MOOCs and online courses that incorporate activities such as design and modeling. This book shows educators how to effectively garner intelligent data from online educational environments that combine assessment and gamification. This data, when used effectively, can have a positive impact on learning environments and be used for building learner profiles, community building, and as a tactic to create collaborative teams. Using numerous illustrative examples and theoretical and practical results, leading international experts discuss the application of automatic techniques for e-assessment of learning activities; methods to collect, analyze, and correctly visualize learning data in educational environments; the applications, benefits, and challenges of using gamification techniques in academic contexts; and solutions and strategies for increasing student participation and performance.
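One of the blurb's themes, building learner profiles from collected learning data, can be sketched minimally as follows; the event shape, field names, and scoring rule are assumptions for illustration, not the book's scheme.

```python
# Fold raw e-learning events into a per-learner profile that a dashboard
# could display. Event structure is invented for this sketch.
from collections import defaultdict

events = [
    {"learner": "s1", "activity": "quiz",  "score": 0.8},
    {"learner": "s1", "activity": "forum", "score": None},
    {"learner": "s2", "activity": "quiz",  "score": 0.6},
]

profiles = defaultdict(lambda: {"attempts": 0, "scores": []})
for e in events:
    p = profiles[e["learner"]]
    p["attempts"] += 1
    if e["score"] is not None:
        p["scores"].append(e["score"])

for learner, p in profiles.items():
    avg = sum(p["scores"]) / len(p["scores"]) if p["scores"] else 0.0
    print(learner, {"attempts": p["attempts"], "avg_score": round(avg, 2)})
```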
Faced with the exponential development of Big Data and both its legal and economic repercussions, we are still slightly in the dark concerning the use of digital information. In the perpetual balance between confidentiality and transparency, this data will lead us to call into question how we understand certain paradigms, such as the Hippocratic Oath in medicine. As a consequence, a reflection on the study of the risks associated with the ethical issues surrounding the design and manipulation of this "massive data" seems to be essential. This book provides a direction and ethical value to these significant volumes of data. It proposes an ethical analysis model and recommendations to better keep this data in check. This empirical and ethico-technical approach brings together the first aspects of a moral framework directed toward thought, conscience and the responsibility of citizens concerned by the use of data of a personal nature.
Advances in Computers carries on a tradition of excellence, presenting detailed coverage of innovations in computer hardware, software, theory, design, and applications. The book provides contributors with a medium in which they can explore their subjects in greater depth and breadth than journal articles typically allow. The articles included in this book will become standard references, with lasting value in this rapidly expanding field.
"What information do these data reveal?" "Is the information correct?" "How can I make the best use of the information?" The widespread use of computers and our reliance on the data generated by them have made these questions increasingly common and important. Computerized data may be in either digital or analog form and may be relevant to a wide range of applications that include medical monitoring and diagnosis, scientific research, engineering, quality control, seismology, meteorology, political and economic analysis and business and personal financial applications. The sources of the data may be databases that have been developed for specific purposes or may be of more general interest and include those that are accessible on the Internet. In addition, the data may represent either single or multiple parameters. Examining data in its initial form is often very laborious and also makes it possible to "miss the forest for the trees" by failing to notice patterns in the data that are not readily apparent. To address these problems, this monograph describes several accurate and efficient methods for displaying, reviewing and analyzing digital and analog data. The methods may be used either singly or in various combinations to maximize the value of the data to those for whom it is relevant. None of the methods requires special devices and each can be used on common platforms such as personal computers, tablets and smart phones. Also, each of the methods can be easily employed utilizing widely available off-the-shelf software. Using the methods does not require special expertise in computer science or technology, graphical design or statistical analysis. The usefulness and accuracy of all the described methods of data display, review and interpretation have been confirmed in multiple carefully performed studies using independent, objective endpoints. These studies and their results are described in the monograph. Because of their ease of use, accuracy and efficiency, the methods for displaying, reviewing and analyzing data described in this monograph can be highly useful to all who must work with computerized information and make decisions based upon it.
The WWW era made billions of people dramatically dependent on the progress of data technologies, of which Internet search and Big Data are arguably the most notable. The Structured Search paradigm connects them via a fundamental concept of key-objects evolving out of keywords as the units of search. The key-object data model and KeySQL revamp the data independence principle, making it applicable to Big Data, and complement NoSQL with full-blown structured querying functionality. The ultimate goal is extracting Big Information from the Big Data. Mikhail Gilula, a big data consultant, combines an academic background with 20 years of industry experience in database and data warehousing technologies, having worked as a senior data architect for Teradata, Alcatel-Lucent, and PayPal, among others. He has authored three books, including The Set Model for Database and Information Systems, and holds four US patents in Structured Search and data integration.
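KeySQL's actual syntax is not shown in this blurb, so the following is only a loose stand-in for the conceptual shift from flat keyword search to structured search over key-objects; none of it is the book's notation.

```python
# Contrast: a keyword matches anywhere in a record; a structured query
# constrains a specific key. Records and queries are invented.
records = [
    {"title": "CAP theorem notes", "author": "Lee", "year": 2015},
    {"title": "Gardening with Lee", "author": "Kim", "year": 2015},
]

keyword = "lee"
kw_hits = [r for r in records
           if any(keyword in str(v).lower() for v in r.values())]

structured = {"author": "Lee"}  # constrain a named key, not any field
st_hits = [r for r in records
           if all(str(r.get(k, "")).lower() == v.lower()
                  for k, v in structured.items())]

print(len(kw_hits), "keyword hits;", len(st_hits), "structured hit")
```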
Research in the domains of learning analytics and educational data mining has prototyped an approach in which methodologies from data science and machine learning are used to gain insights into the learning process from large amounts of data. As many training and academic institutions mature in their data-driven decision making, useful, scalable, and interesting trends are emerging, and organizations can benefit from sharing information on those efforts. Applying Data Science and Learning Analytics Throughout a Learner's Lifespan examines novel and emerging applications of data science and sister disciplines for gaining insights from data to inform interventions into learners' journeys and interactions with academic institutions. Data is collected at various times and places throughout a learner's lifecycle, and both the learners and the institution should benefit from the insights and knowledge gained from this data. Covering topics such as learning analytics dashboards, text network analysis, and employment recruitment, this book is an indispensable resource for educators, computer scientists, faculty of higher education, government officials, educational administrators, students of higher education, pre-service teachers, business professionals, researchers, and academicians.
Have you ever looked at your library's key performance indicators and said to yourself "so what!"? Have you found yourself making decisions in a void due to the lack of useful and easily accessible operational data? Have you ever worried that you are being left behind with the emergence of data analytics? Do you feel there are important stories in your operational data that need to be told, but you have no idea how to find them? If you answered yes to any of these questions, then this book is for you. How Libraries Should Manage Data provides detailed instructions on how to transform your operational data from a fog of disconnected, unreliable, and inaccessible information into an exemplar of best-practice data management. As with the human brain, most people use only a small fraction of Excel's true potential. Learn how to tap into a greater proportion of Excel's hidden power, and in the process transform your operational data into actionable business intelligence.
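The book itself works in Excel; as a rough analogue, the same tidy-then-summarize idea looks like this in Python with only the standard library, using invented columns and figures.

```python
# Turn a messy operational log into a monthly summary a decision-maker can
# act on. Column names and numbers are illustrative only.
import csv
import io
from collections import Counter

raw = io.StringIO(
    "date,branch,loans\n"
    "2024-03-02,Central,41\n"
    "2024-03-15,Central,37\n"
    "2024-03-09,Eastside,12\n"
)

monthly = Counter()
for row in csv.DictReader(raw):
    month = row["date"][:7]              # e.g. '2024-03'
    monthly[(month, row["branch"])] += int(row["loans"])

for (month, branch), total in sorted(monthly.items()):
    print(month, branch, total)
```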
High-performance computing (HPC) describes the use of connected computing units to perform complex tasks. It relies on parallelization techniques and algorithms to synchronize these disparate units so that they perform faster than a single processor could alone. Used in industries from medicine and research to military and higher education, this method of computing allows users to complete complex, data-intensive tasks. The field has undergone many changes over the past decade and will continue to grow in popularity in the coming years. Innovative Research Applications in Next-Generation High Performance Computing aims to address the future challenges, advances, and applications of HPC and related technologies. As the need for such processors increases, so does the importance of developing new ways to optimize the performance of these supercomputers. This timely publication provides comprehensive information for researchers, students in ICT, program developers, military and government organizations, and business professionals.
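As a small-scale stand-in for the parallelization HPC depends on, this sketch splits a compute-bound job across processes with Python's standard library; real HPC systems coordinate many nodes with MPI or similar, but the chunked-work idea is the same.

```python
# Split a data-intensive job into chunks and run them across processes.
from multiprocessing import Pool

def heavy(chunk):
    # Placeholder for a compute-bound kernel.
    return sum(x * x for x in chunk)

if __name__ == "__main__":
    data = list(range(1_000_000))
    chunks = [data[i::4] for i in range(4)]   # four interleaved slices
    with Pool(processes=4) as pool:
        partials = pool.map(heavy, chunks)    # run chunks concurrently
    print(sum(partials))                      # combine partial results
```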
Method engineering is a very young field. Generally, it ranges from engineering an entire methodology for information systems development to engineering modeling techniques according to project requirements. Computer-aided method engineering is about the generation and use of information systems design techniques according to user needs; such environments are sometimes called generic tools or MetaCASE. Computer-Aided Method Engineering: Designing Case Repositories for the 21st Century presents a methodology and architecture for a CASE repository, forwarding a theory that brings component-based development into CASE tool design and development and covering a repository construction principle for the 21st century.
You may like...
Fundamentals of Spatial Information… (Robert Laurini, Derek Thompson), Hardcover, R1,451 (Discovery Miles 14 510)
CompTIA Data+ DA0-001 Exam Cram (Akhil Behl, Sivasubramanian), Digital product license key, R1,024 (Discovery Miles 10 240)
Blockchain Life - Making Sense of the… (Kary Oberbrunner, Lee Richter), Hardcover, R506 (Discovery Miles 5 060)
Advances in the Convergence of… (Tiago M. Fernandez-Carames, Paula Fraga-Lamas), Hardcover, R2,555 (Discovery Miles 25 550)
Applied Big Data Analytics and Its Role… (Peng Zhao, Xin Wang, …), Hardcover, R6,648 (Discovery Miles 66 480)