In recent decades, the industrial revolution has driven economic growth even as it has contributed to global environmental problems such as climate change. Researchers emphasize the adoption of circular economy practices in global supply chains and businesses to improve socio-environmental sustainability without compromising economic growth. Integrating blockchain technology into business practices could promote both the circular economy and global environmental sustainability. Integrating Blockchain Technology Into the Circular Economy discusses the technological advancements in circular economy practices that deliver better results for both economic growth and environmental sustainability. It provides relevant theoretical frameworks and the latest empirical research findings on applications of blockchain technology. Covering topics such as big data analytics, financial market infrastructure, and sustainable performance, this book is an essential resource for managers, operations managers, executives, manufacturers, environmentalists, researchers, industry practitioners, students and educators of higher education, and academicians.
Data mapping in a data warehouse is the process of creating links between the tables and attributes of two distinct data models: the source and the target. Data mapping is required at many stages of the DW life cycle to help save processor overhead, and every stage has its own unique requirements and challenges. Therefore, many data warehouse professionals want to learn data mapping in order to move from an ETL (extract, transform, and load) developer role to a data modeler role. Data Mapping for Data Warehouse Design provides basic and advanced knowledge about business intelligence and data warehouse concepts, including real-life scenarios that apply the standard techniques to projects across various domains. After reading this book, readers will understand the importance of data mapping across the data warehouse life cycle.
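The book's subject can be made concrete with a small sketch. Below is an illustrative source-to-target mapping specification applied during the transform step of an ETL job; the table names, column names, and transforms are hypothetical examples, not taken from the book.

    # A mapping spec: source column -> (target column, transform applied in ETL).
    # All identifiers here are invented for illustration.
    CUSTOMER_MAPPING = {
        "src_crm.cust_fname": ("dw.dim_customer.first_name", str.strip),
        "src_crm.cust_lname": ("dw.dim_customer.last_name",  str.strip),
        "src_crm.cust_email": ("dw.dim_customer.email",      str.lower),
    }

    def apply_mapping(source_row, mapping):
        """Produce a target row from a source row using the mapping spec."""
        target_row = {}
        for src_col, (tgt_col, transform) in mapping.items():
            value = source_row.get(src_col)
            target_row[tgt_col] = transform(value) if value is not None else None
        return target_row

    row = {"src_crm.cust_fname": "  Ada ", "src_crm.cust_lname": "Lovelace",
           "src_crm.cust_email": "ADA@EXAMPLE.COM"}
    print(apply_mapping(row, CUSTOMER_MAPPING))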
Data Simplification: Taming Information With Open Source Tools addresses the simple fact that modern data is too big and complex to analyze in its native form. Data simplification is the process whereby large and complex data is rendered usable. Complex data must be simplified before it can be analyzed, but the process of data simplification is anything but simple, requiring a specialized set of skills and tools. This book provides data scientists from every scientific discipline with the methods and tools to simplify their data for immediate analysis or long-term storage in a form that can be readily repurposed or integrated with other data. Drawing upon years of practical experience, and using numerous examples and use cases, Jules Berman discusses: the principles, methods, and tools that must be studied and mastered to achieve data simplification; open source tools, free utilities, and snippets of code that can be reused and repurposed to simplify data; natural language processing and machine translation as tools to simplify data; and data summarization and visualization and the role they play in making data useful for the end user.
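As a hedged illustration of the kind of simplification step the book describes, the short Python sketch below normalizes free-form records and removes duplicates so the data can be analyzed or repurposed; the field names and rules are invented for the example, not drawn from Berman's text.

    import re

    records = [
        {"name": "Berman, Jules ", "year": "2016"},
        {"name": "berman, jules",  "year": "2016"},
        {"name": "Doe, Jane",      "year": "2015"},
    ]

    def normalize(rec):
        # Lowercase, collapse whitespace, coerce year to int.
        name = re.sub(r"\s+", " ", rec["name"]).strip().lower()
        return {"name": name, "year": int(rec["year"])}

    # Deduplicate on the normalized form.
    seen, simplified = set(), []
    for rec in map(normalize, records):
        key = (rec["name"], rec["year"])
        if key not in seen:
            seen.add(key)
            simplified.append(rec)

    print(simplified)   # two unique records remain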
The effective application of knowledge management principles has proven beneficial for modern organizations. When utilized in the academic community, these frameworks can enhance the value and quality of research initiatives. Enhancing Academic Research With Knowledge Management Principles is a pivotal reference source for the latest research on implementing theoretical frameworks of information management in the context of academia and universities. Featuring extensive coverage of relevant areas such as data mining and organizational and academic culture, this publication is an ideal resource for researchers, academics, practitioners, professionals, and students.
The world is witnessing the growth of a global movement facilitated by technology and social media. Fueled by information, this movement holds enormous potential to create more accountable, efficient, responsive, and effective governments and businesses, as well as to spur economic growth. Big Data Governance and Perspectives in Knowledge Management is a collection of innovative research on methods for applying robust processes around data and aligning organizations and skillsets around those processes. Highlighting a range of topics including data analytics, prediction analysis, and software development, this book is ideally designed for academicians, researchers, information science professionals, software developers, computer engineers, graduate-level computer science students, policymakers, and managers seeking current research on the convergence of big data and information governance as two major trends in information management.
Formative Assessment, Learning Data Analytics and Gamification: In ICT Education discusses the challenges of assessing student progress given the explosion of e-learning environments, such as MOOCs and online courses that incorporate activities such as design and modeling. This book shows educators how to effectively garner intelligent data from online educational environments that combine assessment and gamification. Used effectively, this data can have a positive impact on learning environments and can be used for building learner profiles, for community building, and as a tactic for creating collaborative teams. Using numerous illustrative examples and theoretical and practical results, leading international experts discuss: the application of automatic techniques for e-assessment of learning activities; methods to collect, analyze, and correctly visualize learning data in educational environments; the applications, benefits, and challenges of using gamification techniques in academic contexts; and solutions and strategies for increasing student participation and performance.
Faced with the exponential development of Big Data and both its legal and economic repercussions, we are still slightly in the dark concerning the use of digital information. In the perpetual balance between confidentiality and transparency, this data will lead us to call into question how we understand certain paradigms, such as the Hippocratic Oath in medicine. As a consequence, a reflection on the risks associated with the ethical issues surrounding the design and manipulation of this "massive data" seems essential. This book gives direction and ethical value to these significant volumes of data. It proposes an ethical analysis model and recommendations to better keep this data in check. This empirical and ethico-technical approach brings together the first elements of a moral framework directed toward the thought, conscience, and responsibility of citizens concerned by the use of data of a personal nature.
Advances in Computers carries on a tradition of excellence, presenting detailed coverage of innovations in computer hardware, software, theory, design, and applications. The book provides contributors with a medium in which they can explore their subjects in greater depth and breadth than journal articles typically allow. The articles included in this book will become standard references, with lasting value in this rapidly expanding field.
"What information do these data reveal?" "Is the information correct?" "How can I make the best use of the information?" The widespread use of computers and our reliance on the data generated by them have made these questions increasingly common and important. Computerized data may be in either digital or analog form and may be relevant to a wide range of applications that include medical monitoring and diagnosis, scientific research, engineering, quality control, seismology, meteorology, political and economic analysis and business and personal financial applications. The sources of the data may be databases that have been developed for specific purposes or may be of more general interest and include those that are accessible on the Internet. In addition, the data may represent either single or multiple parameters. Examining data in its initial form is often very laborious and also makes it possible to "miss the forest for the trees" by failing to notice patterns in the data that are not readily apparent. To address these problems, this monograph describes several accurate and efficient methods for displaying, reviewing and analyzing digital and analog data. The methods may be used either singly or in various combinations to maximize the value of the data to those for whom it is relevant. None of the methods requires special devices and each can be used on common platforms such as personal computers, tablets and smart phones. Also, each of the methods can be easily employed utilizing widely available off-the-shelf software. Using the methods does not require special expertise in computer science or technology, graphical design or statistical analysis. The usefulness and accuracy of all the described methods of data display, review and interpretation have been confirmed in multiple carefully performed studies using independent, objective endpoints. These studies and their results are described in the monograph. Because of their ease of use, accuracy and efficiency, the methods for displaying, reviewing and analyzing data described in this monograph can be highly useful to all who must work with computerized information and make decisions based upon it.
Data is powerful. It separates leaders from laggards, and it drives business disruption, transformation, and reinvention. Today's most progressive companies are using the power of data to propel their industries into new areas of innovation, specialization, and optimization. The horsepower of new tools and technologies has provided more opportunities than ever to harness, integrate, and interact with massive amounts of disparate data for business insights and value, something that will only continue in the era of the Internet of Things. And, as a new breed of tech-savvy and digitally native knowledge workers rises to the ranks of data scientist and visual analyst, the needs and demands of the people working with data are changing, too. The world of data is changing fast, and it is becoming more visual. Visual insights are becoming increasingly dominant in information management, and with the reinvigorated role of data visualization, this imperative is a driving force in creating a visual culture of data discovery. The traditional standards of data visualization are making way for richer, more robust, and more advanced visualizations and new ways of seeing and interacting with data. Data visualization is a critical tool for exploring and understanding bigger, more diverse, and more dynamic data. By understanding and embracing our human hardwiring for visual communication and storytelling, and by properly incorporating key design principles and evolving best practices, we can take the next step forward and transform data visualizations from tools into unique visual information assets.
The WWW era has made billions of people dramatically dependent on the progress of data technologies, of which Internet search and Big Data are arguably the most notable. The Structured Search paradigm connects them via the fundamental concept of key-objects, which evolve out of keywords as the units of search. The key-object data model and KeySQL revamp the data independence principle, making it applicable to Big Data, and complement NoSQL with full-blown structured querying functionality. The ultimate goal is extracting Big Information from the Big Data. As a Big Data consultant, Mikhail Gilula combines an academic background with 20 years of industry experience in database and data warehousing technologies, having worked as a Sr. Data Architect for Teradata, Alcatel-Lucent, and PayPal, among others. He has authored three books, including The Set Model for Database and Information Systems, and holds four US patents in Structured Search and Data Integration.
Learn how to create, train, and tweak large language models (LLMs) by building one from the ground up! In Build a Large Language Model (from Scratch), bestselling author Sebastian Raschka guides you step by step through creating your own LLM. Each stage is explained with clear text, diagrams, and examples. You’ll go from the initial design and creation, to pretraining on a general corpus, and on to fine-tuning for specific tasks.
Build a Large Language Model (from Scratch) takes you inside the AI black box to tinker with the internal systems that power generative AI. As you work through each key stage of LLM creation, you’ll develop an in-depth understanding of how LLMs work, their limitations, and their customization methods. Your LLM can be developed on an ordinary laptop, and used as your own personal assistant.
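For a taste of what building an LLM from scratch involves, here is a minimal sketch of scaled dot-product self-attention, the core operation inside a transformer block. It uses plain NumPy for self-containment (the book itself works in PyTorch) and omits multi-head structure and any trainable components beyond random projection matrices.

    import numpy as np

    rng = np.random.default_rng(0)
    seq_len, d_model = 4, 8                     # toy sizes
    x = rng.normal(size=(seq_len, d_model))    # token embeddings

    W_q, W_k, W_v = (rng.normal(size=(d_model, d_model)) for _ in range(3))
    Q, K, V = x @ W_q, x @ W_k, x @ W_v

    scores = Q @ K.T / np.sqrt(d_model)        # pairwise token similarities
    mask = np.triu(np.ones((seq_len, seq_len)), k=1).astype(bool)
    scores[mask] = -np.inf                     # causal mask: no peeking ahead

    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over each row
    output = weights @ V                       # context-aware representations
    print(output.shape)                        # (4, 8)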
MESH is a mathematical video about polyhedral meshes and their role in geometry, numerical analysis, and computer graphics. Generated entirely by computer using the latest technology, the film spans an arc from ancient Greek mathematics to the field of modern geometric modeling. MESH has won numerous scientific awards worldwide. The authors are Konrad Polthier, a professor of mathematics, and Beau Janzen, a professional film director. The film is an excellent teaching aid for courses in geometry, visualization, scientific computing, and geometric modeling at universities and scientific computing centers, but it can also be used in schools.
Analyzing data sets has continued to be an invaluable application for numerous industries. By combining different algorithms, technologies, and systems used to extract information from data and solve complex problems, various sectors have reached new heights and have changed our world for the better. The Handbook of Research on Engineering, Business, and Healthcare Applications of Data Science and Analytics is a collection of innovative research on the methods and applications of data analytics. While highlighting topics including artificial intelligence, data security, and information systems, this book is ideally designed for researchers, data analysts, data scientists, healthcare administrators, executives, managers, engineers, IT consultants, academicians, and students interested in the potential of data application technologies.
Internet usage has become a normal and essential aspect of everyday life. Due to the immense amount of information available on the web, it has become necessary to find ways to sift through and categorize the overload of data while weeding out redundant material. Collaborative Filtering Using Data Mining and Analysis evaluates the latest patterns and trending topics in the utilization of data mining tools and filtering practices. Featuring emergent research and optimization techniques in the areas of opinion mining, text mining, and sentiment analysis, as well as their various applications, this book is an essential reference source for researchers and engineers interested in collaborative filtering.
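To make the book's central technique concrete, here is a hedged sketch of user-based collaborative filtering: predict a user's rating for an unseen item as a similarity-weighted average of other users' ratings. The ratings matrix is a made-up toy example (0 means unrated), not data from the book.

    import numpy as np

    ratings = np.array([
        [5, 4, 0, 1],   # user 0
        [4, 5, 0, 1],   # user 1 (similar tastes to user 0)
        [1, 1, 5, 4],   # user 2
    ], dtype=float)

    def cosine_sim(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9)

    def predict(user, item):
        """Similarity-weighted average of other users' ratings for the item."""
        num = den = 0.0
        for other in range(ratings.shape[0]):
            if other == user or ratings[other, item] == 0:
                continue
            sim = cosine_sim(ratings[user], ratings[other])
            num += sim * ratings[other, item]
            den += abs(sim)
        return num / den if den else 0.0

    print(round(predict(0, 2), 2))  # user 0's predicted rating for item 2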
The aim of the book is to help students become data scientists. Since this requires a series of courses over a considerable period of time, the book is intended to accompany students from the beginning through to an advanced understanding of the knowledge and skills that define a modern data scientist. The book presents a comprehensive overview of the mathematical foundations of the programming language R and of its applications to data science.
Research in the domains of learning analytics and educational data mining has prototyped an approach where methodologies from data science and machine learning are used to gain insights into the learning process by using large amounts of data. As many training and academic institutions are maturing in their data-driven decision making, useful, scalable, and interesting trends are emerging. Organizations can benefit from sharing information on those efforts. Applying Data Science and Learning Analytics Throughout a Learner's Lifespan examines novel and emerging applications of data science and sister disciplines for gaining insights from data to inform interventions into learners' journeys and interactions with academic institutions. Data is collected at various times and places throughout a learner's lifecycle, and the learners and the institution should benefit from the insights and knowledge gained from this data. Covering topics such as learning analytics dashboards, text network analysis, and employment recruitment, this book is an indispensable resource for educators, computer scientists, faculty of higher education, government officials, educational administration, students of higher education, pre-service teachers, business professionals, researchers, and academicians.
Have you ever looked at your library's key performance indicators and said to yourself "so what!"? Have you found yourself making decisions in a void due to the lack of useful and easily accessible operational data? Have you ever worried that you are being left behind with the emergence of data analytics? Do you feel there are important stories in your operational data that need to be told, but you have no idea how to find them? If you answered yes to any of these questions, then this book is for you. How Libraries Should Manage Data provides detailed instructions on how to transform your operational data from a fog of disconnected, unreliable, and inaccessible information into an exemplar of best-practice data management. As with the human brain, most people use only a very small fraction of Excel's true potential. Learn how to tap into a greater proportion of Excel's hidden power, and in the process transform your operational data into actionable business intelligence.
Due to the tremendous amount of data generated daily in fields such as business, research, and science, big data is everywhere. Alternative management and processing methods therefore have to be created to handle this volume of complex and unstructured data. Big Data Management, Technologies, and Applications discusses the exponential growth of information and innovative methods for data capture, storage, sharing, and analysis for big data. This collection of articles on big data methodologies and technologies is beneficial for IT workers, researchers, students, and practitioners in this timely field.
High-performance computing (HPC) describes the use of connected computing units to perform complex tasks. It relies on parallelization techniques and algorithms to synchronize these disparate units so that they perform faster than a single processor could alone. Used in industries from medicine and research to military and higher education, this method of computing allows users to complete complex, data-intensive tasks. The field has undergone many changes over the past decade and will continue to grow in popularity in the coming years. Innovative Research Applications in Next-Generation High Performance Computing aims to address the future challenges, advances, and applications of HPC and related technologies. As the need for such processors increases, so does the importance of developing new ways to optimize the performance of these supercomputers. This timely publication provides comprehensive information for researchers, students in ICT, program developers, military and government organizations, and business professionals.
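The parallelization idea behind HPC can be illustrated with a toy divide-compute-combine sketch. Real HPC codes rely on MPI, OpenMP, or GPU kernels; the Python process pool below merely demonstrates the pattern of splitting a data-intensive task across processing units and combining the partial results.

    from concurrent.futures import ProcessPoolExecutor

    def partial_sum(bounds):
        # Each worker computes the sum of squares over its own slice.
        lo, hi = bounds
        return sum(i * i for i in range(lo, hi))

    if __name__ == "__main__":
        n, workers = 10_000_000, 4
        step = n // workers
        chunks = [(i * step, (i + 1) * step) for i in range(workers)]
        with ProcessPoolExecutor(max_workers=workers) as pool:
            total = sum(pool.map(partial_sum, chunks))   # combine partials
        print(total)  # sum of squares below n, computed in parallel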
Increasingly, human beings are sensors engaging directly with the mobile Internet. Individuals can now share real-time experiences at an unprecedented scale. Social Sensing: Building Reliable Systems on Unreliable Data looks at recent advances in the emerging field of social sensing, emphasizing the key problem faced by application designers: how to extract reliable information from data collected from largely unknown and possibly unreliable sources. The book explains how a myriad of societal applications can be derived from this massive amount of data collected and shared by average individuals. The title offers theoretical foundations to support emerging data-driven cyber-physical applications and touches on key issues such as privacy. The authors present solutions based on recent research and novel ideas that leverage techniques from cyber-physical systems, sensor networks, machine learning, data mining, and information fusion.
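The key problem the book names, extracting reliable information from unreliable sources, can be sketched with a simple iterative truth-discovery heuristic in which claim confidence and source reliability reinforce each other. This is an illustrative corroboration scheme with invented observations, not the authors' specific algorithm.

    # Hypothetical observations: which sources assert which claims.
    claims_by_source = {
        "witness_a": {"road_blocked", "power_out"},
        "witness_b": {"road_blocked"},
        "witness_c": {"road_blocked", "power_out"},
        "bot_x": {"road_clear"},
    }
    claims = set().union(*claims_by_source.values())

    reliability = {s: 1.0 for s in claims_by_source}
    for _ in range(20):
        # A claim gains confidence from the combined reliability of its supporters.
        confidence = {c: sum(r for s, r in reliability.items()
                             if c in claims_by_source[s])
                      for c in claims}
        top = max(confidence.values())
        confidence = {c: v / top for c, v in confidence.items()}  # normalize
        # A source is as reliable as the average confidence of its claims.
        reliability = {s: sum(confidence[c] for c in cs) / len(cs)
                       for s, cs in claims_by_source.items()}

    # Widely corroborated claims rise; the lone dissenting source sinks.
    print({c: round(v, 2) for c, v in sorted(confidence.items())})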
Across numerous industries in modern society, there is a constant need to gather precise and relevant data efficiently and quickly. As such, it is imperative to research new methods and approaches to increase productivity in these areas. Next-Generation Information Retrieval and Knowledge Resources Management is a key source on the latest advancements in multidisciplinary research methods and applications and examines effective techniques for managing and utilizing information resources. Featuring extensive coverage across a range of relevant perspectives and topics, such as knowledge discovery, spatial indexing, and data mining, this book is ideally designed for researchers, graduate students, academics, and industry professionals seeking ways to optimize knowledge management processes.
You may like...
BTEC Nationals Information Technology… by Jenny Phillips, Alan Jarvis, … (Paperback, R1,018)
Demystifying Graph Data Science - Graph… by Pethuru Raj, Abhishek Kumar, … (Hardcover)
Management Of Information Security by Michael Whitman, Herbert Mattord (Paperback)
Mathematical Methods in Data Science by Jingli Ren, Haiyan Wang (Paperback, R3,925)
Fundamentals of Spatial Information… by Robert Laurini, Derek Thompson (Hardcover, R1,451)