As enterprise access networks evolve with larger numbers of mobile users, a wide range of devices and new cloud-based applications, managing user performance on an end-to-end basis has become challenging. Recent advances in big data network analytics combined with AI and cloud computing are being leveraged to tackle this growing problem. AI is becoming further integrated with the software that manages networks, storage, and compute. This edited book focuses on how new network analytics, IoT and cloud computing platforms are being used to ingest, analyse and correlate a myriad of big data across the entire network stack in order to increase quality of service and quality of experience (QoS/QoE) and to improve network performance. Beginning with big data and AI analytical techniques for handling the huge amounts of data generated by IoT devices, the authors cover cloud storage optimization, the design of next-generation access protocols and Internet architecture, and fault tolerance and reliability in intelligent networks, and discuss a range of emerging applications. This book will be useful to researchers, scientists, engineers, professionals, advanced students and faculty members in ICTs, data science, networking, AI, machine learning and sensing. It will also be of interest to professionals in data science, AI, cloud and IoT start-up companies, as well as developers and designers.
SQL Clearly Explained, Third Edition, provides an in-depth introduction to using SQL (Structured Query Language). Readers will learn not only SQL syntax, but also how SQL works. Although the core of the SQL language remains relatively unchanged, the most recent release of the SQL standard (SQL:2008) includes two sets of extensions: 1) support for object-relational databases and 2) support for XML. As a result, the set of standard SQL commands has been greatly extended, and this new edition takes that into account. This new edition includes updated tips and tricks to reflect the current concepts of the SQL and XML standards; several new chapters covering the object-relational and XML extensions; and an ancillary package that includes case studies, a syllabus, exams and more. This book is intended for working SQL programmers, database administrators, database designers, database analysts, and application system developers, as well as those who are developing new features for database management systems and want to know about user needs. This includes anyone working with electronic content not only in the relational database context, but also with XML, Web services, etc.
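To make the core language concrete, here is a minimal sketch of standard SQL issued through Python's built-in sqlite3 module. The table, columns, and data are invented for illustration, and SQLite implements core SQL rather than the SQL:2008 object-relational or XML extensions the book also covers.

```python
# A minimal, illustrative SQL session using Python's standard library.
# Schema and rows are hypothetical examples, not taken from the book.
import sqlite3

conn = sqlite3.connect(":memory:")  # throwaway in-memory database
cur = conn.cursor()

cur.execute(
    "CREATE TABLE employees (id INTEGER PRIMARY KEY, name TEXT, dept TEXT, salary REAL)"
)
cur.executemany(
    "INSERT INTO employees (name, dept, salary) VALUES (?, ?, ?)",
    [("Ada", "Engineering", 95000.0),
     ("Grace", "Engineering", 105000.0),
     ("Edgar", "Research", 88000.0)],
)

# A standard aggregate query: average salary per department.
for dept, avg_salary in cur.execute(
    "SELECT dept, AVG(salary) FROM employees GROUP BY dept ORDER BY dept"
):
    print(dept, round(avg_salary, 2))

conn.close()
```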
The long-standing debate on public vs. private healthcare systems has forced an examination of these organisations, in particular whether these approaches play complementary or conflicting roles in service to global citizens. Healthcare Management and Economics: Perspectives on Public and Private Administration discusses public and private healthcare organisations by gathering perspectives on the differences in service, management, delivery, and efficiency. Highlighting the impact of citizens and information technology in these healthcare processes, this book is a vital collection of research for practitioners, academics, and scholars in the healthcare management field.
Education and research in the field of database technology can prove problematic without the proper resources and tools on the most relevant issues, trends, and advancements. Selected Readings on Database Technologies and Applications supplements course instruction and student research with quality chapters focused on key issues concerning the development, design, and analysis of databases. Containing over 30 chapters from authors across the globe, these selected readings in areas such as data warehousing, information retrieval, and knowledge discovery depict the most relevant and important areas of classroom discussion within the categories of Fundamental Concepts and Theories; Development and Design Methodologies; Tools and Technologies; Application and Utilization; Critical Issues; and Emerging Trends.
Big data consists of data sets that are too large and complex for traditional data processing and data management applications. Therefore, to obtain the valuable information within the data, one must use a variety of innovative analytical methods, such as web analytics, machine learning, and network analytics. As the study of big data becomes more popular, there is an urgent demand for studies on high-level computational intelligence and computing services for analyzing this significant area of information science. Big Data Analytics for Sustainable Computing is a collection of innovative research that focuses on new computing and system development issues in emerging sustainable applications. Featuring coverage on a wide range of topics such as data filtering, knowledge engineering, and cognitive analytics, this publication is ideally designed for data scientists, IT specialists, computer science practitioners, computer engineers, academicians, professionals, and students seeking current research on emerging analytical techniques and data processing software.
As data mining is one of the most rapidly changing disciplines, with new technologies and concepts continually under development, academicians, researchers, and professionals of the discipline need access to the most current information about the concepts, issues, trends, and technologies in this emerging field. "Social Implications of Data Mining and Information Privacy: Interdisciplinary Frameworks and Solutions" serves as a critical source of information related to emerging issues and solutions in data mining and the influence of political and socioeconomic factors. An immense breakthrough, this essential reference provides concise coverage of emerging issues and technological solutions in data mining, and covers problems with applicable laws governing such issues.
Big data has presented a number of opportunities across industries. With these opportunities come a number of challenges associated with handling, analyzing, and storing large data sets. One solution to this challenge is cloud computing, which supports a massive storage and computation facility in order to accommodate big data processing. Managing and Processing Big Data in Cloud Computing explores the challenges of supporting big data processing and cloud-based platforms as a proposed solution. Emphasizing a number of crucial topics such as data analytics, wireless networks, mobile clouds, and machine learning, this publication meets the research needs of data analysts, IT professionals, researchers, graduate students, and educators in the areas of data science, computer programming, and IT development.
Without mathematics no science would survive. This especially applies to the engineering sciences, which depend highly on the applications of mathematics and mathematical tools such as optimization techniques, finite element methods, differential equations, fluid dynamics, mathematical modelling, and simulation. Neither optimization in engineering, nor the performance of safety-critical systems and system security, nor high-assurance software architecture and design would be possible without the development of mathematical applications. The De Gruyter Series on the Applications of Mathematics in Engineering and Information Sciences (AMEIS) focusses on the latest applications of engineering and information technology that are possible only with the use of mathematical methods. By identifying the gaps in knowledge of engineering applications, the AMEIS series fosters the international interchange between the sciences and keeps the reader informed about the latest developments.
Organizations that utilize data mining techniques can amass valuable information on clients' habits and preferences, behavior patterns, purchase patterns, sales patterns, and stock forecasts. Ethical Data Mining Applications for Socio-Economic Development provides an overview of data mining techniques under an ethical lens, investigating developments in research and best practices, while evaluating experimental cases to identify potential ethical dilemmas in the information and communications technology sector. The cases and research in this book will benefit scientists, researchers, and practitioners working in the field of data mining, data warehousing, and database management to ensure that data obtained through web-based investigations is properly handled at all organizational levels. This book is part of the Advances in Data Mining and Database Management series collection.
This book brings all of the elements of data mining together in a single volume, saving the reader the time and expense of making multiple purchases. It consolidates both introductory and advanced topics, thereby covering the gamut of data mining and machine learning tactics: from data integration and pre-processing, to fundamental algorithms, to optimization techniques and web mining methodology.
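As a concrete taste of the pre-processing stage mentioned above, here is a minimal sketch of min-max normalization, one common way to rescale a numeric feature before mining; the function and data are illustrative, not drawn from the book.

```python
# Min-max normalization: rescale a numeric feature to the [0, 1] range.
# A common pre-processing step before distance-based mining algorithms.

def min_max_normalize(values):
    """Linearly map the minimum to 0.0 and the maximum to 1.0."""
    lo, hi = min(values), max(values)
    if hi == lo:
        return [0.0 for _ in values]  # a constant feature carries no signal
    return [(v - lo) / (hi - lo) for v in values]

print(min_max_normalize([3, 6, 9]))  # [0.0, 0.5, 1.0]
```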
This book brings all of the elements of database design together in a single volume, saving the reader the time and expense of making multiple purchases. It consolidates both introductory and advanced topics, thereby covering the gamut of database design methodology: from ER and UML techniques, to conceptual data modeling and table transformation, to storing XML and querying moving objects databases.
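As one small, hedged example of the table transformation step named above, the sketch below maps a one-to-many ER relationship (a department employs many employees) onto relational tables with a foreign key; the schema is invented for illustration.

```python
# One classic ER-to-table transformation: a one-to-many relationship
# becomes a foreign key column on the "many" side. Names are hypothetical.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE department (
    dept_id INTEGER PRIMARY KEY,
    name    TEXT NOT NULL
);
CREATE TABLE employee (
    emp_id  INTEGER PRIMARY KEY,
    name    TEXT NOT NULL,
    dept_id INTEGER NOT NULL REFERENCES department(dept_id)  -- the relationship
);
""")
conn.close()
```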
This is an overview of the end-to-end data cleaning process. Data quality is one of the most important problems in data management, since dirty data often leads to inaccurate data analytics results and incorrect business decisions. Poor data across businesses and the U.S. government is reported to cost trillions of dollars a year. Multiple surveys show that dirty data is the most common barrier faced by data scientists. Not surprisingly, developing effective and efficient data cleaning solutions is challenging and rife with deep theoretical and engineering problems. This book is about data cleaning, a term used to refer to all of the tasks and activities involved in detecting and repairing errors in data. Rather than focusing on a particular data cleaning task, the book describes various error detection and repair methods, and attempts to anchor these proposals with multiple taxonomies and views. Specifically, it covers four of the most common and important data cleaning tasks: outlier detection, data transformation, error repair (including imputing missing values), and data deduplication. Furthermore, due to the increasing popularity and applicability of machine learning techniques, it includes a chapter that specifically explores how machine learning techniques are used for data cleaning, and how data cleaning is used to improve machine learning models. This book is intended to serve as a useful reference for researchers and practitioners who are interested in the area of data quality and data cleaning. It can also be used as a textbook for a graduate course. Although we aim to cover state-of-the-art algorithms and techniques, we recognize that data cleaning is still an active field of research and therefore provide future directions of research whenever appropriate.
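To make two of those four tasks concrete, here is a minimal sketch of robust outlier detection (a median-based modified z-score) and exact-duplicate removal; the threshold and data are illustrative and are not taken from the book.

```python
# Two illustrative data cleaning steps: outlier detection and deduplication.
from statistics import median

def mad_outliers(values, threshold=3.5):
    """Flag values whose modified z-score, based on the median absolute
    deviation (a robust spread estimate), exceeds `threshold`."""
    med = median(values)
    mad = median(abs(v - med) for v in values)
    if mad == 0:
        return []
    return [v for v in values if 0.6745 * abs(v - med) / mad > threshold]

def deduplicate(records):
    """Drop exact duplicate records while preserving first-seen order."""
    seen, unique = set(), []
    for rec in records:
        if rec not in seen:
            seen.add(rec)
            unique.append(rec)
    return unique

readings = [10.1, 9.8, 10.3, 9.9, 10.0, 97.5]  # 97.5 looks like a sensor error
print(mad_outliers(readings))                   # [97.5]
print(deduplicate([("Ada", 1815), ("Ada", 1815), ("Alan", 1912)]))
```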
Contrary to popular belief, there has never been any shortage of Macintosh-related security issues. OS 9 had issues that warranted attention. However, due to both ignorance and a lack of research, many of these issues never saw the light of day. No solid techniques were published for executing arbitrary code on OS 9, and there are no notable legacy Macintosh exploits. Due to the combined lack of obvious vulnerabilities and accompanying exploits, Macintosh appeared to be a solid platform. Threats to Macintosh's OS X operating system are increasing in sophistication and number. Whether it is the exploitation of an increasing number of holes, the use of rootkits for post-compromise concealment, or distributed denial of service, knowing why the system is vulnerable and understanding how to defend it is critical to computer security.
Offering a structured approach to handling and recovering from a catastrophic data loss, this book will help both technical and non-technical professionals put effective processes in place to secure their business-critical information and provide a roadmap of the appropriate recovery and notification steps when calamity strikes.
Daily procedures such as scientific experiments and business processes have the potential to create a huge amount of data every day, hour, or even second, and this may lead to a major problem for the future of efficient data search and retrieval as well as secure data storage for the world's scientists, engineers, doctors, librarians, and business managers. Design, Performance, and Analysis of Innovative Information Retrieval examines a number of emerging technologies that significantly contribute to modern Information Retrieval (IR), as well as fundamental IR theories and concepts that have been adopted into new tools or systems. This reference is essential to researchers, educators, professionals, and students interested in the future of IR.
The series Contemporary Perspectives on Data Mining is composed of blind-refereed scholarly research on the methods and applications of data mining. The series is targeted at both the academic community and the business practitioner. Data mining seeks to discover knowledge from vast amounts of data through the use of statistical and mathematical techniques. The knowledge is extracted by examining the patterns in the data, whether they be associations of groups or things, predictions, sequential relationships between time-ordered events, or natural groups. Data mining applications are found in finance (banking, brokerage, and insurance), marketing (customer relationships, retailing, logistics, and travel), as well as in manufacturing, health care, fraud detection, homeland security, and law enforcement.
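As a small, hedged illustration of the "associations" pattern type, the sketch below counts how often pairs of items co-occur across transactions, the support-counting step at the heart of classic market-basket mining; the baskets and threshold are invented for illustration.

```python
# Counting co-occurring item pairs: the support-counting core of
# association (market-basket) mining. Data is hypothetical.
from collections import Counter
from itertools import combinations

transactions = [
    {"bread", "milk"},
    {"bread", "butter", "milk"},
    {"beer", "bread"},
    {"butter", "milk"},
]

min_support = 2  # a pair is "frequent" if it appears in at least 2 baskets
pair_counts = Counter(
    pair for basket in transactions for pair in combinations(sorted(basket), 2)
)
frequent = {pair: n for pair, n in pair_counts.items() if n >= min_support}
print(frequent)  # {('bread', 'milk'): 2, ('butter', 'milk'): 2}
```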
Research and development surrounding the use of data queries is receiving increased attention from computer scientists and data specialists alike. Through the use of query technology, large volumes of data in databases can be retrieved, and information systems built on databases can support problem solving and decision making across industries. The Handbook of Research on Innovative Database Query Processing Techniques focuses on the growing topic of database query processing methods, technologies, and applications. Aimed at providing an all-inclusive reference source of technologies and practices in advanced database query systems, this book investigates various techniques, including database and XML queries, spatiotemporal data queries, big data queries, metadata queries, and applications of database query systems, alongside the latest research on information retrieval, data extraction, data management, and the design and development of database queries. This comprehensive handbook is a necessary resource for students, IT professionals, data analysts, and academicians interested in uncovering the latest methods for using queries as a means to extract information from databases.
Electronic discovery refers to a process in which electronic data is sought, located, secured, and searched with the intent of using it as evidence in a legal case. Computer forensics is the application of computer investigation and analysis techniques to perform an investigation to find out exactly what happened on a computer and who was responsible. IDC estimates that the U.S. market for computer forensics will grow from $252 million in 2004 to $630 million by 2009. Business is strong outside the United States as well: by 2011, the estimated international market will be $1.8 billion. The Techno Forensics Conference increased in size by almost 50% in its second year, another example of the rapid growth in the market.
Digital audio, video, images, and documents are flying through cyberspace to their respective owners. Unfortunately, along the way, individuals may choose to intervene and take this content for themselves. Digital watermarking and steganography technology greatly reduces the instances of this by limiting or eliminating the ability of third parties to decipher the content that they have taken. The many techniques of digital watermarking (embedding a code) and steganography (hiding information) continue to evolve as the applications that necessitate them do the same. The authors of this second edition provide an update on the framework for applying these techniques that they provided researchers and professionals in the first, well-received edition. Steganography and steganalysis (the art of detecting hidden information) have been added to a robust treatment of digital watermarking, as many in each field research and deal with the other. New material includes watermarking with side information, QIM, and dirty-paper codes. The revision and inclusion of new material by these influential authors has created a must-own book for anyone in this profession.
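To ground the two terms, here is a minimal sketch of the steganographic idea of hiding information by embedding message bits in the least significant bits of cover data; real watermarking and steganography systems operate on images or audio with far more robust encodings, so this is illustrative only.

```python
# Toy least-significant-bit (LSB) steganography on raw bytes.
# Real systems hide data in images or audio; this only shows the idea.

def embed(cover: bytes, message: bytes) -> bytes:
    """Hide each bit of `message` in the LSB of successive cover bytes."""
    bits = [(byte >> i) & 1 for byte in message for i in range(7, -1, -1)]
    if len(bits) > len(cover):
        raise ValueError("cover too small for message")
    stego = bytearray(cover)
    for i, bit in enumerate(bits):
        stego[i] = (stego[i] & 0xFE) | bit  # overwrite the lowest bit
    return bytes(stego)

def extract(stego: bytes, n_bytes: int) -> bytes:
    """Reassemble `n_bytes` of hidden message from the LSBs."""
    out = bytearray()
    for i in range(n_bytes):
        byte = 0
        for b in stego[8 * i : 8 * i + 8]:
            byte = (byte << 1) | (b & 1)
        out.append(byte)
    return bytes(out)

cover = bytes(range(64))           # stand-in for pixel or sample data
stego = embed(cover, b"hi")
assert extract(stego, 2) == b"hi"  # message recovered; cover barely changed
```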
You may like...
Advances and Trends in Geodesy… by Sona Molcikova, Viera Hurcikova, … (Hardcover, R3,509)
Analogy in Grammar - Form and… by James P. Blevins, Juliette Blevins (Hardcover, R3,534)
Artificial Intelligence and Dynamic… by Alexej Gvishiani, Jacques O. Dubois (Hardcover, R4,208)
Annotated Bibliography of Scholarship in… by Tony Silva, Colleen Brice, … (Hardcover)
Design Of Mission Operations Systems For… by Stephen D. Wall, Kenneth W. Ledbetter (Hardcover, R6,751)