High Performance Data Mining: Scaling Algorithms, Applications and Systems brings together in one place important contributions and up-to-date research results in this fast-moving area, and serves as an excellent reference, providing insight into some of the most challenging research issues in the field.
This is a compilation of papers presented at the Information System Concepts conference in Marburg, Germany. The special focus is consolidation and harmonisation of the numerous and widely diverging views in the field of information systems. This issue has become a hot topic, as many leading information systems researchers and practitioners have come to realise the importance of better communication among the members of the information systems community, and of a better scientific foundation for this rapidly evolving field.
This book provides a new direction in the field of nano-optics and nanophotonics from information- and computing-related science and technology. Entitled "Information Physics and Computing in Nanoscale Photonics and Materials", IPCN for short, the book aims to bring together recent progress at the intersection of nanoscale photonics, information, and enabling technologies. Topics include (1) an overview of information physics in nanophotonics, (2) DNA self-assembled nanophotonic systems, (3) functional molecular sensing, (4) smart fold computing, an architecture for nanophotonics, (5) semiconductor nanowires and their photonic applications, (6) single-photoelectron manipulation in imaging sensors, (7) hierarchical nanophotonic systems, (8) photonic neuromorphic computing, and (9) SAT solvers and decision making based on nanophotonics.
Current database technology and computer hardware allow us to gather, store, access, and manipulate massive volumes of raw data in an efficient and inexpensive manner. In addition, the amount of data collected and warehoused in all industries is growing every year at a phenomenal rate. Nevertheless, our ability to discover critical, non-obvious nuggets of useful information in data that could influence or help in the decision-making process is still limited. Knowledge Discovery in Databases (KDD) and Data Mining (DM) together form a new, multidisciplinary field that focuses on the overall process of information discovery from large volumes of data. The field combines database concepts and theory, machine learning, pattern recognition, statistics, artificial intelligence, uncertainty management, and high-performance computing. To remain competitive, businesses must apply data mining techniques such as classification, prediction, and clustering, using tools such as neural networks, fuzzy logic, and decision trees, to facilitate making strategic decisions on a daily basis. Knowledge Discovery for Business Information Systems contains a collection of 16 high-quality articles written by experts in the KDD and DM field from the following countries: Austria, Australia, Bulgaria, Canada, China (Hong Kong), Estonia, Denmark, Germany, Italy, Poland, Singapore and the USA.
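Clustering, one of the techniques named above, can be sketched in a few lines. The following is a minimal, illustrative 1-D k-means in pure Python; the data, function name, and starting centroids are all invented for the example, not taken from the book:

```python
# Minimal sketch of k-means clustering on 1-D points (illustrative only).
def kmeans_1d(points, centroids, iterations=10):
    """Assign each point to its nearest centroid, then recompute each
    centroid as its cluster's mean; repeat a fixed number of times."""
    for _ in range(iterations):
        clusters = {c: [] for c in centroids}
        for p in points:
            nearest = min(centroids, key=lambda c: abs(p - c))
            clusters[nearest].append(p)
        centroids = [sum(m) / len(m) for m in clusters.values() if m]
    return sorted(centroids)

data = [1.0, 1.2, 0.8, 9.0, 9.5, 10.1]
print(kmeans_1d(data, [0.0, 5.0]))  # two clusters, around ~1 and ~9.5
```

Real data mining tools add smarter initialisation and convergence checks, but the assign-then-recompute loop is the whole idea.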
Multiobjective Evolutionary Algorithms and Applications provides a comprehensive treatment of the design of multiobjective evolutionary algorithms and their applications in domains such as control and scheduling. Although it emphasizes both the theoretical development and the practical implementation of multiobjective evolutionary algorithms, profound mathematical knowledge is not required. Written for a wide readership, engineers, researchers, senior undergraduates and graduate students interested in evolutionary algorithms and multiobjective optimization, with some basic knowledge of evolutionary computation, will find this book a useful addition to their bookcase.
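The selection step that distinguishes multiobjective evolutionary algorithms rests on Pareto dominance. A minimal sketch of that primitive, assuming all objectives are minimised (the population values are invented for illustration):

```python
# Pareto dominance and non-dominated filtering (minimisation assumed).
def dominates(a, b):
    """a dominates b if a is no worse in every objective and strictly
    better in at least one."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def pareto_front(population):
    """Keep only solutions not dominated by any other solution."""
    return [p for p in population
            if not any(dominates(q, p) for q in population if q != p)]

pop = [(1, 5), (2, 2), (3, 1), (4, 4)]
print(pareto_front(pop))  # (4, 4) drops out: it is dominated by (2, 2)
```

Full algorithms such as NSGA-II layer ranking and diversity preservation on top of exactly this test.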
Developers and DBAs use Oracle SQL coding on a daily basis, whether for application development, finding problems, fine-tuning solutions to those problems, or other critical DBA tasks. Oracle SQL: Jumpstart with Examples is the fastest way to get started and to quickly locate answers to common (and uncommon) questions. It includes all the basic queries: filtering, sorting, operators, conditionals, pseudocolumns, single-row functions, joins, grouping and summarizing, grouping functions, subqueries, composite queries, hierarchies, flashback queries, parallel queries, expressions and regular expressions, DML, datatypes (including collections), XML in Oracle, DDL for basic database objects such as tables, views and indexes, Oracle Partitioning, security, and finally PL/SQL.
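Two of the query forms listed above, joins and grouping, can be illustrated in a tiny runnable form using Python's built-in sqlite3 module; the SQL shown is generic rather than Oracle-specific, and the table and column names are invented for the example:

```python
# Join two tables, then summarise per group (generic SQL via sqlite3).
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE dept (dept_id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE emp  (emp_id INTEGER, dept_id INTEGER, salary REAL);
    INSERT INTO dept VALUES (1, 'Sales'), (2, 'IT');
    INSERT INTO emp VALUES (10, 1, 50000), (11, 1, 60000), (12, 2, 70000);
""")
# Join emp to dept, then compute headcount and average salary per department.
rows = con.execute("""
    SELECT d.name, COUNT(*), AVG(e.salary)
    FROM emp e JOIN dept d ON e.dept_id = d.dept_id
    GROUP BY d.name
    ORDER BY d.name
""").fetchall()
print(rows)  # [('IT', 1, 70000.0), ('Sales', 2, 55000.0)]
```

Oracle adds its own extensions (pseudocolumns, flashback, partitioning) on top of this shared core.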
The world of text mining is simultaneously a minefield and a gold mine. It is an exciting application field and an area of scientific research that is currently under rapid development. It uses techniques from well-established scientific fields (e.g. data mining, machine learning, information retrieval, natural language processing, case-based reasoning, statistics and knowledge management) in an effort to help people gain insight into, understand and interpret large quantities of (usually) semi-structured and unstructured data. Despite the advances made during the last few years, many issues remain unresolved. Proper co-ordination activities, dissemination of current trends and standardisation of the procedures have been identified as key needs. There are many questions still unanswered, especially for potential users: what is the scope of Text Mining, who uses it and for what purpose, what constitutes the leading trends in the field of Text Mining, especially in relation to IT, and whether there still remain areas to be covered.
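A common first step in text mining, drawn from the information-retrieval techniques mentioned above, is weighting terms by tf-idf. A minimal pure-Python sketch; the exact smoothing varies between libraries, and this uses the plain log(N/df) form with invented toy documents:

```python
# Minimal tf-idf: term frequency times inverse document frequency.
import math

def tf_idf(docs):
    """docs: list of token lists -> list of {term: weight} dicts."""
    n = len(docs)
    df = {}  # in how many documents does each term occur?
    for doc in docs:
        for term in set(doc):
            df[term] = df.get(term, 0) + 1
    weights = []
    for doc in docs:
        w = {}
        for term in doc:
            tf = doc.count(term) / len(doc)
            w[term] = tf * math.log(n / df[term])
        weights.append(w)
    return weights

docs = [["data", "mining"], ["data", "bases"]]
print(tf_idf(docs))  # "data" appears everywhere, so it scores 0
```

Terms that occur in every document get zero weight, which is exactly the discriminative effect that makes the scheme useful.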
This book addresses issues related to managing data across a distributed database system. It is unique because it covers traditional database theory and current research, explaining the difficulties in providing a unified user interface and global data dictionary. The book gives implementers guidance on hiding discrepancies across systems and creating the illusion of a single repository for users. It also includes three sample frameworks--implemented using J2SE with JMS, J2EE, and Microsoft .Net--that readers can use to learn how to implement a distributed database management system. IT and development groups and computer sciences/software engineering graduates will find this guide invaluable.
Databases, and database systems in particular, are considered the kernels of any Information System (IS). The rapid growth of the web on the Internet has dramatically increased the use of semi-structured data and the need to store and retrieve such data in a database. The database community quickly reacted to these new requirements by providing models for semi-structured data and by integrating database research with XML web services and mobile computing. The IS community, on the other hand, facing more problems of IS development than ever before, is seeking new approaches to IS design. Ontology-based approaches are gaining popularity because of the need for shared conceptualisation by the different stakeholders of IS development teams. Many web-based IS would fail without domain ontologies to capture the meaning of terms in their web interfaces. This volume contains revised versions of the 24 best papers presented at the 5th International Baltic Conference on Databases and Information Systems (BalticDB&IS'2002). The conference papers present original research results in novel fields of IS and databases such as web IS, XML and databases, data mining and knowledge management, mobile agents and databases, and UML-based IS development methodologies. The book's intended readers are researchers and practitioners who are interested in advanced topics on databases and IS.
Constraints and Databases contains seven contributions on the rapidly evolving research area of constraints and databases. This collection of original research articles has been compiled as a tribute to Paris C. Kanellakis, one of the pioneers in the field. Constraints have long been used for maintaining the integrity of databases. More recently, constraint databases have emerged where databases store and manipulate data in the form of constraints. The generality of constraint databases makes them highly attractive for many applications. Constraints provide a uniform mechanism for describing heterogeneous data, and advanced constraint solving methods can be used for efficient manipulation of constraint data. The articles included in this book cover the range of topics involving constraints and databases: join algorithms, evaluation methods, applications (e.g. data mining) and implementations of constraint databases, as well as more traditional topics such as integrity constraint maintenance. Constraints and Databases is an edited volume of original research comprising invited contributions by leading researchers.
As the global economy turns more and more service oriented, Information Technology-Enabled Services (ITeS) require greater understanding. Increasing numbers and varieties of services are provided through IT. Furthermore, IT enables the creation of new services in diverse fields previously untouched. Because of the catalyzing nature of internet technology, ITeS today has become more than the "outsourcing" of services. This book illustrates the enabling nature of ITeS, with its entailment of IT, thus contributing to the betterment of humanity. Its scope extends not only to academia but also to business persons, government practitioners and general readers. Authors from a variety of nations and regions with various backgrounds provide insightful theories, research findings and practices in fields such as commerce, finance, medical services, government and education. The book opens up a new horizon through the application of Internet-based practices in business, government and daily life. Information Technology-Enabled Services works as a navigator for those who sail toward the new horizon of service-oriented economies.
An Introduction to R and Python for Data Analysis teaches students to code in R and Python simultaneously. Because R and Python can be used in similar ways, it is useful and efficient to learn both at the same time, helping lecturers and students teach and learn more, save time, and reinforce the shared concepts and differences of the two systems. This tandem learning helps students become literate in both languages and develop skills that will be handy after their studies. The book presumes no prior experience with computing and is intended for students from a variety of backgrounds. Its side-by-side formatting helps introductory graduate students quickly grasp the basics of R and Python, with the exercises helping them teach themselves the skills they will need upon completion of their course, as employers now ask for competency in both R and Python. Teachers and lecturers will also find this book useful, providing a single work to help ensure their students are well trained in both computer languages. All data for exercises can be found here: https://github.com/tbrown122387/r_and_python_book/tree/master/data. Key features: - Teaches R and Python in a "side-by-side" way. - Examples are tailored to aspiring data scientists and statisticians, not software engineers. - Designed for introductory graduate students. - Does not assume any mathematical background.
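In the side-by-side spirit the blurb describes, a basic numeric summary can be written almost line-for-line in both languages. The sketch below runs as Python, with the rough R equivalents shown as comments; the data values are invented for illustration:

```python
# Mean and sample variance in Python, with R equivalents as comments.
values = [4.1, 3.9, 5.2, 4.8]   # R: values <- c(4.1, 3.9, 5.2, 4.8)
n = len(values)                 # R: n <- length(values)
mean = sum(values) / n          # R: m <- mean(values)
# Sample variance divides by n - 1, matching R's var().
variance = sum((v - mean) ** 2 for v in values) / (n - 1)
                                # R: var(values)
print(mean, variance)
```

Seeing the two idioms next to each other is exactly the reinforcement effect the book aims for.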
This book represents the combined peer-reviewed proceedings of the Sixth International Symposium on Intelligent Distributed Computing (IDC 2012), of the International Workshop on Agents for Cloud (A4C 2012) and of the Fourth International Workshop on Multi-Agent Systems Technology and Semantics (MASTS 2012). All the events were held in Calabria, Italy during September 24-26, 2012. The 37 contributions published in this book address many topics related to the theory and applications of intelligent distributed computing and multi-agent systems, including: adaptive and autonomous distributed systems, agent programming, ambient assisted living systems, business process modeling and verification, cloud computing, coalition formation, decision support systems, distributed optimization and constraint satisfaction, gesture recognition, intelligent energy management in WSNs, intelligent logistics, machine learning, mobile agents, parallel and distributed computational intelligence, parallel evolutionary computing, trust metrics and security, scheduling in distributed heterogeneous computing environments, semantic Web service composition, social simulation, and software agents for WSNs.
Autonomous agents or multiagent systems are computational systems in which several computational agents interact or work together to perform some set of tasks. These systems may involve computational agents that have common goals or distinct goals. Real-Time Search for Learning Autonomous Agents focuses on extending real-time search algorithms for autonomous agents and for a multiagent world. Although real-time search provides an attractive framework for resource-bounded problem solving, the behavior of the problem solver is not rational enough for autonomous agents: the problem solver always keeps a record of its moves, yet cannot utilize and improve upon previous experience. Moreover, although the algorithms interleave planning and execution, they cannot be directly applied to a multiagent world: the problem solver can neither adapt to dynamically changing goals nor cooperatively solve problems with other problem solvers. This book deals with all these issues. Real-Time Search for Learning Autonomous Agents serves as an excellent resource for researchers and engineers interested in both practical references and some theoretical basis for agent/multiagent systems. The book can also be used as a text for advanced courses on the subject.
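The book's starting point, real-time search, interleaves one-step lookahead with execution while learning better heuristic values as the agent moves (the LRTA*-style update). A minimal sketch on a toy graph; the graph, heuristic values, and function name are invented for illustration, and all edge costs are taken to be 1:

```python
# LRTA*-style real-time search: act after one-step lookahead, learning h.
def real_time_search(graph, h, start, goal, max_steps=100):
    """Repeatedly move to the neighbour minimising (1 + h[neighbour]),
    raising h[current] to that minimum before moving."""
    state, path = start, [start]
    for _ in range(max_steps):
        if state == goal:
            return path
        best = min(graph[state], key=lambda s: 1 + h[s])
        h[state] = max(h[state], 1 + h[best])  # learn a tighter estimate
        state = best
        path.append(state)
    return path

graph = {"A": ["B"], "B": ["A", "C"], "C": ["B", "G"], "G": ["C"]}
h = {"A": 3, "B": 2, "C": 1, "G": 0}
print(real_time_search(graph, h, "A", "G"))  # ['A', 'B', 'C', 'G']
```

The updates to h are what the book extends: sharing and reusing them across runs and across agents is the step from plain real-time search to learning autonomous agents.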
During the last several years there has been a significant coalescence of interest in Open Source Geospatial (OSG) or, as it is also known and referred to in this book, Free and Open Source for Geospatial (FOSS4G) software technology. This interest has served to fan embers from pre-existing FOSS4G efforts, which were focused on standalone desktop geographic information systems (GIS), such as GRASS, libraries of geospatial utilities, such as GDAL, and Web-based mapping applications, such as MapServer. The impetus for the coalescence of disparate and independent project-based efforts was the formal incorporation on February 27th, 2006 of a non-profit organization known as the Open Source Geospatial Foundation (OSGeo). Full details concerning the foundation, including its mission statement, goals, evolving governance structure, approved projects, Board of Directors, journal, and much other useful information are available through the Foundation's website (http://www.osgeo.org). This book is not about OSGeo, yet it is difficult to produce a text on FOSS4G approaches to spatial data handling without, in some way or another, encountering the activities and personalities of OSGeo. Of the current books published on this topic the majority are written by authors with very close connections to OSGeo. For example, Tyler Mitchell, who is the Executive Director of the Foundation, is author of one of the first books on FOSS4G approaches ('Web Mapping Illustrated' (2005)).
Middleware Networks: Concept, Design and Deployment of Internet Infrastructure describes a framework for developing IP Service Platforms and emerging managed IP networks with a reference architecture from the AT&T Labs GeoPlex project. The main goal is to present basic principles that both the telecommunications industry and the Internet community can see as providing benefits for service-related network issues. As this is an emerging technology, the solutions presented are timely and significant. Middleware Networks: Concept, Design and Deployment of Internet Infrastructure illustrates the principles of middleware networks, including Application Program Interfaces (APIs), reference architecture, and a model implementation. Part I begins with fundamentals of transport, and quickly transitions to modern transport and technology. Part II elucidates essential requirements and unifying design principles for the Internet. These fundamental principles establish the basis for consistent behavior in view of the explosive growth underway in large-scale heterogeneous networks. Part III demonstrates and explains the resulting architecture and implementation. Particular emphasis is placed upon the control of resources and behavior. Reference is made to open APIs and sample deployments. Middleware Networks: Concept, Design and Deployment of Internet Infrastructure is intended for a technical audience consisting of students, researchers, network professionals, software developers, system architects and technically-oriented managers involved in the definition and deployment of modern Internet platforms or services. Although the book assumes a basic technical competency, as it does not provide remedial essentials, any practitioner will find this useful, particularly those requiring an overview of the newest software architectures in the field.
Here is the ideal field guide for data warehousing implementation. This book first teaches you how to build a data warehouse, including defining the architecture, understanding the methodology, gathering the requirements, designing the data models, and creating the databases. Coverage then explains how to populate the data warehouse and explores how to present data to users using reports and multidimensional databases and how to use the data in the data warehouse for business intelligence, customer relationship management, and other purposes. It also details testing and how to administer data warehouse operation.
Hypertext/hypermedia systems and user-model-based adaptive systems in the areas of learning and information retrieval have for a long time been considered as two mutually exclusive approaches to information access. Adaptive systems tailor information to the user and may guide the user in the information space to present the most relevant material, taking into account a model of the user's goals, interests and preferences. Hypermedia systems, on the other hand, are 'user neutral': they provide the user with the tools and the freedom to explore an information space by browsing through a complex network of information nodes. Adaptive hypertext and hypermedia systems attempt to bridge the gap between these two approaches. Adaptation of hypermedia systems to each individual user is increasingly needed. With the growing size, complexity and heterogeneity of current hypermedia systems, such as the World Wide Web, it becomes virtually impossible to impose guidelines on authors concerning the overall organization of hypermedia information. The networks therefore become so complex and unstructured that the existing navigational tools are no longer powerful enough to provide orientation on where to search for the needed information. It is also not possible to identify appropriate pre-defined paths or subnets for users with certain goals and knowledge backgrounds since the user community of hypermedia systems is usually quite inhomogeneous. This is particularly true for Web-based applications which are expected to be used by a much greater variety of users than any earlier standalone application. A possible remedy for the negative effects of the traditional 'one-size-fits-all' approach in the development of hypermedia systems is to equip them with the ability to adapt to the needs of their individual users. A possible way of achieving adaptivity is by modeling the users and tailoring the system's interactions to their goals, tasks and interests.
In this sense, the notion of adaptive hypertext/hypermedia comes naturally to denote a hypertext or hypermedia system which reflects some features of the user and/or characteristics of his system usage in a user model, and utilizes this model in order to adapt various behavioral aspects of the system to the user. This book is the first comprehensive publication on adaptive hypertext and hypermedia. It is oriented towards researchers and practitioners in the fields of hypertext and hypermedia, information systems, and personalized systems. It is also an important resource for the numerous developers of Web-based applications. The design decisions, adaptation methods, and experience presented in this book are a unique source of ideas and techniques for developing more usable and more intelligent Web-based systems suitable for a great variety of users. The practitioners will find it important that many of the adaptation techniques presented in this book have proved to be efficient and are ready to be used in various applications.
Databases have been designed to store large volumes of data and to provide efficient query interfaces. Semantic Web formats are geared towards capturing domain knowledge, interlinking annotations, and offering a high-level, machine-processable view of information. However, the gigantic amount of such useful information makes efficient management of it increasingly difficult, undermining the possibility of transforming it into useful knowledge. The research presented by De Virgilio, Giunchiglia and Tanca tries to bridge the two worlds in order to leverage the efficiency and scalability of database-oriented technologies to support an ontological high-level view of data and metadata. The contributions present and analyze techniques for semantic information management, by taking advantage of the synergies between the logical basis of the Semantic Web and the logical foundations of data management. The book's leitmotif is to propose models and methods especially tailored to represent and manage data that is appropriately structured for easier machine processing on the Web. After two introductory chapters on data management and the Semantic Web in general, the remaining contributions are grouped into five parts on Semantic Web Data Storage, Reasoning in the Semantic Web, Semantic Web Data Querying, Semantic Web Applications, and Engineering Semantic Web Systems. The handbook-like presentation makes this volume an important reference on current work and a source of inspiration for future development, targeting academic and industrial researchers as well as graduate students in Semantic Web technologies or database design.
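The data model underlying the Semantic Web side of this bridge is the RDF triple (subject, predicate, object). A minimal sketch of storing triples and answering a simple pattern query; the shortened names stand in for IRIs and are invented for illustration:

```python
# A toy triple store with wildcard pattern matching.
triples = {
    ("alice", "knows", "bob"),
    ("bob", "knows", "carol"),
    ("alice", "worksAt", "uni"),
}

def match(pattern):
    """Pattern items are constants or None (a wildcard variable).
    Returns all triples consistent with the pattern, sorted."""
    return sorted(t for t in triples
                  if all(p is None or p == v for p, v in zip(pattern, t)))

print(match(("alice", None, None)))
# [('alice', 'knows', 'bob'), ('alice', 'worksAt', 'uni')]
```

The book's storage and querying parts are essentially about making this pattern matching scale: indexing triples in relational or native stores so that SPARQL-style queries over billions of triples stay efficient.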
'Natural Language Processing in the Real World' is a practical guide to applying data science and machine learning to build Natural Language Processing (NLP) solutions. Whereas traditional, academically taught NLP is often accompanied by a data source or dataset to aid solution building, this book is situated in the real world, where there may be no existing rich dataset. It covers the basic concepts behind NLP and text processing and discusses applications across 15 industry verticals. From data sources and extraction to transformation and modelling, and from classic machine learning to deep learning and Transformers, several popular applications of NLP are discussed and implemented. The book provides a hands-on and holistic guide for anyone looking to build NLP solutions, from students of computer science to those involved in large-scale industrial projects.
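The pipeline view the book takes (extraction, transformation, modelling) can be illustrated with the simplest possible text classifier: normalise the text, then score it against keyword sets. The labels and keyword lists below are invented for the example; real systems would learn these from data:

```python
# A toy keyword-overlap classifier: tokenize, then score per label.
import re

def tokenize(text):
    """Lowercase and split into word tokens."""
    return re.findall(r"[a-z']+", text.lower())

def classify(text, keywords):
    """Return the label whose keyword set overlaps the text most."""
    tokens = set(tokenize(text))
    return max(keywords, key=lambda label: len(tokens & keywords[label]))

keywords = {"sports": {"match", "goal", "team"},
            "finance": {"stock", "market", "shares"}}
print(classify("The team scored a late goal in the match", keywords))
```

From here, the "modelling" stage the book covers replaces hand-picked keywords with learned weights, embeddings, or Transformer representations.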
Modern medicine generates, almost daily, huge amounts of heterogeneous data. For example, medical data may contain SPECT images, signals like ECG, clinical information like temperature, cholesterol levels, etc., as well as the physician's interpretation. Those who deal with such data understand that there is a widening gap between data collection and data comprehension. Computerized techniques are needed to help humans address this problem. This volume is devoted to the relatively young and growing field of medical data mining and knowledge discovery. As more and more medical procedures employ imaging as a preferred diagnostic tool, there is a need to develop methods for efficient mining in databases of images. Other significant features are security and confidentiality concerns. Moreover, the physician's interpretation of images, signals, or other technical data is written in unstructured English which is very difficult to mine. This book addresses all these specific features.
This book draws new attention to domain-specific conceptual modeling by presenting the work of thought leaders who have designed and deployed specific modeling methods. It provides hands-on guidance on how to build models in a particular domain, such as requirements engineering, business process modeling or enterprise architecture. In addition to these results, it also puts forward ideas for future developments. All this is enriched with exercises, case studies, detailed references and further related information. All domain-specific methods described in this volume also have a tool implementation within the OMiLAB Collaborative Environment - a dedicated research and experimentation space for modeling method engineering at the University of Vienna, Austria - making these advances accessible to a wider community of further developers and users. The collection of works presented here will benefit experts and practitioners from academia and industry alike, including members of the conceptual modeling community as well as lecturers and students.
This book is the result of a group of researchers from different disciplines asking themselves one question: what does it take to develop a computer interface that listens, talks, and can answer questions in a domain? First, obviously, it takes specialized modules for speech recognition and synthesis, human interaction management (dialogue, input fusion, and multimodal output fusion), basic question understanding, and answer finding. While all modules are researched as independent subfields, this book describes the development of state-of-the-art modules and their integration into a single, working application capable of answering medical (encyclopedic) questions such as "How long is a person with measles contagious?" or "How can I prevent RSI?". The contributions in this book, which grew out of the IMIX project funded by the Netherlands Organisation for Scientific Research, document the development of this system, but also address more general issues in natural language processing, such as the development of multidimensional dialogue systems, the acquisition of taxonomic knowledge from text, answer fusion, sequence processing for domain-specific entity recognition, and syntactic parsing for question answering. Together, they offer an overview of the most important findings and lessons learned in the scope of the IMIX project, making the book of interest to both academic and commercial developers of human-machine interaction systems in Dutch or any other language. Highlights include: integrating multi-modal input fusion in dialogue management (Van Schooten and Op den Akker), state-of-the-art approaches to the extraction of term variants (Van der Plas, Tiedemann, and Fahmi; Tjong Kim Sang, Hofmann, and De Rijke), and multi-modal answer fusion (two chapters by Van Hooijdonk, Bosma, Krahmer, Maes, Theune, and Marsi). Watch the IMIX movie at www.nwo.nl/imix-film.
Like IBM's Watson, the IMIX system described in the book gives naturally phrased responses to naturally posed questions. Where Watson can only generate synthetic speech, the IMIX system also recognizes speech. On the other hand, Watson is able to win a television quiz, while the IMIX system is domain-specific, answering only medical questions. "The Netherlands has always been one of the leaders in the general field of Human Language Technology, and IMIX is no exception. It was a very ambitious program, with a remarkably successful performance leading to interesting results. The teams covered a remarkable amount of territory in the general sphere of multimodal question answering and information delivery, question answering, information extraction and component technologies." Eduard Hovy, USC, USA, Jon Oberlander, University of Edinburgh, Scotland, and Norbert Reithinger, DFKI, Germany
This book is an essential contribution to the description of fuzziness in information systems. Usually users want to retrieve data or summarized information from a database and are interested in classifying it or building rule-based systems on it. But they are often not aware of the nature of this data and/or are unable to determine clear search criteria. The book examines theoretical and practical approaches to fuzziness in information systems based on statistical data related to territorial units. Chapter 1 discusses the theory of fuzzy sets and fuzzy logic to enable readers to understand the information presented in the book. Chapter 2 is devoted to flexible queries and includes issues like constructing fuzzy sets for query conditions, and aggregation operators for commutative and non-commutative conditions, while Chapter 3 focuses on linguistic summaries. Chapter 4 presents fuzzy logic control architecture adjusted specifically for the aims of business and governmental agencies, and shows fuzzy rules and procedures for solving inference tasks. Chapter 5 covers the fuzzification of classical relational databases with an emphasis on storing fuzzy data in classical relational databases in such a way that existing data and normal forms are not affected. This book also examines practical aspects of user-friendly interfaces for storing, updating, querying and summarizing. Lastly, Chapter 6 briefly discusses possible integration of fuzzy queries, summarization and inference related to crisp and fuzzy databases. The main target audience of the book is researchers and students working in the fields of data analysis, database design and business intelligence. As it does not go too deeply into the foundation and mathematical theory of fuzzy logic and relational algebra, it is also of interest to advanced professionals developing tailored applications based on fuzzy sets.
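The flexible queries of Chapter 2 rest on membership functions that grade how well a value satisfies a vague condition such as "population is high", instead of a hard true/false cutoff. A sketch with a trapezoidal membership function; the breakpoints and town data are invented for illustration:

```python
# A trapezoidal fuzzy membership function and a flexible query condition.
def trapezoid(x, a, b, c, d):
    """0 below a and above d, 1 between b and c, linear in between."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    return (x - a) / (b - a) if x < b else (d - x) / (d - c)

# "High population": fully true from 500k to 2m, fading out around it.
def high_pop(p):
    return trapezoid(p, 300_000, 500_000, 2_000_000, 3_000_000)

towns = {"A": 100_000, "B": 400_000, "C": 800_000}
print({t: high_pop(p) for t, p in towns.items()})
# {'A': 0.0, 'B': 0.5, 'C': 1.0}
```

A flexible query then ranks rows by these degrees (combining several conditions with the aggregation operators the chapter discusses) rather than filtering them out entirely.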
This book offers a comprehensive guide to implementing SAP and HANA on private, public and hybrid clouds. Cloud computing has transformed the way organizations run their IT infrastructures: the shift from legacy monolithic mainframes and UNIX platforms to cloud based infrastructures offering ubiquitous access to critical information, elastic provisioning and drastic cost savings has made cloud an essential part of every organization's business strategy. Cloud based services have evolved from simple file sharing, email and messaging utilities in the past, to the current situation, where their improved technical capabilities and SLAs make running mission-critical applications such as SAP possible. However, IT professionals must take due care when deploying SAP in a public, private or hybrid cloud environment. As a foundation for core business operations, SAP cloud deployments must satisfy stringent requirements concerning their performance, scale and security, while delivering measurable improvements in IT efficiency and cost savings. The 2nd edition of "SAP on the Cloud" continues the work of its successful predecessor released in 2013, providing updated guidance for deploying SAP in public, private and hybrid clouds. To do so, it discusses the technical requirements and considerations necessary for IT professionals to successfully implement SAP software in a cloud environment, including best-practice architectures for IaaS, PaaS and SaaS deployments. The section on SAP's in-memory database HANA has been significantly extended to cover Suite on HANA (SoH) and the different incarnations of HANA Enterprise Cloud (HEC) and Tailored Datacenter Integration (TDI). As cyber threats are a significant concern, it also explores appropriate security models for defending SAP cloud deployments against modern and sophisticated attacks. 
The reader will gain the insights needed to understand the respective benefits and drawbacks of various deployment models and how SAP on the cloud can be used to deliver IT efficiency and cost-savings in a secure and agile manner.