Databases have been designed to store large volumes of data and to provide efficient query interfaces. Semantic Web formats are geared towards capturing domain knowledge, interlinking annotations, and offering a high-level, machine-processable view of information. However, the gigantic amount of such useful information makes efficient management of it increasingly difficult, undermining the possibility of transforming it into useful knowledge. The research presented by De Virgilio, Giunchiglia and Tanca tries to bridge the two worlds in order to leverage the efficiency and scalability of database-oriented technologies to support an ontological high-level view of data and metadata. The contributions present and analyze techniques for semantic information management, by taking advantage of the synergies between the logical basis of the Semantic Web and the logical foundations of data management. The book's leitmotif is to propose models and methods especially tailored to represent and manage data that is appropriately structured for easier machine processing on the Web. After two introductory chapters on data management and the Semantic Web in general, the remaining contributions are grouped into five parts on Semantic Web Data Storage, Reasoning in the Semantic Web, Semantic Web Data Querying, Semantic Web Applications, and Engineering Semantic Web Systems. The handbook-like presentation makes this volume an important reference on current work and a source of inspiration for future development, targeting academic and industrial researchers as well as graduate students in Semantic Web technologies or database design.
Middleware Networks: Concept, Design and Deployment of Internet Infrastructure describes a framework for developing IP Service Platforms and emerging managed IP networks with a reference architecture from the AT&T Labs GeoPlex project. The main goal is to present basic principles that both the telecommunications industry and the Internet community can see as providing benefits for service-related network issues. As this is an emerging technology, the solutions presented are timely and significant. Middleware Networks: Concept, Design and Deployment of Internet Infrastructure illustrates the principles of middleware networks, including Application Program Interfaces (APIs), reference architecture, and a model implementation. Part I begins with fundamentals of transport, and quickly transitions to modern transport and technology. Part II elucidates essential requirements and unifying design principles for the Internet. These fundamental principles establish the basis for consistent behavior in view of the explosive growth underway in large-scale heterogeneous networks. Part III demonstrates and explains the resulting architecture and implementation. Particular emphasis is placed upon the control of resources and behavior. Reference is made to open APIs and sample deployments. Middleware Networks: Concept, Design and Deployment of Internet Infrastructure is intended for a technical audience consisting of students, researchers, network professionals, software developers, system architects and technically-oriented managers involved in the definition and deployment of modern Internet platforms or services. Although the book assumes a basic technical competency, as it does not provide remedial essentials, any practitioner will find this useful, particularly those requiring an overview of the newest software architectures in the field.
This carefully edited and reviewed volume addresses the growing demand for clarity in the data we are immersed in. It offers excellent examples of intelligent ubiquitous computing, as well as recent advances in systems engineering and informatics. The content represents state-of-the-art foundations for researchers in the domains of modern computation, computer science, systems engineering and networking, with many examples set in an industrial application context. The book includes the carefully selected best contributions to APCASE 2014, the 2nd Asia-Pacific Conference on Computer Aided System Engineering, held February 10-12, 2014 in South Kuta, Bali, Indonesia. The book consists of four main parts that cover data-oriented engineering science research in a wide range of applications: computational models and knowledge discovery; communications networks and cloud computing; computer-based systems; and data-oriented and software-intensive systems.
Here is the ideal field guide for data warehousing implementation. This book first teaches you how to build a data warehouse, including defining the architecture, understanding the methodology, gathering the requirements, designing the data models, and creating the databases. Coverage then explains how to populate the data warehouse and explores how to present data to users using reports and multidimensional databases and how to use the data in the data warehouse for business intelligence, customer relationship management, and other purposes. It also details testing and how to administer data warehouse operation.
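The dimensional models such a warehouse is built on can be sketched with a tiny in-memory star schema: a fact table of measures joined to dimension tables, then aggregated for reporting. The table and column names below are illustrative assumptions, not the book's own examples:

```python
import sqlite3

# A toy star schema: one fact table of sales keyed to date and product
# dimension tables (a sketch of dimensional modelling, not the book's code).
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE dim_date    (date_key INTEGER PRIMARY KEY, year INT, month INT);
CREATE TABLE dim_product (product_key INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE fact_sales  (date_key INT, product_key INT, amount REAL);
INSERT INTO dim_date    VALUES (1, 2024, 1), (2, 2024, 2);
INSERT INTO dim_product VALUES (1, 'widget'), (2, 'gadget');
INSERT INTO fact_sales  VALUES (1, 1, 100.0), (1, 2, 50.0), (2, 1, 75.0);
""")

# A typical warehouse query: aggregate the fact table across a dimension.
rows = con.execute("""
    SELECT d.month, SUM(f.amount)
    FROM fact_sales f JOIN dim_date d ON f.date_key = d.date_key
    GROUP BY d.month ORDER BY d.month
""").fetchall()
print(rows)  # [(1, 150.0), (2, 75.0)]
```

The same join-then-aggregate pattern underlies both the reporting and the multidimensional (cube) presentation the blurb mentions.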
'Natural Language Processing in the Real World' is a practical guide for applying data science and machine learning to build Natural Language Processing (NLP) solutions. Where traditional, academically taught NLP is often accompanied by a data source or dataset to aid solution building, this book is situated in the real world, where there may not be an existing rich dataset. This book covers the basic concepts behind NLP and text processing and discusses the applications across 15 industry verticals. From data sources and extraction to transformation and modelling, and classic Machine Learning to Deep Learning and Transformers, several popular applications of NLP are discussed and implemented. This book provides a hands-on and holistic guide for anyone looking to build NLP solutions, from students of Computer Science to those involved in large-scale industrial projects.
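The classic text-processing starting point such guides describe (tokenise, count terms, weight them) can be sketched with the standard library alone. The toy corpus and the particular TF-IDF variant below are illustrative assumptions, not material from the book:

```python
from collections import Counter
import math
import re

def tfidf(docs):
    """Tiny TF-IDF sketch: lowercase-tokenise each document, compute term
    frequency, and weight by log inverse document frequency."""
    tokens = [re.findall(r"[a-z]+", d.lower()) for d in docs]
    df = Counter(t for doc in tokens for t in set(doc))  # document frequency
    n = len(docs)
    return [{t: c / len(doc) * math.log(n / df[t])
             for t, c in Counter(doc).items()} for doc in tokens]

docs = ["the product is great", "the delivery was late", "the price is great"]
weights = tfidf(docs)
# "the" appears in every document, so its IDF (and weight) is exactly 0;
# "great" appears in only two of three, so it gets a positive weight.
```

Real pipelines swap this for library implementations, but the weighting logic is the same.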
Tools and methods from complex systems science can have a considerable impact on the way in which the quantitative assessment of economic and financial issues is approached, as discussed in this thesis. First it is shown that the self-organization of financial markets is a crucial factor in the understanding of their dynamics. In fact, using an agent-based approach, it is argued that financial markets' stylized facts appear only in the self-organized state. Secondly, the thesis points out the potential of so-called big data science for financial market modeling, investigating how web-driven data can yield a picture of market activities: it has been found that web query volumes anticipate trade volumes. As a third achievement, the metrics developed here for country competitiveness and product complexity are groundbreaking in comparison to mainstream theories of economic growth and technological development. A key element in assessing the intangible variables determining the success of countries in the present globalized economy is represented by the diversification of the productive basket of countries. The comparison between the level of complexity of a country's productive system and economic indicators such as the GDP per capita discloses its hidden growth potential.
This book presents the most recent achievements in some rapidly developing fields within Computer Science. This includes the very latest research in biometrics and computer security systems, and descriptions of the latest inroads in artificial intelligence applications. The book contains over 30 articles by well-known scientists and engineers. The articles are extended versions of works introduced at the ACS-CISIM 2005 conference.
Recent Advances in RSA Cryptography surveys the most important achievements of the last 22 years of research in RSA cryptography. Special emphasis is laid on the description and analysis of proposed attacks against the RSA cryptosystem. The first chapters introduce the necessary background information on number theory, complexity and public key cryptography. Subsequent chapters review factorization algorithms and specific properties that make RSA attractive for cryptographers. Most recent attacks against RSA are discussed in the third part of the book (among them attacks against low-exponent RSA, Hastad's broadcast attack, and Franklin-Reiter attacks). Finally, the last chapter reviews the use of the RSA function in signature schemes. Recent Advances in RSA Cryptography is of interest to graduate level students and researchers who will gain an insight into current research topics in the field and an overview of recent results in a unified way. Recent Advances in RSA Cryptography is suitable as a secondary text for a graduate level course, and as a reference for researchers and practitioners in industry.
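As background for the attacks the book surveys, textbook RSA fits in a few lines. The sketch below uses toy primes and no padding, so it is purely illustrative and insecure; real RSA requires large primes and a padding scheme such as OAEP:

```python
# Minimal textbook-RSA sketch (toy parameters; illustrative only).
p, q = 61, 53            # small primes for demonstration
n = p * q                # public modulus, n = 3233
phi = (p - 1) * (q - 1)  # Euler's totient, phi = 3120
e = 17                   # public exponent, coprime with phi
d = pow(e, -1, phi)      # private exponent: modular inverse of e mod phi

m = 42                   # plaintext encoded as an integer < n
c = pow(m, e, n)         # encryption: c = m^e mod n
assert pow(c, d, n) == m # decryption: m = c^d mod n recovers the message
```

The attacks the book analyzes (low-exponent, broadcast, related-message) exploit exactly the algebraic structure visible here, which is why deployed RSA never uses it raw.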
This book presents real-world decision support systems, i.e., systems that have been running for some time and as such have been tested in real environments and complex situations; the cases are from various application domains and highlight the best practices in each stage of the system's life cycle, from the initial requirements analysis and design phases to the final stages of the project. Each chapter provides decision-makers with recommendations and insights into lessons learned so that failures can be avoided and successes repeated. For this reason unsuccessful cases, which at some point of their life cycle were deemed as failures for one reason or another, are also included. All decision support systems are presented in a constructive, coherent and deductive manner to enhance the learning effect. It complements the many works that focus on theoretical aspects or individual module design and development by offering 'good' and 'bad' practices when developing and using decision support systems. Combining high-quality research with real-world implementations, it is of interest to researchers and professionals in industry alike.
Data and knowledge play a key role in both current and future Grids. The issues concerning representation, discovery, and integration of data and knowledge in dynamic distributed environments can be addressed by exploiting features offered by Grid Technologies. Current research activities are leveraging the Grid for the provision of generic- and domain-specific solutions and services for data management and knowledge discovery. Knowledge and Data Management in Grids is the third volume of the Core Grid series and brings together scientific contributions by researchers and scientists working on storage, data, and knowledge management in Grid and Peer-to-Peer systems. This volume presents the latest Grid solutions and research results in key areas of knowledge and data management such as distributed storage management, Grid databases, Semantic Grid and Grid-aware data mining. Knowledge and Data Management in Grids is for a professional audience, composed of researchers and practitioners in industry. This book is also suitable for graduate-level students in computer science.
This book is the result of a group of researchers from different disciplines asking themselves one question: what does it take to develop a computer interface that listens, talks, and can answer questions in a domain? First, obviously, it takes specialized modules for speech recognition and synthesis, human interaction management (dialogue, input fusion, and multimodal output fusion), basic question understanding, and answer finding. While all modules are researched as independent subfields, this book describes the development of state-of-the-art modules and their integration into a single, working application capable of answering medical (encyclopedic) questions such as "How long is a person with measles contagious?" or "How can I prevent RSI?." The contributions in this book, which grew out of the IMIX project funded by the Netherlands Organisation for Scientific Research, document the development of this system, but also address more general issues in natural language processing, such as the development of multidimensional dialogue systems, the acquisition of taxonomic knowledge from text, answer fusion, sequence processing for domain-specific entity recognition, and syntactic parsing for question answering. Together, they offer an overview of the most important findings and lessons learned in the scope of the IMIX project, making the book of interest to both academic and commercial developers of human-machine interaction systems in Dutch or any other language. Highlights include: integrating multi-modal input fusion in dialogue management (Van Schooten and Op den Akker), state-of-the-art approaches to the extraction of term variants (Van der Plas, Tiedemann, and Fahmi; Tjong Kim Sang, Hofmann, and De Rijke), and multi-modal answer fusion (two chapters by Van Hooijdonk, Bosma, Krahmer, Maes, Theune, and Marsi). Watch the IMIX movie at www.nwo.nl/imix-film. 
Like IBM's Watson, the IMIX system described in the book gives naturally phrased responses to naturally posed questions. Where Watson can only generate synthetic speech, the IMIX system also recognizes speech. On the other hand, Watson is able to win a television quiz, while the IMIX system is domain-specific, answering only medical questions. "The Netherlands has always been one of the leaders in the general field of Human Language Technology, and IMIX is no exception. It was a very ambitious program, with a remarkably successful performance leading to interesting results. The teams covered a remarkable amount of territory in the general sphere of multimodal question answering and information delivery, question answering, information extraction and component technologies." Eduard Hovy, USC, USA, Jon Oberlander, University of Edinburgh, Scotland, and Norbert Reithinger, DFKI, Germany
YOU HAVE TO OWN THIS BOOK! "Software Exorcism: A Handbook for Debugging and Optimizing Legacy Code" takes an unflinching, no bulls$&# look at behavioral problems in the software engineering industry, shedding much-needed light on the social forces that make it difficult for programmers to do their job. Do you have a co-worker who perpetually writes bad code that "you" are forced to clean up? This is your book. While there are plenty of books on the market that cover debugging and short-term workarounds for bad code, Reverend Bill Blunden takes a revolutionary step beyond them by bringing our attention to the underlying illnesses that plague the software industry as a whole. Further, "Software Exorcism" discusses tools and techniques for effective and aggressive debugging, gives optimization strategies that appeal to all levels of programmers, and presents in-depth treatments of technical issues with honest assessments that are not biased toward proprietary solutions.
This volume directly addresses the complexities involved in data mining and the development of new algorithms, built on an underlying theory consisting of linear and non-linear dynamics, data selection, filtering, and analysis, while including analytical projection and prediction. The results derived from the analysis are then further manipulated such that a visual representation is derived with an accompanying analysis. The book brings very current methods of analysis to the forefront of the discipline, provides researchers and practitioners the mathematical underpinning of the algorithms, and the non-specialist with a visual representation such that a valid understanding of the meaning of the adaptive system can be attained with careful attention to the visual representation. The book presents, as a collection of documents, sophisticated and meaningful methods that can be immediately understood and applied to various other disciplines of research. The content is composed of chapters addressing: An application of adaptive systems methodology in the field of post-radiation treatment involving brain volume differences in children; A new adaptive system for computer-aided diagnosis of the characterization of lung nodules; A new method of multi-dimensional scaling with minimal loss of information; A description of the semantics of point spaces with an application on the analysis of terrorist attacks in Afghanistan; The description of a new family of meta-classifiers; A new method of optimal informational sorting; A general method for the unsupervised adaptive classification for learning; and the presentation of two new theories, one in target diffusion and the other in twisting theory.
This book is an essential contribution to the description of fuzziness in information systems. Usually users want to retrieve data or summarized information from a database and are interested in classifying it or building rule-based systems on it. But they are often not aware of the nature of this data and/or are unable to determine clear search criteria. The book examines theoretical and practical approaches to fuzziness in information systems based on statistical data related to territorial units. Chapter 1 discusses the theory of fuzzy sets and fuzzy logic to enable readers to understand the information presented in the book. Chapter 2 is devoted to flexible queries and includes issues like constructing fuzzy sets for query conditions, and aggregation operators for commutative and non-commutative conditions, while Chapter 3 focuses on linguistic summaries. Chapter 4 presents fuzzy logic control architecture adjusted specifically for the aims of business and governmental agencies, and shows fuzzy rules and procedures for solving inference tasks. Chapter 5 covers the fuzzification of classical relational databases with an emphasis on storing fuzzy data in classical relational databases in such a way that existing data and normal forms are not affected. This book also examines practical aspects of user-friendly interfaces for storing, updating, querying and summarizing. Lastly, Chapter 6 briefly discusses possible integration of fuzzy queries, summarization and inference related to crisp and fuzzy databases. The main target audience of the book is researchers and students working in the fields of data analysis, database design and business intelligence. As it does not go too deeply into the foundation and mathematical theory of fuzzy logic and relational algebra, it is also of interest to advanced professionals developing tailored applications based on fuzzy sets.
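The fuzzy sets used in such flexible queries are commonly trapezoidal: membership rises from 0 to 1 over one interval and falls back over another. A minimal sketch follows; the "small town" set and its breakpoints are assumed for illustration, not taken from the book:

```python
def trapezoid(x, a, b, c, d):
    """Trapezoidal membership function: 0 outside (a, d), 1 on [b, c],
    and linearly interpolated on the rising and falling edges."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    if x < b:
        return (x - a) / (b - a)   # rising edge
    return (d - x) / (d - c)       # falling edge

# Degree to which a municipality of 9,000 inhabitants satisfies an assumed
# fuzzy condition "small town" with breakpoints 2000, 5000, 10000, 20000:
print(trapezoid(9000, 2000, 5000, 10000, 20000))  # 1.0
print(trapezoid(3500, 2000, 5000, 10000, 20000))  # 0.5 (on the rising edge)
```

A flexible query then ranks rows by such membership degrees instead of filtering them with a crisp threshold.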
Clinical Decision Support and Beyond: Progress and Opportunities in Knowledge-Enhanced Health and Healthcare, now in its third edition, discusses the underpinnings of effective, reliable, and easy-to-use clinical decision support systems at the point of care as a productive way of managing the flood of data, knowledge, and misinformation when providing patient care. Incorporating CDS into electronic health record systems has been underway for decades; however, its complexities, costs, and user resistance have kept adoption lagging behind its potential. It is thus of utmost importance to understand the process in detail, to take full advantage of its capabilities. The book expands and updates the content of the previous edition, and discusses topics such as integration of CDS into workflow, context-driven anticipation of needs for CDS, new forms of CDS derived from data analytics, precision medicine, population health, integration of personal monitoring, and patient-facing CDS. In addition, it discusses population health management, public health CDS and CDS to help reduce health disparities. It is a valuable resource for clinicians, practitioners, students and members of medical and biomedical fields who are interested in learning more about the potential of clinical decision support to improve health and wellness and the quality of health care.
Health information about patients is critical; currently, health records are saved in databases controlled by individual users, organizations, or large groups of organizations. Because of security concerns and the risk of data being modified or tampered with by malicious users, this information is not shared with other organizations. Blockchain can be used to securely exchange healthcare data, which can be accessed by organizations sharing the same network, allowing doctors and practitioners to provide better care for patients. The key properties of decentralization, such as immutability and transparency, improve healthcare interoperability. This book brings forth the prospects and research trends of Blockchain in healthcare, so that researchers, database professionals, academics, and healthcare professionals across the world can understand and apply the concept of Blockchain in healthcare. The book provides the fundamental and technical details of Blockchain, the applications of Blockchain in healthcare, hands-on chapters for graduate/postgraduate/doctoral students and healthcare professionals to secure healthcare data of patients, and research challenges and future work directions for researchers in healthcare.
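The immutability property such systems rely on comes from hash-linking records: each block stores the hash of its predecessor, so altering any stored record breaks the chain. A minimal sketch follows; the field names and toy records are illustrative assumptions, not the book's design:

```python
import hashlib
import json

GENESIS = "0" * 64  # placeholder hash for the first block

def block_hash(record, prev_hash):
    """Deterministic SHA-256 over the record and the previous block's hash."""
    payload = json.dumps({"record": record, "prev_hash": prev_hash},
                         sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def add_block(chain, record):
    """Append a record linked to the previous block's hash."""
    prev = chain[-1]["hash"] if chain else GENESIS
    chain.append({"record": record, "prev_hash": prev,
                  "hash": block_hash(record, prev)})

def verify(chain):
    """Recompute every hash and link; False if anything was altered."""
    for i, block in enumerate(chain):
        prev = chain[i - 1]["hash"] if i else GENESIS
        if block["prev_hash"] != prev:
            return False
        if block["hash"] != block_hash(block["record"], prev):
            return False
    return True

chain = []
add_block(chain, {"patient": "A-001", "note": "checkup"})
add_block(chain, {"patient": "A-001", "note": "lab result"})
print(verify(chain))                    # True
chain[0]["record"]["note"] = "altered"  # tamper with a stored record
print(verify(chain))                    # False: the recomputed hash no longer matches
```

Production blockchains add consensus and access control on top, but tamper evidence reduces to exactly this recompute-and-compare check.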
This book presents an improved design for service provisioning and allocation models that are validated through running genome sequence assembly tasks in a hybrid cloud environment. It proposes approaches for addressing scheduling and performance issues in big data analytics and showcases new algorithms for hybrid cloud scheduling. Scientific sectors such as bioinformatics, astronomy, high-energy physics, and Earth science are generating a tremendous flow of data, commonly known as big data. In the context of growing demand for big data analytics, cloud computing offers an ideal platform for processing big data tasks due to its flexible scalability and adaptability. However, there are numerous problems associated with the current service provisioning and allocation models, such as inefficient scheduling algorithms, overloaded memory overheads, excessive node delays and improper error handling of tasks, all of which need to be addressed to enhance the performance of big data analytics.
Data warehousing is an important topic that is of interest to both the industry and the knowledge engineering research communities. Both data mining and data warehousing technologies have similar objectives and can potentially benefit from each other's methods to facilitate knowledge discovery. Improving Knowledge Discovery through the Integration of Data Mining Techniques provides insight concerning the integration of data mining and data warehousing for enhancing the knowledge discovery process. Decision makers, academicians, researchers, advanced-level students, technology developers, and business intelligence professionals will find this book useful in furthering their research exposure to relevant topics in knowledge discovery.
Database professionals will find that this new edition aids in mastering the latest version of Microsoft's SQL Server. Developers and database administrators (DBAs) use SQL on a daily basis in application development and the subsequent problem solving and fine tuning. Answers to SQL issues can be quickly located, helping the DBA or developer optimize and tune a database to maximum efficiency.
With advances and in-depth applications of computer technologies, and the extensive applications of Web technology in various areas, databases have become the repositories of large volumes of data. It is very critical to manage data resources for effective problem solving and decision making. Collecting and presenting the latest research and development results from the leading researchers in the field of intelligent databases, "Intelligent Databases: Technologies and Applications" provides a single record of current research and practical applications in this field. "Intelligent Databases: Technologies and Applications" integrates data management in databases with intelligent data processing and analysis in artificial intelligence. This book challenges today's database technology and promotes its evolution.
The healthcare industry produces a constant flow of data, creating a need for deep analysis of databases through data mining tools and techniques resulting in expanded medical research, diagnosis, and treatment. "Data Mining and Medical Knowledge Management: Cases and Applications" presents case studies on applications of various modern data mining methods in several important areas of medicine, covering classical data mining methods, elaborated approaches related to mining in electroencephalogram and electrocardiogram data, and methods related to mining in genetic data. A premier resource for those involved in data mining and medical knowledge management, this book tackles ethical issues related to cost-sensitive learning in medicine and produces theoretical contributions concerning general problems of data, information, knowledge, and ontologies.