Health information about patients is critical; currently, health records are stored in databases controlled by individual users, organizations, or large groups of organizations. Because there are many malicious users, this information is not shared with other organizations, owing to security concerns and the risk of the data being modified or tampered with. Blockchain can be used to exchange healthcare data securely: the data can be accessed by organizations sharing the same network, allowing doctors and practitioners to provide better care for patients. The key properties of decentralization, such as immutability and transparency, improve healthcare interoperability. This book brings forth the prospects and research trends of Blockchain in healthcare, so that researchers, database professionals, academics, and healthcare professionals across the world can understand and apply the concept of Blockchain in healthcare. The book provides the fundamental and technical details of Blockchain, the applications of Blockchain in healthcare, hands-on chapters for graduate/postgraduate/doctoral students and healthcare professionals to secure patients' healthcare data, and research challenges and future work directions for researchers in healthcare.
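Not from the book itself, the following minimal Python sketch illustrates the immutability property the blurb refers to: records are hash-chained, so tampering with an earlier block breaks every later link. The record fields and names are purely illustrative.

```python
import hashlib
import json
import time

def block_hash(block: dict) -> str:
    """Hash the block's contents (excluding its own hash field)."""
    payload = {k: v for k, v in block.items() if k != "hash"}
    return hashlib.sha256(json.dumps(payload, sort_keys=True).encode()).hexdigest()

def new_block(record: dict, prev_hash: str) -> dict:
    """Create a block holding one (already de-identified, hypothetical) health record."""
    block = {"timestamp": time.time(), "record": record, "prev_hash": prev_hash}
    block["hash"] = block_hash(block)
    return block

def chain_is_valid(chain: list) -> bool:
    """Any modification to an earlier block breaks every later link."""
    for i, block in enumerate(chain):
        if block["hash"] != block_hash(block):
            return False
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True

# Build a tiny chain, then tamper with it to see validation fail.
genesis = new_block({"patient_id": "P-001", "note": "baseline visit"}, prev_hash="0" * 64)
chain = [genesis, new_block({"patient_id": "P-001", "note": "follow-up"}, genesis["hash"])]
print(chain_is_valid(chain))            # True
chain[0]["record"]["note"] = "altered"  # simulated tampering
print(chain_is_valid(chain))            # False
```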
Database professionals will find that this new edition aids in mastering the latest version of Microsoft's SQL Server. Developers and database administrators (DBAs) use SQL on a daily basis in application development and the subsequent problem solving and fine-tuning. Answers to SQL issues can be quickly located, helping the DBA or developer optimize and tune a database for maximum efficiency.
With advances and in-depth applications of computer technologies, and the extensive application of Web technology in various areas, databases have become repositories of large volumes of data. Managing these data resources effectively is critical for problem solving and decision making. Collecting and presenting the latest research and development results from leading researchers in the field of intelligent databases, "Intelligent Databases: Technologies and Applications" provides a single record of current research and practical applications in this field. The book integrates data management in databases with intelligent data processing and analysis in artificial intelligence, challenging today's database technology and promoting its evolution.
This book presents real-world decision support systems, i.e., systems that have been running for some time and as such have been tested in real environments and complex situations; the cases are from various application domains and highlight the best practices in each stage of the system's life cycle, from the initial requirements analysis and design phases to the final stages of the project. Each chapter provides decision-makers with recommendations and insights into lessons learned, so that failures can be avoided and successes repeated. For this reason, unsuccessful cases, which at some point in their life cycle were deemed failures for one reason or another, are also included. All decision support systems are presented in a constructive, coherent and deductive manner to enhance the learning effect. The book complements the many works that focus on theoretical aspects or individual module design and development by offering 'good' and 'bad' practices for developing and using decision support systems. Combining high-quality research with real-world implementations, it is of interest to researchers and professionals in industry alike.
Research in multi-agent systems offers a promising technology for problems involving networks, online trading and negotiation, as well as social structures and communication. This is a book on agent and multi-agent technology for internet and enterprise systems. The book pioneers the combination of these fields: it is built around the idea of a platform for sharing ideas, and it presents research on the technology itself and its application to real problems. The chapters range over both applications, illustrating the possible uses of agents in an enterprise domain, and the design and analytic methods needed to provide the solid foundation required for practical systems.
Database Concurrency Control: Methods, Performance and Analysis is a review of developments in concurrency control methods for centralized database systems, with a quick digression into distributed databases and multicomputers, the emphasis being on performance. The main goals of Database Concurrency Control: Methods, Performance and Analysis are to succinctly specify various concurrency control methods; to describe models for evaluating the relative performance of concurrency control methods; to point out problem areas in earlier performance analyses; to introduce queuing network models to evaluate the baseline performance of transaction processing systems; to provide insights into the relative performance of transaction processing systems; to illustrate the application of basic analytic methods to the performance analysis of various concurrency control methods; to review transaction models which are intended to relieve the effect of lock contention; to provide guidelines for improving the performance of transaction processing systems whose performance is limited by concurrency control; and to point out areas for further investigation. This monograph should be of direct interest to computer scientists doing research on concurrency control methods for high-performance transaction processing systems, designers of such systems, and professionals concerned with improving (tuning) the performance of transaction processing systems.
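As a rough illustration of the lock contention the monograph analyzes, here is a toy strict two-phase-locking lock manager in Python; it is not taken from the book, and the API, transaction names and data items are invented for illustration.

```python
from collections import defaultdict

class LockManager:
    """Toy strict two-phase locking: shared (S) and exclusive (X) locks per item."""
    def __init__(self):
        self.locks = defaultdict(dict)  # item -> {txn: "S" or "X"}

    def acquire(self, txn: str, item: str, mode: str) -> bool:
        holders = self.locks[item]
        others = {t: m for t, m in holders.items() if t != txn}
        # X conflicts with everything; S conflicts only with X.
        conflict = any(m == "X" or mode == "X" for m in others.values())
        if conflict:
            return False              # caller would block or abort (lock contention)
        if holders.get(txn) != "X":   # never downgrade an existing X lock
            holders[txn] = mode
        return True

    def release_all(self, txn: str):
        """Strict 2PL: all locks are released together at commit/abort."""
        for holders in self.locks.values():
            holders.pop(txn, None)

lm = LockManager()
print(lm.acquire("T1", "acct_42", "X"))  # True  - T1 holds an exclusive lock
print(lm.acquire("T2", "acct_42", "S"))  # False - T2 experiences contention
lm.release_all("T1")
print(lm.acquire("T2", "acct_42", "S"))  # True  - the lock is now available
```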
Patrick Humphreys, Department of Social Psychology, London School of Economics and Political Science, Houghton Street, London WC2A 2AE. Email: P.Humphreys@lse.ac.uk. This book presents a selection of contributions to the conference on Implementing Systems for Supporting Management Decisions: Concepts, Methods, and Experiences held in London in July 1996. The conference was organized by the International Federation of Information Processing's Working Group 8.3 on Decision Support Systems and the London School of Economics and Political Science (LSE). The Programme Committee for the conference comprised Liam Bannon, University of Limerick; Patrick Humphreys, LSE, co-chairperson; Andrew McCosh, University of Edinburgh; Piero Migliarese, Politecnico di Milano, co-chairperson; Jean-Charles Pomerol, LAFORIA, Universite Paris VI. The chairperson of the organizing committee was Dina Berkeley, LSE. The programme committee members also served as the editors of this book. Each contribution was selected by the editors after peer review and was developed by its authors specifically for inclusion in this volume. Working Group 8.3 was formally established in 1981 on the recommendation of IFIP's Technical Committee on Information Systems (TC8). The scope of the working group covers: "Development of approaches for applying information systems technology to increase the effectiveness of decision makers in situations where the computer system can support and enhance human judgment in the performance of tasks that have elements that cannot be specified in advance."
This book presents an improved design for service provisioning and allocation models that is validated by running genome sequence assembly tasks in a hybrid cloud environment. It proposes approaches for addressing scheduling and performance issues in big data analytics and showcases new algorithms for hybrid cloud scheduling. Scientific sectors such as bioinformatics, astronomy, high-energy physics, and Earth science are generating a tremendous flow of data, commonly known as big data. In the context of growing demand for big data analytics, cloud computing offers an ideal platform for processing big data tasks due to its flexible scalability and adaptability. However, there are numerous problems associated with current service provisioning and allocation models, such as inefficient scheduling algorithms, excessive memory overheads, excessive node delays and improper error handling of tasks, all of which need to be addressed to enhance the performance of big data analytics.
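To make the scheduling problem concrete, the Python sketch below shows a simple list-scheduling baseline (longest-processing-time-first) for placing tasks on cloud nodes. It is an illustrative heuristic, not the book's proposed algorithm, and the task runtimes are hypothetical.

```python
import heapq

def greedy_schedule(task_runtimes, node_count):
    """List-scheduling baseline: always place the next task on the least-loaded node.
    Returns per-node assignments and the resulting makespan."""
    # Scheduling the longest tasks first (LPT) tends to shorten the makespan.
    ordered = sorted(enumerate(task_runtimes), key=lambda t: t[1], reverse=True)
    heap = [(0.0, node) for node in range(node_count)]  # (current load, node id)
    heapq.heapify(heap)
    assignment = {node: [] for node in range(node_count)}
    for task_id, runtime in ordered:
        load, node = heapq.heappop(heap)
        assignment[node].append(task_id)
        heapq.heappush(heap, (load + runtime, node))
    makespan = max(load for load, _ in heap)
    return assignment, makespan

# Hypothetical assembly-task runtimes (hours) spread over 3 hybrid-cloud nodes.
assignment, makespan = greedy_schedule([5.0, 2.5, 7.0, 1.0, 3.5, 4.0], node_count=3)
print(assignment, makespan)
```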
Geographic information systems have developed rapidly in the past decade, and are now a major class of software, with applications that include infrastructure maintenance, resource management, agriculture, Earth science, and planning. But a lack of standards has led to a general inability for one GIS to interoperate with another. It is difficult for one GIS to share data with another, or for people trained on one system to adapt easily to the commands and user interface of another. Failure to interoperate is a problem at many levels, ranging from the purely technical to the semantic and the institutional. Interoperating Geographic Information Systems is about efforts to improve the ability of GISs to interoperate, and has been assembled through a collaboration between academic researchers and the software vendor community under the auspices of the US National Center for Geographic Information and Analysis and the Open GIS Consortium Inc. It includes chapters on the basic principles and the various conceptual frameworks that the research community has developed to think about the problem. Other chapters review a wide range of applications and the experiences of the authors in trying to achieve interoperability at a practical level. Interoperability opens enormous potential for new ways of using GIS and new mechanisms for exchanging data, and these are covered in chapters on information marketplaces, with special reference to geographic information. Institutional arrangements are also likely to be profoundly affected by the trend towards interoperable systems, and nowhere is the impact of interoperability more likely to cause fundamental change than in education, as educators address the needs of a new generation of GIS users with access to a new generation of tools. The book concludes with a series of chapters on education and institutional change. Interoperating Geographic Information Systems is suitable as a secondary text for graduate level courses in computer science, geography, spatial databases, and interoperability and as a reference for researchers and practitioners in industry, commerce and government.
Data Mining for Design and Manufacturing: Methods and Applications is the first book that brings together research and applications for data mining within design and manufacturing. The aim of the book is 1) to clarify the integration of data mining in engineering design and manufacturing, 2) to present a wide range of domains to which data mining can be applied, 3) to demonstrate the essential need for symbiotic collaboration of expertise in design and manufacturing, data mining, and information technology, and 4) to illustrate how to overcome central problems in design and manufacturing environments. The book also presents formal tools required to extract valuable information from design and manufacturing data, and facilitates interdisciplinary problem solving for enhanced decision making. Audience: The book is aimed at both academic and practising audiences. It can serve as a reference or textbook for senior or graduate level students in Engineering, Computer, and Management Sciences who are interested in data mining technologies. The book will be useful for practitioners interested in utilizing data mining techniques in design and manufacturing as well as for computer software developers engaged in developing data mining tools.
The book examines patterns of participation in human rights treaties. International relations theory is divided on what motivates states to participate in treaties, specifically human rights treaties. Instead of examining the specific motivations, this dissertation examines patterns of participation. In doing so, it attempts to match theoretical expectations of state behavior with participation. The conclusion of this study is that the data suggest there are multiple motivations that lead states to participate in human rights treaties. The book is divided into five substantive chapters. After an introduction, the second chapter examines the literature on why states join treaties in general, and human rights treaties in particular. The third chapter reviews the obligations states commit to under the fifteen treaties under consideration. The fourth chapter uses basic quantitative methods to examine any differences in the participation rates between democratic and non-democratic states. The fifth chapter examines reservations, declarations, and objections made in conjunction with the fifteen treaties. The chapter employs both quantitative and qualitative methods to determine if there are substantial differences between democratic and non-democratic states. Finally, the sixth chapter examines those states that participate in the most human rights treaties to determine if there are characteristics that help to identify these states. Additionally, the chapter examines and evaluates theoretical predictions about participation.
Advances in technology are making massive data sets common in many scientific disciplines, such as astronomy, medical imaging, bio-informatics, combinatorial chemistry, remote sensing, and physics. To find useful information in these data sets, scientists and engineers are turning to data mining techniques. This book is a collection of papers based on the first two in a series of workshops on mining scientific datasets. It illustrates the diversity of problems and application areas that can benefit from data mining, as well as the issues and challenges that differentiate scientific data mining from its commercial counterpart. While the focus of the book is on mining scientific data, the work is of broader interest as many of the techniques can be applied equally well to data arising in business and web applications. Audience: This work would be an excellent text for students and researchers who are familiar with the basic principles of data mining and want to learn more about the application of data mining to their problem in science or engineering.
Video segmentation is the most fundamental process for appropriate indexing and retrieval of video intervals. In general, video streams are composed of shots delimited by physical shot boundaries. Substantial work has been done on how to detect such shot boundaries automatically (Arman et al., 1993) (Zhang et al., 1993) (Zhang et al., 1995) (Kobla et al., 1997). Through the integration of technologies such as image processing, speech/character recognition and natural language understanding, keywords can be extracted and associated with these shots for indexing (Wactlar et al., 1996). A single shot, however, rarely carries enough information to be meaningful by itself. Usually, it is a semantically meaningful interval that most users are interested in retrieving, and such meaningful intervals generally span several consecutive shots. There hardly exists any efficient and reliable technique, either automatic or manual, to identify all semantically meaningful intervals within a video stream. Works by (Smith and Davenport, 1992) (Oomoto and Tanaka, 1993) (Weiss et al., 1995) (Hjelsvold et al., 1996) suggest manually defining all such intervals in the database in advance. However, even an hour-long video may have an indefinite number of meaningful intervals. Moreover, video data is multi-interpretative. Therefore, given a query, what is a meaningful interval to an annotator may not be meaningful to the user who issues the query. In practice, manual indexing of meaningful intervals is labour-intensive and inadequate.
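For readers unfamiliar with shot-boundary detection, the self-contained Python sketch below shows the classic histogram-difference cue on synthetic grey-level frames; it is a simplified illustration, not code from the book or the cited works, and the threshold is an assumed value.

```python
def grey_histogram(frame, bins=16):
    """Normalized histogram of grey levels (0-255) for a frame given as a flat pixel list."""
    hist = [0] * bins
    for pixel in frame:
        hist[min(pixel * bins // 256, bins - 1)] += 1
    total = len(frame)
    return [count / total for count in hist]

def shot_boundaries(frames, threshold=0.5):
    """Report frame indices where the histogram difference to the previous frame
    exceeds the threshold - a classic cue for a hard cut."""
    cuts = []
    prev = grey_histogram(frames[0])
    for i in range(1, len(frames)):
        cur = grey_histogram(frames[i])
        diff = sum(abs(a - b) for a, b in zip(prev, cur))
        if diff > threshold:
            cuts.append(i)
        prev = cur
    return cuts

# Synthetic "video": dark frames, then a hard cut to bright frames.
dark = [30] * 64
bright = [220] * 64
print(shot_boundaries([dark, dark, dark, bright, bright]))  # [3]
```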
This book is ideal for a one- or two-term course in database management or database design at the undergraduate or graduate level. With its comprehensive coverage, this book can also be used as a reference for IT professionals. This best-selling text introduces the theory behind databases in a concise yet comprehensive manner, providing a database design methodology that can be used by both technical and non-technical readers. The methodology for relational Database Management Systems is presented in simple, step-by-step instructions in conjunction with a realistic worked example using three explicit phases: conceptual, logical, and physical database design. Teaching and Learning Experience: this program presents a better teaching and learning experience for you and your students. It provides a database design methodology that can be used by both technical and non-technical readers, a comprehensive introduction to the theory behind databases, and a clear presentation that supports learning.
This practically-focused text presents a hands-on guide to making biometric technology work in real-life scenarios. Extensively revised and updated, this new edition takes a fresh look at what it takes to integrate biometrics into wider applications. An emphasis is placed on the importance of a complete understanding of the broader scenario, covering technical, human and implementation factors. This understanding may then be exercised through interactive chapters dealing with educational software utilities and the BANTAM Program Manager. Features: provides a concise introduction to biometrics; examines both technical issues and human factors; highlights the importance of a broad understanding of biometric technology implementation from both a technical and operational perspective; reviews a selection of freely available utilities including the BANTAM Program Manager; considers the logical next steps on the path from aspiration to implementation, and looks towards the future use of biometrics in context.
In a resolutely practical and data-driven project universe, the digital age has changed the way data is collected, stored, analyzed, visualized and protected, transforming business opportunities and strategies. It is important for today's organizations and entrepreneurs to implement a robust data strategy and industrialize a set of "data-driven" solutions to utilize big data analytics to its fullest potential. Big Data Analytics for Entrepreneurial Success provides emerging perspectives on the theoretical and practical aspects of data analysis tools and techniques within business applications. Featuring coverage on a broad range of topics such as algorithms, data collection, and machine learning, this publication provides concrete examples and case studies of successful uses of data-driven projects as well as the challenges and opportunities of generating value from data using analytics. It is ideally designed for entrepreneurs, researchers, business owners, managers, graduate students, academicians, software developers, and IT professionals seeking current research on the essential tools and technologies for organizing, analyzing, and benefiting from big data.
This book represents the combined peer-reviewed proceedings of the ninth International Symposium on Intelligent Distributed Computing - IDC'2015, of the Workshop on Cyber Security and Resilience of Large-Scale Systems - WSRL'2015, and of the International Workshop on Future Internet and Smart Networks - FI&SN'2015. All the events were held in Guimaraes, Portugal during October 7th-9th, 2015. The 46 contributions published in this book address many topics related to theory and applications of intelligent distributed computing, including: Intelligent Distributed Agent-Based Systems, Ambient Intelligence and Social Networks, Computational Sustainability, Intelligent Distributed Knowledge Representation and Processing, Smart Networks, Networked Intelligence and Intelligent Distributed Applications, amongst others.
Distributed and Parallel Database Object Management brings together in one place important contributions and state-of-the-art research results in this rapidly advancing area of computer science. Distributed and Parallel Database Object Management serves as an excellent reference, providing insights into some of the most important issues in the field.
This book is an outcome of the second national conference on Communication, Cloud and Big Data (CCB) held during November 10-11, 2016 at Sikkim Manipal Institute of Technology. The nineteen chapters of the book are some of the accepted papers of CCB 2016. These chapters have undergone a review process and a subsequent series of improvements. The book contains chapters on various aspects of communication, computation, cloud and big data. Routing in wireless sensor networks, modulation techniques, spectrum hole sensing in cognitive radio networks, antenna design, network security, Quality of Service issues in routing, medium access control protocols for the Internet of Things, and TCP performance over different routing protocols used in mobile ad-hoc networks are some of the topics discussed in different chapters of this book that fall under the domain of communication. Moreover, there are chapters discussing topics like applications of geographic information systems, use of radar for road safety, image segmentation and digital media processing, web content management systems, human-computer interaction, and natural language processing in the context of the Bodo language; these chapters fall under the broader domain of computation. Issues like robot navigation exploring cloud technology and the application of big data analytics in higher education are also discussed in two different chapters, which fall under the domains of cloud and big data, respectively.
This book integrates two areas of computer science, namely data mining and evolutionary algorithms. Both these areas have become increasingly popular in the last few years, and their integration is currently an area of active research. In general, data mining consists of extracting knowledge from data. In this book we particularly emphasize the importance of discovering comprehensible, interesting knowledge, which is potentially useful for the reader for intelligent decision making. In a nutshell, the motivation for applying evolutionary algorithms to data mining is that evolutionary algorithms are robust search methods which perform a global search in the space of candidate solutions. In contrast, most rule induction methods perform a local, greedy search in the space of candidate rules. Intuitively, the global search of evolutionary algorithms can discover interesting rules and patterns that would be missed by the greedy search.
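The contrast between global evolutionary search and greedy rule induction can be illustrated with a toy genetic algorithm that evolves a single interval rule over a synthetic dataset; the sketch below is illustrative Python only, not taken from the book, and all parameters are invented.

```python
import random

# Tiny synthetic dataset: (feature value, class label). Class 1 lies roughly in [40, 60].
DATA = [(x, 1 if 40 <= x <= 60 else 0) for x in range(0, 100, 3)]

def accuracy(rule):
    """Fitness of the rule 'IF lo <= x <= hi THEN class 1 ELSE class 0'."""
    lo, hi = min(rule), max(rule)
    correct = sum((lo <= x <= hi) == bool(label) for x, label in DATA)
    return correct / len(DATA)

def evolve(generations=50, pop_size=20, mutation=5.0):
    """Global search over candidate rules via selection, crossover and mutation."""
    random.seed(0)
    population = [(random.uniform(0, 100), random.uniform(0, 100)) for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=accuracy, reverse=True)
        survivors = population[: pop_size // 2]          # truncation selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            children.append(((a[0] + b[0]) / 2 + random.gauss(0, mutation),   # crossover
                             (a[1] + b[1]) / 2 + random.gauss(0, mutation)))  # + mutation
        population = survivors + children
    best = max(population, key=accuracy)
    return (round(min(best), 1), round(max(best), 1)), accuracy(best)

print(evolve())  # rule bounds near (40, 60) with high accuracy on the toy data
```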
Data warehousing and mining technologies are key assets today in many areas of human knowledge, from scientific to commercial and industrial settings, and the last decades have seen tremendous advances in those fields. "Evolving Application Domains of Data Warehousing and Mining: Trends and Solutions" provides insight into the latest findings concerning data warehousing, data mining, and their applications in everyday human activities. Comprising a valuable resource for researchers, practitioners, and academicians, this advanced publication offers insight into recent field trends, techniques on how these technologies operate, and analysis of their effects.
The complexity and sensitivity of modern industrial processes and systems increasingly require adaptable advanced control protocols. These controllers have to be able to deal with circumstances demanding "judgement" rather than simple "yes/no," "on/off" responses, circumstances where an imprecise linguistic description is often more relevant than a cut-and-dried numerical one. The ability of fuzzy systems to handle numeric and linguistic information within a single framework renders them efficacious in this form of expert control system. Divided into two parts, Fuzzy Logic, Identification and Predictive Control first shows you how to construct static and dynamic fuzzy models using the numerical data from a variety of real-world industrial systems and simulations. The second part demonstrates the exploitation of such models to design control systems employing techniques like data mining. Fuzzy Logic, Identification and Predictive Control is a comprehensive introduction to the use of fuzzy methods in many different control paradigms encompassing robust, model-based, PID-like and predictive control. This combination of fuzzy control theory and industrial serviceability will make a telling contribution to your research whether in the academic or industrial sphere and also serves as a fine roundup of the fuzzy control area for the graduate student. Advances in Industrial Control aims to report and encourage the transfer of technology in control engineering. The rapid development of control technology has an impact on all areas of the control discipline. The series offers an opportunity for researchers to present an extended exposition of new work in all aspects of industrial control.
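As a flavour of how numeric and linguistic information coexist in a fuzzy system, the short Python sketch below evaluates two linguistic rules with triangular membership functions and combines them into a crisp output; it is a minimal illustration, not an example from the book, and the rule parameters are invented.

```python
def triangular(x, a, b, c):
    """Triangular membership function with feet at a and c and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def heater_power(temperature_c):
    """Two linguistic rules, combined by a weighted (centroid-like) average:
       IF temperature is cold THEN power is high (90%)
       IF temperature is warm THEN power is low  (10%)"""
    cold = triangular(temperature_c, -10, 5, 20)
    warm = triangular(temperature_c, 15, 25, 40)
    if cold + warm == 0:
        return 0.0
    return (cold * 90 + warm * 10) / (cold + warm)

for t in (0, 10, 18, 25):
    print(t, round(heater_power(t), 1))  # power falls smoothly as temperature rises
```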
Database Solutions: A step-by-step guide to building databases, 2/e. Are you responsible for designing and creating the databases that keep your business running? Or are you studying for a module in database design? If so, Database Solutions is for you. This fully revised and updated edition will make the database design and build process smoother, quicker and more reliable. Recipe for database success: take one RDBMS (any of the major commercial products will do: Oracle, Informix, SQL Server, Access, Paradox); add one thorough reading of Database Solutions if you are an inexperienced database designer, or one recap of the methodology if you are an old hand; use the design and implementation frameworks to plan your timetable, use a common data model that fits your requirements and adapt as necessary. Features: includes hints and tips for success with comprehensive guidance on avoiding pitfalls and traps; shows how to create data models using the UML design notation; includes two full-length coded example databases written in Microsoft Access 2002 and Oracle 9i, plus 15 sample data models to adapt to your needs, chosen from seven common business areas. New for this edition: new chapters on SQL (St
This book is for database designers and database administrators using Visio, which is the database component of Microsoft's Visual Studio .NET for Enterprise Architects suite, also included in MSDN subscriptions. This is the only guide to this product that tells DBAs how to get their job done. Although primarily focused on tool features, the book also provides an introduction to data modeling, and includes practical advice on managing database projects. The principal author was the program manager of VEA's database modeling solutions.
This book provides insight into IoT intelligence in terms of applications and algorithmic challenges. The book is dedicated to addressing the major challenges in realizing artificial intelligence in IoT-based applications, including challenges that range from cost and energy efficiency to availability and service quality, in a multidisciplinary fashion. The aim of this book is hence to focus on both the algorithmic and practical parts of artificial intelligence approaches in IoT applications that are enabled and supported by wireless sensor networks and cellular networks. Targeted readers come from varying disciplines and are interested in implementing the smart planet/environments vision via intelligent wireless/wired enabling technologies. The book includes the most up-to-date research and applications related to IoT artificial intelligence (AI); provides new and innovative operational ideas regarding IoT artificial intelligence that help advance the telecommunications industry; and presents AI challenges facing IoT scientists, along with potential ways to solve them in critical daily life issues.