"Hands-On Database "uses a scenario-based approach that shows readers how to build a database by providing them with the context of a running case throughout each step of the process.
Information system design and development is of interest and importance to researchers and practitioners, as advances in this discipline impact a number of other related fields and help to guide future research. Theoretical and Practical Advances in Information Systems Development: Emerging Trends and Approaches contains fundamental concepts, emerging theories, and practical applications in database management, systems analysis and design, and software engineering. Contributions present critical findings in information resources management that inform and advance the field.
This book focuses on the data mining, systems biology, and bioinformatics computational methods that can be used to summarize biological networks. Specifically, it discusses an array of techniques related to biological network clustering, network summarization, and differential network analysis which enable readers to uncover the functional and topological organization hidden in a large biological network. The authors also examine crucial open research problems in this arena. Academics, researchers, and advanced-level students will find this book to be a comprehensive and exceptional resource for understanding computational techniques and their applications to the summarization of biological networks.
The Semantic Web has evolved as a blueprint for a knowledge-based framework aimed at crossing the chasm from the current Web of unstructured information resources to a Web equipped with metadata and oriented to delegating tasks to software agents. Semantic Web Personalization and Context Awareness: Management of Personal Identities and Social Networking communicates relevant recent research in Semantic Web-based personalization as applied to the context of information systems. This book reviews knowledge engineering for organizational applications, and Semantic Web approaches to information systems and ontology-based information systems research, as well as the diverse underlying database and knowledge representation aspects that impact personalization and customization.
Synchronizing E-Security is a critical investigation and empirical analysis of studies conducted among companies that support electronic commerce transactions in both advanced and developing economies. This book presents insights into the validity and credibility of current risk assessment methods that support electronic transactions in the global economy. Synchronizing E-Security focuses on a number of case studies of IT companies, within selected countries in West Africa, Europe, Asia and the United States. The foundation of this work is based on previous studies by Williams G., Avudzivi P.V (Hawaii 2002) on the retrospective view of information security management and the impact of tele-banking on the end-user.
This book and software package provide a complement to the traditional data analysis tools already widely available. It presents an introduction to the analysis of data using neural networks. Neural network functions discussed include multilayer feed-forward networks using error back propagation, genetic algorithm-neural network hybrids, generalized regression neural networks, learning vector quantizer networks, and self-organizing feature maps. In an easy-to-use, Windows-based environment it offers a wide range of data analytic tools not usually found together: these include genetic algorithms and probabilistic networks, as well as a number of related techniques that support them, notably fractal dimension analysis, coherence analysis, and mutual information analysis. The text presents a number of worked examples and case studies using Simulnet, the software package which comes with the book. Readers are assumed to have a basic understanding of computers and elementary mathematics. With this background, readers will find themselves quickly conducting sophisticated hands-on analyses of data sets.
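The core of error back propagation can be sketched briefly. The following is a minimal illustration (not Simulnet itself) of a one-hidden-layer feed-forward network trained by back propagation; the XOR data set, layer sizes, and learning rate are all arbitrary choices made for the example:

```python
# Minimal one-hidden-layer feed-forward network trained with error
# back propagation; an illustrative sketch, not Simulnet's implementation.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # XOR inputs
y = np.array([[0], [1], [1], [0]], dtype=float)              # XOR targets

W1 = rng.normal(scale=0.5, size=(2, 8))  # input -> hidden weights
b1 = np.zeros(8)
W2 = rng.normal(scale=0.5, size=(8, 1))  # hidden -> output weights
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for _ in range(10000):
    h = sigmoid(X @ W1 + b1)                 # forward pass
    out = sigmoid(h @ W2 + b2)
    d_out = (out - y) * out * (1 - out)      # output-layer error signal
    d_h = (d_out @ W2.T) * h * (1 - h)       # error propagated backwards
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(axis=0)

print(out.round(2))  # should approach [[0], [1], [1], [0]]
```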
As the most comprehensive reference work dealing with knowledge management (KM), this work is essential for the library of every KM practitioner, researcher, and educator. Written by an international array of KM luminaries, its approx. 60 chapters approach knowledge management from a wide variety of perspectives ranging from classic foundations to cutting-edge thought, informative to provocative, theoretical to practical, historical to futuristic, human to technological, and operational to strategic. The chapters are conveniently organized into 8 major sections. The second volume consists of the sections: technologies for knowledge management, outcomes of KM, knowledge management in action, and the KM horizon. Novices and experts alike will refer to the authoritative and stimulating content again and again for years to come.
A Practitioner's Handbook for Real-Time Analysis: Guide to Rate Monotonic Analysis for Real-Time Systems contains an invaluable collection of quantitative methods that enable real-time system developers to understand, analyze, and predict the timing behavior of many real-time systems. The methods are practical and theoretically sound, and can be used to assess design tradeoffs and to troubleshoot system timing behavior. This collection of methods is called rate monotonic analysis (RMA). The Handbook includes a framework for describing and categorizing the timing aspects of real-time systems, step-by-step techniques for performing timing analysis, numerous examples of real-time situations to which the techniques can be applied, and two case studies. A Practitioner's Handbook for Real-Time Analysis: Guide to Rate Monotonic Analysis for Real-Time Systems has been created to serve as a definitive source of information and a guide for developers as they analyze and design real-time systems using RMA. The Handbook is an excellent reference, and may be used as the text for advanced courses on the subject.
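To give a flavor of the quantitative tests RMA provides, the classic Liu-Layland utilization bound states that n periodic tasks with worst-case execution times C_i and periods T_i are schedulable under rate monotonic priorities if the total utilization sum(C_i / T_i) does not exceed n(2^(1/n) - 1). A minimal sketch with an invented task set:

```python
# Rate monotonic schedulability check using the Liu-Layland utilization
# bound: n tasks are schedulable if sum(C_i / T_i) <= n * (2**(1/n) - 1).
# The task set below is invented for illustration.

tasks = [  # (worst-case execution time C, period T), same time units
    (1, 4),
    (2, 10),
    (3, 20),
]

n = len(tasks)
utilization = sum(c / t for c, t in tasks)
bound = n * (2 ** (1 / n) - 1)

print(f"U = {utilization:.3f}, bound = {bound:.3f}")
if utilization <= bound:
    print("Schedulable under rate monotonic priorities (sufficient test).")
else:
    print("Bound inconclusive; an exact response-time analysis is needed.")
```

Note that the bound is a sufficient but not necessary condition; task sets that fail it may still be schedulable under an exact analysis.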
Blockchain Technology Solutions for the Security of IoT-Based Healthcare Systems explores the various benefits and challenges associated with the integration of blockchain with IoT healthcare systems, focusing on designing cognitive-embedded data technologies to aid better decision-making, processing and analysis of large amounts of data collected through IoT. This book series targets the adaptation of decision-making approaches under cognitive computing paradigms to demonstrate how the proposed procedures, as well as big data and Internet of Things (IoT) problems, can be handled in practice. Current Internet of Things (IoT) based healthcare systems are incapable of sharing data between platforms efficiently and holding it securely at the logical and physical level. To this end, blockchain technology guarantees a fully autonomous and secure ecosystem by exploiting the combined advantages of smart contracts and global consensus. However, incorporating blockchain technology in IoT healthcare systems is not easy: centralized networks in their current capacity will be incapable of meeting the data storage demands of the incoming surge of IoT-based healthcare wearables.
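The tamper-evidence underlying that guarantee can be sketched with a minimal hash chain. The records and field names below are invented, and a real deployment would add consensus, signatures, and access control on top:

```python
# Minimal hash chain over records: each block stores the hash of its
# predecessor, so tampering with any record breaks all later links.
# Illustrative sketch only; real blockchains add consensus, signatures, etc.
import hashlib, json

def block_hash(block):
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def append(chain, record):
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev": prev, "record": record})

def verify(chain):
    return all(chain[i]["prev"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

chain = []
append(chain, {"patient": "anon-1", "pulse": 72})  # made-up IoT reading
append(chain, {"patient": "anon-1", "pulse": 75})
print(verify(chain))                # True
chain[0]["record"]["pulse"] = 40    # tamper with the first record
print(verify(chain))                # False: the chain exposes the change
```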
This study, written in the context of its first publication in 1970, discusses and documents the invasion of privacy by the corporation and the social institution in the search for efficiency in information processing. Discussing areas such as the impact of the computer on administration, privacy and the storage of information, the authors assess the technical and social feasibility of constructing integrated data banks to cover the details of populations. The book was hugely influential both in terms of scholarship and legislation, and the years following saw the introduction of the Data Protection Act of 1984, which was then consolidated by the Act of 1998. The topics under discussion remain of great concern to the public in our increasingly web-based world, ensuring the continued relevance of this title to academics and students with an interest in data protection and public privacy.
The need for efficient content-based image retrieval has increased tremendously in areas such as biomedicine, military, commerce, education, and Web image classification and searching. In the biomedical domain, content-based image retrieval can be used in patient digital libraries, clinical diagnosis, searching of 2-D electrophoresis gels, and pathology slides. Integrated Region-Based Image Retrieval presents a wavelet-based approach for feature extraction, combined with integrated region matching. An image in the database, or a portion of an image, is represented by a set of regions, roughly corresponding to objects, which are characterized by color, texture, shape, and location. A measure for the overall similarity between images is developed as a region-matching scheme that integrates properties of all the regions in the images. The advantage of using this "soft matching" is that it makes the metric robust to poor segmentation, an important property that previous research has not solved. Integrated Region-Based Image Retrieval demonstrates an experimental image retrieval system called SIMPLIcity (Semantics-sensitive Integrated Matching for Picture LIbraries). This system validates these methods on various image databases, proving that such methods perform much better and much faster than existing ones. The system is exceptionally robust to image alterations such as intensity variation, sharpness variation, intentional distortions, cropping, shifting, and rotation. These features are extremely important to biomedical image databases since visual features in the query image are not exactly the same as the visual features in the images in the database. Integrated Region-Based Image Retrieval is an excellent reference for researchers in the fields of image retrieval, multimedia, computer vision and image processing.
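A simplified sketch of the integrated ("soft") region matching idea follows, with invented feature vectors and weights; the actual SIMPLIcity feature extraction is much richer. Every region pair gets a distance, significance is spread over the pairs greedily from the closest pair outward, and the image distance is the significance-weighted sum:

```python
# Simplified integrated region matching (IRM): a "soft" image distance
# that lets one region match several regions in the other image, which
# makes the measure robust to inaccurate segmentation. Data invented.
import numpy as np

def irm_distance(feats1, w1, feats2, w2):
    """feats*: per-region feature vectors; w*: region significance
    weights (e.g. area fractions) summing to 1 for each image."""
    w1, w2 = list(w1), list(w2)
    pairs = sorted(
        ((np.linalg.norm(f1 - f2), i, j)
         for i, f1 in enumerate(feats1)
         for j, f2 in enumerate(feats2)),
        key=lambda p: p[0],
    )
    total = 0.0
    for d, i, j in pairs:        # greedily match the closest pairs first
        s = min(w1[i], w2[j])    # significance assigned to this pair
        total += s * d
        w1[i] -= s
        w2[j] -= s
    return total

a = [np.array([0.2, 0.5]), np.array([0.8, 0.1])]    # image A regions
b = [np.array([0.25, 0.45]), np.array([0.9, 0.2])]  # image B regions
print(irm_distance(a, [0.6, 0.4], b, [0.5, 0.5]))
```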
Covering some of the most cutting-edge research on the delivery and retrieval of interactive multimedia content, this volume of specially chosen contributions provides the most updated perspective on one of the hottest contemporary topics. The material represents extended versions of papers presented at the 11th International Workshop on Image Analysis for Multimedia Interactive Services, a vital international forum on this fast-moving field. Logically organized in discrete sections that approach the subject from its various angles, the content deals in turn with content analysis, motion and activity analysis, high-level descriptors and video retrieval, 3-D and multi-view, and multimedia delivery. The chapters cover the finest detail of emerging techniques such as the use of high-level audio information in improving scene segmentation and the use of subjective logic for forensic visual surveillance. On content delivery, the book examines both images and video, focusing on key subjects including an efficient pre-fetching strategy for JPEG 2000 image sequences. Further contributions look at new methodologies for simultaneous block reconstruction and provide a trellis-based algorithm for faster motion-vector decision making.
Data stewards in any organization are the backbone of a successful data governance implementation because they do the work to make data trusted, dependable, and high quality. Since the publication of the first edition, there have been critical new developments in the field, such as integrating Data Stewardship into project management, handling Data Stewardship in large international companies, handling "big data" and Data Lakes, and a pivot in the overall thinking around the best way to align data stewardship to the data: moving from business/organizational function to data domain. Furthermore, the role of process in data stewardship is now recognized as key and is covered accordingly. Data Stewardship, Second Edition provides clear and concise practical advice on implementing and running data stewardship, including guidelines on how to organize based on organizational/company structure, business functions, and data ownership. The book shows data managers how to gain support for a stewardship effort, maintain that support over the long term, and measure the success of the data stewardship effort. It includes detailed lists of responsibilities for each type of data steward and strategies to help the Data Governance Program Office work effectively with the data stewards.
Multimedia Mining: A Highway to Intelligent Multimedia Documents brings together experts in digital media content analysis, state-of-art data mining and knowledge discovery in multimedia database systems, knowledge engineers and domain experts from diverse applied disciplines. Multimedia documents are ubiquitous and often required, if not essential, in many applications today. This phenomenon has made multimedia documents widespread and extremely large. There are tools for managing and searching within these collections, but the need for tools to extract hidden useful knowledge embedded within multimedia objects is becoming pressing and central for many decision-making applications. The tools needed today are tools for discovering relationships between objects or segments within multimedia document components, such as classifying images based on their content, extracting patterns in sound, categorizing speech and music, and recognizing and tracking objects in video streams.
This book examines the techniques and applications involved in the Web Mining, Web Personalization and Recommendation and Web Community Analysis domains, including a detailed presentation of the principles, developed algorithms, and systems of the research in these areas. The applications of web mining, and the issue of how to incorporate web mining into web personalization and recommendation systems are also reviewed. Additionally, the volume explores web community mining and analysis to find the structural, organizational and temporal developments of web communities and reveal the societal sense of individuals or communities. The volume will benefit both academic and industry communities interested in the techniques and applications of web search, web data management, web mining and web knowledge discovery, as well as web community and social network analysis.
The volume "Fuzziness in Database Management Systems" is a highly informative, well-organized and up-to-date collection of contributions authored by many of the leading experts in its field. Among the contributors are the editors, Professors Patrick Bose and Janusz Kacprzyk, both of whom are known internationally. The book is like a movie with an all-star cast. The issue of fuzziness in database management systems has a long history. It begins in 1968 and 1971, when I spent my sabbatical leaves at the IBM Research Laboratory in San Jose, California, as a visiting scholar. During these periods I was associated with Dr. E.F. Codd, the father of relational models of database systems, and came in contact with the developers ofiBMs System Rand SQL. These associations and contacts at a time when the methodology of relational models of data was in its formative stages, made me aware of the basic importance of such models and the desirability of extending them to fuzzy database systems and fuzzy query languages. This perception was reflected in my 1973 ffiM report which led to the paper on the concept of a linguistic variable and later to the paper on the meaning representation language PRUF (Possibilistic Relational Universal Fuzzy). More directly related to database issues during that period were the theses of my students V. Tahani, J. Yang, A. Bolour, M. Shen and R. Sheng, and many subsequent reports by both graduate and undergraduate students at Berkeley.
Cryptography, secret writing, is enjoying a scientific renaissance following the seminal discovery in 1977 of public-key cryptography and applications in computers and communications. This book gives a broad overview of public-key cryptography - its essence and advantages, various public-key cryptosystems, and protocols - as well as a comprehensive introduction to classical cryptography and cryptanalysis. The second edition has been revised and enlarged especially in its treatment of cryptographic protocols. From a review of the first edition: "This is a comprehensive review ... there can be no doubt that this will be accepted as a standard text. At the same time, it is clearly and entertainingly written ... and can certainly stand alone." Alex M. Andrew, Kybernetes, March 1992
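The essence of the public-key idea can be shown with a toy RSA example; the primes below are far too small to be secure and serve only as illustration:

```python
# Toy RSA: encrypt with the public key (e, n), decrypt with the private
# exponent d, where d * e = 1 mod phi(n). The primes are far too small
# to be secure; this only illustrates the public-key principle.
from math import gcd

p, q = 61, 53              # secret primes (toy-sized)
n = p * q                  # public modulus
phi = (p - 1) * (q - 1)
e = 17                     # public exponent, coprime to phi
assert gcd(e, phi) == 1
d = pow(e, -1, phi)        # private exponent (modular inverse)

message = 42
ciphertext = pow(message, e, n)   # anyone can encrypt with (e, n)
plaintext = pow(ciphertext, d, n) # only the key holder can decrypt
print(ciphertext, plaintext)      # plaintext == 42
```

The asymmetry is the whole point: (e, n) can be published, while recovering d without knowing the factors p and q is believed to be computationally hard.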
Real-time computer systems are very often subject to dependability requirements because of their application areas. Fly-by-wire airplane control systems, control of power plants, industrial process control systems and others are required to continue their function despite faults. Fault-tolerance and real-time requirements thus constitute a kind of natural combination in process control applications. Systematic fault-tolerance is based on redundancy, which is used to mask failures of individual components. The problem of replica determinism is to ensure that replicated components show consistent behavior in the absence of faults. It might seem trivial that, given an identical sequence of inputs, replicated computer systems will produce consistent outputs. Unfortunately, this is not the case. The problem of replica non-determinism and the presentation of its possible solutions is the subject of Fault-Tolerant Real-Time Systems: The Problem of Replica Determinism. The field of automotive electronics is an important application area of fault-tolerant real-time systems. Systems like anti-lock braking, engine control, active suspension or vehicle dynamics control have demanding real-time and fault-tolerance requirements. These requirements have to be met even in the presence of very limited resources, since cost is extremely important. Because of its interesting properties, Fault-Tolerant Real-Time Systems gives an introduction to the application area of automotive electronics. The requirements of automotive electronics are a topic of discussion in the remainder of this work and are used as a benchmark to evaluate solutions to the problem of replica determinism.
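A minimal sketch of how replica non-determinism can arise in practice (the timeout, readings, and clock values are invented): each replica compares a sensor timestamp against its own local clock, and a fraction of a microsecond of clock skew flips the branch:

```python
# Replica non-determinism sketch: two replicas run the same code on the
# same input, but a branch that consults the local clock can diverge
# because the replicas' clocks are never perfectly synchronized.
def classify(reading_time, local_clock, timeout=0.100):
    # Declare the sensor reading stale if it is older than `timeout`.
    return "stale" if local_clock - reading_time > timeout else "fresh"

reading_time = 1.000       # same input reaches both replicas
clock_a = 1.1000001        # replica A's local clock
clock_b = 1.0999999        # replica B's clock, 0.2 us behind

print(classify(reading_time, clock_a))  # "stale"
print(classify(reading_time, clock_b))  # "fresh" -> the replicas disagree
```

Agreement protocols resolve such divergence by making the replicas decide on a common view of time or input before branching, which is exactly the kind of solution the book evaluates.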
Data Mining Methods for Knowledge Discovery provides an introduction to the data mining methods that are frequently used in the process of knowledge discovery. This book first elaborates on the fundamentals of each of the data mining methods: rough sets, Bayesian analysis, fuzzy sets, genetic algorithms, machine learning, neural networks, and preprocessing techniques. The book then goes on to thoroughly discuss these methods in the setting of the overall process of knowledge discovery. Numerous illustrative examples and experimental findings are also included. Each chapter comes with an extensive bibliography. Data Mining Methods for Knowledge Discovery is intended for senior undergraduate and graduate students, as well as a broad audience of professionals in computer and information sciences, medical informatics, and business information systems.
Earth date, August 11, 1997. "Beam me up, Scotty!" "We cannot do it! This is not Star Trek's Enterprise. This is early-years Earth." True, this is not yet the era of Star Trek: we cannot beam Captain James T. Kirk or Captain Jean-Luc Picard or an apple or anything else anywhere. What we can do, though, is beam information about Kirk or Picard or an apple or an insurance agent. We can beam the record of a patient, the status of an engine, a weather report. We can beam this information anywhere: to mobile workers, to field engineers, to a truck loading apples, to ships crossing the oceans, to web surfers. We have reached a point where the promise of information access anywhere and anytime is close to realization. The enabling technology, wireless networks, exists; what remains to be achieved is providing the infrastructure and the software to support the promise. Universal access and management of information has been one of the driving forces in the evolution of computer technology. Central computing gave the ability to perform large and complex computations and advanced information manipulation. Advances in networking connected computers together and led to distributed computing. Web technology and the Internet went even further, providing hyper-linked information access and global computing. However, restricting access stations to fixed physical locations limits the boundary of the vision.
Handbook of Economic Expectations discusses the state of the art in the collection, study and use of expectations data in economics, including the modelling of expectations formation and updating, as well as open questions and directions for future research. The book spans a broad range of fields, approaches and applications using data on subjective expectations that allow us to make progress on fundamental questions around the formation and updating of expectations by economic agents and their information sets. The information included will help us study heterogeneity and potential biases in expectations and analyze impacts on behavior and decision-making under uncertainty.
Very little has been written to address the emerging trends in social software and technology. With these technologies and applications being relatively new and evolving rapidly, research is wide open in these fields. Social Software and Web 2.0 Technology Trends fills this critical research need, providing an overview of the current state of Web 2.0 technologies and their impact on organizations and educational institutions. Written for academicians and practicing managers, this estimable book presents business applications as well as implementations for institutions of higher education with numerous examples of how these technologies are currently being used. Delivering authoritative insights to a rapidly evolving domain of technology application, this book is an invaluable resource for both academic libraries and for classroom instruction.
Multivariate data analysis is a central tool whenever several variables need to be considered at the same time. The present book explains a powerful and versatile way to analyse data tables, suitable also for researchers without formal training in statistics. This method for extracting useful information from data is demonstrated for various types of quality assessment, ranging from human quality perception via industrial quality monitoring to health quality and its molecular basis.
The book is written with ISO-certified businesses and laboratories in mind, to enhance Total Quality Management (TQM). As yet there are no clear guidelines for realistic data analysis of quality in complex systems; this volume bridges the gap.
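A minimal sketch of the kind of projection such data-table analyses rest on, here plain principal component analysis on an invented data table (the book's own methods and worked examples are richer):

```python
# Principal component analysis via SVD: project a (samples x variables)
# data table onto the few directions of largest variance. Data invented.
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(20, 5))     # 20 samples, 5 quality variables
Xc = X - X.mean(axis=0)          # center each variable

U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt[:2].T           # sample coordinates on first 2 PCs
explained = (s**2 / (s**2).sum())[:2]

print("variance explained by PC1, PC2:", explained.round(2))
print("first sample's scores:", scores[0].round(2))
```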
You may like...
Biomedical Diagnostics and Clinical… (Manuela Pereira, Mario Freire) Hardcover, R6,154 (Discovery Miles 61 540)
Patterns for API Design - Simplifying… (Olaf Zimmermann, Mirko Stocker, …) Paperback
XML in Data Management - Understanding… (Peter Aiken, M. David Allen) Paperback, R1,150 (Discovery Miles 11 500)