As the most comprehensive reference work dealing with knowledge management (KM), this work is essential for the library of every KM practitioner, researcher, and educator. Written by an international array of KM luminaries, its approx. 60 chapters approach knowledge management from a wide variety of perspectives ranging from classic foundations to cutting-edge thought, informative to provocative, theoretical to practical, historical to futuristic, human to technological, and operational to strategic. The chapters are conveniently organized into 8 major sections. The second volume consists of the sections: technologies for knowledge management, outcomes of KM, knowledge management in action, and the KM horizon. Novices and experts alike will refer to the authoritative and stimulating content again and again for years to come.
Foreword by Lars Knudsen
Blockchain Technology Solutions for the Security of IoT-Based Healthcare Systems explores the various benefits and challenges associated with the integration of blockchain with IoT healthcare systems, focusing on designing cognitive-embedded data technologies to aid better decision-making, processing and analysis of large amounts of data collected through IoT. This book series targets the adaptation of decision-making approaches under cognitive computing paradigms to demonstrate how the proposed procedures, as well as big data and Internet of Things (IoT) problems, can be handled in practice. Current IoT-based healthcare systems are incapable of sharing data between platforms efficiently and of holding them securely at the logical and physical level. To this end, blockchain technology guarantees a fully autonomous and secure ecosystem by exploiting the combined advantages of smart contracts and global consensus. However, incorporating blockchain technology in IoT healthcare systems is not easy: centralized networks in their current capacity will be incapable of meeting the data storage demands of the incoming surge of IoT-based healthcare wearables.
This study, written in the context of its first publication in 1970, discusses and documents the invasion of privacy by the corporation and the social institution in the search for efficiency in information processing. Discussing areas such as the impact of the computer on administration, privacy and the storage of information, the authors assess the technical and social feasibility of constructing integrated data banks to cover the details of populations. The book was hugely influential both in terms of scholarship and legislation, and the years following saw the introduction of the Data Protection Act of 1984, which was then consolidated by the Act of 1998. The topics under discussion remain of great concern to the public in our increasingly web-based world, ensuring the continued relevance of this title to academics and students with an interest in data protection and public privacy.
also in: THE KLUWER INTERNATIONAL SERIES ON ASIAN STUDIES IN COMPUTER AND INFORMATION SCIENCE, Volume 2
Synchronizing E-Security is a critical investigation and empirical analysis of studies conducted among companies that support electronic commerce transactions in both advanced and developing economies. This book presents insights into the validity and credibility of current risk assessment methods that support electronic transactions in the global economy. Synchronizing E-Security focuses on a number of case studies of IT companies, within selected countries in West Africa, Europe, Asia and the United States. The foundation of this work is based on previous studies by Williams G., Avudzivi P.V (Hawaii 2002) on the retrospective view of information security management and the impact of tele-banking on the end-user.
Data stewards in any organization are the backbone of a successful data governance implementation because they do the work to make data trusted, dependable, and high quality. Since the publication of the first edition, there have been critical new developments in the field, such as integrating Data Stewardship into project management, handling Data Stewardship in large international companies, handling "big data" and Data Lakes, and a pivot in the overall thinking around the best way to align data stewardship to the data: moving from business/organizational function to data domain. Furthermore, the role of process in data stewardship is now recognized as key and needed to be covered. Data Stewardship, Second Edition provides clear and concise practical advice on implementing and running data stewardship, including guidelines on how to organize based on organizational/company structure, business functions, and data ownership. The book shows data managers how to gain support for a stewardship effort, maintain that support over the long-term, and measure the success of the data stewardship effort. It includes detailed lists of responsibilities for each type of data steward and strategies to help the Data Governance Program Office work effectively with the data stewards.
This thesis primarily focuses on how to carry out intelligent sensing and understand the high-dimensional and low-quality visual information. After exploring the inherent structures of the visual data, it proposes a number of computational models covering an extensive range of mathematical topics, including compressive sensing, graph theory, probabilistic learning and information theory. These computational models are also applied to address a number of real-world problems including biometric recognition, stereo signal reconstruction, natural scene parsing, and SAR image processing.
Real-time computer systems are very often subject to dependability requirements because of their application areas. Fly-by-wire airplane control systems, control of power plants, industrial process control systems and others are required to continue their function despite faults. Fault-tolerance and real-time requirements thus constitute a kind of natural combination in process control applications. Systematic fault-tolerance is based on redundancy, which is used to mask failures of individual components. The problem of replica determinism is thereby to ensure that replicated components show consistent behavior in the absence of faults. It might seem trivial that, given an identical sequence of inputs, replicated computer systems will produce consistent outputs. Unfortunately, this is not the case. The problem of replica non-determinism and the presentation of its possible solutions is the subject of Fault-Tolerant Real-Time Systems: The Problem of Replica Determinism. The field of automotive electronics is an important application area of fault-tolerant real-time systems. Systems like anti-lock braking, engine control, active suspension or vehicle dynamics control have demanding real-time and fault-tolerance requirements. These requirements have to be met even in the presence of very limited resources since cost is extremely important. Because of its interesting properties Fault-Tolerant Real-Time Systems gives an introduction to the application area of automotive electronics. The requirements of automotive electronics are a topic of discussion in the remainder of this work and are used as a benchmark to evaluate solutions to the problem of replica determinism.
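The replica-determinism problem described above can be sketched in a few lines: suppose (hypothetically) two replicas receive the identical input but consult their own, slightly skewed local clocks when enforcing a deadline. Their outputs then diverge even though no fault has occurred. The scenario, names, and numbers below are illustrative assumptions, not taken from the book.

```python
# Sketch of replica non-determinism: two fault-free replicas, same input,
# divergent outputs because each reads its own (skewed) local clock.

DEADLINE_MS = 100.0  # hypothetical deadline for accepting a message

def replica(arrival_ms, clock_skew_ms):
    """Accept the message only if it appears to arrive before the deadline."""
    observed = arrival_ms + clock_skew_ms  # each replica sees its own clock
    return "accept" if observed <= DEADLINE_MS else "reject"

# Same physical input: the message arrives at t = 99.5 ms.
print(replica(99.5, 0.0))  # replica A, no skew
print(replica(99.5, 1.0))  # replica B, 1 ms skew: inconsistent output
```

The divergence is exactly what systematic fault-tolerance must prevent: a voter comparing these two outputs would wrongly conclude that one replica has failed.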
This volume contains the proceedings of two conferences held as part of the 21st IFIP World Computer Congress in Brisbane, Australia, 20-23 September 2010. The first part of the book presents the proceedings of DIPES 2010, the 7th IFIP Conference on Distributed and Parallel Embedded Systems. The conference, introduced in a separate preface by the Chairs, covers a range of topics from specification and design of embedded systems through to dependability and fault tolerance. The second part of the book contains the proceedings of BICC 2010, the 3rd IFIP Conference on Biologically-Inspired Collaborative Computing. The conference is concerned with emerging techniques from research areas such as organic computing, autonomic computing and self-adaptive systems, where inspiration for techniques derives from exhibited behaviour in nature and biology. Such techniques require the use of research developed by the DIPES community in supporting collaboration over multiple systems. We hope that the combination of the two proceedings will add value for the reader and advance our related work.
Covering some of the most cutting-edge research on the delivery and retrieval of interactive multimedia content, this volume of specially chosen contributions provides the most updated perspective on one of the hottest contemporary topics. The material represents extended versions of papers presented at the 11th International Workshop on Image Analysis for Multimedia Interactive Services, a vital international forum on this fast-moving field. Logically organized in discrete sections that approach the subject from its various angles, the content deals in turn with content analysis, motion and activity analysis, high-level descriptors and video retrieval, 3-D and multi-view, and multimedia delivery. The chapters cover the finest detail of emerging techniques such as the use of high-level audio information in improving scene segmentation and the use of subjective logic for forensic visual surveillance. On content delivery, the book examines both images and video, focusing on key subjects including an efficient pre-fetching strategy for JPEG 2000 image sequences. Further contributions look at new methodologies for simultaneous block reconstruction and provide a trellis-based algorithm for faster motion-vector decision making.
Very little has been written to address the emerging trends in social software and technology. With these technologies and applications being relatively new and evolving rapidly, research is wide open in these fields. Social Software and Web 2.0 Technology Trends fills this critical research need, providing an overview of the current state of Web 2.0 technologies and their impact on organizations and educational institutions. Written for academicians and practicing managers, this estimable book presents business applications as well as implementations for institutions of higher education with numerous examples of how these technologies are currently being used. Delivering authoritative insights to a rapidly evolving domain of technology application, this book is an invaluable resource for both academic libraries and for classroom instruction.
Data Mining Methods for Knowledge Discovery provides an introduction to the data mining methods that are frequently used in the process of knowledge discovery. This book first elaborates on the fundamentals of each of the data mining methods: rough sets, Bayesian analysis, fuzzy sets, genetic algorithms, machine learning, neural networks, and preprocessing techniques. The book then goes on to thoroughly discuss these methods in the setting of the overall process of knowledge discovery. Numerous illustrative examples and experimental findings are also included. Each chapter comes with an extensive bibliography. Data Mining Methods for Knowledge Discovery is intended for senior undergraduate and graduate students, as well as a broad audience of professionals in computer and information sciences, medical informatics, and business information systems.
The volume "Fuzziness in Database Management Systems" is a highly informative, well-organized and up-to-date collection of contributions authored by many of the leading experts in its field. Among the contributors are the editors, Professors Patrick Bosc and Janusz Kacprzyk, both of whom are known internationally. The book is like a movie with an all-star cast. The issue of fuzziness in database management systems has a long history. It begins in 1968 and 1971, when I spent my sabbatical leaves at the IBM Research Laboratory in San Jose, California, as a visiting scholar. During these periods I was associated with Dr. E.F. Codd, the father of relational models of database systems, and came in contact with the developers of IBM's System R and SQL. These associations and contacts at a time when the methodology of relational models of data was in its formative stages made me aware of the basic importance of such models and the desirability of extending them to fuzzy database systems and fuzzy query languages. This perception was reflected in my 1973 IBM report which led to the paper on the concept of a linguistic variable and later to the paper on the meaning representation language PRUF (Possibilistic Relational Universal Fuzzy). More directly related to database issues during that period were the theses of my students V. Tahani, J. Yang, A. Bolour, M. Shen and R. Sheng, and many subsequent reports by both graduate and undergraduate students at Berkeley.
Cryptography, secret writing, is enjoying a scientific renaissance following the seminal discovery in 1977 of public-key cryptography and applications in computers and communications. This book gives a broad overview of public-key cryptography - its essence and advantages, various public-key cryptosystems, and protocols - as well as a comprehensive introduction to classical cryptography and cryptanalysis. The second edition has been revised and enlarged especially in its treatment of cryptographic protocols. From a review of the first edition: "This is a comprehensive review ... there can be no doubt that this will be accepted as a standard text. At the same time, it is clearly and entertainingly written ... and can certainly stand alone." Alex M. Andrew, Kybernetes, March 1992
Information and communication technology (ICT) is permeating all aspects of service management; in the public sector, ICT is improving the capacity of government agencies to provide a wide array of innovative services that benefit citizens. E-Government is emerging as a multidisciplinary field of research based initially on empirical insights from practice. Efforts to theoretically anchor the field have opened perspectives from multiple research domains, as demonstrated in Practical Studies in E-Government. In this volume, the editors and contributors consider the evolution of the e-government field from both practical and research perspectives. Featuring in-depth case studies of initiatives in eight countries, the book deals with such technology-oriented issues as interoperability, prototyping, data quality, and advanced interfaces, and such management-oriented issues as e-procurement, e-identification, election results verification, and information privacy. The book features best practices, tools for measuring and improving performance, and analytical methods for researchers.
Handbook of Economic Expectations discusses the state-of-the-art in the collection, study and use of expectations data in economics, including the modelling of expectations formation and updating, as well as open questions and directions for future research. The book spans a broad range of fields, approaches and applications using data on subjective expectations that allow us to make progress on fundamental questions around the formation and updating of expectations by economic agents and their information sets. The information included will help us study heterogeneity and potential biases in expectations and analyze impacts on behavior and decision-making under uncertainty.
Multivariate data analysis is a central tool whenever several variables need to be considered at the same time. The present book explains a powerful and versatile way to analyse data tables, suitable also for researchers without formal training in statistics. This method for extracting useful information from data is demonstrated for various types of quality assessment, ranging from human quality perception via industrial quality monitoring to health quality and its molecular basis.
The book is written with ISO certified businesses and laboratories in mind, to enhance Total Quality Management (TQM). As yet there are no clear guidelines for realistic data analysis of quality in complex systems - this volume bridges the gap.
Semantics, Web services, and Web processes promise better re-use, universal interoperability and integration. Semantics has been recognized as the primary tool to address the challenges of a broad spectrum of heterogeneity and for improving automation through machine-understandable descriptions. Semantic Web Services, Processes and Applications brings contributions from researchers who study, explore and understand the semantic enabling of all phases of semantic Web processes. This encompasses design, annotation, discovery, choreography and composition. This book also presents fundamental capabilities and techniques associated with ontological modeling of services, annotation, matching and mapping, and reasoning. This is complemented by discussion of applications in e-Government and bioinformatics. Special bulk rates are available for course adoption through Publishing Editor.
In the chapters in Part I of this textbook the author introduces the fundamental ideas of artificial intelligence and computational intelligence. In Part II he explains key AI methods such as search, evolutionary computing, logic-based reasoning, knowledge representation, rule-based systems, pattern recognition, neural networks, and cognitive architectures. Finally, in Part III, he expands the context to discuss theories of intelligence in philosophy and psychology, key applications of AI systems, and the likely future of artificial intelligence. A key feature of the author's approach is historical and biographical footnotes, stressing the multidisciplinary character of the field and its pioneers. The book is appropriate for advanced undergraduate and graduate courses in computer science, engineering, and other applied sciences, and the appendices offer short formal, mathematical models and notes to support the reader.
This book investigates the ways in which these systems can promote public value by encouraging the disclosure and reuse of privately-held data in ways that support collective values such as environmental sustainability. Supported by funding from the National Science Foundation, the authors' research team has been working on one such system, designed to enhance consumers' ability to access information about the sustainability of the products that they buy and the supply chains that produce them. Pulled by rapidly developing technology and pushed by budget cuts, politicians and public managers are attempting to find ways to increase the public value of their actions. Policymakers are increasingly acknowledging the potential that lies in publicly disclosing more of the data that they hold, as well as incentivizing individuals and organizations to access, use, and combine it in new ways. Due to technological advances which include smarter phones, better ways to track objects and people as they travel, and more efficient data processing, it is now possible to build systems which use shared, transparent data in creative ways. The book adds to the current conversation among academics and practitioners about how to promote public value through data disclosure, focusing particularly on the roles that governments, businesses and non-profit actors can play in this process, making it of interest to both scholars and policy-makers.
Decision diagrams (DDs) are data structures for efficient (time/space) representations of large discrete functions. In addition to their wide application in engineering practice, DDs are now a standard part of many CAD systems for logic design and a basis for several signal processing algorithms. "Spectral Interpretation of Decision Diagrams" derives from attempts to classify and uniformly interpret DDs through spectral interpretation methods, relating them to different Fourier-series-like functional expressions for discrete functions and a group-theoretic approach to DD optimization. The book examines DDs found in literature and engineering practice and provides insights into relationships between DDs and different polynomial or spectral expressions for representation of discrete functions. In addition, it offers guidelines and criteria for selection of the most suitable representation in terms of space and time complexity. The work complements theory with numerous illustrative examples from practice. Moreover, the importance of DD representations to the verification and testing of arithmetic circuits is addressed, as well as problems related to various signal processing tasks.
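As a toy illustration of the compactness the description refers to, the sketch below builds a minimal reduced, ordered binary decision diagram with hash-consed nodes. The reduction and sharing rules shown are the standard BDD rules; the code is an illustrative assumption, not the book's spectral machinery.

```python
# Minimal reduced-ordered BDD sketch. Nodes are hash-consed so isomorphic
# subgraphs are shared, which is what makes decision diagrams compact.

_table = {}  # unique table: (var, lo-id, hi-id) -> shared node

def node(var, lo, hi):
    if lo is hi:                  # reduction rule: drop a redundant test
        return lo
    key = (var, id(lo), id(hi))
    if key not in _table:         # sharing rule: reuse identical nodes
        _table[key] = (var, lo, hi)
    return _table[key]

TRUE, FALSE = True, False

# f(x0, x1) = x0 AND x1, built bottom-up with variable order x0 < x1
x1_node = node(1, FALSE, TRUE)
f = node(0, FALSE, x1_node)

def evaluate(n, assignment):
    """Follow lo/hi edges according to the variable assignment."""
    while isinstance(n, tuple):
        var, lo, hi = n
        n = hi if assignment[var] else lo
    return n

print(evaluate(f, {0: True, 1: True}))   # True
print(evaluate(f, {0: True, 1: False}))  # False
```

Because `node` returns an existing node whenever an identical one was built before, a function over many variables can be represented by a graph far smaller than its truth table.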
Earth date, August 11, 1997. "Beam me up, Scottie!" "We cannot do it! This is not Star Trek's Enterprise. This is early-years Earth." True, this is not yet the era of Star Trek: we cannot beam Captain James T. Kirk or Captain Jean-Luc Picard or an apple or anything else anywhere. What we can do, though, is beam information about Kirk or Picard or an apple or an insurance agent. We can beam a record of a patient, the status of an engine, a weather report. We can beam this information anywhere: to mobile workers, to field engineers, to a truck loading apples, to ships crossing the oceans, to web surfers. We have reached a point where the promise of information access anywhere and anytime is close to realization. The enabling technology, wireless networks, exists; what remains to be achieved is providing the infrastructure and the software to support the promise. Universal access and management of information has been one of the driving forces in the evolution of computer technology. Central computing gave the ability to perform large and complex computations and advanced information manipulation. Advances in networking connected computers together and led to distributed computing. Web technology and the Internet went even further, providing hyper-linked information access and global computing. However, restricting access stations to physical locations limits the boundary of the vision.
You may like...
Opinion Mining and Text Analytics on… by Pantea Keikhosrokiani, Moussa Pourya Asl (Hardcover, R9,276 / Discovery Miles 92 760)
Big Data and Smart Service Systems by Xiwei Liu, Rangachari Anand, … (Hardcover)
Management Of Information Security by Michael Whitman, Herbert Mattord (Paperback)
Blockchain Life - Making Sense of the… by Kary Oberbrunner, Lee Richter (Hardcover, R506 / Discovery Miles 5 060)
Database Principles - Fundamentals of… by Carlos Coronel, Keeley Crockett, … (Paperback)