Research into Fully Integrated Data Environments (FIDE) has the goal of substantially improving the quality of application systems while reducing the cost of building and maintaining them. Application systems invariably involve the long-term storage of data over months or years. Much unnecessary complexity obstructs the construction of these systems when conventional databases, file systems, operating systems, communication systems, and programming languages are used. This complexity limits the sophistication of the systems that can be built, generates operational and usability problems, and deleteriously impacts both reliability and performance. This book reports on the work of researchers in the Esprit FIDE projects to design and develop a new integrated environment to support the construction and operation of such persistent application systems. It reports on the principles they employed to design it, the prototypes they built to test it, and their experience using it.
It is common wisdom that gathering a variety of views and inputs improves the process of decision making, and, indeed, underpins a democratic society. Dubbed "ensemble learning" by researchers in computational intelligence and machine learning, it is known to improve a decision system's robustness and accuracy. Now, fresh developments are allowing researchers to unleash the power of ensemble learning in an increasing range of real-world applications. Ensemble learning algorithms such as "boosting" and "random forest" facilitate solutions to key computational issues such as face recognition and are now being applied in areas as diverse as object tracking and bioinformatics. Responding to a shortage of literature dedicated to the topic, this volume offers comprehensive coverage of state-of-the-art ensemble learning techniques, including the random forest skeleton tracking algorithm in the Xbox Kinect sensor, which bypasses the need for game controllers. At once a solid theoretical study and a practical guide, the volume is a windfall for researchers and practitioners alike.
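To ground the idea, the following is a minimal, self-contained sketch of bagging (bootstrap aggregating), one core mechanism behind ensemble methods such as random forests. The toy data, the one-dimensional "stump" learner, and all names are illustrative inventions, not code or examples from the book:

```python
# Illustrative sketch only: bagging with 1-D decision stumps and a
# majority vote. Names, data, and learner are invented for this note.
import random
from collections import Counter

random.seed(0)

def train_stump(points):
    # points: list of (x, label) with label in {-1, +1};
    # pick the (threshold, sign) pair with the fewest training errors
    best = None
    for thr, _ in points:
        for sign in (1, -1):
            errors = sum(1 for x, y in points
                         if (1 if sign * (x - thr) > 0 else -1) != y)
            if best is None or errors < best[0]:
                best = (errors, thr, sign)
    _, thr, sign = best
    return lambda x: 1 if sign * (x - thr) > 0 else -1

def bagged_ensemble(points, n_learners=25):
    stumps = []
    for _ in range(n_learners):
        boot = [random.choice(points) for _ in points]  # bootstrap sample
        stumps.append(train_stump(boot))
    # majority vote across the ensemble
    return lambda x: Counter(s(x) for s in stumps).most_common(1)[0][0]

# Noisy toy data: label +1 above x = 5.0, with two deliberately flipped labels
data = [(i * 0.5, (1 if i * 0.5 > 5 else -1) * (-1 if i in (3, 17) else 1))
        for i in range(20)]
predict = bagged_ensemble(data)
print(predict(2.0), predict(8.0))  # expected: -1 1
```

Each stump is weak on its own, but the majority vote over bootstrap-trained stumps tends to be robust to the flipped labels, which is the robustness gain the blurb describes.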
Sensor network data management poses new challenges outside the scope of conventional systems where data is represented and regulated. Intelligent Techniques for Warehousing and Mining Sensor Network Data presents fundamental and theoretical issues pertaining to data management. Covering a broad range of topics on warehousing and mining sensor networks, this advanced title provides significant industry solutions to those in database, data warehousing, and data mining research communities.
This book paves the way for researchers working on sustainable interdependent networks across the fields of computer science, electrical engineering, and smart infrastructures. It gives readers a comprehensive, in-depth picture of smart cities as a thorough example of interdependent large-scale networks, in both theory and application. The contributors specify the importance and position of interdependent networks in the context of developing sustainable smart cities and provide a comprehensive investigation of recently developed optimization methods for large-scale networks. There has been an emerging concern regarding the optimal operation of power and transportation networks. In this second volume of Sustainable Interdependent Networks, we focus on the interdependencies of these two networks, on optimization methods that deal with their computational complexity, and on their role in future smart cities. We further investigate other networks, such as communication networks, that indirectly affect the operation of power and transportation networks. Our reliance on these networks as global platforms for sustainable development has led to the need to develop novel means of dealing with arising issues. The considerable scale of such networks, due to the large number of buses in smart power grids and the increasing number of electric vehicles in transportation networks, brings a wide variety of computational complexity and optimization challenges. Although optimizing these networks independently leads to locally optimal operation points, there is an exigent need to move toward the globally optimal operation point of such networks while properly satisfying the constraints of each network. The book is suitable for senior undergraduate students, for graduate students interested in research in multidisciplinary areas related to future sustainable networks, and for researchers working in the related areas. It also covers applications of interdependent networks, which makes it a useful source of study for audiences outside academia seeking a general insight into interdependent networks.
Extracting content from text continues to be an important research problem for information processing and management. Approaches to capture the semantics of text-based document collections may be based on Bayesian models, probability theory, vector space models, statistical models, or even graph theory. As the volume of digitized textual media continues to grow, so does the need for designing robust, scalable indexing and search strategies (software) to meet a variety of user needs. Knowledge extraction or creation from text requires systematic yet reliable processing that can be codified and adapted for changing needs and environments. This book will draw upon experts in both academia and industry to recommend practical approaches to the purification, indexing, and mining of textual information. It will address document identification, clustering and categorizing documents, cleaning text, and visualizing semantic models of text.
Even in the age of ubiquitous computing, the importance of the Internet will not change, and we still need to solve conventional security issues. In addition, we need to deal with new issues such as security in the P2P environment, privacy issues in the use of smart cards, and RFID systems. Security and Privacy in the Age of Ubiquitous Computing addresses these issues and more by exploring a wide scope of topics. The volume presents a selection of papers from the proceedings of the 20th IFIP International Information Security Conference, held from May 30 to June 1, 2005 in Chiba, Japan. Topics covered include cryptography applications, authentication, privacy and anonymity, DRM and content security, computer forensics, Internet and web security, security in sensor networks, intrusion detection, commercial and industrial security, authorization and access control, information warfare, and critical infrastructure protection. These papers represent the most current research in information security, including research funded in part by DARPA and the National Science Foundation.
This book presents principles and applications for expanding storage space from 2-D to 3-D and even multi-D, including gray scale, color (light of different wavelengths), polarization, and coherence of light. These enable improvements in the density, capacity, and data transfer rate of optical data storage. Moreover, the implementation technologies used to build mass data storage devices are described systematically. Some new media, which have linear absorption characteristics for light of different wavelengths and intensities together with high sensitivity, are introduced for multi-wavelength and multi-level optical storage. This book can serve as a useful reference for researchers, engineers, and graduate and undergraduate students in material science, information science, and optics.
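As a hedged back-of-envelope illustration of why these added dimensions raise capacity (our own idealized reasoning, not figures from the book): recording one of L distinguishable gray levels per mark stores log2(L) bits, and multiplexing over W wavelengths and P polarization states multiplies the effective capacity accordingly:

```latex
C \;\approx\; C_{\mathrm{binary}} \times W \times P \times \log_2 L
```

For instance, 8 gray levels, 4 wavelengths, and 2 polarization states would give roughly 3 x 4 x 2 = 24 times the binary, single-wavelength capacity, assuming (idealistically) that the dimensions can be written and read out independently.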
Data mining provides a set of new techniques to integrate, synthesize, and analyze data, uncovering the hidden patterns that exist within. Traditionally, techniques such as kernel learning methods, pattern recognition, and data mining have been the domain of researchers in areas such as artificial intelligence. But leveraging these tools, techniques, and concepts against your data asset, to identify problems early, understand existing interactions, and highlight previously unrealized relationships, can provide significant value for the investigator and her organization.
This book introduces some recent advances in fuzzy database modeling for non-traditional applications. The focus is on database models for representing complex information and uncertainty at the conceptual, logical, and physical design levels, as well as in integrity constraints defined on fuzzy relations.
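For readers new to the topic, here is a toy illustration of the central object, a fuzzy relation, that is, tuples carrying a membership degree in [0, 1], together with an alpha-cut selection and a simple integrity check. The schema, threshold, and data are invented for illustration and are not from the book:

```python
# Illustrative fuzzy relation: tuples carry a membership degree in [0, 1].
employees = [
    # (name, age class, membership degree of the tuple in the relation)
    ("alice", "young",       0.9),
    ("bob",   "middle-aged", 0.6),
    ("carol", "young",       0.3),
]

def alpha_select(relation, label, alpha=0.5):
    # alpha-cut: keep tuples whose membership degree reaches the threshold
    return [t for t in relation if t[1] == label and t[2] >= alpha]

def degrees_valid(relation):
    # a trivial integrity constraint over the fuzzy relation
    return all(0.0 <= t[2] <= 1.0 for t in relation)

print(alpha_select(employees, "young"))  # [('alice', 'young', 0.9)]
print(degrees_valid(employees))          # True
```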
"Foundations of Data Mining and Knowledge Discovery" contains the latest results and new directions in data mining research. Data mining, which integrates various technologies, including computational intelligence, database and knowledge management, machine learning, soft computing, and statistics, is one of the fastest growing fields in computer science. Although many data mining techniques have been developed, further development of the field requires a close examination of its foundations. This volume presents the results of investigations into the foundations of the discipline, and represents the state of the art for much of the current research. This book will prove extremely valuable and fruitful for data mining researchers, no matter whether they would like to uncover the fundamental principles behind data mining, or apply the theories to practical applications.
Real-time systems are now used in a wide variety of applications. Conventionally, they were configured at design time to perform a given set of tasks and could not readily adapt to dynamic situations. The concept of imprecise and approximate computation has emerged as a promising approach to providing scheduling flexibility and enhanced dependability in dynamic real-time systems. The concept can be utilized in a wide variety of applications, including signal processing, machine vision, databases, and networking. Dynamic real-time systems must deal safely with resource unavailability while continuing to operate, which can lead to situations where computations cannot be carried through to completion. For those who wish to build such systems, the techniques of imprecise and approximate computation facilitate the generation of partial results that may enable the system to operate safely and avert catastrophe. Audience: of special interest to researchers. May be used as a supplementary text in courses on real-time systems.
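A minimal sketch of the imprecise-computation idea, assuming a simple time-budget interface (the task and the API are illustrative, not from the book): an anytime computation refines its answer until its budget expires, then returns the best partial result instead of failing outright:

```python
# Illustrative anytime task: refine an estimate of pi until the time
# budget runs out, then return the partial (imprecise) result.
import time

def estimate_pi(budget_seconds):
    deadline = time.monotonic() + budget_seconds
    total, k = 0.0, 0
    while time.monotonic() < deadline:
        total += (-1) ** k / (2 * k + 1)  # one refinement step (Leibniz series)
        k += 1
    return 4 * total, k  # best-so-far value and how far we got

value, steps = estimate_pi(0.01)  # a 10 ms budget
print(f"pi ~ {value:.6f} after {steps} terms (partial result)")
```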
This book focuses on next-generation data technologies in support of collective and computational intelligence. It brings various next-generation data technologies together to capture, integrate, analyze, mine, annotate, and visualize distributed data, made available by various community users, in a manner that is meaningful and collaborative for the organization. A unique perspective on collective computational intelligence is offered by embracing both theory and strategy fundamentals such as data clustering, graph partitioning, collaborative decision making, self-adaptive ant colonies, and swarm and evolutionary agents. It also covers emerging and next-generation technologies in support of collective computational intelligence, such as Web 2.0 social networks, the semantic web for data annotation, knowledge representation and inference, data privacy and security, and enabling distributed and collaborative paradigms such as P2P, Grid, and Cloud Computing, suited to the geographically dispersed and distributed nature of the data. The book aims to cover comprehensively the combined effort of utilizing and integrating various next-generation collaborative and distributed data technologies for computational intelligence in various scenarios. It also distinguishes itself by assessing whether the utilization and integration of next-generation data technologies can assist in identifying new opportunities that may be strategically fit for purpose.
This volume contains the proceedings of IFIPTM 2010, the 4th IFIP WG 11.11 International Conference on Trust Management, held in Morioka, Iwate, Japan during June 16-18, 2010. IFIPTM 2010 provided a truly global platform for the reporting of research, development, policy, and practice in the interdependent areas of privacy, security, and trust. Building on the traditions inherited from the highly successful iTrust conference series, the IFIPTM 2007 conference in Moncton, New Brunswick, Canada, the IFIPTM 2008 conference in Trondheim, Norway, and the IFIPTM 2009 conference at Purdue University in Indiana, USA, IFIPTM 2010 focused on trust, privacy and security from multidisciplinary perspectives. The conference is an arena for discussion on relevant problems from both research and practice in the areas of academia, business, and government. IFIPTM 2010 was an open IFIP conference. The program of the conference featured both theoretical research papers and reports of real-world case studies. IFIPTM 2010 received 61 submissions from 25 different countries: Japan (10), UK (6), USA (6), Canada (5), Germany (5), China (3), Denmark (2), India (2), Italy (2), Luxembourg (2), The Netherlands (2), Switzerland (2), Taiwan (2), Austria, Estonia, Finland, France, Ireland, Israel, Korea, Malaysia, Norway, Singapore, Spain, Turkey. The Program Committee selected 18 full papers for presentation and inclusion in the proceedings. In addition, the program and the proceedings include two invited papers by academic experts in the fields of trust management, privacy and security, namely, Toshio Yamagishi and Pamela Briggs.
Information Retrieval: Algorithms and Heuristics is a comprehensive introduction to the study of information retrieval, covering both effectiveness and run-time performance. The presentation focuses on the algorithms and heuristics used to find documents relevant to a user request, and to find them fast. The most commonly used algorithms and heuristics are tackled through multiple examples. To facilitate understanding and application, introductions to and discussions of computational linguistics, natural language processing, probability theory, and library and computer science are provided. While this text focuses on algorithms and not on commercial products per se, the basic strategies used by many commercial products are described. Techniques that can be used to find information on the Web, as well as in other large information collections, are included. This volume is an invaluable resource for researchers, practitioners, and students working in information retrieval and databases. For instructors, a set of PowerPoint slides, including speaker notes, is available online from the authors.
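As a flavor of the kind of algorithmic material such a text covers, here is a minimal TF-IDF ranking sketch over a toy corpus; the corpus and the particular weighting variant (log idf, raw term frequency, a simple length normalization) are illustrative choices, not the book's specific presentation:

```python
# Illustrative TF-IDF ranking over a three-document toy corpus.
import math
from collections import Counter

docs = ["databases store data",
        "retrieval of data from databases",
        "neural networks"]
tokenized = [d.split() for d in docs]
N = len(docs)
df = Counter(t for doc in tokenized for t in set(doc))  # document frequency
idf = {t: math.log(N / df[t]) for t in df}              # rarer terms weigh more

def score(query, doc_terms):
    tf = Counter(doc_terms)
    # sum of tf * idf over query terms, normalized by document length
    raw = sum(tf[t] * idf.get(t, 0.0) for t in query.split())
    return raw / math.sqrt(len(doc_terms))

query = "data databases"
ranked = sorted(range(N), key=lambda i: -score(query, tokenized[i]))
print([docs[i] for i in ranked])  # database documents rank first
```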
Multiple processor systems are an important class of parallel systems. Over the years, several architectures have been proposed to build such systems to satisfy the requirements of high performance computing. These architectures span a wide variety of system types. At the low end of the spectrum, we can build a small, shared-memory parallel system with tens of processors. These systems typically use a bus to interconnect the processors and memory. Such systems, for example, are becoming commonplace in high-performance graphics workstations. These systems are called uniform memory access (UMA) multiprocessors because they provide uniform access to memory for all processors. These systems provide a single address space, which is preferred by programmers. This architecture, however, cannot be extended even to medium systems with hundreds of processors due to bus bandwidth limitations. To scale systems to the medium range, i.e., to hundreds of processors, non-bus interconnection networks have been proposed. These systems, for example, use a multistage dynamic interconnection network. Such systems also provide global, shared memory like the UMA systems. However, they introduce local and remote memories, which lead to a non-uniform memory access (NUMA) architecture. Distributed-memory architecture is used for systems with thousands of processors. These systems differ from the shared-memory architectures in that there is no globally accessible shared memory. Instead, they use message passing to facilitate communication among the processors. As a result, they do not provide a single address space.
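The two programming models this paragraph contrasts can be sketched in a few lines. The following standard-library Python sketch illustrates the models only, not any machine discussed in the book: shared-memory workers update one address space, while a distributed-memory-style worker communicates purely by messages:

```python
# Illustrative sketch of the two programming models, standard library only.
from multiprocessing import Process, Queue, Value

def shared_memory_worker(counter):
    # UMA-style view: every worker sees (and locks) one shared address space
    with counter.get_lock():
        counter.value += 1

def message_passing_worker(rank, inbox, outbox):
    # Distributed-memory style: no shared state; receive, compute, send
    data = inbox.get()
    outbox.put((rank, data * 2))

if __name__ == "__main__":
    counter = Value("i", 0)
    workers = [Process(target=shared_memory_worker, args=(counter,))
               for _ in range(4)]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
    print("shared counter:", counter.value)  # 4

    inbox, outbox = Queue(), Queue()
    w = Process(target=message_passing_worker, args=(0, inbox, outbox))
    w.start()
    inbox.put(21)
    print("message received:", outbox.get())  # (0, 42)
    w.join()
```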
With the growing use of information technology and the recent advances in web systems, the amount of data available to users has increased exponentially. Thus, there is a critical need to understand the content of the data. As a result, data mining has become a popular research topic in recent years for the treatment of the "data rich and information poor" syndrome. In this carefully edited volume, a theoretical foundation as well as important new directions for data mining research are presented. It brings together a set of well-respected data mining theoreticians and researchers with practical data mining experience. The theories presented will give data mining practitioners a scientific perspective on data mining and thus more insight into their problems, and the new data mining topics provided can be expected to stimulate further research in these important directions.
Theoretical Advances in Neural Computation and Learning brings together in one volume some of the recent advances in the development of a theoretical framework for studying neural networks. A variety of novel techniques from disciplines such as computer science, electrical engineering, statistics, and mathematics have been integrated and applied to develop ground-breaking analytical tools for such studies. This volume emphasizes the computational issues in artificial neural networks and compiles a set of pioneering research works, which together establish a general framework for studying the complexity of neural networks and their learning capabilities. This book represents one of the first efforts to highlight these fundamental results, and provides a unified platform for a theoretical exploration of neural computation. Each chapter is authored by a leading researcher and/or scholar who has made significant contributions in this area. Part 1 provides a complexity theoretic study of different models of neural computation. Complexity measures for neural models are introduced, and techniques for the efficient design of networks for performing basic computations, as well as analytical tools for understanding the capabilities and limitations of neural computation, are discussed. The results describe how the computational cost of a neural network increases with the problem size. Equally important, these results go beyond the study of single neural elements, and establish the computational power of multilayer networks. Part 2 discusses concepts and results concerning learning using models of neural computation. Basic concepts such as VC-dimension and PAC-learning are introduced, and recent results relating neural networks to learning theory are derived. In addition, a number of the chapters address fundamental issues concerning learning algorithms, such as accuracy and rate of convergence, selection of training data, and efficient algorithms for learning useful classes of mappings.
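As a taste of the learning-theoretic results Part 2 builds on, one standard PAC-learning bound (a textbook fact, not quoted from this volume) states that for a finite hypothesis class H, a learner that outputs any hypothesis consistent with m labelled examples, where

```latex
m \;\ge\; \frac{1}{\varepsilon}\left(\ln\lvert\mathcal{H}\rvert + \ln\frac{1}{\delta}\right),
```

achieves true error at most epsilon with probability at least 1 - delta over the random draw of the sample.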
The purpose of the 3rd International Conference on Enterprise Information Systems (ICEIS) was to bring together researchers, engineers, and practitioners interested in the advances and business applications of information systems. The research papers published here have been carefully selected from those presented at the conference, and focus on real world applications covering four main themes: database and information systems integration; artificial intelligence and decision support systems; information systems analysis and specification; and internet computing and electronic commerce. Audience: This book will be of interest to information technology professionals, especially those working on systems integration, databases, decision support systems, or electronic commerce. It will also be of use to middle managers who need to work with information systems and require knowledge of current trends in development methods and applications.
The volume contains the papers presented at the fifth working conference on Communications and Multimedia Security (CMS 2001), held on May 21-22, 2001 at (and organized by) GMD, the German National Research Center for Information Technology, Integrated Publication and Information Systems Institute (IPSI), in Darmstadt, Germany. The conference is arranged jointly by the Technical Committees 11 and 6 of the International Federation for Information Processing (IFIP). The name "Communications and Multimedia Security" was first used in 1995, when Reinhard Posch organized the first in this series of conferences in Graz, Austria, following up on the previously national (Austrian) "IT Sicherheit" conferences held in Klagenfurt (1993) and Vienna (1994). In 1996, CMS took place in Essen, Germany; in 1997 the conference moved to Athens, Greece. CMS 1999 was held in Leuven, Belgium. This conference provides a forum for presentations and discussions on issues which combine innovative research work with a highly promising application potential in the area of communications and multimedia security. State-of-the-art issues as well as practical experiences and new trends in these areas were topics of interest again, as at previous conferences. This year, the organizers wanted to focus attention on watermarking and copyright protection for e-commerce applications and multimedia data. We also encompass excellent work on recent advances in cryptography and their applications. In recent years, digital media data have enormously gained in importance.
As adoption of Electronic Health Record Systems (EHR-Ss) shifts from early adopters to the mainstream, an increasingly large group of decision makers must assess what they want from EHR-Ss and how to go about making their choices. The purpose of this book is to inform that decision. This book explains the typical needs of a variety of stakeholders, describes current and imminent technologies, and assesses the available evidence regarding issues in implementing and using EHR-Ss. Divided into four important sections (Needs, Current State, Technology, and Going Forward), the book provides the background and general notions regarding EHR-Ss and lays out the framework; delves into the historical review; presents a high-level view of EHR systems, focused on the needs of different stakeholders in health care and the health enterprise; offers practical views of existing systems and current (and short-term future) issues in specifying an EHR system and deciding how to approach the institution of such a system; deals with technology issues, from front end to back end; and describes where we are and where we should be going with EHR systems. Designed for use by chief information officers, chief medical informatics officers, medical liaisons to hospital systems, private practitioners, and business managers at academic and non-academic hospitals, care management organizations, and practices. The book could be used in any medical or health informatics course, at any level (undergraduate, fellowship, MBA).
Rules represent a simplified means of programming, congruent with our understanding of human brain constructs. With the advent of business rules management systems, it has been possible to introduce rule-based programming to nonprogrammers, allowing them to map expert intent into code in applications such as fraud detection, financial transactions, healthcare, retail, and marketing. However, a remaining concern is the quality, safety, and reliability of the resulting programs. This book is on business rules programs, that is, rule programs as handled in business rules management systems. Its conceptual contribution is to present the foundation for treating business rules as a topic of scientific investigation in semantics and program verification, while its technical contribution is to present an approach to the formal verification of business rules programs. The author proposes a method for proving correctness properties for a business rules program in a compositional way, meaning that the proof of a correctness property for a program is built up from correctness properties for the individual rules, thus bridging a gap between the intuitive understanding of rules and the formal semantics of rule programs. With this approach the author enables rule authors and tool developers to understand, express formally, and prove properties of the execution behavior of business rules programs. This work will be of interest to practitioners and researchers in the areas of program verification, enterprise computing, database management, and artificial intelligence.
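A hedged sketch of the compositional idea: prove a property of a rules program by checking that each individual rule preserves it. The rules, state space, and invariant below are invented for illustration, and the check is an exhaustive test over a small finite state space, a stand-in for the symbolic proofs the book develops:

```python
# Illustrative compositional check: an invariant is verified rule by rule.
RULES = [
    # (name, guard, action) over a simple transaction state
    ("approve_small", lambda s: s["amount"] <= 1000 and not s["flagged"],
                      lambda s: {**s, "status": "approved"}),
    ("flag_large",    lambda s: s["amount"] > 1000,
                      lambda s: {**s, "flagged": True, "status": "review"}),
]

def invariant(s):
    # Correctness property: a flagged transaction is never approved
    return not (s["flagged"] and s["status"] == "approved")

def preserves_invariant(rule, states):
    # Compositional step: if the rule fires from a good state,
    # the resulting state must still satisfy the invariant
    name, guard, action = rule
    return all(invariant(action(s)) for s in states if guard(s) and invariant(s))

# Small finite state space standing in for a symbolic domain
states = [{"amount": a, "flagged": f, "status": st}
          for a in (500, 5000) for f in (False, True)
          for st in ("new", "review", "approved")]
for rule in RULES:
    print(rule[0], "preserves invariant:", preserves_invariant(rule, states))
```

Because each rule preserves the invariant in isolation, any execution that interleaves them does too, which is the gap-bridging proof structure the blurb describes.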
Integrity and Internal Control in Information Systems is a state-of-the-art book that establishes the basis for an ongoing dialogue between IT security specialists and internal control specialists so that both may work more effectively together to assist in creating effective business systems in the future. Building on the issues presented in the preceding volume of this series, this book seeks further answers to the following questions: What precisely do business managers need in order to have confidence in the integrity of their information systems and their data? What is the status quo of research and development in this area? Where are the gaps between business needs on the one hand and research and development on the other, and what needs to be done to bridge these gaps? Integrity and Internal Control in Information Systems contains the selected proceedings of the Second Working Conference on Integrity and Internal Control in Information Systems, sponsored by the International Federation for Information Processing (IFIP) and held in Warrenton, Virginia, USA, in November 1998. It will be essential reading for academics and practitioners in computer science, information technology, business informatics, accountancy, and EDP auditing.
This book thoroughly covers remote sensing visualization and analysis techniques based on computational imaging and vision in Earth science. Remote sensing is considered a significant information source for monitoring and mapping natural and man-made land cover, enabled by the development of sensor resolutions committed to different Earth observation platforms. The book includes related topics for the different systems, models, and approaches used in the visualization of remote sensing images. It offers flexible and sophisticated solutions for removing uncertainty from satellite data. It introduces real-time big data analytics to derive intelligence systems in enterprise Earth science applications. Furthermore, the book integrates statistical concepts with computer-based geographic information systems (GIS). It focuses on image processing techniques for observing data together with the uncertainty information raised by the spectral, spatial, and positional accuracy of GPS data. The book addresses several advanced improvement models to guide engineers in developing different remote sensing visualization and analysis schemes. Advanced improvement models for supervised/unsupervised classification algorithms, support vector machines, artificial neural networks, fuzzy logic, decision-making algorithms, and time series modeling and forecasting are highlighted. This book guides engineers, designers, and researchers in exploiting the intrinsic design of remote sensing systems. The book gathers remarkable material from an international panel of experts to guide readers through the development of Earth big data analytics and its challenges.