The book reviews methods for the numerical and statistical analysis of astronomical datasets, with particular emphasis on the very large databases that arise from both existing and forthcoming projects, as well as from current large-scale computer simulation studies. Leading experts give overviews of cutting-edge methods applicable in the area of astronomical data mining. Case studies demonstrate the interplay between these techniques and interesting astronomical problems. The book presents specific new methods for storing, accessing, reducing, analysing, describing and visualising astronomical data that are necessary to exploit its full potential.
Information Macrodynamics (IMD) is an interdisciplinary science that represents a new theoretical and computer-based methodology for the informational description and improvement of systems, covering activities in such areas as thinking, intelligent processes, communications, and management, along with other nonphysical subjects, their mutual interactions, informational superimposition, and the information transferred between interactions. IMD is based on the implementation of a single concept through a unique mathematical principle and formalism, rather than on an artificial combination of many arbitrary, auxiliary concepts and postulates drawn from different mathematical subjects such as game theory, automata theory, catastrophe theory, or logical operations. This concept is explored mathematically using classical mathematics, namely the calculus of variations and probability theory, which are potent enough without the need to develop new, specialized mathematical systemic methods. The formal IMD model automatically includes related results from other fields, such as linear, nonlinear, collective and chaotic dynamics, stability theory, information theory, physical analogies of classical and quantum mechanics, irreversible thermodynamics, and kinetics. The main IMD goal is to reveal information regularities, mathematically expressed by the considered variation principle (VP), which serves as a mathematical tool to extract the regularities and to define the model that describes them. The IMD regularities and mechanisms are the results of analytical solutions rather than of logical argumentation, rational introduction, or reasonable discussion. The IMD information computer modeling formalism includes a human being (as an observer, carrier, and producer of information), with restoration of the model during object observations.
Oracle 10g Developing Media Rich Applications is aimed squarely at the database administrators and programmers who build the foundation of multimedia database applications. With the release of Oracle8 Database in 1997, Oracle became the first commercial database with integrated multimedia technology for application developers. Since that time, Oracle has enhanced and extended these features to include native support for image, audio, video and streaming media storage, indexing, retrieval and processing in the Oracle Database, Application Server, and development tools. Databases are not just words and numbers for accountants; they should also exploit a full range of media to satisfy customer needs, from race car engineering to manufacturing processes to security.
This book presents the application of a comparatively simple nonparametric regression algorithm, known as the multivariate adaptive regression splines (MARS) surrogate model, which can be used to approximate the relationship between the inputs and outputs, and express that relationship mathematically. The book first describes the MARS algorithm, then highlights a number of geotechnical applications with multivariate big data sets to explore the approach's generalization capabilities and accuracy. As such, it offers a valuable resource for all geotechnical researchers, engineers, and general readers interested in big data analysis.
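To make the idea concrete, here is a minimal sketch (not taken from the book) of the hinge basis functions at the heart of MARS, fitted by ordinary least squares; the knot location and the synthetic data are illustrative assumptions.

```python
# Minimal sketch of MARS-style hinge basis functions (illustrative, not the book's code).
# MARS builds piecewise-linear models from terms like max(0, x - t) and max(0, t - x).
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 10, 200)
y = np.where(x < 4, 2 * x, 8 + 0.5 * (x - 4)) + rng.normal(0, 0.3, x.size)  # kinked target

def hinge(x, t):
    """Hinge (rectified linear) basis function used by MARS."""
    return np.maximum(0.0, x - t)

t = 4.0  # candidate knot; real MARS searches over knots greedily
# Design matrix: intercept plus the mirrored hinge pair at the knot.
X = np.column_stack([np.ones_like(x), hinge(x, t), hinge(-x, -t)])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
print("fitted coefficients:", coef)  # approximates the two slopes around the kink
```

Real MARS implementations add and prune such terms greedily; the sketch only shows how a single knot expresses a piecewise-linear input-output relationship mathematically.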
This book carefully defines the technologies involved in web service composition, provides a formal basis for the composition approaches, and shows the trade-offs among them. By treating web services as a deep formal topic, some surprising results emerge, such as the possibility of eliminating workflows. It examines the immense potential of web service composition for revolutionizing business IT, as evidenced by the marketing of Service Oriented Architectures (SOAs). The author begins with informal considerations and builds to the formalisms slowly, with easily understood motivating examples. Chapters examine the importance of semantics for web services and ways to apply semantic technologies. Topics range from model checking and Golog to WSDL and AI planning. The book is based upon lectures given to economics students and is suitable for business technologists with some computer science background. The reader can delve as deeply into the technologies as desired.
This book introduces condition-based maintenance (CBM)/data-driven prognostics and health management (PHM) in detail, first explaining the PHM design approach from a systems engineering perspective, then summarizing and elaborating on the data-driven methodology for feature construction, as well as feature-based fault diagnosis and prognosis. The book includes a wealth of illustrations and tables to help explain the algorithms, as well as practical examples showing how to use this tool to solve situations for which analytic solutions are poorly suited. It equips readers to apply the concepts discussed in order to analyze and solve a variety of problems in PHM system design, feature construction, fault diagnosis and prognosis.
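As a flavor of data-driven feature construction, the following sketch computes three standard condition indicators (RMS, kurtosis, crest factor) from a simulated vibration record; the signal and the feature choices are assumptions for illustration, not the book's own code.

```python
# Illustrative sketch of feature construction for data-driven PHM (standard
# condition indicators; the book's own feature set may differ).
import numpy as np
from scipy.stats import kurtosis

fs = 1000                        # assumed sampling rate (Hz)
t = np.arange(0, 1, 1 / fs)
signal = np.sin(2 * np.pi * 50 * t) + 0.1 * np.random.randn(t.size)  # stand-in for a vibration record

rms = np.sqrt(np.mean(signal ** 2))       # overall energy level
kurt = kurtosis(signal)                   # impulsiveness, sensitive to incipient faults
crest = np.max(np.abs(signal)) / rms      # peakiness relative to RMS

print(f"RMS={rms:.3f}  kurtosis={kurt:.3f}  crest factor={crest:.3f}")
```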
Research into Fully Integrated Data Environments (FIDE) has the goal of substantially improving the quality of application systems while reducing the cost of building and maintaining them. Application systems invariably involve the long-term storage of data over months or years. Much unnecessary complexity obstructs the construction of these systems when conventional databases, file systems, operating systems, communication systems, and programming languages are used. This complexity limits the sophistication of the systems that can be built, generates operational and usability problems, and deleteriously impacts both reliability and performance. This book reports on the work of researchers in the Esprit FIDE projects to design and develop a new integrated environment to support the construction and operation of such persistent application systems. It reports on the principles they employed to design it, the prototypes they built to test it, and their experience using it.
Data mining techniques are commonly used to extract meaningful information from the web, such as data from web documents, website usage logs, and hyperlinks. Building on this, modern organizations are focusing on running and improving their business methods and returns by using opinion mining. Extracting Knowledge From Opinion Mining is an essential resource that presents detailed information on web mining, business intelligence through opinion mining, and how to effectively use knowledge retrieved through mining operations. While highlighting relevant topics, including the differences between ontology-based opinion mining and feature-based opinion mining, this book is an ideal reference source for information technology professionals within research or business settings, graduate and post-graduate students, as well as scholars.
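As a toy illustration of feature-based opinion mining (not drawn from the book), the sketch below pairs opinion words with the product features mentioned near them; the lexicon, feature list, and window size are all invented for the example.

```python
# Toy sketch of feature-based opinion mining: pair opinion words with the
# product features they sit next to (lexicon and window size are assumptions).
OPINION_LEXICON = {"great": 1, "excellent": 1, "poor": -1, "terrible": -1}
FEATURES = {"battery", "screen", "camera"}

def extract_opinions(review, window=1):
    """Return (feature, polarity) pairs found within `window` words of each other."""
    tokens = review.lower().split()
    pairs = []
    for i, tok in enumerate(tokens):
        if tok in OPINION_LEXICON:
            nearby = tokens[max(0, i - window): i + window + 1]
            for feat in FEATURES & set(nearby):
                pairs.append((feat, OPINION_LEXICON[tok]))
    return pairs

print(extract_opinions("great battery but terrible screen quality"))
# [('battery', 1), ('screen', -1)]
```

Ontology-based approaches, by contrast, would map both features and opinion words onto a shared concept hierarchy rather than relying on surface proximity.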
It is common wisdom that gathering a variety of views and inputs improves the process of decision making, and, indeed, underpins a democratic society. Dubbed "ensemble learning" by researchers in computational intelligence and machine learning, it is known to improve a decision system's robustness and accuracy. Now, fresh developments are allowing researchers to unleash the power of ensemble learning in an increasing range of real-world applications. Ensemble learning algorithms such as "boosting" and "random forest" facilitate solutions to key computational issues such as face recognition and are now being applied in areas as diverse as object tracking and bioinformatics. Responding to a shortage of literature dedicated to the topic, this volume offers comprehensive coverage of state-of-the-art ensemble learning techniques, including the random forest skeleton tracking algorithm in the Xbox Kinect sensor, which bypasses the need for game controllers. At once a solid theoretical study and a practical guide, the volume is a windfall for researchers and practitioners alike.
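For readers who want a concrete starting point, here is a minimal random forest example; scikit-learn is an assumed implementation choice, as the volume itself is not tied to any particular library.

```python
# Minimal random-forest sketch with scikit-learn (an assumed library choice;
# the volume itself is not tied to any particular implementation).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# An ensemble of decorrelated decision trees votes on each prediction.
forest = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print("held-out accuracy:", forest.score(X_te, y_te))
```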
Extracting content from text continues to be an important research problem for information processing and management. Approaches to capture the semantics of text-based document collections may be based on Bayesian models, probability theory, vector space models, statistical models, or even graph theory. As the volume of digitized textual media continues to grow, so does the need for designing robust, scalable indexing and search strategies (software) to meet a variety of user needs. Knowledge extraction or creation from text requires systematic yet reliable processing that can be codified and adapted for changing needs and environments. This book will draw upon experts in both academia and industry to recommend practical approaches to the purification, indexing, and mining of textual information. It will address document identification, clustering and categorizing documents, cleaning text, and visualizing semantic models of text.
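As one concrete instance of the vector space models mentioned above, the following sketch ranks documents against a query using TF-IDF vectors and cosine similarity; scikit-learn and the toy corpus are assumptions for illustration.

```python
# Vector-space sketch: represent documents as TF-IDF vectors and rank them by
# cosine similarity to a query (scikit-learn is an assumed implementation choice).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

docs = [
    "indexing strategies for large document collections",
    "clustering and categorizing text documents",
    "visualizing semantic models of text",
]
vectorizer = TfidfVectorizer()
doc_vectors = vectorizer.fit_transform(docs)

query_vector = vectorizer.transform(["semantic text models"])
scores = cosine_similarity(query_vector, doc_vectors).ravel()
print(sorted(zip(scores, docs), reverse=True)[0])  # best-matching document
```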
Even in the age of ubiquitous computing, the importance of the Internet will not change and we still need to solve conventional security issues. In addition, we need to deal with new issues such as security in the P2P environment, privacy issues in the use of smart cards, and RFID systems. Security and Privacy in the Age of Ubiquitous Computing addresses these issues and more by exploring a wide scope of topics. The volume presents a selection of papers from the proceedings of the 20th IFIP International Information Security Conference held from May 30 to June 1, 2005 in Chiba, Japan. Topics covered include cryptography applications, authentication, privacy and anonymity, DRM and content security, computer forensics, Internet and web security, security in sensor networks, intrusion detection, commercial and industrial security, authorization and access control, information warfare, and critical infrastructure protection. These papers represent the most current research in information security, including research funded in part by DARPA and the National Science Foundation.
With the growing use of information technology and the recent advances in web systems, the amount of data available to users has increased exponentially. Thus, there is a critical need to understand the content of the data. As a result, data mining has become a popular research topic in recent years for the treatment of the "data rich and information poor" syndrome. This carefully edited volume presents a theoretical foundation as well as important new directions for data-mining research. It brings together a set of well-respected data mining theoreticians and researchers with practical data mining experience. The theories presented will give data mining practitioners a scientific perspective on data mining and thus more insight into their problems, and the new data mining topics covered can be expected to stimulate further research in these important directions.
This book presents principles and applications for expanding storage space from 2-D to 3-D and even multi-D, exploiting gray scale, color (light of different wavelengths), polarization and coherence of light. These advances improve the density, capacity and data transfer rate of optical data storage. Moreover, the implementation technologies used to build mass data storage devices are described systematically. Some new media, which respond to light of different wavelengths and intensities with high sensitivity and linear absorption characteristics, are introduced for multi-wavelength and multi-level optical storage. This book can serve as a useful reference for researchers, engineers, graduate and undergraduate students in material science, information science and optics.
Semantic Web Services for Web Databases introduces an end-to-end framework for querying Web databases using novel Web service querying techniques. This includes a detailed framework for the query infrastructure for Web databases and services. Case studies are covered in the last section of this book. Semantic Web Services For Web Databases is designed for practitioners and researchers focused on service-oriented computing and Web databases.
Some recent advances in fuzzy database modeling for non-traditional applications are introduced in this book. The focus is on database models for representing complex information and uncertainty at the conceptual, logical, and physical design levels, together with integrity constraints defined on fuzzy relations.
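As a toy illustration of the idea (not the book's own model), a fuzzy relation can attach a membership degree to each tuple and answer queries through an alpha-cut threshold; the schema and data below are invented.

```python
# Toy sketch of a fuzzy relation: each tuple carries a membership degree in
# [0, 1], and a query applies an alpha-cut threshold (schema and data invented).
fuzzy_employees = [
    # (name, salary, membership degree in the fuzzy set "young")
    ("alice", 52000, 0.9),
    ("bob",   67000, 0.4),
    ("carol", 48000, 0.7),
]

def alpha_cut(relation, alpha):
    """Keep only tuples whose membership degree is at least alpha."""
    return [row for row in relation if row[2] >= alpha]

print(alpha_cut(fuzzy_employees, 0.6))  # "fairly certainly young" employees
```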
"Foundations of Data Mining and Knowledge Discovery" contains the latest results and new directions in data mining research. Data mining, which integrates various technologies, including computational intelligence, database and knowledge management, machine learning, soft computing, and statistics, is one of the fastest growing fields in computer science. Although many data mining techniques have been developed, further development of the field requires a close examination of its foundations. This volume presents the results of investigations into the foundations of the discipline, and represents the state of the art for much of the current research. This book will prove extremely valuable and fruitful for data mining researchers, no matter whether they would like to uncover the fundamental principles behind data mining, or apply the theories to practical applications.
Real-time systems are now used in a wide variety of applications. Conventionally, they were configured at design time to perform a given set of tasks and could not readily adapt to dynamic situations. The concept of imprecise and approximate computation has emerged as a promising approach to providing scheduling flexibility and enhanced dependability in dynamic real-time systems. The concept can be utilized in a wide variety of applications, including signal processing, machine vision, databases, and networking. Dynamic real-time systems must deal safely with resource unavailability while continuing to operate, which can lead to situations where computations cannot be carried through to completion; the techniques of imprecise and approximate computation facilitate the generation of partial results that may enable the system to operate safely and avert catastrophe. Audience: Of special interest to researchers. May be used as a supplementary text in courses on real-time systems.
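The following sketch illustrates the imprecise-computation idea in miniature: an anytime computation refines its answer until a time budget expires and then returns the best partial result; the pi-series example is an illustrative assumption, not taken from the book.

```python
# Anytime-computation sketch of the imprecise/approximate idea: refine a result
# until the deadline, then return the best partial answer (example is illustrative).
import time

def estimate_pi(budget_s):
    """Refine a Leibniz-series estimate of pi until the time budget runs out."""
    deadline = time.monotonic() + budget_s
    total, k = 0.0, 0
    while time.monotonic() < deadline:
        total += (-1) ** k / (2 * k + 1)   # next series term
        k += 1
    return 4 * total, k                    # partial result plus precision indicator

approx, terms = estimate_pi(0.01)
print(f"pi ~ {approx:.6f} after {terms} terms")  # imprecise but timely
```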
Information Retrieval: Algorithms and Heuristics is a comprehensive introduction to the study of information retrieval, covering both effectiveness and run-time performance. The focus of the presentation is on the algorithms and heuristics used to find documents relevant to the user request, and to find them fast. The most commonly used algorithms and heuristics are tackled through multiple examples. To facilitate understanding and application, introductions to and discussions of computational linguistics, natural language processing, probability theory, and library and computer science are provided. While this text focuses on algorithms rather than on commercial products per se, the basic strategies used by many commercial products are described. Techniques that can be used to find information on the Web, as well as in other large information collections, are included. This volume is an invaluable resource for researchers, practitioners, and students working in information retrieval and databases. For instructors, a set of PowerPoint slides, including speaker notes, is available online from the authors.
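As a concrete taste of the data structures behind such algorithms, here is a minimal inverted index with conjunctive query evaluation; the documents and query are invented for the example.

```python
# Minimal inverted-index sketch (the core data structure behind many retrieval
# algorithms; documents and query are invented examples).
from collections import defaultdict

docs = {
    1: "information retrieval algorithms and heuristics",
    2: "heuristics for fast document search",
    3: "probability theory for language processing",
}

index = defaultdict(set)
for doc_id, text in docs.items():
    for term in text.lower().split():
        index[term].add(doc_id)           # term -> set of documents containing it

def search(query):
    """Conjunctive (AND) query: documents containing every query term."""
    ids = [index[t] for t in query.lower().split()]
    return set.intersection(*ids) if ids else set()

print(search("heuristics algorithms"))   # {1}
```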
This book paves the way for researchers working on sustainable interdependent networks across the fields of computer science, electrical engineering, and smart infrastructures. It gives readers a comprehensive, in-depth picture of smart cities as a thorough example of interdependent large-scale networks, in both theory and application. The contributors specify the importance and position of interdependent networks in the context of developing sustainable smart cities and provide a comprehensive investigation of recently developed optimization methods for large-scale networks. There has been an emerging concern regarding the optimal operation of power and transportation networks. This second volume of the Sustainable Interdependent Networks book focuses on the interdependencies of these two networks, optimization methods for dealing with their computational complexity, and their role in future smart cities. It further investigates other networks, such as communication networks, that indirectly affect the operation of power and transportation networks. Our reliance on these networks as global platforms for sustainable development has led to the need for novel means of dealing with arising issues. The considerable scale of such networks, due to the large number of buses in smart power grids and the increasing number of electric vehicles in transportation networks, brings a large variety of computational complexity and optimization challenges. Although the independent optimization of these networks leads to locally optimum operation points, there is an exigent need to move towards the globally optimum operation point of such networks while properly satisfying the constraints of each network. The book is suitable for senior undergraduate students, graduate students interested in research in multidisciplinary areas related to future sustainable networks, and researchers working in related areas. It also covers applications of interdependent networks, which makes it a useful source for readers outside academia seeking a general insight into interdependent networks.
The purpose of this book is to provide a record of the state of the art in Topic Detection and Tracking (TDT) in a single place. Research in TDT has been going on for about five years, and publications related to it are scattered all over the place as technical reports, unpublished manuscripts, or in numerous conference proceedings. The third and fourth in a series of ongoing TDT evaluations marked a turning point in the research. As such, this is an excellent time to pause, review the state of the art, gather lessons learned, and describe the open challenges. This book is a collection of technical papers. As such, its primary audience is researchers interested in the current state of TDT research, researchers who hope to leverage that work so that their own efforts can avoid pointless duplication and false starts. It might also point them in the direction of interesting unsolved problems within the area. The book is also of interest to practitioners in fields that are related to TDT, e.g., Information Retrieval, Automatic Speech Recognition, Machine Learning, Information Extraction, and so on. In those cases, TDT may provide a rich application domain for their own research, or it might address similar enough problems that some lessons learned can be tweaked slightly to answer, perhaps partially...
Data mining provides a set of new techniques to integrate, synthesize, and analyze data, uncovering the hidden patterns that exist within. Traditionally, techniques such as kernel learning methods, pattern recognition, and data mining have been the domain of researchers in areas such as artificial intelligence. Leveraging these tools, techniques, and concepts against your data asset to identify problems early, understand existing interactions, and highlight previously unrealized relationships can provide significant value for the investigator and her organization.
Theoretical Advances in Neural Computation and Learning brings together in one volume some of the recent advances in the development of a theoretical framework for studying neural networks. A variety of novel techniques from disciplines such as computer science, electrical engineering, statistics, and mathematics have been integrated and applied to develop ground-breaking analytical tools for such studies. This volume emphasizes the computational issues in artificial neural networks and compiles a set of pioneering research works, which together establish a general framework for studying the complexity of neural networks and their learning capabilities. This book represents one of the first efforts to highlight these fundamental results, and provides a unified platform for a theoretical exploration of neural computation. Each chapter is authored by a leading researcher and/or scholar who has made significant contributions in this area. Part 1 provides a complexity-theoretic study of different models of neural computation. Complexity measures for neural models are introduced, and techniques for the efficient design of networks for performing basic computations, as well as analytical tools for understanding the capabilities and limitations of neural computation, are discussed. The results describe how the computational cost of a neural network increases with the problem size. Equally important, these results go beyond the study of single neural elements and establish the computational power of multilayer networks. Part 2 discusses concepts and results concerning learning using models of neural computation. Basic concepts such as VC-dimension and PAC-learning are introduced, and recent results relating neural networks to learning theory are derived. In addition, a number of the chapters address fundamental issues concerning learning algorithms, such as accuracy and rate of convergence, selection of training data, and efficient algorithms for learning useful classes of mappings.
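For orientation, a textbook form of the PAC sample-complexity bound that such results build on (the book's own statements may differ in form and constants): for a finite hypothesis class H, a consistent learner that sees m i.i.d. examples returns, with probability at least 1 - δ, a hypothesis with error at most ε whenever

```latex
% PAC sample-complexity bound for a finite hypothesis class H (realizable case);
% a standard textbook form, possibly differing from the book's exact statements.
m \;\ge\; \frac{1}{\epsilon}\left(\ln\lvert H\rvert + \ln\frac{1}{\delta}\right)
```

VC-dimension-based bounds extend this idea to infinite hypothesis classes by replacing ln|H| with a term that grows with the class's VC-dimension.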
The purpose of the 3rd International Conference on Enterprise Information Systems (ICEIS) was to bring together researchers, engineers, and practitioners interested in the advances and business applications of information systems. The research papers published here have been carefully selected from those presented at the conference, and focus on real world applications covering four main themes: database and information systems integration; artificial intelligence and decision support systems; information systems analysis and specification; and internet computing and electronic commerce. Audience: This book will be of interest to information technology professionals, especially those working on systems integration, databases, decision support systems, or electronic commerce. It will also be of use to middle managers who need to work with information systems and require knowledge of current trends in development methods and applications.
This volume contains the proceedings of IFIPTM 2010, the 4th IFIP WG 11.11 International Conference on Trust Management, held in Morioka, Iwate, Japan during June 16-18, 2010. IFIPTM 2010 provided a truly global platform for the reporting of research, development, policy, and practice in the interdependent areas of privacy, security, and trust. Building on the traditions inherited from the highly successful iTrust conference series, the IFIPTM 2007 conference in Moncton, New Brunswick, Canada, the IFIPTM 2008 conference in Trondheim, Norway, and the IFIPTM 2009 conference at Purdue University in Indiana, USA, IFIPTM 2010 focused on trust, privacy and security from multidisciplinary perspectives. The conference is an arena for discussion on relevant problems from both research and practice in the areas of academia, business, and government. IFIPTM 2010 was an open IFIP conference. The program of the conference featured both theoretical research papers and reports of real-world case studies. IFIPTM 2010 received 61 submissions from 25 different countries: Japan (10), UK (6), USA (6), Canada (5), Germany (5), China (3), Denmark (2), India (2), Italy (2), Luxembourg (2), The Netherlands (2), Switzerland (2), Taiwan (2), Austria, Estonia, Finland, France, Ireland, Israel, Korea, Malaysia, Norway, Singapore, Spain, Turkey. The Program Committee selected 18 full papers for presentation and inclusion in the proceedings. In addition, the program and the proceedings include two invited papers by academic experts in the fields of trust management, privacy and security, namely, Toshio Yamagishi and Pamela Briggs.
You may like...
Performance and Dependability in Service… by Valeria Cardellini, Emiliano Casalicchio, … (Hardcover) R5,002 / Discovery Miles 50 020
The Chemical Dialogue Between Plants and… by Vivek Sharma, Richa Salwan, … (Paperback) R3,943 / Discovery Miles 39 430
Advances in Computer Vision - Volume 1 by Christopher Brown (Hardcover) R4,500 / Discovery Miles 45 000
Binary Bullets - The Ethics of… by Fritz Allhoff, Adam Henschke, … (Hardcover) R3,569 / Discovery Miles 35 690