Welcome to Loot.co.za!
In multimedia and communication environments, all documents must be protected against attacks. The movie Forrest Gump showed how multimedia documents can be manipulated. The required security can be achieved by a number of different security measures. This book provides an overview of current research in multimedia and communication security. A broad variety of subjects is addressed, including: network security; attacks; cryptographic techniques; healthcare and telemedicine; security infrastructures; payment systems; access control; models and policies; and auditing and firewalls. This volume contains the selected proceedings of the joint conference on Communications and Multimedia Security, organized by the International Federation for Information Processing and supported by the Austrian Computer Society, Gesellschaft für Informatik e.V., and TeleTrust Deutschland e.V. The conference took place in Essen, Germany, in September 1996.
Information Macrodynamics (IMD) is an interdisciplinary science that represents a new theoretical and computer-based methodology for the informational description and improvement of systems, covering activities in such areas as thinking, intelligent processes, communications, and management, along with other nonphysical subjects, their mutual interactions, informational superimposition, and the information transferred between interactions. IMD is based on the implementation of a single concept by a unique mathematical principle and formalism, rather than on an artificial combination of many arbitrary, auxiliary concepts and/or postulates and different mathematical subjects, such as game theory, automata theory, catastrophe theory, or logical operations. This concept is explored mathematically using classical mathematics, such as the calculus of variations and probability theory, which are potent enough without the need to develop new, specialized mathematical systemic methods. The formal IMD model automatically includes related results from other fields, such as linear, nonlinear, collective, and chaotic dynamics; stability theory; information theory; physical analogies of classical and quantum mechanics; irreversible thermodynamics; and kinetics. The main goal of IMD is to reveal information regularities, mathematically expressed by the considered variation principle (VP), as a mathematical tool to extract the regularities and define the model that describes them. The IMD regularities and mechanisms are the results of analytical solutions; they are not obtained by logical argumentation, rational introduction, or reasonable discussion. The IMD information computer modeling formalism includes a human being (as an observer, carrier, and producer of information), with restoration of the model during object observations.
Oracle 10g Developing Media Rich Applications is focused squarely on database administrators and programmers as the foundation of multimedia database applications. With the release of Oracle8 Database in 1997, Oracle became the first commercial database with integrated multimedia technology for application developers. Since that time, Oracle has enhanced and extended these features to include native support for image, audio, video, and streaming media storage; indexing, retrieval, and processing in the Oracle Database and Application Server; and development tools. Databases are not just words and numbers for accountants; they should also employ a full range of media to satisfy customer needs, from race car engineering to manufacturing processes to security.
Information retrieval is the science concerned with the effective and efficient retrieval of documents starting from their semantic content. It is employed to fulfill some information need from a large number of digital documents. Given the ever-growing amount of documents available and the heterogeneous data structures used for storage, information retrieval has recently faced and tackled novel applications. In this book, Melucci and Baeza-Yates present a wide-spectrum illustration of recent research results in advanced areas related to information retrieval. Readers will find chapters on e.g. aggregated search, digital advertising, digital libraries, discovery of spam and opinions, information retrieval in context, multimedia resource discovery, quantum mechanics applied to information retrieval, scalability challenges in web search engines, and interactive information retrieval evaluation. All chapters are written by well-known researchers, are completely self-contained and comprehensive, and are complemented by an integrated bibliography and subject index. With this selection, the editors provide the most up-to-date survey of topics usually not addressed in depth in traditional (text)books on information retrieval. The presentation is intended for a wide audience of people interested in information retrieval: undergraduate and graduate students, post-doctoral researchers, lecturers, and industrial researchers.
Advances In Digital Government presents a collection of in-depth articles that addresses a representative cross-section of the matrix of issues involved in implementing digital government systems. These articles constitute a survey of both the technical and policy dimensions related to the design, planning and deployment of digital government systems. The research and development projects within the technical dimension represent a wide range of governmental functions, including the provisioning of health and human services, management of energy information, multi-agency integration, and criminal justice applications. The technical issues dealt with in these projects include database and ontology integration, distributed architectures, scalability, and security and privacy. The human factors research emphasizes compliance with access standards for the disabled and the policy articles contain both conceptual models for developing digital government systems as well as real management experiences and results in deploying them. Advances In Digital Government presents digital government issues from the perspectives of different communities and societies. This geographic and social diversity illuminates a unique array of policy and social perspectives, exposing practitioners to new and useful ways of thinking about digital government.
This book presents the application of a comparatively simple nonparametric regression algorithm, known as the multivariate adaptive regression splines (MARS) surrogate model, which can be used to approximate the relationship between the inputs and outputs, and express that relationship mathematically. The book first describes the MARS algorithm, then highlights a number of geotechnical applications with multivariate big data sets to explore the approach's generalization capabilities and accuracy. As such, it offers a valuable resource for all geotechnical researchers, engineers, and general readers interested in big data analysis.
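The core idea of MARS described above can be illustrated with a minimal sketch (not the book's algorithm; the function names `hinge_pos`, `hinge_neg`, and `mars_model` and the chosen knot are hypothetical): MARS builds its approximation from "hinge" basis functions, whose weighted sum expresses an input–output relationship as a piecewise-linear formula.

```python
# Minimal sketch of the MARS basis representation: hinge functions
# h(x) = max(0, x - t) and max(0, t - x) around a knot t, combined
# linearly into a piecewise-linear model. (Illustrative only; the full
# MARS algorithm also searches for knots and prunes terms.)

def hinge_pos(x, t):
    """Right hinge: zero below the knot t, linear above it."""
    return max(0.0, x - t)

def hinge_neg(x, t):
    """Left hinge: linear below the knot t, zero above it."""
    return max(0.0, t - x)

def mars_model(x):
    """Hand-chosen model with a single knot at t = 2:
    y = 1 + 0.5 * max(0, x - 2): flat until x = 2, then slope 0.5."""
    return 1.0 + 0.5 * hinge_pos(x, 2.0)

# The model reproduces a piecewise-linear relationship exactly.
print(mars_model(1.0))  # 1.0 (before the knot)
print(mars_model(4.0))  # 2.0 (after the knot: 1 + 0.5 * 2)
```

In the full algorithm, a forward pass greedily adds hinge pairs at candidate knots and a backward pass prunes terms; the result is still just a weighted sum of hinges like the one above, which is what makes the fitted relationship readable as a mathematical expression.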
This book carefully defines the technologies involved in web service composition, provides a formal basis for the composition approaches, and shows the trade-offs among them. By considering web services as a deep formal topic, some surprising results emerge, such as the possibility of eliminating workflows. It examines the immense potential of web services composition for revolutionizing business IT, as evidenced by the marketing of Service Oriented Architectures (SOAs). The author begins with informal considerations and builds to the formalisms slowly, with easily understood motivating examples. Chapters examine the importance of semantics for web services and ways to apply semantic technologies. Topics range from model checking and Golog to WSDL and AI planning. This book is based upon lectures given to economics students and is suitable for business technologists with some computer science background. The reader can delve as deeply into the technologies as desired.
The book reviews methods for the numerical and statistical analysis of astronomical datasets with particular emphasis on the very large databases that arise from both existing and forthcoming projects, as well as current large-scale computer simulation studies. Leading experts give overviews of cutting-edge methods applicable in the area of astronomical data mining. Case studies demonstrate the interplay between these techniques and interesting astronomical problems. The book demonstrates specific new methods for storing, accessing, reducing, analysing, describing and visualising astronomical data which are necessary to fully exploit its potential.
Earth Observation interacts with space, remote sensing, communication, and information technologies, and plays an increasingly significant role in Earth related scientific studies, resource management, homeland security, topographic mapping, and development of a healthy, sustainable environment and community. Geospatial Technology for Earth Observation provides an in-depth and broad collection of recent progress in Earth observation. Contributed by leading experts in this field, the book covers satellite, airborne and ground remote sensing systems and system integration, sensor orientation, remote sensing physics, image classification and analysis, information extraction, geospatial service, and various application topics, including cadastral mapping, land use change evaluation, water environment monitoring, flood mapping, and decision making support. Geospatial Technology for Earth Observation serves as a valuable training source for researchers, developers, and practitioners in geospatial science and technology industry. It is also suitable as a reference book for upper level college students and graduate students in geospatial technology, geosciences, resource management, and informatics.
Semantic Web Services for Web Databases introduces an end-to-end framework for querying Web databases using novel Web service querying techniques. This includes a detailed framework for the query infrastructure for Web databases and services. Case studies are covered in the last section of this book. Semantic Web Services for Web Databases is designed for practitioners and researchers focused on service-oriented computing and Web databases.
Data mining techniques are commonly used to extract meaningful information from the web, such as data from web documents, website usage logs, and hyperlinks. Building on this, modern organizations are focusing on running and improving their business methods and returns by using opinion mining. Extracting Knowledge From Opinion Mining is an essential resource that presents detailed information on web mining, business intelligence through opinion mining, and how to effectively use knowledge retrieved through mining operations. While highlighting relevant topics, including the differences between ontology-based opinion mining and feature-based opinion mining, this book is an ideal reference source for information technology professionals within research or business settings, graduate and post-graduate students, as well as scholars.
Research into Fully Integrated Data Environments (FIDE) has the goal of substantially improving the quality of application systems while reducing the cost of building and maintaining them. Application systems invariably involve the long-term storage of data over months or years. Much unnecessary complexity obstructs the construction of these systems when conventional databases, file systems, operating systems, communication systems, and programming languages are used. This complexity limits the sophistication of the systems that can be built, generates operational and usability problems, and deleteriously impacts both reliability and performance. This book reports on the work of researchers in the Esprit FIDE projects to design and develop a new integrated environment to support the construction and operation of such persistent application systems. It reports on the principles they employed to design it, the prototypes they built to test it, and their experience using it.
It is common wisdom that gathering a variety of views and inputs improves the process of decision making, and, indeed, underpins a democratic society. Dubbed "ensemble learning" by researchers in computational intelligence and machine learning, it is known to improve a decision system's robustness and accuracy. Now, fresh developments are allowing researchers to unleash the power of ensemble learning in an increasing range of real-world applications. Ensemble learning algorithms such as "boosting" and "random forest" facilitate solutions to key computational issues such as face recognition and are now being applied in areas as diverse as object tracking and bioinformatics. Responding to a shortage of literature dedicated to the topic, this volume offers comprehensive coverage of state-of-the-art ensemble learning techniques, including the random forest skeleton tracking algorithm in the Xbox Kinect sensor, which bypasses the need for game controllers. At once a solid theoretical study and a practical guide, the volume is a windfall for researchers and practitioners alike.
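The "variety of views" idea behind ensemble learning can be sketched in a few lines (a toy illustration, not the book's material; the data set, `train_stump`, and `predict` are all hypothetical): several weak classifiers trained on bootstrap resamples vote, and the majority label wins, which is the mechanism underlying bagging and random forests.

```python
# Toy majority-vote ensemble: decision stumps trained on bootstrap
# resamples of 1-D data, combined by voting (the bagging idea behind
# random forests, stripped to its essentials).
import random

random.seed(0)

# Toy data: label is 1 when x > 5, else 0.
data = [(x, int(x > 5)) for x in range(10)]

def train_stump(sample):
    """Pick the threshold t that best separates the given sample."""
    best_t, best_acc = 0, -1.0
    for t in range(11):
        acc = sum(int(x > t) == y for x, y in sample) / len(sample)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t

# Train an ensemble of stumps, each on its own bootstrap resample.
stumps = []
for _ in range(25):
    sample = [random.choice(data) for _ in data]
    stumps.append(train_stump(sample))

def predict(x):
    """Majority vote over the ensemble's stump thresholds."""
    votes = sum(int(x > t) for t in stumps)
    return int(votes * 2 > len(stumps))

accuracy = sum(predict(x) == y for x, y in data) / len(data)
print(accuracy)
```

Individual stumps trained on different resamples disagree near the boundary, but the vote averages out their errors; real random forests add randomized feature selection and full decision trees on top of this same voting scheme.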
Sensor network data management poses new challenges outside the scope of conventional systems where data is represented and regulated. Intelligent Techniques for Warehousing and Mining Sensor Network Data presents fundamental and theoretical issues pertaining to data management. Covering a broad range of topics on warehousing and mining sensor networks, this advanced title provides significant industry solutions to those in database, data warehousing, and data mining research communities.
This book paves the way for researchers working on sustainable interdependent networks across the fields of computer science, electrical engineering, and smart infrastructures. It gives readers the comprehensive insight needed to understand the big picture of smart cities as a thorough example of interdependent large-scale networks, in both theory and application. The contributors specify the importance and position of interdependent networks in the context of developing sustainable smart cities and provide a comprehensive investigation of recently developed optimization methods for large-scale networks. There has been an emerging concern regarding the optimal operation of power and transportation networks. This second volume of Sustainable Interdependent Networks focuses on the interdependencies of these two networks, optimization methods to deal with their computational complexity, and their role in future smart cities. It further investigates other networks, such as communication networks, that indirectly affect the operation of power and transportation networks. Our reliance on these networks as global platforms for sustainable development has created the need for novel means of dealing with arising issues. The considerable scale of such networks, owing to the large number of buses in smart power grids and the increasing number of electric vehicles in transportation networks, brings a wide variety of computational complexity and optimization challenges. Although independently optimizing these networks leads to locally optimal operation points, there is an exigent need to move toward the globally optimal operation point of such networks while properly satisfying the constraints of each network.
The book is suitable for senior undergraduate students, graduate students interested in research in multidisciplinary areas related to future sustainable networks, and researchers working in related areas. It also covers applications of interdependent networks, making it a valuable source of study for audiences outside academia seeking a general insight into interdependent networks.
Extracting content from text continues to be an important research problem for information processing and management. Approaches to capture the semantics of text-based document collections may be based on Bayesian models, probability theory, vector space models, statistical models, or even graph theory. As the volume of digitized textual media continues to grow, so does the need for designing robust, scalable indexing and search strategies (software) to meet a variety of user needs. Knowledge extraction or creation from text requires systematic yet reliable processing that can be codified and adapted for changing needs and environments. This book will draw upon experts in both academia and industry to recommend practical approaches to the purification, indexing, and mining of textual information. It will address document identification, clustering and categorizing documents, cleaning text, and visualizing semantic models of text.
Even in the age of ubiquitous computing, the importance of the Internet will not change and we still need to solve conventional security issues. In addition, we need to deal with new issues such as security in the P2P environment, privacy issues in the use of smart cards, and RFID systems. Security and Privacy in the Age of Ubiquitous Computing addresses these issues and more by exploring a wide scope of topics. The volume presents a selection of papers from the proceedings of the 20th IFIP International Information Security Conference held from May 30 to June 1, 2005 in Chiba, Japan. Topics covered include cryptography applications, authentication, privacy and anonymity, DRM and content security, computer forensics, Internet and web security, security in sensor networks, intrusion detection, commercial and industrial security, authorization and access control, information warfare and critical protection infrastructure. These papers represent the most current research in information security, including research funded in part by DARPA and the National Science Foundation.
This book presents principles and applications for expanding storage space from 2-D to 3-D and even multi-D, including gray scale, color (light of different wavelengths), polarization, and coherence of light. These advances improve the density, capacity, and data transfer rate of optical data storage. Moreover, the implementation technologies used to build mass data storage devices are described systematically. Some new media, which have linear absorption characteristics for light of different wavelengths and intensities, with high sensitivity, are introduced for multi-wavelength and multi-level optical storage. This book can serve as a useful reference for researchers, engineers, and graduate and undergraduate students in material science, information science, and optics.
Data mining provides a set of new techniques to integrate, synthesize, and analyze data, uncovering the hidden patterns that exist within. Traditionally, techniques such as kernel learning methods, pattern recognition, and data mining have been the domain of researchers in areas such as artificial intelligence. But leveraging these tools, techniques, and concepts against your data assets to identify problems early, understand existing interactions, and highlight previously unrealized relationships can provide significant value for investigators and their organizations.
Some recent fuzzy database modeling advances for non-traditional applications are introduced in this book. The focus is on database models for modeling complex information and uncertainty at the conceptual, logical, and physical design levels, as well as integrity constraints defined on fuzzy relations.
"Foundations of Data Mining and Knowledge Discovery" contains the latest results and new directions in data mining research. Data mining, which integrates various technologies, including computational intelligence, database and knowledge management, machine learning, soft computing, and statistics, is one of the fastest growing fields in computer science. Although many data mining techniques have been developed, further development of the field requires a close examination of its foundations. This volume presents the results of investigations into the foundations of the discipline, and represents the state of the art for much of the current research. This book will prove extremely valuable and fruitful for data mining researchers, no matter whether they would like to uncover the fundamental principles behind data mining, or apply the theories to practical applications.
Real-time systems are now used in a wide variety of applications. Conventionally, they were configured at design time to perform a given set of tasks and could not readily adapt to dynamic situations. The concept of imprecise and approximate computation has emerged as a promising approach to providing scheduling flexibility and enhanced dependability in dynamic real-time systems. The concept can be utilized in a wide variety of applications, including signal processing, machine vision, databases, and networking. Dynamic real-time systems must deal safely with resource unavailability while continuing to operate, which can leave computations unable to run to completion; for those who wish to build such systems, the techniques of imprecise and approximate computation facilitate the generation of partial results that may enable the system to operate safely and avert catastrophe. Audience: Of special interest to researchers. May be used as a supplementary text in courses on real-time systems.
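The partial-result idea above can be sketched as an "anytime" computation (an illustration only, not taken from the book; `anytime_sqrt` and the iteration budget standing in for a scheduling deadline are hypothetical): the task refines its answer iteratively, so cutting it off early still yields a usable approximation rather than a failure.

```python
# Minimal "imprecise computation" sketch: an iterative task whose
# intermediate results are themselves usable answers, so it can be
# truncated by a scheduler without being wasted.

def anytime_sqrt(x, max_iterations):
    """Newton's method for sqrt(x); every iteration improves the
    estimate. The iteration budget stands in for a real-time deadline:
    if it runs out, the partial result is returned instead of failing."""
    estimate = x
    for _ in range(max_iterations):
        estimate = 0.5 * (estimate + x / estimate)
    return estimate

# A full run converges; a truncated run still yields a close answer.
precise = anytime_sqrt(2.0, 20)
partial = anytime_sqrt(2.0, 2)
print(round(precise, 6))            # 1.414214
print(abs(partial - precise) < 0.1) # True: the early cutoff is usable
```

Real imprecise-computation schedulers split each task into a mandatory part (here, perhaps the first iteration) and an optional refinement part that can be shed under overload; the key property is exactly the one shown: stopping early degrades quality, not correctness.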
This book focuses on next-generation data technologies in support of collective and computational intelligence. It brings various next-generation data technologies together to capture, integrate, analyze, mine, annotate, and visualize distributed data - made available by various community users - in a manner that is meaningful and collaborative for the organization. A unique perspective on collective computational intelligence is offered by embracing both theory and strategy fundamentals, such as data clustering, graph partitioning, collaborative decision making, self-adaptive ant colonies, and swarm and evolutionary agents. It also covers emerging and next-generation technologies in support of collective computational intelligence, such as Web 2.0 social networks, the Semantic Web for data annotation, knowledge representation and inference, data privacy and security, and enabling distributed and collaborative paradigms such as P2P, Grid, and Cloud Computing, reflecting the geographically dispersed and distributed nature of the data. The book aims to cover comprehensively the combined effort of utilizing and integrating various next-generation collaborative and distributed data technologies for computational intelligence in various scenarios. It also distinguishes itself by assessing whether the utilization and integration of next-generation data technologies can assist in identifying new opportunities that may be strategically fit for purpose.
This volume contains the proceedings of IFIPTM 2010, the 4th IFIP WG 11.11 International Conference on Trust Management, held in Morioka, Iwate, Japan, during June 16-18, 2010. IFIPTM 2010 provided a truly global platform for the reporting of research, development, policy, and practice in the interdependent areas of privacy, security, and trust. Building on the traditions inherited from the highly successful iTrust conference series, the IFIPTM 2007 conference in Moncton, New Brunswick, Canada, the IFIPTM 2008 conference in Trondheim, Norway, and the IFIPTM 2009 conference at Purdue University in Indiana, USA, IFIPTM 2010 focused on trust, privacy, and security from multidisciplinary perspectives. The conference is an arena for discussion of relevant problems from both research and practice in academia, business, and government. IFIPTM 2010 was an open IFIP conference. The program featured both theoretical research papers and reports of real-world case studies. IFIPTM 2010 received 61 submissions from 25 different countries: Japan (10), UK (6), USA (6), Canada (5), Germany (5), China (3), Denmark (2), India (2), Italy (2), Luxembourg (2), The Netherlands (2), Switzerland (2), Taiwan (2), Austria, Estonia, Finland, France, Ireland, Israel, Korea, Malaysia, Norway, Singapore, Spain, and Turkey. The Program Committee selected 18 full papers for presentation and inclusion in the proceedings. In addition, the program and the proceedings include two invited papers by academic experts in the fields of trust management, privacy, and security, namely Toshio Yamagishi and Pamela Briggs.