Current research in Visual Database Systems can be characterized by scalability, multi-modality of interaction, and higher semantic levels of data. Visual interfaces that allow users to interact with large databases must scale to web and distributed applications. Interaction with databases must employ multiple and more diversified interaction modalities, such as speech and gesture, in addition to visual exploitation. Finally, the basic elements managed in modern databases are rapidly evolving, from text, images, sound, and video, to compositions and now annotations of these media, thus incorporating ever-higher levels and different facets of semantics. In addition to visual interfaces and multimedia databases, Visual and Multimedia Information Management includes research in the following areas: Speech and aural interfaces to databases; Visualization of web applications and database structure; Annotation and retrieval of image databases; Visual querying in geographical information systems; Video databases; and Virtual environment and modeling of complex shapes. Visual and Multimedia Information Management comprises the proceedings of the sixth International Conference on Visual Database Systems, which was sponsored by the International Federation for Information Processing (IFIP), and held in Brisbane, Australia, in May 2002. This volume will be essential for researchers in the field of management of visual and multimedia information, as well as for industrial practitioners concerned with building IT products for managing visual and multimedia information.
From environmental management to land planning and geo-marketing, the number of application domains that may greatly benefit from using data enriched with spatio-temporal features is expanding very rapidly. Unfortunately, development of new spatio-temporal applications is hampered by the lack of conceptual design methods suited to cope with the additional complexity of spatio-temporal data. This complexity is obviously due to the particular semantics of space and time, but also to the need for multiple representations of the same reality to address the diversity of requirements from highly heterogeneous user communities. Conceptual design methods are also needed to facilitate the exchange and reuse of existing data sets, a must in geographical data management due to the high collection costs of the data. Yet current practice in areas like geographical information systems or moving objects databases supports conceptual design methods poorly, if at all. This book shows that a conceptual design approach for spatio-temporal databases is both feasible and easy to apprehend. While providing a firm basis through extensive discussion of traditional data modeling concepts, the major focus of the book is on modeling spatial and temporal information. Parent, Spaccapietra and Zimanyi provide a detailed and comprehensive description of an approach that fills the gap between application conceptual requirements and system capabilities, covering both data modeling and data manipulation features. The ideas presented summarize several years of research on the characteristics and description of space, time, and perception. In addition to the authors' own data modeling approach, MADS (Modeling of Application Data with Spatio-temporal features), the book also surveys alternative data models and approaches (from industry and academia) that target support of spatio-temporal modeling. The reader will acquire intimate knowledge of both the traditional and innovative features that form a consistent data modeling approach. Visual notations and examples are employed extensively to illustrate the use of the various constructs. Therefore, this book is of major importance and interest to advanced professionals, researchers, and graduate or post-graduate students in the areas of spatio-temporal databases and geographical information systems. "For anyone thinking of doing research in this field, or who is developing a system based on spatio-temporal data, this text is essential reading." (Mike Worboys, U Maine, Orono, ME, USA) "The high-level semantic model presented and validated in this book provides essential guidance to researchers and implementers when improving the capabilities of data systems to serve the actual needs of applications and their users in the temporal and spatial domains that are so prevalent today." (Gio Wiederhold, Stanford U, CA, USA)
Welcome to the 6th International Conference on Open Source Systems of the IFIP Working Group 2.13. This year was the first time this international conference was held in North America. We had a large number of high-quality papers, highly relevant panels and workshops, a continuation of the popular doctoral consortium, and multiple distinguished invited speakers. The success of OSS 2010 was only possible because an Organizing Committee, a Program Committee, Workshop and Doctoral Committees, and authors of research manuscripts from over 25 countries contributed their time and interest to OSS 2010. In the spirit of the communities we study, you self-organized, volunteered, and contributed to this important research forum studying free, libre, open source software and systems. We thank you. Despite our modest success, we have room to improve and grow our conference and community. At OSS 2010 we saw little or no participation from large portions of the world, including Latin America, Africa, China, and India. But opportunities to expand are possible. In Japan, we see a hotspot of participation led by Tetsuo Noda and his colleagues, both with full-paper submissions and a workshop on "Open Source Policy and Promotion of IT Industries in East Asia." The location of OSS 2011 in Salvador, Brazil, will hopefully result in significant participation from researchers in Brazil - already a strong user of OSS - and other South American countries. Under the leadership of Megan Squire, Publicity Chair, we recruited Regional Publicity Co-chairs covering Japan (Tetsuo Noda), Africa (Sulayman Sowe), the Middle East and South Asia (Faheen Ahmed), Russia and Eastern Europe (Alexey Khoroshilov), Western Europe (Yeliz Eseryel), UK and Ireland (Andrea Capiluppi), and the Nordic countries (Björn Lundell).
The present book outlines a new approach to possibilistic clustering in which the sought clustering structure of the set of objects is based directly on the formal definition of a fuzzy cluster, and the possibilistic memberships are determined directly from the values of the pairwise similarity of objects. The proposed approach can be used for solving different classification problems. Here, some techniques that might be useful for this purpose are outlined, including a methodology for constructing a set of labeled objects for a semi-supervised clustering algorithm, a methodology for reducing the dimensionality of the analyzed attribute space, and a method for asymmetric data processing. Moreover, a technique for constructing a subset of the most appropriate alternatives for a set of weak fuzzy preference relations, which are defined on a universe of alternatives, is described in detail, and a method for rapidly prototyping Mamdani fuzzy inference systems is introduced. This book addresses engineers, scientists, professors, students and post-graduate students who are interested in and work with fuzzy clustering and its applications.
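As a rough, illustrative Python sketch of the general idea only (not the author's algorithm from the book): given a pairwise similarity matrix, one simple way to obtain possibilistic memberships is to anchor a fuzzy cluster at each object and assign every sufficiently similar object a membership equal to that similarity. The similarity values and threshold below are hypothetical.

```python
# Toy pairwise similarity matrix for four objects (symmetric, values in [0, 1])
sim = [
    [1.0, 0.9, 0.2, 0.1],
    [0.9, 1.0, 0.3, 0.2],
    [0.2, 0.3, 1.0, 0.8],
    [0.1, 0.2, 0.8, 1.0],
]
ALPHA = 0.5  # hypothetical similarity threshold defining cluster support

def allocated_cluster(center, sim, alpha):
    """Possibilistic memberships of the fuzzy cluster anchored at `center`:
    every object whose similarity to the center reaches `alpha` belongs to
    the cluster, with membership equal to that similarity value."""
    return {obj: s for obj, s in enumerate(sim[center]) if s >= alpha}

for center in range(len(sim)):
    print("cluster around object", center, "->", allocated_cluster(center, sim, ALPHA))
```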
During the past few years, data mining has grown rapidly in visibility and importance within information processing and decision analysis. This is particularly true in the realm of e-commerce, where data mining is moving from a "nice-to-have" to a "must-have" status. In a different though related context, a new computing methodology called granular computing is emerging as a powerful tool for the conception, analysis and design of information/intelligent systems. In essence, data mining deals with summarization of information which is resident in large data sets, while granular computing plays a key role in the summarization process by drawing together points (objects) which are related through similarity, proximity or functionality. In this perspective, granular computing has a position of centrality in data mining. Another methodology which has high relevance to data mining and plays a central role in this volume is that of rough set theory. Basically, rough set theory may be viewed as a branch of granular computing. However, its applications to data mining predate those of granular computing.
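To make the rough-set notion of granulation concrete, the short Python sketch below (not taken from the book) computes the lower and upper approximations of a target concept from the equivalence classes, or granules, induced by a set of attributes; the attribute names and toy data are hypothetical.

```python
from collections import defaultdict

def approximations(objects, attrs, target):
    """Rough-set lower/upper approximation of `target` with respect to the
    indiscernibility relation induced by the attributes in `attrs`.

    objects: dict mapping object id -> dict of attribute values
    attrs:   attributes used to form equivalence (granule) classes
    target:  set of object ids representing the concept to approximate
    """
    # Group objects into granules: objects with identical attribute values
    granules = defaultdict(set)
    for oid, values in objects.items():
        key = tuple(values[a] for a in attrs)
        granules[key].add(oid)

    lower, upper = set(), set()
    for granule in granules.values():
        if granule <= target:   # granule fully inside the concept
            lower |= granule
        if granule & target:    # granule overlaps the concept
            upper |= granule
    return lower, upper

# Hypothetical toy data: credit applicants described by two attributes
objects = {
    1: {"income": "high", "debt": "low"},
    2: {"income": "high", "debt": "low"},
    3: {"income": "low",  "debt": "low"},
    4: {"income": "low",  "debt": "high"},
}
approved = {1, 3}  # target concept to approximate
low, up = approximations(objects, ["income", "debt"], approved)
print("lower:", low)  # objects certainly in the concept -> {3}
print("upper:", up)   # objects possibly in the concept  -> {1, 2, 3}
```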
The most important use of computing in the future will be in the context of the global "digital convergence" where everything becomes digital and everything is inter-networked. The application will be dominated by storage, search, retrieval, analysis, exchange and updating of information in a wide variety of forms. Heavy demands will be placed on systems by many simultaneous requests. And, fundamentally, all this shall be delivered at much higher levels of dependability, integrity and security. Increasingly, large parallel computing systems and networks are providing unique challenges to industry and academia in dependable computing, especially because of the higher failure rates intrinsic to these systems. The challenge in the last part of this decade is to build systems that are both inexpensive and highly available. A machine cluster built of commodity hardware parts, with each node running an OS instance and a set of applications extended to be fault resilient, can satisfy the new stringent high-availability requirements. The focus of this book is to present recent techniques and methods for implementing fault-tolerant parallel and distributed computing systems. Section I, Fault-Tolerant Protocols, considers basic techniques for achieving fault tolerance in communication protocols for distributed systems, including synchronous and asynchronous group communication, static total causal ordering protocols, and a fail-aware datagram service that supports communications by time.
Multimedia is changing the design of database and information retrieval systems. The accumulation of audio, image, and video content is of little use in these systems if the content cannot be retrieved on demand, a critical requirement that has led to the development of new technologies for the analysis and indexing of media data. In turn, these technologies seek to derive information or features from a data type that can facilitate rapid retrieval, efficient compression, and logical presentation of the data. A significant question that has not been addressed, however, is the benefit of analyzing more than one data type simultaneously. Computed Synchronization for Multimedia Applications presents a new framework for the simultaneous analysis of multiple media data objects. The primary benefit of this analysis is computed synchronization, a temporal and spatial alignment of multiple media objects. Computed Synchronization for Multimedia Applications also presents several specific applications and a general structure for the solution of computed synchronization problems. The applications demonstrate the use of this structure. Two applications in particular are described in detail: the alignment of text to speech audio, and the alignment of simultaneous English language translations of ancient texts. Many additional applications are discussed as future uses of the technology. Computed Synchronization for Multimedia Applications is useful to researchers, students, and developers seeking to apply computed synchronization in many fields. It is also suitable as a reference for a graduate-level course in multimedia data retrieval.
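The core operation behind computed synchronization is the temporal alignment of two feature streams. The Python sketch below illustrates one standard alignment technique, dynamic time warping, on two hypothetical numeric feature sequences; it shows the general idea of alignment, not the specific algorithms developed in the book.

```python
def dtw_alignment(a, b):
    """Dynamic time warping: returns total cost and an index alignment
    between sequences `a` and `b` (lists of numeric feature values)."""
    n, m = len(a), len(b)
    INF = float("inf")
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # stay on b[j-1]
                                 cost[i][j - 1],      # stay on a[i-1]
                                 cost[i - 1][j - 1])  # advance both
    # Backtrack to recover which indices were aligned with each other
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = min(cost[i - 1][j - 1], cost[i - 1][j], cost[i][j - 1])
        if step == cost[i - 1][j - 1]:
            i, j = i - 1, j - 1
        elif step == cost[i - 1][j]:
            i -= 1
        else:
            j -= 1
    return cost[n][m], list(reversed(path))

# Hypothetical feature tracks, e.g. energy of an audio stream vs. a script
total, path = dtw_alignment([0.1, 0.9, 0.8, 0.2], [0.1, 0.85, 0.3])
print(total, path)
```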
There is a growing interest in integrating databases and programming languages. In recent years the programming language community has developed new models of computation such as logic programming, object-oriented programming and functional programming, to add to the well established von Neumann model. The database community has almost independently developed more and more sophisticated data models to solve the problems of large scale data organisation. To make use of these new models in programming languages there must be an awareness of the problems of large scale data. The database designers can also learn much about language interfaces from programming language designers. The purpose of this book is to present the state of the art in integrating both approaches. The book evolved from the proceedings of a workshop held at the Appin in August 1985. It consists of three sections. The first, "Data Types and Persistence," discusses the issues of data abstraction in a persistent environment. Type systems, modules and binding mechanisms that are appropriate for programming in the large are proposed. Type checking for polymorphic systems and across invocations of the type checker is also discussed. The second section, "Database Types in Programming Languages," introduces the concept of inheritance as a method of polymorphic modelling. It is shown how inheritance can be used as a method of computation in logic programming and how it is appropriate for modelling large scale data in databases. The last section discusses the issues of controlled access to large scale data in a concurrent and distributed persistent environment. Finally, methods of how we may implement persistence and build machine architectures for persistent data round off the book.
Business rules are everywhere. Every enterprise process, task, activity, or function is governed by rules. However, some of these rules are implicit and thus poorly enforced, others are written but not enforced, and still others are perhaps poorly written and obscurely enforced. The business rule approach looks for ways to elicit, communicate, and manage business rules in a way that all stakeholders can understand, and to enforce them within the IT infrastructure in a way that supports their traceability and facilitates their maintenance. Boyer and Mili will help you to adopt the business rules approach effectively. While most business rule development methodologies put a heavy emphasis on up-front business modeling and analysis, agile business rule development (ABRD) as introduced in this book is incremental, iterative, and test-driven. Rather than spending weeks discovering and analyzing rules for a complete business function, ABRD puts the emphasis on producing executable, tested rule sets early in the project without jeopardizing the quality, longevity, and maintainability of the end result. The authors' presentation covers all four aspects required for a successful application of the business rules approach: (1) foundations, to understand what business rules are (and are not) and what they can do for you; (2) methodology, to understand how to apply the business rules approach; (3) architecture, to understand how rule automation impacts your application; (4) implementation, to actually deliver the technical solution within the context of a particular business rule management system (BRMS). Throughout the book, the authors use an insurance case study that deals with claim processing. Boyer and Mili cater to different audiences: Project managers will find a pragmatic, proven methodology for delivering and maintaining business rule applications. Business analysts and rule authors will benefit from guidelines and best practices for rule discovery and analysis. Application architects and software developers will appreciate an exploration of the design space for business rule applications, proven architectural and design patterns, and coding guidelines for using JRules.
This book reports on advanced theories and cutting-edge applications in the field of soft computing. The individual chapters, written by leading researchers, are based on contributions presented during the 4th World Conference on Soft Computing, held May 25-27, 2014, in Berkeley. The book covers a wealth of key topics in soft computing, focusing on both fundamental aspects and applications. The former include fuzzy mathematics, type-2 fuzzy sets, evolutionary-based optimization, aggregation and neural networks, while the latter include soft computing in data analysis, image processing, decision-making, classification, series prediction, economics, control, and modeling. By providing readers with a timely, authoritative view on the field, and by discussing thought-provoking developments and challenges, the book will foster new research directions in the diverse areas of soft computing.
Recent years have seen a dramatic growth of natural language text data, including web pages, news articles, scientific literature, emails, enterprise documents, and social media such as blog articles, forum posts, product reviews, and tweets. This has led to an increasing demand for powerful software tools to help people analyze and manage vast amounts of text data effectively and efficiently. Unlike data generated by a computer system or sensors, text data are usually generated directly by humans, and are accompanied by semantically rich content. As such, text data are especially valuable for discovering knowledge about human opinions and preferences, in addition to many other kinds of knowledge that we encode in text. In contrast to structured data, which conform to well-defined schemas (thus are relatively easy for computers to handle), text has less explicit structure, requiring computer processing toward understanding of the content encoded in text. The current technology of natural language processing has not yet reached a point to enable a computer to precisely understand natural language text, but a wide range of statistical and heuristic approaches to analysis and management of text data have been developed over the past few decades. They are usually very robust and can be applied to analyze and manage text data in any natural language, and about any topic. This book provides a systematic introduction to all these approaches, with an emphasis on covering the most useful knowledge and skills required to build a variety of practically useful text information systems. The focus is on text mining applications that can help users analyze patterns in text data to extract and reveal useful knowledge. Information retrieval systems, including search engines and recommender systems, are also covered as supporting technology for text mining applications. The book covers the major concepts, techniques, and ideas in text data mining and information retrieval from a practical viewpoint, and includes many hands-on exercises designed with a companion software toolkit (i.e., MeTA) to help readers learn how to apply techniques of text mining and information retrieval to real-world text data and how to experiment with and improve some of the algorithms for interesting application tasks. The book can be used as a textbook for a computer science undergraduate course or a reference book for practitioners working on relevant problems in analyzing and managing text data.
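As a flavour of the techniques involved, the following Python sketch implements basic TF-IDF ranked retrieval over a toy document collection. It is a generic illustration of ranked retrieval, not an example of the MeTA toolkit's API; the documents and query are hypothetical.

```python
import math
from collections import Counter

docs = {  # hypothetical toy collection
    "d1": "data mining finds patterns in text data",
    "d2": "search engines rank documents for a query",
    "d3": "text retrieval and text mining share many techniques",
}

tokenized = {d: text.lower().split() for d, text in docs.items()}
N = len(docs)
# Document frequency: number of documents containing each term
df = Counter(term for toks in tokenized.values() for term in set(toks))

def score(query, doc_tokens):
    """TF-IDF score of one document for a whitespace-tokenized query."""
    tf = Counter(doc_tokens)
    s = 0.0
    for term in query.lower().split():
        if term in tf:
            idf = math.log(N / df[term])  # rarer terms weigh more
            s += tf[term] * idf
    return s

query = "text mining"
ranked = sorted(tokenized, key=lambda d: score(query, tokenized[d]), reverse=True)
for d in ranked:
    print(d, round(score(query, tokenized[d]), 3))
```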
This book explores the concepts of data mining and data warehousing, a promising and flourishing frontier in database systems, and presents a broad, yet in-depth overview of the field of data mining. Data mining is a multidisciplinary field, drawing work from areas including database technology, artificial intelligence, machine learning, neural networks, statistics, pattern recognition, knowledge based systems, knowledge acquisition, information retrieval, high performance computing and data visualization.
Multimedia Cartography provides a contemporary overview of the issues related to multimedia cartography and the design and production elements that are unique to this area of mapping. The book has been written for professional cartographers interested in moving into multimedia mapping, for cartographers already involved in producing multimedia titles who wish to discover the approaches that other practitioners in multimedia cartography have taken, and for students and academics in the mapping sciences and related geographical fields wishing to update their knowledge about current issues related to cartographic design and production. It provides a new approach to cartography, one based on the exploitation of the many rich media components and the avant-garde approach that multimedia offers.
Universal navigation is accessible primarily through smart phones, providing users with navigation information regardless of the environment (i.e., outdoor or indoor). Universal Navigation for Smart Phones provides the most up-to-date navigation technologies and systems for both outdoor and indoor navigation. It also provides a comparison of the similarities and differences between outdoor and indoor navigation systems from both a technological standpoint and a user's perspective. All aspects of navigation systems, including geo-positioning, wireless communication, databases, and functions, will be introduced. The main thrust of this book is to present new approaches and techniques for future navigation systems, including social networking as an emerging approach for navigation.
A collection of the most up-to-date research-oriented chapters on information systems development and databases, this book provides an understanding of the capabilities and features of new ideas and concepts in information systems development, databases, and forthcoming technologies.
The book covers a decade of work with some of the largest commercial and government agencies around the world in addressing cyber security related to malicious insiders (trusted employees, contractors, and partners). It explores organized crime, terrorist threats, and hackers. It addresses the steps organizations must take to address insider threats at a people, process, and technology level.
The authors focus on the mathematical models and methods that support most data mining applications and solution techniques.
This book presents an overview of techniques for discovering high-utility patterns (patterns with a high importance) in data. It introduces the main types of high-utility patterns, as well as the theory and core algorithms for high-utility pattern mining, and describes recent advances, applications, open-source software, and research opportunities. It also discusses several types of discrete data, including customer transaction data and sequential data. The book consists of twelve chapters, seven of which are surveys presenting the main subfields of high-utility pattern mining, including itemset mining, sequential pattern mining, big data pattern mining, metaheuristic-based approaches, privacy-preserving pattern mining, and pattern visualization. The remaining five chapters describe key techniques and applications, such as discovering concise representations and regular patterns.
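To illustrate the core notion, the Python sketch below computes the utility of candidate itemsets in a toy transaction database (quantity times unit profit, summed over the transactions that contain the whole itemset) and reports those meeting a minimum-utility threshold. It is a naive enumeration for illustration only, not one of the efficient algorithms surveyed in the book; the items, profits, and threshold are hypothetical.

```python
from itertools import combinations

# Hypothetical transactions: item -> purchased quantity
transactions = [
    {"apple": 2, "bread": 1, "milk": 3},
    {"apple": 1, "milk": 2},
    {"bread": 4, "milk": 1},
]
profit = {"apple": 5, "bread": 2, "milk": 3}  # unit profit per item
MIN_UTILITY = 15                              # minimum-utility threshold

def utility(itemset, tx):
    """Utility of `itemset` in one transaction (0 if not fully contained)."""
    if not all(item in tx for item in itemset):
        return 0
    return sum(tx[item] * profit[item] for item in itemset)

# Naive enumeration of all candidate itemsets over the known items
items = sorted(profit)
for size in range(1, len(items) + 1):
    for itemset in combinations(items, size):
        total = sum(utility(itemset, tx) for tx in transactions)
        if total >= MIN_UTILITY:
            print(set(itemset), "utility =", total)
```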
Despite its explosive growth over the last decade, the Web remains essentially a tool to allow humans to access information. Semantic Web technologies like RDF, OWL and other W3C standards aim to extend the Web's capability through increased availability of machine-processable information. Davies, Grobelnik and Mladenic have grouped contributions from renowned researchers into four parts: technology; integration aspects of knowledge management; knowledge discovery and human language technologies; and case studies. Together, they offer a concise vision of semantic knowledge management, ranging from knowledge acquisition to ontology management to knowledge integration, and their applications in domains such as telecommunications, social networks and legal information processing. This book is an excellent combination of fundamental research, tools and applications in Semantic Web technologies. It serves the fundamental interests of researchers and developers in this field in both academia and industry who need to track Web technology developments and to understand their business implications.
In today's market, emerging technologies are continually assisting in common workplace practices as companies and organizations search for innovative ways to solve modern issues that arise. Prevalent applications including internet of things, big data, and cloud computing all have noteworthy benefits, but issues remain when separately integrating them into the professional practices. Significant research is needed on converging these systems and leveraging each of their advantages in order to find solutions to real-time problems that still exist. Challenges and Opportunities for the Convergence of IoT, Big Data, and Cloud Computing is a pivotal reference source that provides vital research on the relation between these technologies and the impact they collectively have in solving real-world challenges. While highlighting topics such as cloud-based analytics, intelligent algorithms, and information security, this publication explores current issues that remain when attempting to implement these systems as well as the specific applications IoT, big data, and cloud computing have in various professional sectors. This book is ideally designed for academicians, researchers, developers, computer scientists, IT professionals, practitioners, scholars, students, and engineers seeking research on the integration of emerging technologies to solve modern societal issues.
Content-based multimedia retrieval is a challenging research field with many unsolved problems. This monograph details concepts and algorithms for robust and efficient information retrieval of two different types of multimedia data: waveform-based music data and human motion data. It first examines several approaches in music information retrieval, in particular general strategies as well as efficient algorithms. The book then introduces a general and unified framework for motion analysis, retrieval, and classification, highlighting the design of suitable features, the notion of similarity used to compare data streams, and data organization.
Background: Information Retrieval (IR) has become, mainly as a result of the huge impact of the World Wide Web (WWW) and the CD-ROM industry, one of the most important theoretical and practical research topics in Information and Computer Science. Since the inception of its first theoretical roots about 40 years ago, IR has made a variety of practical, experimental and technological advances. It is usually defined as being concerned with the organisation, storage, retrieval and evaluation of information (stored in computer databases) that is likely to be relevant to users' information needs (expressed in queries). A huge number of articles published in specialised journals and at conferences (such as, for example, the Journal of the American Society for Information Science, Information Processing and Management, The Computer Journal, Information Retrieval, Journal of Documentation, ACM TOIS, ACM SIGIR Conferences, etc.) deal with many different aspects of IR. A number of books have also been written about IR, for example: van Rijsbergen, 1979; Salton and McGill, 1983; Korfhage, 1997; Kowalski, 1997; Baeza-Yates and Ribeiro-Neto, 1999; etc. IR is typically divided and presented in a structure (models, data structures, algorithms, indexing, evaluation, human-computer interaction, digital libraries, WWW-related aspects, and so on) that reflects its interdisciplinary nature. All theoretical and practical research in IR is ultimately based on a few basic models (or types) which have been elaborated over time. Every model has a formal (mathematical, algorithmic, logical) description of some sort, and these descriptions are scattered all over the literature.
This book shows C# developers how to use C# 2008 and ADO.NET 3.5 to develop database applications the way the best professionals do. After an introductory section, section 2 shows how to use data sources and datasets for Rapid Application Development and prototyping of Windows Forms applications. Section 3 shows how to build professional 3-layer applications that consist of presentation, business, and database classes. Section 4 shows how to use the new LINQ feature to work with data structures like datasets, SQL Server databases, and XML documents. And section 5 shows how to build database applications by using the new Entity Framework to map business objects to database objects. To ensure mastery, this book presents 23 complete database applications that demonstrate best programming practices. And it's all done in the distinctive Murach style that has been training professional developers for 35 years.
Temporal Information Systems in Medicine introduces the engineering of information systems for medically-related problems and applications. The chapters are organized into four parts: fundamentals, temporal reasoning & maintenance in medicine, time in clinical tasks, and the display of time-oriented clinical information. The chapters are self-contained with pointers to other relevant chapters or sections in this book when necessary. Time is of central importance and is a key component of the engineering process for information systems. This book is designed as a secondary text or reference book for upper-undergraduate level students and graduate level students concentrating on computer science, biomedicine and engineering. Industry professionals and researchers working in health care management, information systems in medicine, medical informatics, database management and AI will also find this book a valuable asset.
Given its effective techniques and theories from various sources and fields, data science is playing a vital role in transportation research and the consequences of the inevitable switch to electronic vehicles. This fundamental insight provides a step towards the solution of this important challenge. Data Science and Simulation in Transportation Research highlights entirely new and detailed spatial-temporal micro-simulation methodologies for human mobility and the emerging dynamics of our society. Bringing together novel ideas grounded in big data from various data mining and transportation science sources, this book is an essential tool for professionals, students, and researchers in the fields of transportation research and data mining.