This book addresses the major challenges in realizing unmanned aerial vehicles (UAVs) in IoT-based smart cities. The challenges tackled vary from cost and energy efficiency to availability and service quality. The aim of this book is to focus on both the design and implementation aspects of UAV-based approaches in IoT-enabled smart city applications that are enabled and supported by wireless sensor networks, 5G, and beyond. The contributors mainly focus on data delivery approaches and their performability aspects. This book is meant for readers of varying disciplines who are interested in implementing the smart planet/environments vision via wireless/wired enabling technologies. Involves the most up-to-date unmanned aerial vehicle (UAV) assessment and evaluation approaches. Includes innovative operational ideas in agriculture, surveillance, rescue, etc. Pertains to researchers, scientists, engineers and practitioners in the field of smart cities, IoT, and communications. Fadi Al-Turjman received his Ph.D. from Queen's University, Canada. He is a full professor and a research center director at Near East University, Nicosia. He is a leading authority in the area of IoT and intelligent systems. His publication record spans over 250 publications, in addition to his editorship of top journals such as IEEE Communications Surveys and Tutorials and Elsevier's Sustainable Cities and Society.
This comprehensive book offers a full picture of the cutting-edge technologies in the area of "Multimedia Retrieval and Management". It addresses graduate students and scientists in electrical engineering and in computer science, as well as system designers, engineers, programmers and other technical managers in the IT industry. The book provides a complete set of theories and technologies necessary for a profound introduction to the field. It includes multimedia low-level feature extraction and high-level semantic description, in addition to multimedia authentication and watermarking, and the most up-to-date MPEG-7 standard. A broad range of practical applications is covered, e.g., digital libraries, medical images, biometrics, human palm-print and face for security, living-plant data management, and video-on-demand services.
Fundamentals of Information Systems contains articles from the 7th International Workshop on Foundations of Models and Languages for Data and Objects (FoMLaDO '98), which was held in Timmel, Germany. These articles capture various aspects of database and information systems theory: identification as a primitive of database models; deontic action programs; marked nulls in queries; topological canonization in spatial databases; complexity of search queries; complexity of Web queries; attribute grammars for structured document queries; hybrid multi-level concurrency control; efficient navigation in persistent object stores; formal semantics of UML; reengineering of object bases; and integrity dependence. Fundamentals of Information Systems serves as an excellent reference, providing insight into some of the most challenging research issues in the field.
This book provides a comprehensive view of the methods and approaches for performance evaluation of computer networks. It offers a clear and logical introduction to the topic, covering both fundamental concepts and practical aspects. It enables the reader to answer a series of questions regarding performance evaluation in modern computer networking scenarios, such as 'What, where, and when to measure?', 'Which time scale is more appropriate for a particular measurement and analysis?', 'Experimentation, simulation or emulation? Why?', and 'How do I best design a sound performance evaluation plan?'. The book includes concrete examples and applications in the important aspects of experimentation, simulation and emulation, and analytical modeling, with strong support from the scientific literature. It enables the identification of common shortcomings and highlights where students, researchers, and engineers should focus to conduct sound performance evaluation. This book is a useful guide to advanced undergraduates and graduate students, network engineers, and researchers who plan and design proper performance evaluation of computer networks and services. Previous knowledge of computer networking concepts, mechanisms, and protocols is assumed. Although the book provides a quick review of applied statistics in computer networking, familiarity with basic statistics is an asset. It is suitable for advanced courses on computer networking, as well as serving as a secondary textbook for more specific courses.
This book presents a specific and unified approach to Knowledge Discovery and Data Mining, termed IFN, for Information Fuzzy Network methodology. Data Mining (DM) is the science of modelling and generalizing common patterns from large sets of multi-type data. DM is a part of KDD, the overall process for Knowledge Discovery in Databases. The accessibility and abundance of information today make this a topic of particular importance and need. The book has three main parts, complemented by appendices as well as software and project data that are accessible from the book's web site (http://www.eng.tau.ac.il/~maimon/ifn-kdg). Part I (Chapters 1-4) starts with the topic of KDD and DM in general and makes reference to other works in the field, especially those related to the information theoretic approach. The remainder of the book presents our work, starting with the IFN theory and algorithms. Part II (Chapters 5-6) discusses the methodology of application and includes case studies. Then, in Part III (Chapters 7-9), a comparative study is presented, concluding with some advanced methods and open problems. The IFN, being a generic methodology, applies to a variety of fields, such as manufacturing, finance, health care, medicine, insurance, and human resources. The appendices expand on the relevant theoretical background and present descriptions of sample projects (including detailed results).
This book assembles contributions from computer scientists and librarians that altogether encompass the complete range of tools, tasks and processes needed to successfully preserve the cultural heritage of the Web. It combines the librarian's application knowledge with the computer scientist's implementation knowledge, and serves as a standard introduction for everyone involved in keeping alive the immense amount of online information.
Fuzzy Databases: Modeling, Design and Implementation focuses on semantic aspects that have not been studied in previous works and extends the EER model with fuzzy capabilities. The resulting model is called the FuzzyEER model, and the extensions studied include fuzzy attributes, fuzzy aggregations and different aspects of specializations, such as fuzzy degrees and fuzzy constraints. All these fuzzy extensions offer greater expressiveness in conceptual design. This book, while providing a global and integrated view of fuzzy database constructions, serves as an introduction to fuzzy logic, fuzzy databases and fuzzy modeling in databases.
This proceedings volume brings together the results of a collective discussion on research, academic teaching and education in the field of business and economics in the context of globalization. The contributions examine leadership and sustainability, quality and governance, and the internationalization of higher education. With a particular focus on business education and business schools, the book discusses the labor market and modernization as well as contemporary trends and challenges. By including both academic papers and contributions from industry, it forges research links between academia, business and industry.
This book provides a summary of the manifold audio- and web-based approaches to music information retrieval (MIR) research. In contrast to other books dealing solely with music signal processing, it addresses additional cultural and listener-centric aspects and thus provides a more holistic view. Consequently, the text includes methods operating on features extracted directly from the audio signal, as well as methods operating on features extracted from contextual information, either the cultural context of music as represented on the web or the user and usage context of music. Following the prevalent document-centered paradigm of information retrieval, the book addresses models of music similarity that extract computational features to describe an entity that represents music on any level (e.g., song, album, or artist), and methods to calculate the similarity between them. While this perspective and the representations discussed cannot describe all musical dimensions, they enable us to effectively find music of similar qualities by providing abstract summarizations of musical artifacts from different modalities. The text at hand provides a comprehensive and accessible introduction to the topics of music search, retrieval, and recommendation from an academic perspective. It will not only allow those new to the field to quickly access MIR from an information retrieval point of view but also raise awareness for the developments of the music domain within the greater IR community. In this regard, Part I deals with content-based MIR, in particular the extraction of features from the music signal and similarity calculation for content-based retrieval. Part II subsequently addresses MIR methods that make use of the digitally accessible cultural context of music. Part III addresses methods of collaborative filtering and user-aware and multi-modal retrieval, while Part IV explores current and future applications of music retrieval and recommendation.
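The document-centered paradigm described above ultimately reduces to comparing feature vectors. As a minimal sketch (not code from the book; the feature values are invented), computing the cosine similarity between two hypothetical song descriptors might look like this in Java:

```java
// Cosine similarity between two feature vectors, e.g. audio descriptors
// summarizing two songs. The vectors and values are purely illustrative.
public final class CosineSimilarity {
    static double cosine(double[] a, double[] b) {
        double dot = 0, normA = 0, normB = 0;
        for (int i = 0; i < a.length; i++) {
            dot += a[i] * b[i];
            normA += a[i] * a[i];
            normB += b[i] * b[i];
        }
        return dot / (Math.sqrt(normA) * Math.sqrt(normB));
    }

    public static void main(String[] args) {
        double[] songA = {0.12, 0.85, 0.33, 0.47}; // hypothetical timbre features
        double[] songB = {0.10, 0.80, 0.40, 0.52};
        System.out.printf("similarity = %.3f%n", cosine(songA, songB));
    }
}
```

The same scoring scheme applies whether the features come from the audio signal or from contextual sources; only the feature extraction differs.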
Searching for Semantics: Data Mining, Reverse Engineering. Stefano Spaccapietra (Swiss Federal Institute of Technology, Lausanne, Switzerland) and Fred Maryanski (University of Connecticut, Storrs, CT, USA). Review and future directions: In the last few years, database semantics research has turned sharply from a highly theoretical domain to one with more focus on practical aspects. The DS-7 Working Conference held in October 1997 in Leysin, Switzerland, demonstrated the more pragmatic orientation of the current generation of leading researchers. The papers presented at the meeting emphasized two major areas: the discovery of semantics and semantic data modeling. The work in the latter category indicates that although object-oriented database management systems have emerged as commercially viable products, many fundamental modeling issues require further investigation. Today's object-oriented systems provide the capability to describe complex objects and include techniques for mapping from a relational database to objects. However, we must further explore the expression of information regarding the dimensions of time and space. Semantic models possess the richness to describe systems containing spatial and temporal data. The challenge of incorporating these features in a manner that promotes efficient manipulation by the subject specialist still requires extensive development.
Data compression is now indispensable to products and services of many industries including computers, communications, healthcare, publishing and entertainment. This invaluable resource introduces this area to information system managers and others who need to understand how it is changing the world of digital systems. For those who know the technology well, it reveals what happens when data compression is used in real-world applications and provides guidance for future technology development.
This book introduces quantitative intertextuality, a new approach to the algorithmic study of information reuse in text, sound and images. Employing a variety of tools from machine learning, natural language processing, and computer vision, readers will learn to trace patterns of reuse across diverse sources for scholarly work and practical applications. The respective chapters share highly novel methodological insights in order to guide the reader through the basics of intertextuality. In Part 1, "Theory", the theoretical aspects of intertextuality are introduced, leading to a discussion of how they can be embodied by quantitative methods. In Part 2, "Practice", specific quantitative methods are described to establish a set of automated procedures for the practice of quantitative intertextuality. Each chapter in Part 2 begins with a general introduction to a major concept (e.g., lexical matching, sound matching, semantic matching), followed by a case study (e.g., detecting allusions to a popular television show in tweets, quantifying sound reuse in Romantic poetry, identifying influences in fan fiction by thematic matching), and finally the development of an algorithm that can be used to reveal parallels in the relevant contexts. Because this book is intended as a "gentle" introduction, the emphasis is often on simple yet effective algorithms for a given matching task. A set of exercises is included at the end of each chapter, giving readers the chance to explore more cutting-edge solutions and novel aspects of the material at hand. Additionally, the book's companion website includes software (R and C++ library code) and all of the source data for the examples in the book, as well as supplemental content (slides, high-resolution images, additional results) that may prove helpful for exploring the different facets of quantitative intertextuality that are presented in each chapter. Given its interdisciplinary nature, the book will appeal to a broad audience. From practitioners specializing in forensics to students of cultural studies, readers with diverse backgrounds (e.g., in the social sciences, natural language processing, or computer vision) will find valuable insights.
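The book's companion code is in R and C++; purely as an illustration of the kind of "simple yet effective" lexical-matching algorithm the description mentions, here is a hedged Java sketch that scores two short texts by Jaccard similarity over word bigrams (both input strings are invented):

```java
import java.util.HashSet;
import java.util.Set;

// Jaccard similarity over word bigrams: a simple lexical-matching score of
// the kind used to surface candidate text reuse. The inputs are invented.
public final class BigramJaccard {
    static Set<String> bigrams(String text) {
        String[] words = text.toLowerCase().split("\\W+");
        Set<String> result = new HashSet<>();
        for (int i = 0; i + 1 < words.length; i++) {
            result.add(words[i] + " " + words[i + 1]);
        }
        return result;
    }

    public static void main(String[] args) {
        Set<String> a = bigrams("arms and the man I sing");
        Set<String> b = bigrams("arms and the man I now sing");
        Set<String> intersection = new HashSet<>(a);
        intersection.retainAll(b);
        Set<String> union = new HashSet<>(a);
        union.addAll(b);
        System.out.printf("Jaccard = %.3f%n",
                (double) intersection.size() / union.size());
    }
}
```

A high score flags a candidate parallel for closer inspection; it does not by itself establish an allusion.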
Software product lines represent perhaps the most exciting paradigm shift in software development since the advent of high-level programming languages. Nowhere else in software engineering have we seen such breathtaking improvements in cost, quality, time to market, and developer productivity, often registering in the order-of-magnitude range. Here, the authors combine academic research results with real-world industrial experiences, thus presenting a broad view on product line engineering so that both managers and technical specialists will benefit from exposure to this work. They capture the wealth of knowledge that eight companies have gathered during the introduction of the software product line engineering approach in their daily practice.
Databases and database systems have become an essential part of everyday life, for example in banking activities, online shopping, or reservations of airline tickets and hotels. These trends place more demands on the capabilities of future database systems, which need to evolve into decision-making systems based on data from multiple sources with varying reliability. In this book a model for the next generation of database systems is presented. It is demonstrated how to quantize favorable and unfavorable qualitative facts so that they can be stored and processed efficiently, and how to use the reliability of the contributing sources in decision making. The concept of a confidence index set (ciset) is introduced in order to mathematically model the above issues. A simple introduction to relational database systems is given, allowing anyone with no background in database theory to appreciate the further contents of this work, especially the extended relational operations and semantics of the ciset relational database model.
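The description does not spell out the ciset operations, so the following Java sketch is purely hypothetical: it pictures a fact carrying separate confidence indices for favorable and unfavorable evidence, weighted by the reliability of its source. The names and the weighting rule are assumptions, not the book's definitions:

```java
// Hypothetical sketch of a ciset-style fact: the pair of confidence values
// and the reliability weighting are illustrative, not the book's definitions.
public final class CisetSketch {
    record CisetFact(String fact, double favorable, double unfavorable) {
        // Scale the evidence by the reliability of the contributing source.
        CisetFact weightedBy(double sourceReliability) {
            return new CisetFact(fact,
                    favorable * sourceReliability,
                    unfavorable * sourceReliability);
        }
    }

    public static void main(String[] args) {
        CisetFact f = new CisetFact("customer is creditworthy", 0.8, 0.1);
        System.out.println(f.weightedBy(0.9)); // source trusted at 0.9
    }
}
```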
This book provides a comprehensive picture of mobile big data, from data sources to mobile data-driven applications. It comprises two main components: an overview of mobile big data, and case studies based on real-world data recently collected by one of the largest mobile network carriers in China. In the first component, four areas of the mobile big data life cycle are surveyed: data source and collection, transmission, computing platform, and applications. In the second component, two case studies are provided, based on the signaling data collected in the cellular core network, addressing subscriber privacy evaluation and demand forecasting for network management. These cases give a vivid demonstration of what mobile big data looks like and how it can be analyzed and mined to generate useful and meaningful information and knowledge. This book targets researchers, practitioners and professors working in this field. Advanced-level students studying computer science and electrical engineering will also find it useful as supplemental reading.
The IFIP World Computer Congress (WCC) is one of the most important conferences in the area of computer science at the worldwide level, and it has a federated structure, which takes into account the rapidly growing and expanding interests in this area. Informatics is rapidly changing and becoming more and more connected to a number of human and social science disciplines. Human-computer interaction is now a mature and still dynamically evolving part of this area, which is represented in IFIP by the Technical Committee 13 on HCI. In this WCC edition it was interesting and useful to have again a Symposium on Human-Computer Interaction in order to present and discuss a number of contributions in this field. There has been increasing awareness among designers of interactive systems of the importance of designing for usability, but we are still far from having products that are really usable, and usability can mean different things depending on the application domain. We are all aware that too many users of current technology often feel frustrated because computer systems are not compatible with their abilities and needs in existing work practices. As designers of tomorrow's technology, we have the responsibility of creating computer artifacts that would permit better user experience with the various computing devices, so that users may enjoy more satisfying experiences with information and communications technologies.
Proceedings of the 2012 International Conference on Information Technology and Software Engineering presents selected articles from this major event, which was held in Beijing, December 8-10, 2012. This book presents the latest research trends, methods and experimental results in the fields of information technology and software engineering, covering various state-of-the-art research theories and approaches. The subjects range from intelligent computing to information processing, software engineering, the Web, the Unified Modeling Language (UML), multimedia, communication technologies, system identification, graphics and visualization, etc. The proceedings provide a major interdisciplinary forum for researchers and engineers to present the most innovative studies and advances, and can serve as an excellent reference work for researchers and graduate students working on information technology and software engineering. Prof. Wei Lu, Dr. Guoqiang Cai, Prof. Weibin Liu and Dr. Weiwei Xing all work at Beijing Jiaotong University.
Concurrency in Dependable Computing focuses on concurrency-related issues in the area of dependable computing. Failures of system components, be they hardware units or software modules, can be viewed as undesirable events occurring concurrently with a set of normal system events. Achieving dependability is therefore closely related to, and also benefits from, concurrency theory and formalisms. This beneficial relationship appears to manifest in three strands of work. (1) Application-level structuring of concurrent activities: concepts such as atomic actions, conversations, exception handling and view synchrony are useful in structuring concurrent activities so as to facilitate attempts at coping with the effects of component failures. (2) Replication-induced concurrency management: replication is a widely used technique for achieving reliability, and replica management essentially involves ensuring that replicas perceive concurrent events identically. (3) Application of concurrency formalisms for dependability assurance: fault-tolerant algorithms are harder to verify than their fault-free counterparts because the impact of component faults at each state needs to be considered in addition to valid state transitions; CSP, Petri nets and CCS are useful tools to specify and verify fault-tolerant designs and protocols. Concurrency in Dependable Computing explores many significant issues in all three strands. To this end, it is composed as a collection of papers written by authors well known in their respective areas of research. To ensure quality, the papers were reviewed by a panel of at least three experts in the relevant area.
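A minimal sketch, assuming nothing beyond the replication principle stated above, of why replica management centers on event ordering: two replicas that apply the same events in the same total order necessarily end in identical states. Everything here is illustrative:

```java
import java.util.List;

// Illustrative only: two replicas of a simple counter apply the same totally
// ordered event log, so they necessarily reach identical states.
public final class ReplicaOrdering {
    static int applyAll(List<Integer> orderedEvents) {
        int state = 0;
        for (int delta : orderedEvents) {
            state += delta; // each event is applied deterministically
        }
        return state;
    }

    public static void main(String[] args) {
        List<Integer> log = List.of(5, -2, 7);    // one agreed-upon total order
        int replicaA = applyAll(log);
        int replicaB = applyAll(log);
        System.out.println(replicaA == replicaB); // true: states are identical
    }
}
```

The hard part in practice is agreeing on that single total order despite concurrency and failures, which is exactly where the formalisms discussed in the book come in.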
Automation is nothing new to industry. It has a long tradition on the factory floor, where its constant objective has been to increase the productivity of manufacturing processes. Only with the advent of computers could the focus of automation widen to include administrative and information-handling tasks. More recently, automation has been extended to the more intellectual tasks of production planning and control, material and resource planning, engineering design, and quality control. New challenges arise in the form of flexible manufacturing, assembly automation, and automated floor vehicles, to name just a few. The sheer complexity of the problems as well as the state of the art has led scientists and engineers to concentrate on issues that could easily be isolated. For example, it was much simpler to build CAD systems whose sole objective was to ease the task of drawing, rather than to worry at the same time about how the design results could be interfaced with the manufacturing or assembly processes. It was less problematic to gather statistics from quality control and to print reports than to react immediately to first hints of irregularities by interfacing with the designers or manufacturing control, or, even better, by automatically diagnosing the causes from the design and planning data. A heavy, though perhaps unavoidable, price must today be paid whenever one tries to assemble these isolated solutions into a larger, integrated system.
The book serves as a collection of multi-disciplinary contributions related to Geographic Hypermedia and highlights the technological aspects of GIS. Specifically, it focuses on GIS databases and database management systems. The methodologies for modeling and handling geographic data are described. It presents novel models, methods and tools applied in the Spatial Decision Support paradigm.
Video on Demand Systems brings together in one place important contributions and up-to-date research results in this fast-moving area. Video on Demand Systems serves as an excellent reference, providing insight into some of the most challenging research issues in the field.
Advanced visual analysis and problem solving has been conducted successfully for millennia. The Pythagorean Theorem was proven using visual means more than 2000 years ago. In the 19th century, John Snow stopped a cholera epidemic in London by proposing that a specific water pump be shut down. He discovered that pump by visually correlating data on a city map. The goal of this book is to present the current trends in visual and spatial analysis for data mining, reasoning, problem solving and decision-making. This is the first book to focus on visual decision making and problem solving in general, with specific applications in the geospatial domain, combining theory with real-world practice. The book is unique in its integration of modern symbolic and visual approaches to decision making and problem solving. As such, it ties together much of the monograph and textbook literature in these emerging areas. This book contains 21 chapters that have been grouped into five parts: (1) visual problem solving and decision making, (2) visual and heterogeneous reasoning, (3) visual correlation, (4) visual and spatial data mining, and (5) visual and spatial problem solving in geospatial domains. Each chapter ends with a summary and exercises. The book is intended for professionals and graduate students in computer science, applied mathematics, imaging science and Geospatial Information Systems (GIS). In addition to being a state-of-the-art research compilation, this book can be used as a text for advanced courses on subjects such as modeling, computer graphics, visualization, image processing, data mining, GIS, and algorithm analysis.
"JDBC Metadata, MySQL, and Oracle Recipes" is the only book that focuses on metadata or annotation-based code recipes for JDBC API for use with Oracle and MySQL. It continues where the authors other book, "JDBC Recipes: A Problem-Solution Approach," leaves off. This edition is also a Java EE 5-compliant book, perfect for lightweight Java database development. And it provides cut-and-paste code templates that can be immediately customized and applied in each developer's application development.
Data mining involves the non-trivial extraction of implicit, previously unknown, and potentially useful information from databases. Genetic Programming (GP) and Inductive Logic Programming (ILP) are two approaches to data mining. This book first sets out the necessary background for the reader, including an overview of data mining, evolutionary algorithms and inductive logic programming. It then describes a framework, called GGP (Generic Genetic Programming), that integrates GP and ILP based on a formalism of logic grammars. The formalism is powerful enough to represent context-sensitive information and domain-dependent knowledge. This knowledge can be used to accelerate the learning speed and/or improve the quality of the knowledge induced. A grammar-based genetic programming system called LOGENPRO (The LOGic grammar based GENetic PROgramming system) is detailed and tested on many problems in data mining. It is found that LOGENPRO outperforms some ILP systems. We have also illustrated how to apply LOGENPRO to emulate Automatically Defined Functions (ADFs) to discover problem representation primitives automatically. By employing various kinds of knowledge about the problem being solved, LOGENPRO can find a solution much faster than ADFs, and the computation required by LOGENPRO is much smaller than that of ADFs. Moreover, LOGENPRO can emulate the effects of Strongly Typed Genetic Programming and ADFs simultaneously and effortlessly. Data Mining Using Grammar Based Genetic Programming and Applications is appropriate for researchers, practitioners and clinicians interested in genetic programming, data mining, and the extraction of data from databases.
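LOGENPRO's logic-grammar formalism is richer than a short snippet can show; as a hedged hint of the underlying idea, that grammar rules constrain randomly generated candidate programs to be well-formed, consider this Java sketch with an invented toy grammar:

```java
import java.util.List;
import java.util.Map;
import java.util.Random;

// Toy grammar-guided expression generator: candidates are grown by randomly
// expanding grammar rules, so every candidate is syntactically well-formed.
// The grammar here is an invented example, not LOGENPRO's formalism.
public final class GrammarSketch {
    static final Map<String, List<List<String>>> RULES = Map.of(
            "<expr>", List.of(
                    List.of("<expr>", "+", "<expr>"),
                    List.of("<expr>", "*", "<expr>"),
                    List.of("<term>")),
            "<term>", List.of(List.of("x"), List.of("1"), List.of("2")));
    static final Random RNG = new Random();

    static String expand(String symbol, int depth) {
        if (!RULES.containsKey(symbol)) return symbol; // terminal symbol
        List<List<String>> options = RULES.get(symbol);
        // Near the depth limit, force the last (terminating) production.
        List<String> choice = depth <= 0 ? options.get(options.size() - 1)
                : options.get(RNG.nextInt(options.size()));
        StringBuilder out = new StringBuilder();
        for (String s : choice) out.append(expand(s, depth - 1));
        return out.toString();
    }

    public static void main(String[] args) {
        System.out.println(expand("<expr>", 4)); // e.g. "x+2*1"
    }
}
```

In a full GP system such candidates would then be evaluated for fitness and evolved; the grammar's job, as in GGP, is to keep the search inside the space of meaningful programs.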
In 2013, the International Conference on Advanced Information Systems Engineering (CAiSE) turns 25. Initially launched in 1989, for all these years the conference has provided a broad forum for researchers working in the area of information systems engineering. To reflect on the work done so far and to examine prospects for future work, the CAiSE Steering Committee decided to present a selection of seminal papers published for the conference during these years and to ask their authors, all prominent researchers in the field, to comment on their work and how it has developed over the years. The scope of the selected papers covers a broad range of topics related to modeling and designing information systems, collecting and managing requirements, and, with special attention, how information systems are engineered towards their final development and deployment as software components. With this approach, the book provides not only a historical analysis of how information systems engineering evolved over the years, but also a fascinating social network analysis of the research community. Additionally, many inspiring ideas for future research and new perspectives in this area are sparked by the intriguing comments of the renowned authors.