"Proceedings of the 2012 International Conference on Information
Technology and Software Engineering" presents selected articles
from this major event, which was held in Beijing, December 8-10,
2012. This book presents the latest research trends, methods and
experimental results in the fields of information technology and
software engineering, covering various state-of-the-art research
theories and approaches. The subjects range from intelligent
computing to information processing, software engineering, Web,
unified modeling language (UML), multimedia, communication
technologies, system identification, graphics and visualizing,
etc.
A Practitioner's Handbook for Real-Time Analysis: Guide to Rate Monotonic Analysis for Real-Time Systems contains an invaluable collection of quantitative methods that enable real-time system developers to understand, analyze, and predict the timing behavior of many real-time systems. The methods are practical and theoretically sound, and can be used to assess design tradeoffs and to troubleshoot system timing behavior. This collection of methods is called rate monotonic analysis (RMA). The Handbook includes a framework for describing and categorizing the timing aspects of real-time systems, step-by-step techniques for performing timing analysis, numerous examples of real-time situations to which the techniques can be applied, and two case studies. A Practitioner's Handbook for Real-Time Analysis: Guide to Rate Monotonic Analysis for Real-Time Systems has been created to serve as a definitive source of information and a guide for developers as they analyze and design real-time systems using RMA. The Handbook is an excellent reference, and may be used as the text for advanced courses on the subject.
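As a taste of what such an analysis looks like in practice, here is a minimal sketch (not taken from the Handbook) of the classic Liu-Layland utilization test that underlies RMA; the task parameters are invented for illustration.

```python
def rm_schedulable(tasks):
    """Liu-Layland sufficient test for rate monotonic scheduling.

    tasks: list of (worst_case_execution_time, period) pairs.
    Returns True if total utilization is within the n(2^(1/n) - 1) bound.
    """
    n = len(tasks)
    utilization = sum(c / t for c, t in tasks)
    return utilization <= n * (2 ** (1 / n) - 1)

# three hypothetical periodic tasks: (execution time, period)
print(rm_schedulable([(1, 4), (2, 8), (3, 20)]))  # 0.65 <= 0.7798 -> True
```

The test is sufficient but not necessary; task sets that fail it may still be schedulable, which is why the Handbook's step-by-step techniques go further than this single inequality.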
This book focuses on the data mining, systems biology, and bioinformatics computational methods that can be used to summarize biological networks. Specifically, it discusses an array of techniques related to biological network clustering, network summarization, and differential network analysis which enable readers to uncover the functional and topological organization hidden in a large biological network. The authors also examine crucial open research problems in this arena. Academics, researchers, and advanced-level students will find this book to be a comprehensive and exceptional resource for understanding computational techniques and their applications to the summarization of biological networks.
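To give a flavor of the clustering step such techniques involve, below is a minimal sketch, assuming the networkx library and a made-up interaction graph; neither the edge list nor the choice of algorithm reflects the authors' specific methods.

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# hypothetical protein-interaction edges: two dense triangles joined by a bridge
G = nx.Graph([("A", "B"), ("B", "C"), ("A", "C"),
              ("C", "D"),
              ("D", "E"), ("E", "F"), ("D", "F")])

# modularity-based clustering groups nodes into putative functional modules
modules = greedy_modularity_communities(G)
print([sorted(m) for m in modules])  # e.g. [['A', 'B', 'C'], ['D', 'E', 'F']]
```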
The Semantic Web has evolved as a blueprint for a knowledge-based framework aimed at crossing the chasm from the current Web of unstructured information resources to a Web equipped with metadata and oriented to delegating tasks to software agents. Semantic Web Personalization and Context Awareness: Management of Personal Identities and Social Networking communicates relevant recent research in Semantic Web-based personalization as applied to the context of information systems. This book reviews knowledge engineering for organizational applications, and Semantic Web approaches to information systems and ontology-based information systems research, as well as the diverse underlying database and knowledge representation aspects that impact personalization and customization.
The need for efficient content-based image retrieval has increased tremendously in areas such as biomedicine, military, commerce, education, and Web image classification and searching. In the biomedical domain, content-based image retrieval can be used in patient digital libraries, clinical diagnosis, searching of 2-D electrophoresis gels, and pathology slides. Integrated Region-Based Image Retrieval presents a wavelet-based approach for feature extraction, combined with integrated region matching. An image in the database, or a portion of an image, is represented by a set of regions, roughly corresponding to objects, which are characterized by color, texture, shape, and location. A measure for the overall similarity between images is developed as a region-matching scheme that integrates properties of all the regions in the images. The advantage of this "soft matching" is that it makes the metric robust to poor segmentation, a long-standing problem that previous research had not solved. Integrated Region-Based Image Retrieval demonstrates an experimental image retrieval system called SIMPLIcity (Semantics-sensitive Integrated Matching for Picture LIbraries). This system validates these methods on various image databases, showing that they perform much better and much faster than existing ones. The system is exceptionally robust to image alterations such as intensity variation, sharpness variation, intentional distortions, cropping, shifting, and rotation. These features are extremely important to biomedical image databases, since the visual features in the query image are rarely exactly the same as those in the database images. Integrated Region-Based Image Retrieval is an excellent reference for researchers in the fields of image retrieval, multimedia, computer vision and image processing.
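The "soft matching" idea is easy to sketch. The following simplified, illustrative take on an IRM-style measure assumes each region has already been reduced to a feature vector plus an area weight; it mirrors the greedy most-similar-highest-priority assignment in spirit, not the system's exact significance rules.

```python
import numpy as np

def irm_distance(feats_a, weights_a, feats_b, weights_b):
    # pairwise distances between region feature vectors (rows)
    d = np.linalg.norm(feats_a[:, None, :] - feats_b[None, :, :], axis=2)
    wa, wb = weights_a.astype(float).copy(), weights_b.astype(float).copy()
    total = 0.0
    # walk region pairs from closest to farthest, letting each pair
    # absorb as much of the remaining significance as it can
    for flat in np.argsort(d, axis=None):
        i, j = np.unravel_index(flat, d.shape)
        s = min(wa[i], wb[j])
        if s > 0.0:
            total += s * d[i, j]
            wa[i] -= s
            wb[j] -= s
    return total  # lower value means more similar images

# two "images", each a set of regions: (feature vectors, area weights)
img1 = (np.array([[0.1, 0.2], [0.8, 0.7]]), np.array([0.6, 0.4]))
img2 = (np.array([[0.12, 0.2], [0.5, 0.5], [0.82, 0.7]]),
        np.array([0.5, 0.2, 0.3]))
print(irm_distance(img1[0], img1[1], img2[0], img2[1]))
```

Because a region's weight can be spread across several counterpart regions, an over- or under-segmented image still matches reasonably, which is the robustness the description highlights.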
This proceedings volume presents selected papers from the 7th International Conference on Emerging Databases: Technologies, Applications, and Theory (EDB 2017), which was held in Busan, Korea from 7 to 9 August, 2017. This conference series was launched by the Korean Institute of Information Scientists and Engineers (KIISE) Database Society of Korea as an annual forum for exploring novel technologies, applications, and research advances in the field of emerging databases. This forum has evolved into the premier international venue for researchers and practitioners to discuss current research issues, challenges, new technologies, and solutions.
Synchronizing E-Security is a critical investigation and empirical analysis of studies conducted among companies that support electronic commerce transactions in both advanced and developing economies. This book presents insights into the validity and credibility of current risk assessment methods that support electronic transactions in the global economy. Synchronizing E-Security focuses on a number of case studies of IT companies within selected countries in West Africa, Europe, Asia and the United States. The foundation of this work is based on previous studies by Williams G. and Avudzivi P.V. (Hawaii 2002) on the retrospective view of information security management and the impact of tele-banking on the end-user.
Multimedia Mining: A Highway to Intelligent Multimedia Documents brings together experts in digital media content analysis, state-of-the-art data mining and knowledge discovery in multimedia database systems, and knowledge engineers and domain experts from diverse applied disciplines. Multimedia documents are ubiquitous and often required, if not essential, in many applications today. This phenomenon has made multimedia collections widespread and extremely large. There are tools for managing and searching within these collections, but the need for tools to extract hidden useful knowledge embedded within multimedia objects is becoming pressing and central for many decision-making applications. The tools needed today are tools for discovering relationships between objects or segments within multimedia document components, such as classifying images based on their content, extracting patterns in sound, categorizing speech and music, and recognizing and tracking objects in video streams.
As the first to focus on the issue of Data Warehouse Requirements Engineering, this book introduces a model-driven requirements process used to identify requirements granules and incrementally develop data warehouse fragments. In addition, it presents an approach to the pair-wise integration of requirements granules for consolidating multiple data warehouse fragments. The process is systematic and does away with the fuzziness associated with existing techniques. Thus, consolidation is treated as a requirements engineering issue. The notion of a decision occupies a central position in the decision-based approach. On the one hand, information relevant to a decision must be elicited from stakeholders, modeled, and transformed into multi-dimensional form. On the other, decisions themselves are to be obtained from decision applications. For the former, the authors introduce a suite of information elicitation techniques specific to data warehousing. This information is subsequently converted into multi-dimensional form. For the latter, decisions are obtained not only from decision applications for managing operational businesses, but also from applications for formulating business policies and for defining rules that enforce those policies. In this context, the book presents a broad range of models, tools and techniques. For readers from academia, the book identifies the scientific/technological problems it addresses and provides cogent arguments for the proposed solutions; for readers from industry, it presents an approach for ensuring that the product meets its requirements while ensuring low lead times in delivery.
Also in: The Kluwer International Series on Asian Studies in Computer and Information Science, Volume 2
This thesis primarily focuses on how to carry out intelligent sensing and understand the high-dimensional and low-quality visual information. After exploring the inherent structures of the visual data, it proposes a number of computational models covering an extensive range of mathematical topics, including compressive sensing, graph theory, probabilistic learning and information theory. These computational models are also applied to address a number of real-world problems including biometric recognition, stereo signal reconstruction, natural scene parsing, and SAR image processing.
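As one concrete instance of the compressive sensing machinery mentioned above, here is a minimal orthogonal matching pursuit sketch in NumPy for recovering a sparse signal from a few random measurements; the matrix sizes, seed, and sparsity level are illustrative assumptions, not the thesis's models.

```python
import numpy as np

def omp(A, y, k):
    # orthogonal matching pursuit: greedily pick the column of A most
    # correlated with the residual, then re-fit on the chosen support
    residual, support = y.astype(float), []
    coef = np.zeros(0)
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))      # 40 measurements of a length-100 signal
x_true = np.zeros(100)
x_true[[5, 37, 80]] = [1.0, -2.0, 0.5]  # 3-sparse ground truth
x_hat = omp(A, A @ x_true, k=3)
print(np.allclose(x_hat, x_true, atol=1e-6))  # exact recovery, with high probability
```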
The volume "Fuzziness in Database Management Systems" is a highly informative, well-organized and up-to-date collection of contributions authored by many of the leading experts in its field. Among the contributors are the editors, Professors Patrick Bose and Janusz Kacprzyk, both of whom are known internationally. The book is like a movie with an all-star cast. The issue of fuzziness in database management systems has a long history. It begins in 1968 and 1971, when I spent my sabbatical leaves at the IBM Research Laboratory in San Jose, California, as a visiting scholar. During these periods I was associated with Dr. E.F. Codd, the father of relational models of database systems, and came in contact with the developers ofiBMs System Rand SQL. These associations and contacts at a time when the methodology of relational models of data was in its formative stages, made me aware of the basic importance of such models and the desirability of extending them to fuzzy database systems and fuzzy query languages. This perception was reflected in my 1973 ffiM report which led to the paper on the concept of a linguistic variable and later to the paper on the meaning representation language PRUF (Possibilistic Relational Universal Fuzzy). More directly related to database issues during that period were the theses of my students V. Tahani, J. Yang, A. Bolour, M. Shen and R. Sheng, and many subsequent reports by both graduate and undergraduate students at Berkeley.
This book analyzes techniques that use the direct and inverse fuzzy transform for image processing and data analysis. The book is divided into two parts, the first of which describes methods and techniques that use the bi-dimensional fuzzy transform method in image analysis. In turn, the second describes approaches that use the multidimensional fuzzy transform method in data analysis. An F-transform in one variable is defined as an operator which transforms a continuous function f on the real interval [a,b] into an n-dimensional vector by using n assigned fuzzy sets A1, ..., An which constitute a fuzzy partition of [a,b]. Then, an inverse F-transform is defined in order to convert the n-dimensional vector output into a continuous function that approximates f up to an arbitrary quantity ε. We may limit this concept to the finite case by defining the discrete F-transform of a function f in one variable, even if f is known only at certain points. A simple extension of this concept to functions in two variables allows it to be used for the coding/decoding and processing of images. Moreover, an extended version with multidimensional functions can be used to address a host of topics in data analysis, including the analysis of large and very large datasets. Over the past decade, many researchers have proposed applications of fuzzy transform techniques for various image processing topics, such as image coding/decoding, image reduction, image segmentation, image watermarking and image fusion; and for such data analysis problems as regression analysis, classification, association rule extraction, time series analysis, forecasting, and spatial data analysis. The robustness, ease of use, and low computational complexity of fuzzy transforms make them a powerful fuzzy approximation tool suitable for many computer science applications. This book presents methods and techniques based on the use of fuzzy transforms in various applications of image processing and data analysis, including image segmentation, image tamper detection, forecasting, and classification, highlighting the benefits they offer compared with traditional methods. Emphasis is placed on applications of fuzzy transforms to innovative problems, such as massive data mining, and image and video security in social networks based on the application of advanced fragile watermarking systems. This book is aimed at helping researchers, students, computer scientists and IT developers acquire the knowledge and skills necessary to apply and implement fuzzy-transform-based techniques in image and data analysis applications.
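A minimal numerical sketch of the direct and inverse discrete F-transform just described, using a uniform triangular fuzzy partition; the grid, the number of basic functions, and the test function are arbitrary illustrative choices.

```python
import numpy as np

def triangular_partition(a, b, n, x):
    # n triangular basic functions A_1..A_n forming a uniform fuzzy
    # partition of [a, b]; at every grid point they sum to 1
    nodes = np.linspace(a, b, n)
    h = nodes[1] - nodes[0]
    return np.maximum(0.0, 1.0 - np.abs(x[None, :] - nodes[:, None]) / h)

def f_transform(f_vals, A):
    # direct discrete F-transform: F_k = sum_j f(x_j)A_k(x_j) / sum_j A_k(x_j)
    return (A @ f_vals) / A.sum(axis=1)

def inverse_f_transform(F, A):
    # inverse F-transform: reconstruction f_hat(x_j) = sum_k F_k A_k(x_j)
    return F @ A

x = np.linspace(0.0, 1.0, 200)
f = np.sin(2 * np.pi * x)
A = triangular_partition(0.0, 1.0, 15, x)
f_hat = inverse_f_transform(f_transform(f, A), A)
print(np.max(np.abs(f - f_hat)))  # the error shrinks as the partition is refined
```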
Cryptography, secret writing, is enjoying a scientific renaissance following the seminal discovery in 1977 of public-key cryptography and applications in computers and communications. This book gives a broad overview of public-key cryptography - its essence and advantages, various public-key cryptosystems, and protocols - as well as a comprehensive introduction to classical cryptography and cryptanalysis. The second edition has been revised and enlarged especially in its treatment of cryptographic protocols. From a review of the first edition: "This is a comprehensive review ... there can be no doubt that this will be accepted as a standard text. At the same time, it is clearly and entertainingly written ... and can certainly stand alone." Alex M. Andrew, Kybernetes, March 1992
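To illustrate the essence of public-key cryptography the book surveys, here is a toy RSA round trip with deliberately tiny primes; it is a sketch of the principle only and is in no way secure.

```python
# toy RSA: real systems use primes hundreds of digits long
p, q = 61, 53
n = p * q              # public modulus
phi = (p - 1) * (q - 1)
e = 17                 # public exponent, coprime with phi
d = pow(e, -1, phi)    # private exponent (modular inverse; Python 3.8+)

m = 42                 # message, as an integer < n
c = pow(m, e, n)       # anyone can encrypt with the public key (e, n)
assert pow(c, d, n) == m  # only the holder of d can decrypt
```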
Real-time computer systems are very often subject to dependability requirements because of their application areas. Fly-by-wire airplane control systems, control of power plants, industrial process control systems and others are required to continue their function despite faults. Fault-tolerance and real-time requirements thus constitute a natural combination in process control applications. Systematic fault-tolerance is based on redundancy, which is used to mask failures of individual components. The problem of replica determinism is thereby to ensure that replicated components show consistent behavior in the absence of faults. It might seem trivial that, given an identical sequence of inputs, replicated computer systems will produce consistent outputs. Unfortunately, this is not the case. The problem of replica non-determinism and the presentation of its possible solutions is the subject of Fault-Tolerant Real-Time Systems: The Problem of Replica Determinism. The field of automotive electronics is an important application area of fault-tolerant real-time systems. Systems like anti-lock braking, engine control, active suspension or vehicle dynamics control have demanding real-time and fault-tolerance requirements. These requirements have to be met even in the presence of very limited resources, since cost is extremely important. Because of these interesting properties, Fault-Tolerant Real-Time Systems introduces automotive electronics as an application area. The requirements of automotive electronics are discussed in the remainder of the work and are used as a benchmark to evaluate solutions to the problem of replica determinism.
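The replica non-determinism problem fits in a few lines: two replicas run identical code, yet a tiny skew in when each samples a sensor flips a threshold decision. The sensor model and numbers below are invented purely to show the effect.

```python
THRESHOLD = 5.0

def sensor(t):
    # both replicas observe the same slowly rising physical signal...
    return 4.999 + 0.001 * t

def replica(sample_time):
    # ...but each samples it at a slightly different instant
    return "BRAKE" if sensor(sample_time) >= THRESHOLD else "HOLD"

# identical code, sampling skewed by 2 time units: inconsistent outputs
print(replica(0.4), replica(2.4))  # -> HOLD BRAKE
```

Enforcing agreement on inputs before replicas act on them is one family of solutions the book's treatment of replica determinism covers.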
This volume contains the proceedings of two conferences held as part of the 21st IFIP World Computer Congress in Brisbane, Australia, 20-23 September 2010. The first part of the book presents the proceedings of DIPES 2010, the 7th IFIP Conference on Distributed and Parallel Embedded Systems. The conference, introduced in a separate preface by the Chairs, covers a range of topics from specification and design of embedded systems through to dependability and fault tolerance. The second part of the book contains the proceedings of BICC 2010, the 3rd IFIP Conference on Biologically-Inspired Collaborative Computing. The conference is concerned with emerging techniques from research areas such as organic computing, autonomic computing and self-adaptive systems, where inspiration for techniques derives from exhibited behaviour in nature and biology. Such techniques require the use of research developed by the DIPES community in supporting collaboration over multiple systems. We hope that the combination of the two proceedings will add value for the reader and advance our related work.
Very little has been written to address the emerging trends in social software and technology. With these technologies and applications being relatively new and evolving rapidly, research is wide open in these fields. Social Software and Web 2.0 Technology Trends fills this critical research need, providing an overview of the current state of Web 2.0 technologies and their impact on organizations and educational institutions. Written for academicians and practicing managers, this estimable book presents business applications as well as implementations for institutions of higher education with numerous examples of how these technologies are currently being used. Delivering authoritative insights to a rapidly evolving domain of technology application, this book is an invaluable resource for both academic libraries and for classroom instruction.
Data Mining Methods for Knowledge Discovery provides an introduction to the data mining methods that are frequently used in the process of knowledge discovery. This book first elaborates on the fundamentals of each of the data mining methods: rough sets, Bayesian analysis, fuzzy sets, genetic algorithms, machine learning, neural networks, and preprocessing techniques. The book then goes on to thoroughly discuss these methods in the setting of the overall process of knowledge discovery. Numerous illustrative examples and experimental findings are also included. Each chapter comes with an extensive bibliography. Data Mining Methods for Knowledge Discovery is intended for senior undergraduate and graduate students, as well as a broad audience of professionals in computer and information sciences, medical informatics, and business information systems.
Decision diagrams (DDs) are data structures for efficient (time/space) representations of large discrete functions. In addition to their wide application in engineering practice, DDs are now a standard part of many CAD systems for logic design and a basis for several signal processing algorithms. "Spectral Interpretation of Decision Diagrams" derives from attempts to classify and uniformly interpret DDs through spectral interpretation methods, relating them to different Fourier-series-like functional expressions for discrete functions and a group-theoretic approach to DD optimization. The book examines DDs found in literature and engineering practice and provides insights into relationships between DDs and different polynomial or spectral expressions for representation of discrete functions. In addition, it offers guidelines and criteria for selection of the most suitable representation in terms of space and time complexity. The work complements theory with numerous illustrative examples from practice. Moreover, the importance of DD representations to the verification and testing of arithmetic circuits is addressed, as well as problems related to various signal processing tasks.
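As a taste of the spectral viewpoint, the sketch below computes the Walsh-Hadamard spectrum of a small Boolean function with the standard butterfly; it is a generic illustration of a Fourier-series-like expansion for discrete functions, not code from the book.

```python
import numpy as np

def walsh_spectrum(truth_table):
    # Walsh-Hadamard spectrum of an n-variable Boolean function,
    # via the in-place butterfly: O(N log N) for N = 2**n entries
    s = np.where(np.asarray(truth_table) == 0, 1, -1).astype(int)  # (-1)^f encoding
    n, h = s.size, 1
    while h < n:
        for i in range(0, n, 2 * h):
            a, b = s[i:i + h].copy(), s[i + h:i + 2 * h].copy()
            s[i:i + h] = a + b
            s[i + h:i + 2 * h] = a - b
        h *= 2
    return s

# XOR of two variables concentrates in a single Walsh coefficient
print(walsh_spectrum([0, 1, 1, 0]))  # -> [0 0 0 4]
```

A flat, concentrated, or structured spectrum hints at which DD variant will represent the function compactly, which is the kind of selection criterion the book formalizes.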
The book examines patterns of participation in human rights treaties. International relations theory is divided on what motivates states to participate in treaties, and in human rights treaties specifically. Rather than examining those motivations directly, this dissertation studies patterns of participation and attempts to match theoretical expectations of state behavior against them. It provides significant evidence that multiple motivations lead states to participate in human rights treaties.
This book focuses on recent advances in the Internet of Things (IoT) in biomedical and healthcare technologies, presenting theoretical, methodological, well-established, and validated empirical work in these fields. Artificial intelligence and IoT are set to revolutionize all industries, but perhaps none so much as health care. Both biomedicine and machine learning applications are capable of analyzing data stored in national health databases in order to identify potential health problems, complications and effective protocols, and a range of wearable devices for biomedical and healthcare applications far beyond tracking individuals' daily steps has emerged. Prosthetic technologies have likewise made significant strides in recent decades thanks to advances in materials and development; as a result, more flexible, more mobile, chip-enabled prosthetics and other robotic devices are on the horizon. Another example is IoT-enabled wireless ECG sensors, which reduce healthcare costs and lead to a better quality of life for cardiac patients. This book focuses on three current trends that are likely to have a significant impact on future healthcare: Advanced Medical Imaging and Signal Processing; Biomedical Sensors; and Biotechnological and Healthcare Advances. It also presents new methods of evaluating medical data and diagnosing diseases in order to improve general quality of life.
Information and communication technology (ICT) is permeating all aspects of service management; in the public sector, ICT is improving the capacity of government agencies to provide a wide array of innovative services that benefit citizens. E-Government is emerging as a multidisciplinary field of research based initially on empirical insights from practice. Efforts to theoretically anchor the field have opened perspectives from multiple research domains, as demonstrated in Practical Studies in E-Government. In this volume, the editors and contributors consider the evolution of the e-government field from both practical and research perspectives. Featuring in-depth case studies of initiatives in eight countries, the book deals with such technology-oriented issues as interoperability, prototyping, data quality, and advanced interfaces, and with such management-oriented issues as e-procurement, e-identification, election results verification, and information privacy. The book features best practices, tools for measuring and improving performance, and analytical methods for researchers.
You may like...
RFID and Wireless Sensors Using… by Angel Ramos, Antonio Lazaro, … (Hardcover)
Internet of Things. A Confluence of Many… by Augusto Casaca, Srinivas Katkoori, … (Hardcover), R1,566 (Discovery Miles 15 660)
Systems Engineering and Artificial… by William F. Lawless, Ranjeev Mittu, … (Hardcover), R4,621 (Discovery Miles 46 210)
Deep Learning Applications for… by Monica R. Mundada, Seema S., … (Hardcover), R7,211 (Discovery Miles 72 110)
Ontology Management - Semantic Web… by Martin Hepp, Pieter De Leenheer, … (Hardcover), R2,910 (Discovery Miles 29 100)
Bayesian Statistics in Action - BAYSM… by Raffaele Argiento, Ettore Lanzarone, … (Hardcover), R4,945 (Discovery Miles 49 450)
Pyomo - Optimization Modeling in Python by Michael L. Bynum, Gabriel A. Hackebeil, … (Hardcover), R1,906 (Discovery Miles 19 060)
Analytical Methods in Statistics… by Matus Maciak, Michal Pesta, … (Hardcover), R2,873 (Discovery Miles 28 730)
Internet of Things. Technology and… by Luis M. Camarinha-Matos, Geert Heijenk, … (Hardcover), R2,655 (Discovery Miles 26 550)