Mohamed Medhat Gaber: "It is not my aim to surprise or shock you - but the simplest way I can summarise is to say that there are now in the world machines that think, that learn and that create. Moreover, their ability to do these things is going to increase rapidly until - in a visible future - the range of problems they can handle will be coextensive with the range to which the human mind has been applied" by Herbert A. Simon (1916-2001). Overview: This book suits both graduate students and researchers with a focus on discovering knowledge from scientific data. The use of computational power for data analysis and knowledge discovery in scientific disciplines has found its roots with the revolution of high-performance computing systems. Computational science in physics, chemistry, and biology represents the first step towards automation of data analysis tasks. The rationale behind the development of computational science in different areas was automating the mathematical operations performed in those areas. No attention was paid to the scientific discovery process. Automated Scientific Discovery (ASD) [1-3] represents the second natural step. ASD attempted to automate the process of theory discovery, supported by studies in the philosophy of science and cognitive sciences. Although early research articles showed great successes, the area has not evolved, for many reasons. The most important reason was the lack of interaction between scientists and the automating systems.
I3E 2010 marked the 10th anniversary of the IFIP Conference on e-Business, e-Services, and e-Society, continuing a tradition that began in 1998 with the International Conference on Trends in Electronic Commerce, TrEC 1998, in Hamburg (Germany). Three years later the inaugural I3E 2001 conference was held in Zurich (Switzerland). Since then I3E has made its journey through the world: 2002 Lisbon (Portugal), 2003 Sao Paulo (Brazil), 2004 Toulouse (France), 2005 Poznan (Poland), 2006 Turku (Finland), 2007 Wuhan (China), 2008 Tokyo (Japan), and 2009 Nancy (France). I3E 2010 took place in Buenos Aires (Argentina), November 3-5, 2010. Known as "The Pearl" of South America, Buenos Aires is a cosmopolitan, colorful, and vibrant city, surprising its visitors with a vast variety of cultural and artistic performances, European architecture, and the passion for tango, coffee places, and football discussions. A cultural reference in Latin America, the city hosts 140 museums, 300 theaters, and 27 public libraries, including the National Library. It is also the main educational center in Argentina and home to renowned universities, including the University of Buenos Aires, founded in 1821. Besides location, the timing of I3E 2010 is also significant: it coincided with the 200th anniversary celebration of the first local government in Argentina.
Hyperspectral Image Fusion is the first text dedicated to fusion techniques for this type of data, which comprises a huge volume consisting of a very large number of images. This monograph brings out recent advances in research on the visualization of hyperspectral data. It provides a set of pixel-based fusion techniques, each of which is based on a different framework and has its own advantages and disadvantages. The techniques are presented in complete detail so that practitioners can easily implement them. It is also demonstrated how one can select only a few specific bands to speed up the process of fusion by exploiting spatial correlation within successive bands of the hyperspectral data. While techniques for the fusion of hyperspectral images are being developed, it is also important to establish a framework for the objective assessment of such techniques. This monograph has a dedicated chapter describing various fusion performance measures that are applicable to hyperspectral image fusion. It also presents a notion of consistency of a fusion technique, which can be used to verify the suitability and applicability of a technique for fusion of a very large number of images. The book will be a highly useful resource for students, researchers, academics and practitioners in the specific area of hyperspectral image fusion, as well as in generic image fusion.
This book aims to identify promising future developmental opportunities and applications for Tech Mining. Specifically, the enclosed contributions pursue three converging themes: the increasing availability of electronic text data resources relating to Science, Technology and Innovation (ST&I); the multiple methods that are able to treat this data effectively and incorporate means to tap into human expertise and interests; and translating those analyses to provide useful intelligence on likely future developments of particular emerging S&T targets. Tech Mining can be defined as text analyses of ST&I information resources to generate Competitive Technical Intelligence (CTI). It combines bibliometrics and advanced text analytics, drawing on specialized knowledge pertaining to ST&I. Tech Mining may also be viewed as a special form of "Big Data" analytics because it searches global databases for a target emerging technology (or key organization) of interest. One then downloads, typically, thousands of field-structured text records (usually abstracts) and analyses those for useful CTI. Forecasting Innovation Pathways (FIP) is a methodology drawing on Tech Mining plus additional steps to elicit stakeholder and expert knowledge to link recent ST&I activity to likely future development. A decade ago, we characterized Management of Technology (MOT) as somewhat self-satisfied and ignorant: most technology managers relied overwhelmingly on casual human judgment, largely oblivious to the potential of empirical analyses to inform R&D management and science policy. CTI, Tech Mining, and FIP are changing that. The accumulation of Tech Mining research over the past decade offers a rich resource of means to get at emerging technology developments and organizational networks to date. Efforts to bridge from those recent histories of development to project likely FIP, however, prove considerably harder. One focus of this volume is therefore to extend the repertoire of information resources that will enrich FIP. Featuring cases of novel approaches and applications of Tech Mining and FIP, this volume presents frontier advances in ST&I text analytics that will be of interest to students, researchers, practitioners, scholars and policy makers in the fields of R&D planning, technology management, science policy and innovation strategy.
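To make the text-analytics step concrete, here is a minimal, purely illustrative sketch (not taken from the book; the stopword list, tokenization and sample abstracts are assumptions) of the simplest Tech Mining building block: counting term frequencies across a set of downloaded abstracts.

```python
from collections import Counter
import re

def term_frequencies(abstracts):
    # Count how often each term appears across a batch of abstracts,
    # ignoring a small illustrative stopword list.
    stopwords = {"the", "of", "and", "a", "to", "in", "for", "is", "on", "with"}
    counts = Counter()
    for text in abstracts:
        counts.update(w for w in re.findall(r"[a-z]+", text.lower())
                      if w not in stopwords)
    return counts

# Hypothetical abstracts downloaded for a target technology.
abstracts = [
    "Graphene-based sensors for wearable health monitoring devices.",
    "A survey of scalable graphene synthesis methods for sensors.",
]
print(term_frequencies(abstracts).most_common(5))
```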
This book introduces the basic methodologies for successful data analytics. Matrix optimization and approximation are explained in detail and extensively applied to dimensionality reduction by principal component analysis and multidimensional scaling. Diffusion maps and spectral clustering are derived as powerful tools. The methodological overlap between data science and machine learning is emphasized by demonstrating how data science is used for classification as well as supervised and unsupervised learning.
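As a pointer to the kind of matrix-based dimensionality reduction the blurb mentions, the following is a minimal sketch of principal component analysis via eigen-decomposition of the covariance matrix; the function name and toy data are assumptions, and this is not code from the book.

```python
import numpy as np

def pca(X, k):
    # Center the data, then project onto the k eigenvectors of the
    # sample covariance matrix with the largest eigenvalues.
    Xc = X - X.mean(axis=0)
    eigvals, eigvecs = np.linalg.eigh(np.cov(Xc, rowvar=False))
    top = eigvecs[:, np.argsort(eigvals)[::-1][:k]]
    return Xc @ top

# Example: reduce 5-dimensional toy data to 2 principal components.
X = np.random.rand(100, 5)
print(pca(X, 2).shape)  # (100, 2)
```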
Uncertain data is inherent in many important applications, such as environmental surveillance, market analysis, and quantitative economics research. Due to the importance of those applications and rapidly increasing amounts of uncertain data collected and accumulated, analyzing large collections of uncertain data has become an important task. Ranking queries (also known as top-k queries) are often natural and useful in analyzing uncertain data. "Ranking Queries on Uncertain Data" discusses the motivations/applications, challenging problems, the fundamental principles, and the evaluation algorithms of ranking queries on uncertain data. Theoretical and algorithmic results of ranking queries on uncertain data are presented in the last section of this book. "Ranking Queries on Uncertain Data" is the first book to systematically discuss the problem of ranking queries on uncertain data.
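As an illustration of what a probabilistic top-k query can mean, the sketch below computes, for independent tuples with existence probabilities, the probability that each tuple both exists and ranks among the top k by score. It is a direct "possible worlds" computation for illustration only, not one of the evaluation algorithms presented in the book.

```python
import numpy as np

def topk_probabilities(scores, probs, k):
    # Process tuples in descending score order while maintaining the
    # distribution of how many higher-scored tuples exist.
    order = np.argsort(scores)[::-1]
    result = np.zeros(len(scores))
    dist = np.array([1.0])  # P(exactly j higher-scored tuples exist)
    for idx in order:
        # In top-k if the tuple exists and fewer than k better tuples exist.
        result[idx] = probs[idx] * dist[:k].sum()
        new = np.zeros(len(dist) + 1)
        new[:-1] += dist * (1 - probs[idx])
        new[1:] += dist * probs[idx]
        dist = new
    return result

# Example: three uncertain tuples, top-2 query.
print(topk_probabilities([0.9, 0.5, 0.7], [0.6, 0.9, 0.5], k=2))
```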
This book presents statistical processes for health care delivery and covers new ideas, methods and technologies used to improve health care organizations. It gathers the proceedings of the Third International Conference on Health Care Systems Engineering (HCSE 2017), which took place in Florence, Italy from May 29 to 31, 2017. The Conference provided a timely opportunity to address operations research and operations management issues in health care delivery systems. Scientists and practitioners discussed new ideas, methods and technologies for improving the operations of health care systems, developed in close collaborations with clinicians. The topics cover a broad spectrum of concrete problems that pose challenges for researchers and practitioners alike: hospital drug logistics, operating theatre management, home care services, modeling, simulation, process mining and data mining in patient care and health care organizations.
Offering a structured approach to handling and recovering from a catastrophic data loss, this book will help both technical and non-technical professionals put effective processes in place to secure their business-critical information and provide a roadmap of the appropriate recovery and notification steps when calamity strikes.
This book describes the latest methods and tools for the management of information within facility management services and explains how it is possible to collect, organize, and use information over the life cycle of a building in order to optimize the integration of these services and improve the efficiency of processes. The coverage includes presentation and analysis of basic concepts, procedures, and international standards in the development and management of real estate inventories, building registries, and information systems for facility management. Models of strategic management are discussed, and the functions and roles of the strategic management center are explained. Detailed attention is also devoted to building information modeling (BIM) for facility management and to potential interactions between information systems and BIM applications. Criteria for evaluating information system performance are identified, and guidelines of value in developing technical specifications for facility management services are proposed. The book will aid clients and facility managers in ensuring that information bases are effectively compiled and used in order to enhance building maintenance and facility management.
This book offers a coherent and comprehensive approach to feature subset selection in the scope of classification problems, explaining the foundations, real application problems and the challenges of feature selection for high-dimensional data. The authors first focus on the analysis and synthesis of feature selection algorithms, presenting a comprehensive review of basic concepts and experimental results of the most well-known algorithms. They then address different real scenarios with high-dimensional data, showing the use of feature selection algorithms in different contexts with different requirements and information: microarray data, intrusion detection, tear film lipid layer classification and cost-based features. The book then delves into the scenario of big dimension, paying attention to important problems under high-dimensional spaces, such as scalability, distributed processing and real-time processing, scenarios that open up new and interesting challenges for researchers. The book is useful for practitioners, researchers and graduate students in the areas of machine learning and data mining.
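As a minimal illustration of the filter-style feature selection covered in the book's review, the sketch below ranks features by absolute Pearson correlation with the class label and keeps the highest-ranked ones; the function name, toy data and threshold are assumptions rather than one of the algorithms analyzed in the book.

```python
import numpy as np

def select_top_features(X, y, num_features):
    # Filter method: score each feature by |corr(feature, label)| and
    # return the indices of the num_features best-scoring features.
    Xc = X - X.mean(axis=0)
    yc = y - y.mean()
    denom = X.std(axis=0) * y.std() * len(y)
    corr = np.abs(Xc.T @ yc) / np.where(denom == 0, 1, denom)
    return np.argsort(corr)[::-1][:num_features]

# Example on toy high-dimensional data: keep the 10 best features.
X = np.random.rand(200, 500)
y = (X[:, 3] + 0.1 * np.random.rand(200) > 0.55).astype(float)
print(select_top_features(X, y, 10))
```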
The rate at which geospatial data is being generated exceeds our computational capabilities to extract patterns for the understanding of a dynamically changing world. Geoinformatics and data mining focus on the development and implementation of computational algorithms to solve these problems. This unique volume contains a collection of chapters on state-of-the-art data mining techniques applied to geoinformatic problems of high complexity and important societal value. Data Mining for Geoinformatics addresses current concerns and developments relating to spatio-temporal data mining issues in remotely sensed data, problems in meteorological data such as tornado formation, estimation of radiation from the Fukushima nuclear power plant, simulations of traffic data using OpenStreetMap, real-time traffic applications of data stream mining, visual analytics of traffic and weather data, and the exploratory visualization of collective, mobile objects such as the flocking behavior of wild chickens. This book is designed for researchers and advanced-level students in computer science, earth science and geography as a reference or secondary textbook. Practitioners working in the areas of data mining and geoscience will also find this book to be a valuable reference.
Understanding the latest capabilities in the cyber threat landscape as well as the cyber forensic challenges and approaches is the best way users and organizations can prepare for potential negative events. Adopting an experiential learning approach, this book describes how cyber forensics researchers, educators and practitioners can keep pace with technological advances, and acquire the essential knowledge and skills, ranging from IoT forensics, malware analysis, and CCTV and cloud forensics to network forensics and financial investigations. Given the growing importance of incident response and cyber forensics in our digitalized society, this book will be of interest and relevance to researchers, educators and practitioners in the field, as well as students wanting to learn about cyber forensics.
This book provides comprehensive coverage of the fundamentals of database management systems. It contains a detailed description of relational database management system concepts, along with a variety of solved examples and review questions with solutions. The book is for those who require a better understanding of relational data modeling: its purpose, its nature, and the standards used in creating a relational data model.
Advances in hardware technology have increased the capability to store and record personal data about consumers and individuals. This has caused concerns that personal data may be used for a variety of intrusive or malicious purposes. Privacy Preserving Data Mining: Models and Algorithms proposes a number of techniques to perform the data mining tasks in a privacy-preserving way. These techniques generally fall into the following categories: data modification techniques, cryptographic methods and protocols for data sharing, statistical techniques for disclosure and inference control, query auditing methods, randomization and perturbation-based techniques. This edited volume contains surveys by distinguished researchers in the privacy field. Each survey includes the key research content as well as future research directions of a particular topic in privacy. Privacy Preserving Data Mining: Models and Algorithms is designed for researchers, professors, and advanced-level students in computer science. This book is also suitable for practitioners in industry.
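To illustrate the randomization and perturbation-based category mentioned above, here is a minimal sketch (an assumed example, not one of the book's surveyed algorithms) in which zero-mean noise is added to individual values before release, masking individual records while keeping large-sample aggregates approximately accurate.

```python
import numpy as np

def perturb(values, noise_scale):
    # Randomization-based perturbation: add independent zero-mean
    # Gaussian noise to each individual's value before release.
    rng = np.random.default_rng()
    return values + rng.normal(0.0, noise_scale, size=len(values))

# Example: individual ages are distorted, but the mean survives.
ages = np.random.randint(18, 80, size=10_000).astype(float)
released = perturb(ages, noise_scale=15.0)
print(ages.mean(), released.mean())
```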
This book provides state-of-the-art developments in security and privacy for fog/edge computing, together with their system architectural support and applications. It is organized into five parts with a total of 15 chapters, each corresponding to an important snapshot of the field. The first part presents an overview of fog/edge computing, focusing on its relationship with cloud technology and its future with the use of 5G communication; several applications of edge computing are discussed. The second part considers several security issues in fog/edge computing, including secure storage and search services, a collaborative intrusion detection method for IoT-fog computing, and the feasibility of deploying Byzantine agreement protocols in untrusted environments. The third part studies the privacy issues in fog/edge computing. It first investigates the unique privacy challenges in fog/edge computing, and then discusses a privacy-preserving framework for edge-based video analysis, a popular machine learning application on fog/edge. The fourth part covers the security architectural design of fog/edge computing, including a comprehensive overview of vulnerabilities at multiple architectural levels, security and intelligent management, and the implementation of network-function-virtualization-enabled multicasting; it also explains how to use blockchain to realize security services. The last part surveys applications of fog/edge computing, including fog/edge computing in Industrial IoT, edge-based augmented reality, data streaming in fog/edge computing, and a blockchain-based application for edge-IoT. This book is designed for academics, researchers and government officials working in the field of fog/edge computing and cloud computing. Practitioners and business organizations (e.g., executives, system designers, and marketing professionals) who conduct teaching, research, decision making, and design of fog/edge technology will also benefit from this book. The content will be particularly useful for advanced-level students studying computer science, computer technology, and information systems, but it also applies to students in business, education, and economics who would benefit from the information, models, and case studies therein.
This book develops an IT strategy for cloud computing that helps businesses evaluate their readiness for cloud services and calculate the ROI. The framework provided helps reduce the risks involved in transitioning from a traditional "on site" IT strategy to virtual "cloud computing." Since the advent of cloud computing, many organizations have made substantial gains implementing this innovation. Cloud computing allows companies to focus more on their core competencies, as IT enablement is taken care of through cloud services. Cloud Computing and ROI includes case studies covering the retail, automobile and food processing industries. Each of these case studies has successfully implemented the cloud computing framework, and their strategies are explained. As cloud computing may not be ideal for all businesses, criteria are also offered to help determine whether this strategy should be adopted.
This book examines recent developments in semantic systems that can respond to situations, environments, and events. The contributors cover how to design, implement and utilize disruptive technologies. The editor discusses two fundamental sets of disruptive technologies: the development of semantic technologies, including description logics, ontologies and agent frameworks; and the development of semantic information rendering and graphical forms of display for high-density, time-sensitive data to improve situational awareness. Beyond practical illustrations of emerging technologies, the editor proposes an incremental development method called knowledge scaffolding, a proven educational psychology technique for learning a subject matter thoroughly. The goal of this book is to help readers learn about managing information resources from the ground up, reinforcing the learning as they read on.
This book presents the Recommender System for Improving Customer Loyalty. New and innovative products have begun appearing from a wide variety of countries, which has increased the need to improve the customer experience. When a customer spends hundreds of thousands of dollars on a piece of equipment, keeping it running efficiently is critical to achieving the desired return on investment. Moreover, managers have discovered that delivering a better customer experience pays off in a number of ways. A study of publicly traded companies conducted by Watermark Consulting found that from 2007 to 2013, companies with better customer service generated a total return to shareholders that was 26 points higher than the S&P 500. This is only one of many studies that illustrate the measurable value of providing a better service experience. The Recommender System presented here addresses several important issues. (1) It provides a decision framework to help managers determine which actions are likely to have the greatest impact on the Net Promoter Score. (2) The results are based on multiple clients. The data mining techniques employed in the Recommender System allow users to "learn" from the experiences of others, without sharing proprietary information. This dramatically enhances the power of the system. (3) It supplements traditional text mining options. Text mining can be used to identify the frequency with which topics are mentioned, and the sentiment associated with a given topic. The Recommender System allows users to view specific, anonymous comments associated with actual customers. Studying these comments can provide highly accurate insights into the steps that can be taken to improve the customer experience. (4) Lastly, the system provides a sensitivity analysis feature. In some cases, certain actions can be more easily implemented than others. The Recommender System allows managers to "weigh" these actions and determine which ones would have a greater impact.
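For reference, the Net Promoter Score mentioned in point (1) is computed from 0-10 likelihood-to-recommend ratings as the percentage of promoters (ratings 9-10) minus the percentage of detractors (ratings 0-6); the minimal sketch below is illustrative only and is not code from the book.

```python
def net_promoter_score(ratings):
    # Standard NPS: % promoters (9-10) minus % detractors (0-6).
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100.0 * (promoters - detractors) / len(ratings)

print(net_promoter_score([10, 9, 8, 7, 6, 3, 10]))  # about 14.3
```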
Every day millions of people capture, store, transmit, and manipulate digital data. Unfortunately, free-access digital multimedia communication also provides virtually unprecedented opportunities to pirate copyrighted material. Providing the theoretical background needed to develop and implement advanced techniques and algorithms, Digital Watermarking and Steganography: demonstrates how to develop and implement methods to guarantee the authenticity of digital media; explains the categorization of digital watermarking techniques based on characteristics as well as applications; and presents cutting-edge techniques such as the GA-based breaking algorithm on the frequency-domain steganalytic system. The popularity of digital media continues to soar. The theoretical foundation presented within this valuable reference will facilitate the creation of new techniques and algorithms to combat present and potential threats against information security.
New Internet developments pose greater and greater privacy dilemmas. In the Information Society, the need for individuals to protect their autonomy and retain control over their personal information is becoming more and more important. Today, information and communication technologies - and the people responsible for making decisions about them, designing, and implementing them - scarcely consider those requirements, thereby potentially putting individuals' privacy at risk. The increasingly collaborative character of the Internet enables anyone to compose services and contribute and distribute information. It may become hard for individuals to manage and control information that concerns them, and particularly how to eliminate outdated or unwanted personal information, thus leaving personal histories exposed permanently. These activities raise substantial new challenges for personal privacy at the technical, social, ethical, regulatory, and legal levels: How can privacy in emerging Internet applications such as collaborative scenarios and virtual communities be protected? What frameworks and technical tools could be utilized to maintain life-long privacy? During September 3-10, 2009, IFIP (International Federation for Information Processing) working groups 9.2 (Social Accountability), 9.6/11.7 (IT Misuse and the Law), 11.4 (Network Security) and 11.6 (Identity Management) held their 5th International Summer School in cooperation with the EU FP7 integrated project PrimeLife in Sophia Antipolis and Nice, France. The focus of the event was on privacy and identity management for emerging Internet applications throughout a person's lifetime. The aim of the IFIP Summer Schools has been to encourage young academic and industry entrants to share their own ideas about privacy and identity management and to build up collegial relationships with others. As such, the Summer Schools have been introducing participants to the social implications of information technology through the process of informed discussion.
The Semantic Web proposes the mark-up of content on the Web using formal ontologies that structure underlying data for the purpose of comprehensive and transportable machine understanding. "Semantic Web Services: Theory, Tools and Applications" brings contributions from researchers, scientists from both industry and academia, and representatives from different communities to study, understand, and explore the theory, tools, and applications of the Semantic Web. "Semantic Web Services: Theory, Tools and Applications" binds computing involving the Semantic Web, ontologies, knowledge management, Web services, and Web processes into one fully comprehensive resource, serving as the platform for the exchange of both practical technologies and far-reaching research.
This textbook introduces new business concepts for cloud environments, such as secure, scalable anonymity and practical payment protocols for the Internet of Things and Blockchain technology. The protocol uses electronic cash for payment transactions. In this new protocol, from the viewpoint of banks, consumers can improve anonymity if they are worried about disclosure of their identities in the cloud. No currently available book has reported techniques covering these protocols together with anonymization and Blockchain technology, which makes this a useful text for universities. The textbook provides new directions for access control management and online business, with new challenges within Blockchain technology that may arise in cloud environments. One is related to the authorization granting process: for example, when a role is granted to a user, this role may conflict with the user's other roles, or, together with those roles, give the user an excessively high level of authority. Another is related to authorization revocation: for instance, when a role is revoked from a user, the user may still hold the role. Experts will benefit from these challenges through the developed methodology for the authorization granting algorithm, and the weak revocation and strong revocation algorithms.
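As a toy illustration of the authorization-granting challenge described above, the sketch below rejects a role grant when the new role conflicts with a role the user already holds. The conflict pairs, role names and function are assumed examples, not the book's granting or revocation algorithms.

```python
# Assumed pairs of mutually exclusive roles (separation-of-duty constraints).
conflicts = {
    frozenset({"payment_approver", "payment_requester"}),
    frozenset({"auditor", "account_manager"}),
}

user_roles = {"alice": {"payment_requester"}}

def grant_role(user, role):
    # Refuse the grant if the new role conflicts with any held role.
    held = user_roles.setdefault(user, set())
    for existing in held:
        if frozenset({existing, role}) in conflicts:
            raise ValueError(f"cannot grant {role}: conflicts with {existing}")
    held.add(role)

grant_role("alice", "auditor")              # allowed
# grant_role("alice", "payment_approver")   # would raise a conflict error
```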
The three-volume set IFIP AICT 368-370 constitutes the refereed post-conference proceedings of the 5th IFIP TC 5, SIG 5.1 International Conference on Computer and Computing Technologies in Agriculture, CCTA 2011, held in Beijing, China, in October 2011. The 189 revised papers presented were carefully selected from numerous submissions. They cover a wide range of interesting theories and applications of information technology in agriculture, including simulation models and decision-support systems for agricultural production, agricultural product quality testing, traceability and e-commerce technology, the application of information and communication technology in agriculture, and universal information service technology and service systems development in rural areas. The 59 papers included in the third volume focus on simulation, optimization, monitoring, and control technology.
Based on more than 10 years of teaching experience, Blanken and his coeditors have assembled all the topics that should be covered in advanced undergraduate or graduate courses on multimedia retrieval and multimedia databases. The individual chapters of this textbook explain the general architecture of multimedia information retrieval systems and cover various metadata languages such as Dublin Core, RDF, or MPEG. The authors emphasize high-level features and show how these are used in mathematical models to support the retrieval process. Each chapter includes pointers to further reading, and additional exercises and teaching material are available online.
The creation and consumption of content, especially visual content, is ingrained into our modern world. This book contains a collection of texts centered on the evaluation of image retrieval systems. To enable reproducible evaluation, we must create standardized benchmarks and evaluation methodologies. The individual chapters in this book highlight major issues and challenges in evaluating image retrieval systems and describe various initiatives that provide researchers with the necessary evaluation resources. In particular they describe activities within ImageCLEF, an initiative to evaluate cross-language image retrieval systems which has been running as part of the Cross Language Evaluation Forum (CLEF) since 2003. To this end, the editors collected contributions from a range of people: those involved directly with ImageCLEF, such as the organizers of specific image retrieval or annotation tasks; participants who have developed techniques to tackle the challenges set forth by the organizers; and people from industry and academia involved with image retrieval and evaluation generally. Mostly written for researchers in academia and industry, the book stresses the importance of combining textual and visual information - a multimodal approach - for effective retrieval. It provides the reader with clear ideas about information retrieval and its evaluation in contexts and domains such as healthcare, robot vision, press photography, and the Web.
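To give a concrete feel for retrieval evaluation, here is a minimal sketch of precision at k, one of the standard measures used to score a ranked result list against relevance judgments; the identifiers and sample data are illustrative, and ImageCLEF tasks use a range of richer measures.

```python
def precision_at_k(ranked_ids, relevant_ids, k):
    # Fraction of the top-k retrieved items judged relevant.
    return sum(1 for doc in ranked_ids[:k] if doc in relevant_ids) / k

ranked = ["img7", "img2", "img9", "img4", "img1"]
relevant = {"img2", "img4", "img8"}
print(precision_at_k(ranked, relevant, k=5))  # 0.4
```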
You may like...
- Topology and Geometric Group Theory… - Michael W. Davis, James Fowler, … (Hardcover)
- High-Level Models of Unconventional… - Andrew Schumann, Krzysztof Pancerz (Hardcover)
- Symmetries and Applications of… - Albert C.J. Luo, Rafail K. Gazizov (Hardcover) R3,643
- Flexible Bayesian Regression Modelling - Yanan Fan, David Nott, … (Paperback) R2,576