These are exciting times in the fields of Fuzzy Logic and the Semantic Web, and this book will add to the excitement, as it is the first volume to focus on the growing connections between these two fields. This book is expected to be a valuable aid to anyone considering the application of Fuzzy Logic to the Semantic Web, because it contains a number of detailed accounts of these combined fields, written by leading authors in several countries. The Fuzzy Logic field has been maturing for forty years. These years have witnessed a tremendous growth in the number and variety of applications, with real-world impact across a wide variety of domains requiring humanlike behavior and reasoning. And we believe that in the coming years, the Semantic Web will be a major field of application for Fuzzy Logic.
This book explores models and concepts of trust in a digitized world. Trust is a core concept that comes into play in multiple social and economic relations of our modern life. The book provides insights into the current state of research while presenting the viewpoints of a variety of disciplines such as communication studies, information systems, educational and organizational psychology, sports psychology and economics. Focusing on an investigation of how the Internet is changing the relationship between trust and communication, and the impact this change has on trust research, this volume facilitates a greater understanding of these topics, thus enabling their employment in social relations.
This book gives an overview of state-of-the-art research in Computational Sustainability as well as case studies of different application scenarios. These cover topics such as renewable energy supply, energy storage and e-mobility, efficiency in data centers and networks, sustainable food and water supply, sustainable health, and industrial production and quality. The book describes computational methods and possible application scenarios.
This book represents the combined peer-reviewed proceedings. The 41 contributions published in this book address many topics.
This book covers topics such as big data analyses, services, and smart data. It contains (i) invited papers, (ii) selected papers from the Sixth International Conference on Big Data Applications and Services (BigDAS 2018), and (iii) extended papers from the Sixth IEEE International Conference on Big Data and Smart Computing (IEEE BigComp 2019). The aim of BigDAS is to present innovative results, encourage academic and industrial interaction, and promote collaborative research in the field of big data worldwide. BigDAS 2018 was held in Zhengzhou, China, on August 19-22, 2018, and organized by the Korea Big Data Service Society and TusStar. The goal of IEEE BigComp, initiated by the Korean Institute of Information Scientists and Engineers (KIISE), is to provide an international forum for exchanging ideas and information on current studies, challenges, research results, system developments, and practical experiences in the emerging fields of big data and smart computing. IEEE BigComp 2019 was held in Kyoto, Japan, on February 27-March 02, 2019, and co-sponsored by IEEE and KIISE.
This book describes the application of modern information technology to reservoir modeling and well management in shale. While covering Shale Analytics, it focuses on reservoir modeling and production management of shale plays, since conventional reservoir and production modeling techniques do not perform well in this environment. Topics covered include tools for analysis, predictive modeling and optimization of production from shale in the presence of massive multi-cluster, multi-stage hydraulic fractures. Given that the physics of storage and fluid flow in shale are not well understood or well defined, Shale Analytics avoids making simplifying assumptions and concentrates on facts (hard data, i.e. field measurements) to reach conclusions. Also discussed are important insights into understanding completion practices and re-frac candidate selection and design. The flexibility and power of the technique are demonstrated in numerous real-world situations.
This book brings all of the elements of data mining together in a single volume, saving the reader the time and expense of making multiple purchases. It consolidates both introductory and advanced topics, thereby covering the gamut of data mining and machine learning tactics, from data integration and pre-processing, to fundamental algorithms, to optimization techniques and web mining methodology.
Disaster management is a process or strategy that is implemented when any type of catastrophic event takes place. The process may be initiated when anything threatens to disrupt normal operations or puts the lives of human beings at risk. Governments at all levels, as well as many businesses, create some sort of disaster plan that makes it possible to overcome the catastrophe and return to normal function as quickly as possible. Responding to natural disasters (e.g., floods, earthquakes) or technological disasters (e.g., nuclear, chemical) is an extremely complex process that involves severe time pressure, various uncertainties, high non-linearity and many stakeholders. Disaster management often requires several autonomous agencies to collaboratively mitigate, prepare for, respond to, and recover from heterogeneous and dynamic sets of hazards to society. Almost all disasters involve a high degree of novelty, requiring responders to deal with unexpected uncertainties and dynamic time pressures. Existing studies and approaches within disaster management have mainly focused on specific types of disasters from the perspective of particular agencies, and a general framework is lacking that addresses the similarities and synergies among different disasters while taking their specific features into account. This book provides various decision analysis theories and support tools for complex systems in general and disaster management in particular. The book also grew out of the long-term preparation of a European project proposal among leading experts in the areas related to its title. Chapters were evaluated on quality and originality in theory and methodology, application orientation, and relevance to the theme of the book.
This book features both cutting-edge contributions on managing knowledge in transformational contexts and a selection of real-world case studies. It analyzes how the disruptive power of digitization is becoming a major challenge for knowledge-based value creation worldwide, and subsequently examines the changes in how we manage information and knowledge, communicate, collaborate, learn and decide within and across organizations. The book highlights the opportunities provided by disruptive renewal, while also stressing the need for knowledge workers and organizations to transform governance, leadership and work organization. Emerging new business models and digitally enabled co-creation are presented as drivers that can help establish new ways of managing knowledge. In turn, a number of carefully selected and interpreted case studies provide a link to practice in organizations.
This book brings all of the elements of database design together in a single volume, saving the reader the time and expense of making multiple purchases. It consolidates both introductory and advanced topics, thereby covering the gamut of database design methodology, from ER and UML techniques, to conceptual data modeling and table transformation, to storing XML and querying moving objects databases.
This book addresses the topic of exploiting enterprise-linked data with a particular focus on knowledge construction and accessibility within enterprises. It identifies the gaps between the requirements of enterprise knowledge consumption and "standard" data consuming technologies by analysing real-world use cases, and proposes the enterprise knowledge graph to fill such gaps. It provides concrete guidelines for effectively deploying linked-data graphs within and across business organizations. It is divided into three parts, focusing on the key technologies for constructing, understanding and employing knowledge graphs. Part 1 introduces basic background information and technologies, and presents a simple architecture to elucidate the main phases and tasks required during the lifecycle of knowledge graphs. Part 2 focuses on technical aspects; it starts with state-of-the art knowledge-graph construction approaches, and then discusses exploration and exploitation techniques as well as advanced question-answering topics concerning knowledge graphs. Lastly, Part 3 demonstrates examples of successful knowledge graph applications in the media industry, healthcare and cultural heritage, and offers conclusions and future visions.
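As a toy illustration of the kind of triple-based querying that underlies knowledge-graph exploration (not taken from the book itself; the facts, names, and query function below are invented), the following Python sketch matches patterns against a handful of hypothetical enterprise triples.

    # A minimal sketch, assuming hypothetical enterprise facts stored as
    # (subject, predicate, object) triples; none of this data is real.
    triples = [
        ("acme_ltd", "has_department", "r_and_d"),
        ("r_and_d", "works_on", "battery_materials"),
        ("acme_ltd", "located_in", "berlin"),
    ]

    def query(triples, subject=None, predicate=None, obj=None):
        """Return every triple matching the pattern (None acts as a wildcard),
        the most basic exploration primitive over a triple store."""
        return [
            t for t in triples
            if (subject is None or t[0] == subject)
            and (predicate is None or t[1] == predicate)
            and (obj is None or t[2] == obj)
        ]

    # Which topics does any department work on?
    print(query(triples, predicate="works_on"))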
The first of a two-volume set on novel methods in harmonic analysis, this book draws on a number of original research and survey papers from well-known specialists detailing the latest innovations and recently discovered links between various fields. Along with many deep theoretical results, these volumes contain numerous applications to problems in signal processing, medical imaging, geodesy, statistics, and data science. The chapters within cover an impressive range of ideas from both traditional and modern harmonic analysis, such as: the Fourier transform, Shannon sampling, frames, wavelets, functions on Euclidean spaces, analysis on function spaces of Riemannian and sub-Riemannian manifolds, Fourier analysis on manifolds and Lie groups, analysis on combinatorial graphs, sheaves, co-sheaves, and persistent homologies on topological spaces. Volume I is organized around the theme of frames and other bases in abstract and function spaces, covering topics such as: the advanced development of frames, including Sigma-Delta quantization for fusion frames, localization of frames, and frame conditioning, as well as applications to distributed sensor networks, Galerkin-like representation of operators, scaling on graphs, and dynamical sampling; a systematic approach to shearlets with applications to wavefront sets and function spaces; prolate and generalized prolate functions, spherical Gauss-Laguerre basis functions, and radial basis functions; and kernel methods, wavelets, and frames on compact and non-compact manifolds.
Forecasting is a crucial function for companies in the fashion industry, but for many real-life forecasting applications in the industry, the data patterns are notorious for being highly volatile and it is very difficult, if not impossible, to analytically learn about the underlying patterns. As a result, many traditional methods (such as pure statistical models) will fail to make a sound prediction. Over the past decade, advances in artificial intelligence and computing technologies have provided an alternative way of generating precise and accurate forecasting results for fashion businesses. Despite being an important and timely topic, there is currently an absence of a comprehensive reference source that provides up-to-date theoretical and applied research findings on the subject of intelligent fashion forecasting systems. This three-part handbook fulfills this need and covers materials ranging from introductory studies and technical reviews, to theoretical modeling research, to intelligent fashion forecasting applications and analysis. This book is suitable for academic researchers, graduate students, senior undergraduate students and practitioners who are interested in the latest research on fashion forecasting.
Integrating Security and Software Engineering: Advances and Future Vision provides the first step towards narrowing the gap between security and software engineering. This book introduces the field of secure software engineering, which is a branch of research investigating the integration of security concerns into software engineering practices. "Integrating Security and Software Engineering: Advances and Future Vision" discusses problems and challenges of considering security during the development of software systems, and also presents the predominant theoretical and practical approaches that integrate security and software engineering.
The problem of mining patterns has become a very active research area, and efficient techniques have been widely applied to problems in industry, government, and science. Since its initial definition, and motivated by real-world applications, pattern mining has come to address not only the discovery of itemsets but also increasingly complex patterns.
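As a minimal sketch of the itemset-mining task mentioned above (not drawn from the book; the transactions, thresholds, and function name are hypothetical), the following Python snippet counts frequent itemsets by brute force.

    from itertools import combinations
    from collections import Counter

    # Hypothetical transaction database, for illustration only.
    transactions = [
        {"bread", "milk"},
        {"bread", "butter", "milk"},
        {"butter", "milk"},
        {"bread", "butter"},
    ]

    def frequent_itemsets(transactions, min_support=2, max_size=2):
        """Brute-force frequent itemset mining: count every candidate itemset
        of up to max_size items and keep those meeting min_support."""
        counts = Counter()
        for t in transactions:
            for k in range(1, max_size + 1):
                for itemset in combinations(sorted(t), k):
                    counts[itemset] += 1
        return {s: c for s, c in counts.items() if c >= min_support}

    print(frequent_itemsets(transactions))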
Electrical energy usage is increasing every year due to population growth and new forms of consumption. As such, it is increasingly imperative to research methods of energy control and safe use. Security Solutions and Applied Cryptography in Smart Grid Communications is a pivotal reference source for the latest research on the development of smart grid technology and best practices of utilization. Featuring extensive coverage across a range of relevant perspectives and topics, such as threat detection, authentication, and intrusion detection, this book is ideally designed for academicians, researchers, engineers and students seeking current research on ways in which to implement smart grid platforms all over the globe.
This book describes analytical techniques for optimizing knowledge acquisition, processing, and propagation, especially in the contexts of cyber-infrastructure and big data. Further, it presents easy-to-use analytical models of knowledge-related processes and their applications. The need for such methods stems from the fact that, when we have to decide where to place sensors or which algorithm to use for processing the data, we mostly rely on experts' opinions. As a result, the selected knowledge-related methods are often far from ideal. To make better selections, it is necessary to first create easy-to-use models of knowledge-related processes. This is especially important for big data, where traditional numerical methods are unsuitable. The book offers a valuable guide for everyone interested in big data applications: students looking for an overview of related analytical techniques, practitioners interested in applying optimization techniques, and researchers seeking to improve and expand on these techniques.
This book provides two general granular computing approaches to mining relational data, the first of which uses abstract descriptions of relational objects to build their granular representation, while the second extends existing granular data mining solutions to a relational case. Both approaches make it possible to perform and improve popular data mining tasks such as classification, clustering, and association discovery. How can different relational data mining tasks best be unified? How can the construction process of relational patterns be simplified? How can richer knowledge from relational data be discovered? All these questions can be answered in the same way: by mining relational data in the paradigm of granular computing! This book will allow readers with previous experience in the field of relational data mining to discover the many benefits of its granular perspective. In turn, those readers familiar with the paradigm of granular computing will find valuable insights on its application to mining relational data. Lastly, the book offers all readers interested in computational intelligence in the broader sense the opportunity to deepen their understanding of the newly emerging field of granular-relational data mining.
This book constitutes the refereed proceedings of the 16th IFIP WG 9.4 International Conference on Social Implications of Computers in Developing Countries, ICT4D 2020, which was supposed to be held in Salford, UK, in June 2020, but was held virtually instead due to the COVID-19 pandemic. The 18 revised full papers presented were carefully reviewed and selected from 29 submissions. The papers present a wide range of perspectives and disciplines including (but not limited to) public administration, entrepreneurship, business administration, information technology for development, information management systems, organization studies, philosophy, and management. They are organized in the following topical sections: digital platforms and gig economy; education and health; inclusion and participation; and business innovation and data privacy.
Mohamed Medhat Gaber "It is not my aim to surprise or shock you - but the simplest way I can summarise is to say that there are now in the world machines that think, that learn and that create. Moreover, their ability to do these things is going to increase rapidly until - in a visible future - the range of problems they can handle will be coextensive with the range to which the human mind has been applied" by Herbert A. Simon (1916-2001) 1 Overview This book suits both graduate students and researchers with a focus on discovering knowledge from scientific data. The use of computational power for data analysis and knowledge discovery in scientific disciplines has found its roots with the revolution of high-performance computing systems. Computational science in physics, chemistry, and biology represents the first step towards automation of data analysis tasks. The rationale behind the development of computational science in different areas was automating mathematical operations performed in those areas. There was no attention paid to the scientific discovery process. Automated Scientific Discovery (ASD) [1-3] represents the second natural step. ASD attempted to automate the process of theory discovery supported by studies in philosophy of science and cognitive sciences. Although early research articles have shown great successes, the area has not evolved due to many reasons. The most important reason was the lack of interaction between scientists and the automating systems.
I3E 2010 marked the 10th anniversary of the IFIP Conference on e-Business, e-Services, and e-Society, continuing a tradition that was invented in 1998 during the International Conference on Trends in Electronic Commerce, TrEC 1998, in Hamburg (Germany). Three years later the inaugural I3E 2001 conference was held in Zurich (Switzerland). Since then I3E has made its journey through the world: 2002 Lisbon (Portugal), 2003 Sao Paulo (Brazil), 2004 Toulouse (France), 2005 Poznan (Poland), 2006 Turku (Finland), 2007 Wuhan (China), 2008 Tokyo (Japan), and 2009 Nancy (France). I3E 2010 took place in Buenos Aires (Argentina) November 3-5, 2010. Known as "The Pearl" of South America, Buenos Aires is a cosmopolitan, colorful, and vibrant city, surprising its visitors with a vast variety of cultural and artistic performances, European architecture, and the passion for tango, coffee places, and football discussions. A cultural reference in Latin America, the city hosts 140 museums, 300 theaters, and 27 public libraries including the National Library. It is also the main educational center in Argentina and home of renowned universities including the University of Buenos Aires, created in 1821. Besides location, the timing of I3E 2010 is also significant: it coincided with the 200th anniversary celebration of the first local government in Argentina.
Hyperspectral Image Fusion is the first text dedicated to fusion techniques for such a huge volume of data, consisting of a very large number of images. This monograph brings out recent advances in research on the visualization of hyperspectral data. It provides a set of pixel-based fusion techniques, each of which is based on a different framework and has its own advantages and disadvantages. The techniques are presented with complete details so that practitioners can easily implement them. It is also demonstrated how one can select only a few specific bands to speed up the process of fusion by exploiting spatial correlation within successive bands of the hyperspectral data. While the techniques for fusion of hyperspectral images are being developed, it is also important to establish a framework for objective assessment of such techniques. This monograph has a dedicated chapter describing various fusion performance measures that are applicable to hyperspectral image fusion. It also presents a notion of consistency of a fusion technique, which can be used to verify the suitability and applicability of a technique for fusion of a very large number of images. This book will be a highly useful resource for students, researchers, academics and practitioners in the specific area of hyperspectral image fusion, as well as in generic image fusion.
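One of the ideas mentioned above, selecting only a few bands by exploiting the correlation between successive bands, can be illustrated with a minimal Python sketch; the data is random, the threshold is an arbitrary assumption, and this is not a method taken from the monograph.

    import numpy as np

    # Hypothetical hyperspectral cube of shape (bands, height, width).
    cube = np.random.rand(50, 64, 64)

    def select_bands(cube, corr_threshold=0.95):
        """Keep a band only if its correlation with the last kept band falls
        below corr_threshold, i.e. skip highly redundant neighbouring bands."""
        kept = [0]
        for b in range(1, cube.shape[0]):
            prev = cube[kept[-1]].ravel()
            curr = cube[b].ravel()
            if np.corrcoef(prev, curr)[0, 1] < corr_threshold:
                kept.append(b)
        return kept

    print(select_bands(cube))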
This book aims to identify promising future developmental opportunities and applications for Tech Mining. Specifically, the enclosed contributions pursue three converging themes: the increasing availability of electronic text data resources relating to Science, Technology and Innovation (ST&I); the multiple methods that are able to treat this data effectively and incorporate means to tap into human expertise and interests; and translating those analyses to provide useful intelligence on likely future developments of particular emerging S&T targets. Tech Mining can be defined as text analyses of ST&I information resources to generate Competitive Technical Intelligence (CTI). It combines bibliometrics and advanced text analytics, drawing on specialized knowledge pertaining to ST&I. Tech Mining may also be viewed as a special form of "Big Data" analytics because it searches on a target emerging technology (or key organization) of interest in global databases. One then downloads, typically, thousands of field-structured text records (usually abstracts), and analyses those for useful CTI. Forecasting Innovation Pathways (FIP) is a methodology drawing on Tech Mining plus additional steps to elicit stakeholder and expert knowledge to link recent ST&I activity to likely future development. A decade ago, we deemed Management of Technology (MOT) somewhat self-satisfied and ignorant. Most technology managers relied overwhelmingly on casual human judgment, largely oblivious of the potential of empirical analyses to inform R&D management and science policy. CTI, Tech Mining, and FIP are changing that. The accumulation of Tech Mining research over the past decade offers a rich resource of means to get at emerging technology developments and organizational networks to date. Efforts to bridge from those recent histories of development to project likely FIP, however, prove considerably harder. One focus of this volume is to extend the repertoire of information resources that will enrich FIP. Featuring cases of novel approaches and applications of Tech Mining and FIP, this volume presents frontier advances in ST&I text analytics that will be of interest to students, researchers, practitioners, scholars and policy makers in the fields of R&D planning, technology management, science policy and innovation strategy.
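As a toy illustration of the text-analytic step described above, counting how often terms appear in abstract records over time as a crude proxy for an emerging-technology profile, the following Python sketch uses invented records and an invented helper; it is not the Tech Mining or FIP methodology itself.

    from collections import Counter, defaultdict
    import re

    # Hypothetical field-structured records: (year, abstract) pairs.
    records = [
        (2018, "Deep learning for perovskite solar cell optimization."),
        (2019, "Perovskite stability improved via machine learning screening."),
        (2019, "Graph neural networks applied to materials discovery."),
    ]

    def term_trends(records, min_len=4):
        """Count per-year occurrences of each term, a crude emergence profile."""
        trends = defaultdict(Counter)
        for year, abstract in records:
            for term in re.findall(r"[a-z]+", abstract.lower()):
                if len(term) >= min_len:
                    trends[term][year] += 1
        return trends

    print(term_trends(records)["perovskite"])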