This book describes trends in email scams and offers tools and techniques to identify such trends. It also describes automated countermeasures based on an understanding of the type of persuasive methods used by scammers. It reviews both consumer-facing scams and enterprise scams, describing in-depth case studies relating to Craigslist scams and Business Email Compromise Scams. This book provides a good starting point for practitioners, decision makers and researchers in that it includes alternatives and complementary tools to the currently deployed email security tools, with a focus on understanding the metrics of scams. Both professionals working in security and advanced-level students interested in privacy or applications of computer science will find this book a useful reference.
This book is the very first book-length study devoted to the advances in technological development and systems research in cooperative economics. The chapters provide, first of all, a coherent framework for understanding and applying the concepts and approaches of complexity and systems science for the advanced study of cooperative networks and particular cooperative enterprises and communities. Second, the book serves as a unique source of reliable information on the frontier information technologies available for the production, consumer, credit, and agricultural cooperative enterprises, discussing predominant strategies, potential drivers of change, and responses to complex problems. Given the diverse range of backgrounds and advanced research results, researchers, decision-makers, and stakeholders from all fields of cooperative economics in any country of the world will undoubtedly benefit from this book.
This book strives to be the point of reference for the most important issues in the field of multidimensional databases. It provides a brief history of the field and distinguishes between what is new in recent research and what is merely a renaming of old concepts. The book reviews past papers and discusses current research projects in the hope of encouraging the search for new solutions to the many problems that remain unsolved. In addition, it outlines the remarkable advances in technology and the ever-increasing demands from users in the most diverse application areas, such as finance, medicine, statistics, business, and many more. Many of the most distinguished and well-known researchers have contributed to this book, each writing about their own specific field.
The papers in this volume comprise the refereed proceedings of the conference Artificial Intelligence in Theory and Practice (IFIP AI 2010), which formed part of the 21st World Computer Congress of IFIP, the International Federation for Information Processing (WCC-2010), held in Brisbane, Australia, in September 2010. The conference was organized by the IFIP Technical Committee on Artificial Intelligence (Technical Committee 12) and its Working Group 12.5 (Artificial Intelligence Applications). All papers were reviewed by at least two members of our Program Committee. Final decisions were made by the Executive Program Committee, which comprised John Debenham (University of Technology, Sydney, Australia), Ilias Maglogiannis (University of Central Greece, Lamia, Greece), Eunika Mercier-Laurent (KIM, France) and myself. The best papers were selected for the conference, either as long papers (maximum 10 pages) or as short papers (maximum 5 pages), and are included in this volume. The international nature of IFIP is amply reflected in the large number of countries represented here. I should like to thank the Conference Chair, Tharam Dillon, for all his efforts, and the members of our Program Committee for reviewing papers under a very tight deadline.
In the last 15 years we have seen a major transformation in the world of music. Musicians use inexpensive personal computers instead of expensive recording studios to record, mix and engineer music. Musicians use the Internet to distribute their music for free instead of spending large amounts of money creating CDs, hiring trucks and shipping them to hundreds of record stores. As the cost to create and distribute recorded music has dropped, the amount of available music has grown dramatically. Twenty years ago a typical record store would have music by fewer than ten thousand artists, while today online music stores have music catalogs by nearly a million artists. While the amount of new music has grown, some of the traditional ways of finding music have diminished. Thirty years ago, the local radio DJ was a music tastemaker, finding new and interesting music for the local radio audience. Now radio shows are programmed by large corporations that create playlists drawn from a limited pool of tracks. Similarly, record stores have been replaced by big box retailers that have ever-shrinking music departments. In the past, you could always ask the owner of the record store for music recommendations. You would learn what was new, what was good and what was selling. Now, however, you can no longer expect that the teenager behind the cash register will be an expert in new music, or even be someone who listens to music at all.
Like other complex systems in the social and natural sciences as well as in engineering, the Internet is hard to understand from a technical point of view. Packet-switched networks defy analytical modeling. The Internet is an outstanding and challenging case because of its fast development, unparalleled heterogeneity and the inherent lack of measurement and monitoring mechanisms in its core conception. This monograph deals with applications of computational intelligence methods, with an emphasis on fuzzy techniques, to a number of current issues in the measurement, analysis and control of traffic in the Internet. First, the core building blocks of Internet Science and other related networking aspects are introduced. Then, data mining and control problems are addressed. In the first class, two issues are considered: predictive modeling of traffic load, and summarization of traffic flow measurements. The second class, control, includes active queue management schemes for Internet routers as well as window-based end-to-end rate and congestion control. The practical hardware implementation of some of the fuzzy inference systems proposed here is also addressed. While some theoretical developments are described, we favor extensive evaluation of models using real-world data by simulation and experiments.
These are exciting times in the fields of Fuzzy Logic and the Semantic Web, and this book will add to the excitement, as it is the first volume to focus on the growing connections between these two fields. This book is expected to be a valuable aid to anyone considering the application of Fuzzy Logic to the Semantic Web, because it contains a number of detailed accounts of these combined fields, written by leading authors in several countries. The Fuzzy Logic field has been maturing for forty years. These years have witnessed a tremendous growth in the number and variety of applications, with real-world impact across a wide variety of domains requiring humanlike behavior and reasoning. And we believe that in the coming years, the Semantic Web will be a major field of application for Fuzzy Logic.
This book explores models and concepts of trust in a digitized world. Trust is a core concept that comes into play in multiple social and economic relations of our modern life. The book provides insights into the current state of research while presenting the viewpoints of a variety of disciplines such as communication studies, information systems, educational and organizational psychology, sports psychology and economics. Focusing on an investigation of how the Internet is changing the relationship between trust and communication, and the impact this change has on trust research, this volume facilitates a greater understanding of these topics, thus enabling their employment in social relations.
The book at hand gives an overview of the state of the art research in Computational Sustainability as well as case studies of different application scenarios. This covers topics such as renewable energy supply, energy storage and e-mobility, efficiency in data centers and networks, sustainable food and water supply, sustainable health, industrial production and quality, etc. The book describes computational methods and possible application scenarios.
This book represents the combined peer-reviewed proceedings. The 41 contributions published in this book address a wide range of topics.
This book covers topics such as big data analyses, services, and smart data. It contains (i) invited papers, (ii) selected papers from the Sixth International Conference on Big Data Applications and Services (BigDAS 2018), as well as (iii) extended papers from the Sixth IEEE International Conference on Big Data and Smart Computing (IEEE BigComp 2019). The aim of BigDAS is to present innovative results, encourage academic and industrial interaction, and promote collaborative research in the field of big data worldwide. BigDAS 2018 was held in Zhengzhou, China, on August 19-22, 2018, and organized by the Korea Big Data Service Society and TusStar. The goal of IEEE BigComp, initiated by the Korean Institute of Information Scientists and Engineers (KIISE), is to provide an international forum for exchanging ideas and information on current studies, challenges, research results, system developments, and practical experiences in the emerging fields of big data and smart computing. IEEE BigComp 2019 was held in Kyoto, Japan, on February 27-March 2, 2019, and co-sponsored by IEEE and KIISE.
This book describes the application of modern information technology to reservoir modeling and well management in shale. While covering Shale Analytics, it focuses on reservoir modeling and production management of shale plays, since conventional reservoir and production modeling techniques do not perform well in this environment. Topics covered include tools for analysis, predictive modeling and optimization of production from shale in the presence of massive multi-cluster, multi-stage hydraulic fractures. Given the fact that the physics of storage and fluid flow in shale are not well-understood and well-defined, Shale Analytics avoids making simplifying assumptions and concentrates on facts (Hard Data - Field Measurements) to reach conclusions. Also discussed are important insights into understanding completion practices and re-frac candidate selection and design. The flexibility and power of the technique is demonstrated in numerous real-world situations.
This book brings all of the elements of data mining together in a single volume, saving the reader the time and expense of making multiple purchases. It consolidates both introductory and advanced topics, thereby covering the gamut of data mining and machine learning tactics: from data integration and pre-processing, to fundamental algorithms, to optimization techniques and web mining methodology.
Disaster management is a process or strategy that is implemented when any type of catastrophic event takes place. The process may be initiated when anything threatens to disrupt normal operations or puts the lives of human beings at risk. Governments at all levels, as well as many businesses, create some sort of disaster plan that makes it possible to overcome the catastrophe and return to normal function as quickly as possible. Response to natural disasters (e.g., floods, earthquakes) or technological disasters (e.g., nuclear, chemical) is an extremely complex process that involves severe time pressure, various uncertainties, high non-linearity and many stakeholders. Disaster management often requires several autonomous agencies to collaboratively mitigate, prepare for, respond to, and recover from heterogeneous and dynamic sets of hazards to society. Almost all disasters involve a high degree of novelty, requiring responders to deal with unexpected uncertainties under dynamic time pressure. Existing studies and approaches within disaster management have mainly focused on specific types of disasters from the perspective of particular agencies; a general framework is lacking that can address the similarities and synergies among different disasters while taking their specific features into account. This book provides various decision analysis theories and support tools for complex systems in general and for disaster management in particular. The book also grew out of the long-term preparation of a European project proposal among leading experts in the areas related to the book's title. Chapters were evaluated based on quality and originality in theory and methodology, application orientation, and relevance to the title of the book.
This book features both cutting-edge contributions on managing knowledge in transformational contexts and a selection of real-world case studies. It analyzes how the disruptive power of digitization is becoming a major challenge for knowledge-based value creation worldwide, and subsequently examines the changes in how we manage information and knowledge, communicate, collaborate, learn and decide within and across organizations. The book highlights the opportunities provided by disruptive renewal, while also stressing the need for knowledge workers and organizations to transform governance, leadership and work organization. Emerging new business models and digitally enabled co-creation are presented as drivers that can help establish new ways of managing knowledge. In turn, a number of carefully selected and interpreted case studies provide a link to practice in organizations.
This book brings all of the elements of database design together in a single volume, saving the reader the time and expense of making multiple purchases. It consolidates both introductory and advanced topics, thereby covering the gamut of database design methodology: from ER and UML techniques, to conceptual data modeling and table transformation, to storing XML and querying moving objects databases.
This book addresses the topic of exploiting enterprise-linked data with a particular focus on knowledge construction and accessibility within enterprises. It identifies the gaps between the requirements of enterprise knowledge consumption and "standard" data consuming technologies by analysing real-world use cases, and proposes the enterprise knowledge graph to fill such gaps. It provides concrete guidelines for effectively deploying linked-data graphs within and across business organizations. It is divided into three parts, focusing on the key technologies for constructing, understanding and employing knowledge graphs. Part 1 introduces basic background information and technologies, and presents a simple architecture to elucidate the main phases and tasks required during the lifecycle of knowledge graphs. Part 2 focuses on technical aspects; it starts with state-of-the art knowledge-graph construction approaches, and then discusses exploration and exploitation techniques as well as advanced question-answering topics concerning knowledge graphs. Lastly, Part 3 demonstrates examples of successful knowledge graph applications in the media industry, healthcare and cultural heritage, and offers conclusions and future visions.
The first of a two volume set on novel methods in harmonic analysis, this book draws on a number of original research and survey papers from well-known specialists detailing the latest innovations and recently discovered links between various fields. Along with many deep theoretical results, these volumes contain numerous applications to problems in signal processing, medical imaging, geodesy, statistics, and data science. The chapters within cover an impressive range of ideas from both traditional and modern harmonic analysis, such as: the Fourier transform, Shannon sampling, frames, wavelets, functions on Euclidean spaces, analysis on function spaces of Riemannian and sub-Riemannian manifolds, Fourier analysis on manifolds and Lie groups, analysis on combinatorial graphs, sheaves, co-sheaves, and persistent homologies on topological spaces. Volume I is organized around the theme of frames and other bases in abstract and function spaces, covering topics such as: The advanced development of frames, including Sigma-Delta quantization for fusion frames, localization of frames, and frame conditioning, as well as applications to distributed sensor networks, Galerkin-like representation of operators, scaling on graphs, and dynamical sampling. A systematic approach to shearlets with applications to wavefront sets and function spaces. Prolate and generalized prolate functions, spherical Gauss-Laguerre basis functions, and radial basis functions. Kernel methods, wavelets, and frames on compact and non-compact manifolds.
Forecasting is a crucial function for companies in the fashion industry, but for many real-life forecasting applications in the industry, the data patterns are notoriously volatile and it is very difficult, if not impossible, to analytically learn the underlying patterns. As a result, many traditional methods (such as pure statistical models) fail to make sound predictions. Over the past decade, advances in artificial intelligence and computing technologies have provided an alternative way of generating precise and accurate forecasting results for fashion businesses. Despite being an important and timely topic, there has been no comprehensive reference source that provides up-to-date theoretical and applied research findings on the subject of intelligent fashion forecasting systems. This three-part handbook fulfills this need, covering material ranging from introductory studies and technical reviews, to theoretical modeling research, to intelligent fashion forecasting applications and analysis. This book is suitable for academic researchers, graduate students, senior undergraduate students and practitioners who are interested in the latest research on fashion forecasting.
Integrating Security and Software Engineering: Advances and Future Vision provides the first step towards narrowing the gap between security and software engineering. This book introduces the field of secure software engineering, a branch of research investigating the integration of security concerns into software engineering practices. It discusses the problems and challenges of considering security during the development of software systems, and also presents the predominant theoretical and practical approaches that integrate security and software engineering.
The problem of mining patterns has become a very active research area, and efficient techniques have been widely applied to problems in industry, government, and science. Starting from its initial definition, and motivated by real-world applications, pattern mining has come to address not only the discovery of itemsets but also increasingly complex patterns.
Electrical energy usage is increasing every year due to population growth and new forms of consumption. As such, it is increasingly imperative to research methods of energy control and safe use. Security Solutions and Applied Cryptography in Smart Grid Communications is a pivotal reference source for the latest research on the development of smart grid technology and best practices of utilization. Featuring extensive coverage across a range of relevant perspectives and topics, such as threat detection, authentication, and intrusion detection, this book is ideally designed for academicians, researchers, engineers and students seeking current research on ways in which to implement smart grid platforms all over the globe.
This book describes analytical techniques for optimizing knowledge acquisition, processing, and propagation, especially in the contexts of cyber-infrastructure and big data. Further, it presents easy-to-use analytical models of knowledge-related processes and their applications. The need for such methods stems from the fact that when we have to decide where to place sensors, or which algorithm to use for processing the data, we mostly rely on experts' opinions. As a result, the selected knowledge-related methods are often far from ideal. To make better selections, it is necessary to first create easy-to-use models of knowledge-related processes. This is especially important for big data, where traditional numerical methods are unsuitable. The book offers a valuable guide for everyone interested in big data applications: students looking for an overview of related analytical techniques, practitioners interested in applying optimization techniques, and researchers seeking to improve and expand on these techniques.
This book provides two general granular computing approaches to mining relational data, the first of which uses abstract descriptions of relational objects to build their granular representation, while the second extends existing granular data mining solutions to a relational case. Both approaches make it possible to perform and improve popular data mining tasks such as classification, clustering, and association discovery. How can different relational data mining tasks best be unified? How can the construction process of relational patterns be simplified? How can richer knowledge from relational data be discovered? All these questions can be answered in the same way: by mining relational data in the paradigm of granular computing! This book will allow readers with previous experience in the field of relational data mining to discover the many benefits of its granular perspective. In turn, those readers familiar with the paradigm of granular computing will find valuable insights on its application to mining relational data. Lastly, the book offers all readers interested in computational intelligence in the broader sense the opportunity to deepen their understanding of the newly emerging field of granular-relational data mining.