The efficient management of a consistent and integrated database is a central task in modern IT and highly relevant for science and industry. Hardly any critical enterprise solution lacks functionality for managing data in its different forms. Web-Scale Data Management for the Cloud addresses fundamental challenges posed by the need to provide database functionality in the context of the Database as a Service (DBaaS) paradigm for database outsourcing. The book also discusses the motivation behind the new paradigm of cloud computing and its impact on data outsourcing and service-oriented computing in data-intensive applications. Techniques supported in current cloud environments, major challenges, and future trends are covered in the last section, and a survey of the techniques and special requirements for building database services is provided as well.
Information Systems (IS) are a nearly omnipresent aspect of the modern world, playing crucial roles in the fields of science and engineering, business and law, art and culture, politics and government, and many others. As such, identity theft and unauthorized access to these systems are serious concerns. Theory and Practice of Cryptography Solutions for Secure Information Systems explores current trends in IS security technologies, techniques, and concerns, primarily through the use of cryptographic tools to safeguard valuable information resources. This reference book serves the needs of professionals, academics, and students requiring dedicated information systems free from outside interference, as well as developers of secure IS applications. This book is part of the Advances in Information Security, Privacy, and Ethics series collection.
This edited book adopts a cognitive perspective to provide breadth and depth to state-of-the-art research on understanding, analyzing, predicting and improving one of the most prominent and important classes of modern human behavior: information search. It is timely, as the broader research areas of cognitive computing and cognitive technology have recently attracted much attention, and there has been a surge of interest in developing systems and technology that are more compatible with human cognitive abilities. The book is divided into three interlocking sections. The first introduces the foundational concepts of information search from a cognitive computing perspective to highlight the research questions and approaches shared among the contributing authors; relevant concepts from psychology and the information and computing sciences are addressed. The second section discusses methods and tools used to understand and predict information search behavior, and how the cognitive perspective can provide unique insights into the complexities of that behavior in various contexts. The final section highlights a number of application areas, among which education and training, collaboration, and conversational search interfaces are important ones. Understanding and Improving Information Search - A Cognitive Approach includes contributions from cognitive psychologists and information and computing scientists around the globe, including researchers from Europe (France, Netherlands, Germany), the US, and Asia (India, Japan), who provide their unique but coherent perspectives on the core issues and questions most relevant to our current understanding of information search behavior and to improving information search.
Information retrieval (IR) aims at defining systems able to provide fast and effective content-based access to large amounts of stored information. The aim of an IR system is to estimate the relevance of documents to a user's information need, expressed by means of a query. This is a difficult and complex task, since it is pervaded by imprecision and uncertainty. Most existing IR systems offer a very simple model of IR, which privileges efficiency at the expense of effectiveness. A promising direction for increasing the effectiveness of IR is to model the partiality intrinsic to the IR process and to make systems adaptive, i.e. able to "learn" the user's concept of relevance. To this aim, the application of soft computing techniques can help achieve greater flexibility in IR systems.
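As a concrete illustration of relevance estimation (a minimal sketch under our own assumptions, not an example from the book), the following Python fragment ranks a toy corpus against a query using simple TF-IDF weighting; the corpus, the query, and the function name tf_idf_scores are purely illustrative:

    import math
    from collections import Counter

    def tf_idf_scores(query, documents):
        """Score each document against the query with a basic TF-IDF sum."""
        tokenized = [doc.lower().split() for doc in documents]
        n_docs = len(tokenized)
        # Document frequency: in how many documents each term occurs.
        df = Counter()
        for tokens in tokenized:
            df.update(set(tokens))
        # Inverse document frequency rewards terms that discriminate between documents.
        idf = {term: math.log(n_docs / count) for term, count in df.items()}
        scores = []
        for tokens in tokenized:
            tf = Counter(tokens)
            scores.append(sum(tf[t] * idf.get(t, 0.0) for t in query.lower().split()))
        return scores

    docs = [
        "fuzzy retrieval models handle uncertainty in relevance",
        "a database stores structured records",
        "soft computing improves retrieval effectiveness",
    ]
    print(tf_idf_scores("retrieval effectiveness", docs))  # highest score for the third document

Real systems refine such crisp term weighting, among other ways with the soft computing techniques surveyed in the book, precisely to capture the imprecision described above.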
This handbook offers comprehensive coverage of recent advancements in Big Data technologies and related paradigms. Chapters are authored by international leading experts in the field, and have been reviewed and revised for maximum reader value. The volume consists of twenty-five chapters organized into four main parts. Part One covers the fundamental concepts of Big Data technologies, including data curation mechanisms, data models, storage models, programming models and programming platforms. It also dives into the details of implementing Big SQL query engines and big stream processing systems. Part Two focuses on the semantic aspects of Big Data management, including data integration and exploratory ad hoc analysis in addition to structured querying and pattern matching techniques. Part Three presents a comprehensive overview of large-scale graph processing. It covers the most recent research in large-scale graph processing platforms, introducing several scalable graph querying and mining mechanisms in domains such as social networks. Part Four details novel applications that have been made possible by the rapid emergence of Big Data technologies, such as the Internet of Things (IoT), Cognitive Computing and SCADA systems. All parts of the book discuss open research problems, including potential opportunities, that have arisen from the rapid progress of Big Data technologies and the associated increasing requirements of application domains. Designed for researchers, IT professionals and graduate students, this book is a timely contribution to the growing Big Data field. Big Data has been recognized as one of the leading emerging technologies that will have a major impact on the various fields of science and on many aspects of human society over the coming decades. The content of this book will therefore be an essential tool to help readers understand the development and future of the field.
This book discusses the development of a theory of info-statics as a sub-theory of the general theory of information. It describes the factors required to establish a definition of the concept of information that fixes the applicable boundaries of the phenomenon of information, its linguistic structure and scientific applications. The book establishes the definitional foundations of information and how the concepts of uncertainty, data, fact, evidence and evidential things are sequential derivatives of information as the primary category, which is a property of matter and energy. The sub-definitions are extended to include the concepts of possibility, probability, expectation, anticipation, surprise, discounting, forecasting, prediction and the nature of past-present-future information structures. It shows that the factors required to define the concept of information are those that allow differences and similarities to be established among universal objects over the ontological and epistemological spaces in terms of varieties and identities. These factors are characteristic and signal dispositions on the basis of which general definitional foundations are developed to construct the general information definition (GID). The book then demonstrates that this definition is applicable to all types of information over the ontological and epistemological spaces. It also defines the concepts of uncertainty, data, fact, evidence and knowledge based on the GID. Lastly, it uses set-theoretic analytics to enhance the definitional foundations, and shows the value of the theory of info-statics to establish varieties and categorial varieties at every point of time and thus initializes the construct of the theory of info-dynamics.
This book gathers selected papers from the KES-IDT-2020 Conference, held as a virtual conference on June 17-19, 2020. The aim of the annual conference was to present and discuss the latest research results and to generate new ideas in the field of intelligent decision-making. However, the range of topics discussed during the conference was broader, covering methods in e.g. classification, prediction, data analysis, big data, data science, decision support, knowledge engineering, and modeling in such diverse areas as finance, cybersecurity, economics, health, management and transportation. Problems in Industry 4.0 and the IoT are also addressed. The book contains several sections devoted to specific topics, such as: Intelligent Data Processing and its Applications; High-Dimensional Data Analysis and its Applications; Multi-Criteria Decision Analysis - Theory and Applications; Large-Scale Systems for Intelligent Decision-Making and Knowledge Engineering; Decision Technologies and Related Topics in Big Data Analysis of Social and Financial Issues; and Decision-Making Theory for Economics.
This proceedings book presents selected papers from the 4th Conference on Signal and Information Processing, Networking and Computers (ICSINC), held in Qingdao, China on May 23-25, 2018. It focuses on current research in a wide range of areas related to information theory, communication systems, computer science, signal processing, aerospace technologies, and other related technologies. With contributions from experts in both academia and industry, it is a valuable resource for anyone interested in this field.
These proceedings gather cutting-edge papers exploring the principles, techniques, and applications of Microservices in Big Data Analytics. ICETCE-2019 is the latest installment in a successful series of annual conferences that began in 2011. Every year since, the conference has contributed numerous high-quality research papers to the research community. This year, its focus was on the highly relevant area of Microservices in Big Data Analytics.
The issue of missing data imputation has been extensively explored in information engineering, though it needs a new focus and approach in research. Computational Intelligence for Missing Data Imputation, Estimation, and Management: Knowledge Optimization Techniques focuses on methods to estimate missing values given the observed data. Providing a defining body of research valuable to those involved in the field of study, this book presents current and new computational intelligence techniques that allow computers to learn the underlying structure of data.
Data mining deals with finding patterns in data that are, by user definition, interesting and valid. It is an interdisciplinary area involving databases, machine learning, pattern recognition, statistics, visualization and others. Independently, data mining and decision support are well-developed research areas, but until now there has been no systematic attempt to integrate them. Data Mining and Decision Support: Integration and Collaboration, written by leading researchers in the field, presents a conceptual framework, plus the methods and tools for integrating the two disciplines and for applying this technology to business problems in a collaborative setting.
Physical processes involving atomic phenomena allow increasingly precise time and frequency measurements. This progress is not possible without suitable processing of the respective raw data. This book describes data processing at various levels: the design of time and frequency references, the characterization of time and frequency references, and applications involving precise time and/or frequency references.
Fuzzy sets were first proposed by Lotfi Zadeh in his seminal paper [366] in 1965, and ever since have been a center of many discussions, fervently admired and condemned. Both proponents and opponents consider the arguments pointless because none of them would step back from their territory. And still, discussions burst out from a single sparkle like a conference paper or a message on some fuzzy-mail newsgroup. Here is an excerpt from an e-mail message posted in 1993 to fuzzy-mail@vexpert.dbai.tuwien.ac.at by somebody who signed "Dave": "... Why then the 'logic' in 'fuzzy logic'? I don't think anyone has successfully used fuzzy sets for logical inference, nor do I think anyone will. In my admittedly neophyte opinion, 'fuzzy logic' is a misnomer, an oxymoron. (I would be delighted to be proven wrong on that.) ... I came to the fuzzy literature with an open mind (and open wallet), high hopes and keen interest. I am very much disillusioned with 'fuzzy' per se, but I did happen across some extremely interesting things along the way." Dave, thanks for the nice quote! Enthusiastic on the surface, are not many of us suspicious deep down? In some books and journals the word fuzzy is religiously avoided: fuzzy set theory is viewed as a second-hand cheap trick whose aim is nothing else but to devalue good classical theories and open up the way to lazy ignorants and newcomers.
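For readers who have never seen a fuzzy set, here is a minimal sketch (our own illustration, not taken from the book) of the core idea behind the debate: a membership function that assigns degrees between 0 and 1 instead of a crisp true/false; the threshold values are arbitrary assumptions:

    def fuzzy_tall(height_cm):
        """Degree of membership in the fuzzy set 'tall': 0 below 160 cm,
        1 above 190 cm, and a linear ramp in between."""
        if height_cm <= 160:
            return 0.0
        if height_cm >= 190:
            return 1.0
        return (height_cm - 160) / 30.0

    for h in (150, 170, 185, 195):
        print(h, round(fuzzy_tall(h), 2))  # 0.0, 0.33, 0.83, 1.0

Whether such graded membership supports genuine logical inference is exactly the kind of question the quoted discussion is about.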
The book presents, in one place, contributions on statistical models and methods applied to both data science and the Sustainable Development Goals (SDGs). For measuring and controlling SDG data, data-driven measurement of progress needs to be communicated to stakeholders; here the techniques used in data science, especially in big data analytics, play a more important role than traditional data gathering and manipulation techniques. This book fills this space through its twenty contributions. The contributions have been selected from those presented during the 7th International Conference on Data Science and Sustainable Development Goals organized by the Department of Statistics, University of Rajshahi, Bangladesh, and cover topics mainly on SDGs, bioinformatics, public health, medical informatics, environmental statistics, data science and machine learning. The contents of the volume will be useful to policymakers, researchers, government entities, civil society, and nonprofit organizations for monitoring and accelerating the progress of the SDGs.
This book explores the nexus of Sustainability and Information Communication Technologies that are rapidly changing the way we live, learn, and do business. The monumental amount of energy required to power the zettabytes of data traveling across the globe's billions of computers and mobile phones daily cannot be overstated. This ground-breaking reference examines the possibility that our evolving technologies may enable us to mitigate our global energy crisis rather than add to it. By connecting concepts and trends such as smart homes, big data, and the internet of things with their applications to sustainability, the authors suggest that emerging and ubiquitous technologies embedded in our daily lives may rightfully be considered as enabling solutions for our future sustainable development.
To optimally design and manage a directory service, IS architects and managers must understand current state-of-the-art products. Directory Services covers Novell's NDS eDirectory, Microsoft's Active Directory, UNIX directories, and products by NEXOR, MaxWare, Siemens, Critical Path and others. Directory design fundamentals and products are woven into case studies of large enterprise deployments. Cox thoroughly explores replication, security, migration, and legacy system integration and interoperability. Business issues such as how to cost-justify, plan, budget and manage a directory project are also included. The book culminates in a visionary discussion of future trends and emerging directory technologies, including the strategic direction of the top directory products, the impact of wireless technology on directory-enabled applications, and using directories to customize content delivery from the Enterprise Portal.
The design of computer systems to be embedded in critical real-time applications is a complex task. Such systems must not only guarantee to meet hard real-time deadlines imposed by their physical environment, they must guarantee to do so dependably, despite both physical faults (in hardware) and design faults (in hardware or software). A fault-tolerance approach is mandatory for these guarantees to be commensurate with the safety and reliability requirements of many life- and mission-critical applications. A Generic Fault-Tolerant Architecture for Real-Time Dependable Systems explains the motivations and the results of a collaborative project (*), whose objective was to significantly decrease the lifecycle costs of such fault-tolerant systems. The end-user companies participating in this project currently deploy fault-tolerant systems in critical railway, space and nuclear-propulsion applications. However, these are proprietary systems whose architectures have been tailored to meet domain-specific requirements. This has led to very costly, inflexible, and often hardware-intensive solutions that, by the time they are developed, validated and certified for use in the field, can already be out-of-date in terms of their underlying hardware and software technology. The project thus designed a generic fault-tolerant architecture with two dimensions of redundancy and a third multi-level integrity dimension for accommodating software components of different levels of criticality. The architecture is largely based on commercial off-the-shelf (COTS) components and follows a software-implemented approach so as to minimise the need for special hardware. Using an associated development and validation environment, system developers may configure and validate instances of the architecture that can be shown to meet the very diverse requirements of railway, space, nuclear-propulsion and other critical real-time applications. This book describes the rationale of the generic architecture, the design and validation of its communication, scheduling and fault-tolerance components, and the tools that make up its design and validation environment. The book concludes with a description of three prototype systems that have been developed following the proposed approach. (*) Esprit project No. 20716: GUARDS: a Generic Upgradable Architecture for Real-time Dependable Systems.
This book provides an overview of the resources and research projects that are bringing Big Data and High Performance Computing (HPC) on converging tracks. It demystifies Big Data and HPC for the reader by covering the primary resources, middleware, applications, and tools that enable the usage of HPC platforms for Big Data management and processing. Through interesting use-cases from traditional and non-traditional HPC domains, the book highlights the most critical challenges related to Big Data processing and management, and shows ways to mitigate them using HPC resources. Unlike most books on Big Data, it covers a variety of alternatives to Hadoop, and explains the differences between HPC platforms and Hadoop. Written by professionals and researchers in a range of departments and fields, this book is designed for anyone studying Big Data and its future directions. Those studying HPC will also find the content valuable.
Calendar units, such as months and days, clock units, such as hours and seconds, and specialized units, such as business days and academic years, play a major role in a wide range of information system applications. System support for reasoning about these units, called granularities in this book, is important for the efficient design, use, and implementation of such applications. The book deals with several aspects of temporal information and provides a unifying model for granularities. It is intended for computer scientists and engineers who are interested in the formal models and technical development of specific issues. Practitioners can learn about critical aspects that must be taken into account when designing and implementing databases supporting temporal information. Lecturers may find this book useful for an advanced course on databases. Moreover, any graduate student working on time representation and reasoning, either in data or knowledge bases, should definitely read it.
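As a small, hedged illustration of the kind of granularity reasoning the book formalizes (this sketch is ours, not the authors' model), a specialized unit such as the business day can be derived from the finer granularity of days:

    from datetime import date, timedelta

    def business_days_between(start, end):
        """Count business days (Monday-Friday) in the half-open interval [start, end)."""
        count = 0
        day = start
        while day < end:
            if day.weekday() < 5:  # 0..4 correspond to Monday..Friday
                count += 1
            day += timedelta(days=1)
        return count

    # A calendar week starting on a Monday contains five business days.
    print(business_days_between(date(2024, 1, 1), date(2024, 1, 8)))  # -> 5

A formal model of granularities makes such conversions between calendar, clock, and specialized units systematic rather than ad hoc.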
Today's information technology and security networks demand increasingly complex algorithms and cryptographic systems. Individuals implementing security policies for their companies must utilize technical skill and information technology knowledge to implement these security mechanisms. Cryptography & Security Devices: Mechanisms & Applications addresses cryptography from the perspective of the security services and mechanisms available to implement these services, discussing issues such as e-mail security, public-key architecture, virtual private networks, Web services security, wireless security, and the confidentiality and integrity of security services. This book provides scholars and practitioners in the field of information assurance with a working knowledge of fundamental encryption algorithms and systems supported in information technology and secure communication networks.
Data warehouses have captured the attention of practitioners and researchers alike. But the design and optimization of data warehouses remains an art rather than a science. This book presents the first comparative review of the state of the art and best current practice of data warehouses. It covers source and data integration, multidimensional aggregation, query optimization, update propagation, metadata management, quality assessment, and design optimization. Also, based on results of the European Data Warehouse Quality project, it offers a conceptual framework by which the architecture and quality of data warehouse efforts can be assessed and improved using enriched metadata management combined with advanced techniques from databases, business modeling, and artificial intelligence. For researchers and database professionals in academia and industry, the book offers an excellent introduction to the issues of quality and metadata usage in the context of data warehouses.
Neural networks represent a powerful data processing technique that has reached maturity and broad application. When clearly understood and appropriately used, they are a mandatory component in the toolbox of any engineer who wants to make the best use of the available data in order to build models, make predictions, mine data, recognize shapes or signals, etc. Ranging from theoretical foundations to real-life applications, this book is intended to provide engineers and researchers with clear methodologies for taking advantage of neural networks in industrial, financial or banking applications, many instances of which are presented in the book. For the benefit of readers wishing to gain deeper knowledge of the topics, the book features appendices that provide theoretical details for greater insight, and algorithmic details for efficient programming and implementation. The chapters have been written by experts and edited to present a coherent and comprehensive, yet not redundant, practically oriented introduction.
This book constitutes the thoroughly refereed post-conference proceedings of the 11th IFIP WG 6.11 Conference on e-Business, e-Services and e-Society, I3E 2011, held in Kaunas, Lithuania, in October 2011. The 25 revised papers presented were carefully reviewed and selected from numerous submissions. They are organized in the following topical sections: e-government and e-governance, e-services, digital goods and products, e-business process modeling and re-engineering, innovative e-business models and implementation, e-health and e-education, and innovative e-business models.
The explosion of computer use and internet communication has placed new emphasis on the ability to store, retrieve and search for all types of images, both still photo and video images. The success and the future of visual information retrieval depend on the cutting-edge research and applications explored in this book. It combines expertise from both computer vision and database research.
The central purpose of this collection of essays is to make a creative addition to the debates surrounding the cultural heritage domain. In the 21st century the world faces epochal changes which affect every part of society, including the arenas in which cultural heritage is made, held, collected, curated, exhibited, or simply exists. The book is about these changes; about the decentring of culture and cultural heritage away from institutional structures towards the individual; about the questions which the advent of digital technologies is demanding that we ask and answer in relation to how we understand, collect and make available Europe's cultural heritage. Cultural heritage has enormous potential in terms of its contribution to improving the quality of life for people, understanding the past, assisting territorial cohesion, driving economic growth, opening up employment opportunities and supporting wider developments such as improvements in education and in artistic careers. Given that spectrum of possible benefits to society, the range of studies that follow here are intended to be a resource and stimulus to help inform not just professionals in the sector but all those with an interest in cultural heritage.