This book explores the nexus of Sustainability and Information Communication Technologies that are rapidly changing the way we live, learn, and do business. The monumental amount of energy required to power the zettabytes of data traveling daily across the globe's billions of computers and mobile phones cannot be overstated. This ground-breaking reference examines the possibility that our evolving technologies may enable us to mitigate our global energy crisis, rather than adding to it. By connecting concepts and trends such as smart homes, big data, and the internet of things with their applications to sustainability, the authors suggest that emerging and ubiquitous technologies embedded in our daily lives may rightfully be considered as enabling solutions for our future sustainable development.
Physical processes, involving atomic phenomena, allow more and more precise time and frequency measurements. This progress is not possible without convenient processing of the respective raw data. This book describes the data processing at various levels: design of the time and frequency references, characterization of the time and frequency references, and applications involving precise time and/or frequency references.
To optimally design and manage a directory service, IS architects and managers must understand current state-of-the-art products. Directory Services covers Novell's NDS eDirectory, Microsoft's Active Directory, UNIX directories and products by NEXOR, MaxWare, Siemens, Critical Path and others. Directory design fundamentals and products are woven into case studies of large enterprise deployments. Cox thoroughly explores replication, security, migration and legacy system integration and interoperability. Business issues such as how to cost-justify, plan, budget and manage a directory project are also included. The book culminates in a visionary discussion of future trends and emerging directory technologies, including the strategic direction of the top directory products, the impact of wireless technology on directory-enabled applications and using directories to customize content delivery from the Enterprise Portal.
The design of computer systems to be embedded in critical real-time applications is a complex task. Such systems must not only guarantee to meet hard real-time deadlines imposed by their physical environment, they must guarantee to do so dependably, despite both physical faults (in hardware) and design faults (in hardware or software). A fault-tolerance approach is mandatory for these guarantees to be commensurate with the safety and reliability requirements of many life- and mission-critical applications. A Generic Fault-Tolerant Architecture for Real-Time Dependable Systems explains the motivations and the results of a collaborative project (*), whose objective was to significantly decrease the lifecycle costs of such fault-tolerant systems. The end-user companies participating in this project currently deploy fault-tolerant systems in critical railway, space and nuclear-propulsion applications. However, these are proprietary systems whose architectures have been tailored to meet domain-specific requirements. This has led to very costly, inflexible, and often hardware-intensive solutions that, by the time they are developed, validated and certified for use in the field, can already be out-of-date in terms of their underlying hardware and software technology. The project thus designed a generic fault-tolerant architecture with two dimensions of redundancy and a third multi-level integrity dimension for accommodating software components of different levels of criticality. The architecture is largely based on commercial off-the-shelf (COTS) components and follows a software-implemented approach so as to minimise the need for special hardware. Using an associated development and validation environment, system developers may configure and validate instances of the architecture that can be shown to meet the very diverse requirements of railway, space, nuclear-propulsion and other critical real-time applications. This book describes the rationale of the generic architecture, the design and validation of its communication, scheduling and fault-tolerance components, and the tools that make up its design and validation environment. The book concludes with a description of three prototype systems that have been developed following the proposed approach. (*) Esprit project No. 20716: GUARDS: a Generic Upgradable Architecture for Real-time Dependable Systems.
This book provides an overview of the resources and research projects that are bringing Big Data and High Performance Computing (HPC) on converging tracks. It demystifies Big Data and HPC for the reader by covering the primary resources, middleware, applications, and tools that enable the usage of HPC platforms for Big Data management and processing. Through interesting use-cases from traditional and non-traditional HPC domains, the book highlights the most critical challenges related to Big Data processing and management, and shows ways to mitigate them using HPC resources. Unlike most books on Big Data, it covers a variety of alternatives to Hadoop, and explains the differences between HPC platforms and Hadoop. Written by professionals and researchers in a range of departments and fields, this book is designed for anyone studying Big Data and its future directions. Those studying HPC will also find the content valuable.
Calendar units, such as months and days, clock units, such as hours and seconds, and specialized units, such as business days and academic years, play a major role in a wide range of information system applications. System support for reasoning about these units, called granularities in this book, is important for the efficient design, use, and implementation of such applications. The book deals with several aspects of temporal information and provides a unifying model for granularities. It is intended for computer scientists and engineers who are interested in the formal models and technical development of specific issues. Practitioners can learn about critical aspects that must be taken into account when designing and implementing databases supporting temporal information. Lecturers may find this book useful for an advanced course on databases. Moreover, any graduate student working on time representation and reasoning, either in data or knowledge bases, should definitely read it.
Confidently shepherd your organization's implementation of Microsoft Dynamics 365 to a successful conclusion. In Mastering Microsoft Dynamics 365 Implementations, accomplished executive, project manager, and author Eric Newell delivers a holistic, step-by-step reference to implementing Microsoft's cloud-based ERP and CRM business applications. You'll find the detailed and concrete instructions you need to take your implementation project all the way to the finish line, on time and on budget. You'll learn: the precise steps to take, in the correct order, to bring your Dynamics 365 implementation to life; what to do before you begin the project, including identifying stakeholders and building your business case; how to deal with change management throughout the lifecycle of your project; and how to manage conference room pilots (CRPs) and what to expect during the sessions. Perfect for CIOs, technology VPs, CFOs, operations leaders, application directors, business analysts, ERP/CRM specialists, and project managers, Mastering Microsoft Dynamics 365 Implementations is an indispensable and practical reference for guiding your real-world Dynamics 365 implementation from planning to completion.
This book constitutes the thoroughly refereed post-conference proceedings of the 11th IFIP WG 6.11 Conference on e-Business, e-Services and e-Society, I3E 2011, held in Kaunas, Lithuania, in October 2011. The 25 revised papers presented were carefully reviewed and selected from numerous submissions. They are organized in the following topical sections: e-government and e-governance, e-services, digital goods and products, e-business process modeling and re-engineering, innovative e-business models and implementation, and e-health and e-education.
The explosion of computer use and internet communication has placed new emphasis on the ability to store, retrieve and search for all types of images, both still photos and video. The success and future of visual information retrieval depend on the cutting-edge research and applications explored in this book, which combines expertise from both computer vision and database research.
The central purpose of this collection of essays is to make a creative addition to the debates surrounding the cultural heritage domain. In the 21st century the world faces epochal changes which affect every part of society, including the arenas in which cultural heritage is made, held, collected, curated, exhibited, or simply exists. The book is about these changes; about the decentring of culture and cultural heritage away from institutional structures towards the individual; about the questions which the advent of digital technologies is demanding that we ask and answer in relation to how we understand, collect and make available Europe's cultural heritage. Cultural heritage has enormous potential in terms of its contribution to improving the quality of life for people, understanding the past, assisting territorial cohesion, driving economic growth, opening up employment opportunities and supporting wider developments such as improvements in education and in artistic careers. Given that spectrum of possible benefits to society, the range of studies that follow here are intended to be a resource and stimulus to help inform not just professionals in the sector but all those with an interest in cultural heritage.
Explains processes and scenarios (process chains) for planning with SAP characteristics, using the latest releases of SAP R/3 and APO (Advanced Planning & Optimization software). The levels of scenario, process and function are explained from the business case down to the implementation level, and the relations between these levels are consistently pointed out throughout the book. Many illustrations help the reader understand the interdependencies between scenario, process and function. The book aims to help avoid costly dead ends and to secure a smooth implementation and management of supply chains.
This book inclusively and systematically presents the fundamental methods, models and techniques of practical application of grey data analysis, bringing together the authors' many years of theoretical exploration, real-life application, and teaching. It also reflects the majority of recent theoretical and applied advances in the theory achieved by scholars from across the world, providing readers a vivid overall picture of this new theory and its pioneering research activities. The book includes 12 chapters, covering the introduction to grey systems, a novel framework of grey system theory, grey numbers and their operations, sequence operators and grey data mining, grey incidence analysis models, grey clustering evaluation models, series of GM models, combined grey models, techniques for grey systems forecasting, grey models for decision-making, techniques for grey control, etc. It also includes a software package that allows practitioners to conveniently and practically employ the theory and methods presented in this book. All methods and models presented here were chosen for their practical applicability and have been widely employed in various research works. I still remember 1983, when I first participated in a course on Grey System Theory. The mimeographed teaching materials had a blue cover and were presented as a book. It was like finding a treasure: This fascinating book really inspired me as a young intellectual going through a period of confusion and lack of academic direction. It shone with pearls of wisdom and offered a beacon in the mist for a man trying to find his way in academic research. This book became the guiding light in my life journey, inspiring me to forge an indissoluble bond with Grey System Theory. --Sifeng Liu
Cultural forces govern a synergistic relationship among information institutions that shapes their roles collectively and individually. Cultural synergy is the combination of perception- and behavior-shaping knowledge within, between, and among groups. Our hyperlinked era of the Semantic Web makes information-sharing among institutions critically important for scholarship as well as for the advancement of humankind. Information institutions are those that have, or share in, the mission to preserve, conserve, and disseminate information objects and their informative content. A central idea is the notion of social epistemology: information institutions arise culturally from the social forces of the cultures they inhabit, and their purpose is to disseminate that culture. All information institutions are alike in critical ways. Their intersecting lines of cultural mission are trajectories for synergy, allowing us to perceive the universe of information institutions as interconnected, evolving, and moving forward in distinct ways to improve the condition of humankind through the building up of its knowledge base and its information-sharing processes. This book is an exploration of the cultural synergy that can be realized by seeing commonalities among information institutions (sometimes also called cultural heritage institutions): museums, libraries, and archives. The book addresses the origins of cultural information institutions, the history of the professions that run them, and the social imperative of information organization as a catalyst for semantic synergy.
This edited book first consolidates the results of the EU-funded EDISON project (Education for Data Intensive Science to Open New science frontiers), which developed training material and information to assist educators, trainers, employers, and research infrastructure managers in identifying, recruiting and inspiring the data science professionals of the future. It then deepens the presentation of the information and knowledge gained to allow for easier assimilation by the reader. The contributed chapters are presented in sequence, each chapter picking up from the end point of the previous one. After the initial book and project overview, the chapters present the relevant data science competencies and body of knowledge, the model curriculum required to teach the required foundations, profiles of professionals in this domain, and use cases and applications. The text is supported with appendices on related process models. The book can be used to develop new courses in data science, evaluate existing modules and courses, draft job descriptions, and plan and design efficient data-intensive research teams across scientific disciplines.
This book gathers visionary ideas from leading academics and scientists to predict the future of wireless communication and enabling technologies in 2050 and beyond. The content combines a wealth of illustrations, tables, business models, and novel approaches to the evolution of wireless communication. The book also provides glimpses into the future of emerging technologies, end-to-end systems, and entrepreneurial and business models, broadening readers' understanding of potential future advances in the field and their influence on society at large.
Cyberspace security is a critical subject of our times. On the one hand, the development of the Internet, mobile communications, distributed computing, computer software and databases storing essential enterprise information has made it easier than ever to conduct business and personal communication between individuals. On the other hand, it has created many opportunities for abuse, fraud and expensive damage. This book is a selection of the best papers presented at the NATO Advanced Research Workshop on Cyberspace Security and Defense. The individual contributions are advanced and suitable for senior and graduate students, researchers and technologists who wish to get a feeling for the state of the art in several sub-disciplines of cyberspace security. Several papers provide a broad-brush description of national security issues and brief summaries of the state of the technology. These papers can be read and appreciated by technically enlightened managers and executives who want to understand security issues and approaches to technical solutions. The important question of our times is not "Should we do something to enhance the security of our digital assets?" but "How do we do it?"
"The Berkeley DB Book" is a practical guide to the intricacies of the Berkeley DB. This book covers in-depth the complex design issues that are mostly only touched on in terse footnotes within the dense Berkeley DB reference manual. It explains the technology at a higher level and also covers the internals, providing generous code and design examples. In this book, you will get to see a developer's perspective on intriguing design issues in Berkeley DB-based applications, and you will be able to choose design options for specific conditions. Also included is a special look at fault tolerance and high-availability frameworks. Berkeley DB is becoming the database of choice for large-scale applications like search engines and high-traffic web sites.
Social media is now ubiquitous on the internet, generating both new possibilities and new challenges in information analysis and retrieval. This comprehensive text/reference examines in depth the synergy between multimedia content analysis, personalization, and next-generation networking. The book demonstrates how this integration can result in robust, personalized services that provide users with an improved multimedia-centric quality of experience. Each chapter offers a practical step-by-step walkthrough for a variety of concepts, components and technologies relating to the development of applications and services. Topics and features: provides contributions from an international and interdisciplinary selection of experts in their fields; introduces the fundamentals of social media retrieval, presenting the most important areas of research in this domain; examines the important topic of multimedia tagging in social environments, including geo-tagging; discusses issues of personalization and privacy in social media; reviews advances in encoding, compression and network architectures for the exchange of social media information; describes a range of applications related to social media. Researchers and students interested in social media retrieval will find this book a valuable resource, covering a broad overview of state-of-the-art research and emerging trends in this area. The text will also be of use to practicing engineers involved in envisioning and building innovative social media applications and services.
This book presents a detailed review of high-performance computing infrastructures for next-generation big data and fast data analytics. Features: includes case studies and learning activities throughout the book and self-study exercises in every chapter; presents detailed case studies on social media analytics for intelligent businesses and on big data analytics (BDA) in the healthcare sector; describes the network infrastructure requirements for effective transfer of big data, and the storage infrastructure requirements of applications which generate big data; examines real-time analytics solutions; introduces in-database processing and in-memory analytics techniques for data mining; discusses the use of mainframes for handling real-time big data and the latest types of data management systems for BDA; provides information on the use of cluster, grid and cloud computing systems for BDA; reviews the peer-to-peer techniques and tools and the common information visualization techniques, used in BDA.
We are already in the era of big data, and it is gradually changing people's lifestyles. It is therefore necessary to explore the development path of big data and to balance the relationship between technology, policy, and the market, so that big data can better serve human society. This comprehensive book introduces what big data is, big data processing systems, big data management technologies, and big data analysis methods in easy-to-understand language. It explains the specific applications of big data in smart government affairs, economic development, and the improvement of people's livelihood and welfare. The reference text also looks at the future development of big data.
This monograph presents a collection of major developments leading toward the implementation of white space technology - an emerging wireless standard for using wireless spectrum in locations where it is unused by licensed users. Some of the key research areas in the field are covered. These include emerging standards, technical insights from early pilots and simulations, software defined radio platforms, geo-location spectrum databases and current white space spectrum usage in India and South Africa.
As computer power grows and data collection technologies advance, a plethora of data is generated in almost every field where computers are used. Computer-generated data should be analyzed by computers; without the aid of computing technologies, it is certain that huge amounts of collected data will never be examined, let alone used to our advantage. Even with today's advanced computer technologies (e.g., machine learning and data mining systems), discovering knowledge from data can still be fiendishly hard due to the characteristics of computer-generated data. In its simplest form, raw data are represented as feature-values. The size of a dataset can be measured in two dimensions: the number of features (N) and the number of instances (P). Both N and P can be enormously large. This enormity may cause serious problems for many data mining systems. Feature selection is one of the long-existing methods that deal with these problems. Its objective is to select a minimal subset of features according to some reasonable criteria so that the original task can be achieved equally well, if not better. By choosing a minimal subset of features, irrelevant and redundant features are removed according to the criterion. When N is reduced, the data space shrinks and, in a sense, the dataset is now a better representative of the whole data population. If necessary, the reduction of N can also give rise to a reduction of P by eliminating duplicates.
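To make the N-by-P picture above concrete, here is a minimal, hypothetical Python sketch of filter-style feature selection (not taken from the book): each of the N features is scored by its absolute correlation with the class label, and only the top-k are kept, shrinking N as described. The function name and toy data are illustrative assumptions.

```python
# A minimal sketch of filter-style feature selection (illustrative, not from the book):
# score each feature by absolute Pearson correlation with the label and keep the top k.
import numpy as np

def select_top_k_features(X, y, k):
    """Return the indices of the k features most correlated with y.

    X : (P, N) array of P instances with N features
    y : (P,)  array of numeric class labels
    """
    Xc = X - X.mean(axis=0)                      # centre each feature column
    yc = y - y.mean()                            # centre the labels
    denom = np.linalg.norm(Xc, axis=0) * np.linalg.norm(yc) + 1e-12
    scores = np.abs(Xc.T @ yc) / denom           # |correlation| per feature
    return np.argsort(scores)[::-1][:k]          # indices of the k best features

# toy usage: 100 instances, 5 features; only feature 0 carries the signal
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
y = (X[:, 0] > 0).astype(float)
print(select_top_k_features(X, y, k=2))          # feature 0 should rank first
```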
Clustering is an important technique for discovering relatively dense sub-regions or sub-spaces of a multi-dimensional data distribution. Clustering has been used in information retrieval for many different purposes, such as query expansion, document grouping, document indexing, and visualization of search results. In this book, we address issues of clustering algorithms, evaluation methodologies, applications, and architectures for information retrieval. The first two chapters discuss clustering algorithms. The chapter from Baeza-Yates et al. describes a clustering method for a general metric space, which is a common model of data relevant to information retrieval. The chapter by Guha, Rastogi, and Shim presents a survey as well as a detailed discussion of two clustering algorithms: CURE and ROCK, for numeric data and categorical data respectively. Evaluation methodologies are addressed in the next two chapters. Ertoz et al. demonstrate the use of text retrieval benchmarks, such as TREC, to evaluate clustering algorithms. He et al. provide objective measures of clustering quality in their chapter. Applications of clustering methods to information retrieval are addressed in the next four chapters. Chu et al. and Noel et al. explore feature selection using word stems, phrases, and link associations for document clustering and indexing. Wen et al. and Sung et al. discuss applications of clustering to user queries and data cleansing. Finally, we consider the problem of designing architectures for information retrieval. Crichton, Hughes, and Kelly elaborate on the development of a scientific data system architecture for information retrieval.
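As a toy illustration of document clustering for retrieval, the sketch below groups a few short documents with k-means over TF-IDF vectors using scikit-learn. CURE and ROCK, the algorithms discussed in the book, are not part of scikit-learn, so plain k-means stands in here purely as an assumed example; the documents are made up.

```python
# Toy document clustering for information retrieval (illustrative only):
# vectorize documents with TF-IDF and group them with k-means.
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

docs = [
    "database index query optimisation",
    "query processing and indexing in databases",
    "image retrieval with colour histograms",
    "content-based image and video retrieval",
]

X = TfidfVectorizer().fit_transform(docs)            # sparse TF-IDF document vectors
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(labels)  # the two database documents and the two image documents should pair up
```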
Space support in databases poses new challenges in every part of a database management system, and the capability of spatial support in the physical layer is considered very important. This has led to the design of spatial access methods to enable the effective and efficient management of spatial objects. R-trees have a simplicity of structure and, together with their resemblance to the B-tree, allow developers to incorporate them easily into existing database management systems for the support of spatial query processing. This book provides an extensive survey of the R-tree evolution, studying the applicability of the structure and its variations to efficient query processing, accurate proposed cost models, and implementation issues like concurrency control and parallelism. Written for database researchers, designers and programmers as well as graduate students, this comprehensive monograph will be a welcome addition to the field.
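For a feel of how an R-tree supports spatial query processing, the sketch below uses the third-party Python rtree package (bindings to libspatialindex). It is a generic, assumed illustration of window and nearest-neighbour queries over bounding boxes, not any of the implementations surveyed in the book.

```python
# Minimal R-tree usage sketch via the third-party "rtree" package (illustrative only).
from rtree import index

idx = index.Index()
# insert a few objects by id and bounding box (minx, miny, maxx, maxy)
idx.insert(1, (0.0, 0.0, 1.0, 1.0))
idx.insert(2, (2.0, 2.0, 3.0, 3.0))
idx.insert(3, (0.5, 0.5, 2.5, 2.5))

# window query: which bounding boxes intersect the search rectangle?
print(list(idx.intersection((0.8, 0.8, 1.2, 1.2))))   # expected: ids 1 and 3

# nearest-neighbour query around a point (given as a degenerate box)
print(list(idx.nearest((2.9, 2.9, 2.9, 2.9), 1)))     # expected: id 2
```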