Strives to be the point of reference for the most important issues in the field of multidimensional databases. This book provides a brief history of the field and distinguishes between what is new in recent research and what is merely a renaming of old concepts. It reviews past papers and discusses current research projects in the hope of encouraging the search for new solutions to the many problems that remain unsolved. In addition, it outlines the remarkable advances in technology and the ever-increasing demands from users in the most diverse application areas, such as finance, medicine, statistics, and business. Many of the most distinguished and well-known researchers have contributed to this book, each writing about their own specific field.
Like other complex systems in the social and natural sciences as well as in engineering, the Internet is hard to understand from a technical point of view. Packet-switched networks defy analytical modeling. The Internet is an outstanding and challenging case because of its fast development, unparalleled heterogeneity, and the inherent lack of measurement and monitoring mechanisms in its core conception. This monograph deals with applications of computational intelligence methods, with an emphasis on fuzzy techniques, to a number of current issues in the measurement, analysis, and control of traffic in the Internet. First, the core building blocks of Internet Science and other related networking aspects are introduced. Then, data mining and control problems are addressed. In the first class, two issues are considered: predictive modeling of traffic load and summarization of traffic flow measurements. The second class, control, includes active queue management schemes for Internet routers as well as window-based end-to-end rate and congestion control. The practical hardware implementation of some of the fuzzy inference systems proposed here is also addressed. While some theoretical developments are described, we favor extensive evaluation of models using real-world data through simulation and experiments.
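The active queue management (AQM) schemes mentioned above build on ideas from classic RED (Random Early Detection). As background only, this is a minimal sketch of RED's drop-probability computation; the fuzzy variants in the monograph replace this piecewise-linear rule with fuzzy inference, and the parameter values here are illustrative assumptions.

```python
def red_drop_probability(avg_queue, min_th=5.0, max_th=15.0, max_p=0.1):
    """Piecewise-linear RED drop probability from the average queue length."""
    if avg_queue < min_th:
        return 0.0                      # below min threshold: never drop
    if avg_queue >= max_th:
        return 1.0                      # above max threshold: always drop
    # linear ramp between the two thresholds, scaled by max_p
    return max_p * (avg_queue - min_th) / (max_th - min_th)

# midway between the thresholds -> half of max_p
print(red_drop_probability(10.0))
```

A fuzzy AQM controller would instead map linguistic descriptions of queue length and its rate of change ("low", "rising fast") to a drop probability through inference rules, which is the kind of design the monograph evaluates against real traffic.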
This book provides fundamental knowledge of the classical matching theory problems. It builds a bridge between matching theory and 5G wireless communication resource allocation problems. The potential and challenges of implementing the semi-distributed matching theory framework in wireless resource allocation are analyzed both theoretically and through implementation examples. Academics, researchers, and engineers interested in efficient distributed wireless resource allocation solutions will find this book an exceptional resource.
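At the heart of matching-theory approaches to resource allocation is the deferred-acceptance (Gale-Shapley) algorithm: users propose to resources in preference order, and each resource retains only its most preferred proposer. A minimal sketch follows; the preference lists and names (`user_prefs`, `channel_prefs`) are hypothetical examples, not taken from the book.

```python
def deferred_acceptance(user_prefs, channel_prefs):
    """Match each user to one channel; channels hold on to their best proposer."""
    # rank[c][u] = position of user u in channel c's preference list
    rank = {c: {u: i for i, u in enumerate(prefs)}
            for c, prefs in channel_prefs.items()}
    free = list(user_prefs)                   # users not yet matched
    next_choice = {u: 0 for u in user_prefs}  # index into each user's list
    match = {}                                # channel -> user
    while free:
        u = free.pop()
        c = user_prefs[u][next_choice[u]]     # u's next-best channel
        next_choice[u] += 1
        if c not in match:
            match[c] = u                      # channel was free: accept
        elif rank[c][u] < rank[c][match[c]]:
            free.append(match[c])             # channel upgrades to u
            match[c] = u
        else:
            free.append(u)                    # rejected: u proposes again later
    return {u: c for c, u in match.items()}

users = {"u1": ["ch1", "ch2"], "u2": ["ch1", "ch2"]}
channels = {"ch1": ["u2", "u1"], "ch2": ["u1", "u2"]}
# both users prefer ch1, but ch1 prefers u2, so u1 settles for ch2;
# the result is stable: no user-channel pair would both rather switch
print(deferred_acceptance(users, channels))
```

The appeal for wireless systems is that this runs with only local preference information, which is why the book frames it as a semi-distributed alternative to centralized optimization.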
Although blockchain is an emerging technology applied mainly in the financial and logistics domains, it has great potential to be applied in other industries to generate a wider impact. With the global need for social distancing, blockchain has great opportunities for adoption in digital health, including health insurance, the pharmaceutical supply chain, remote diagnosis, and more. Revolutionizing Digital Healthcare Through Blockchain Technology Applications explores the current applications and future opportunities of blockchain technology in digital health and provides a reference for the future development of blockchain in digital health. Covering key topics such as privacy, the blockchain economy, and cryptocurrency, this reference work is ideal for computer scientists, healthcare professionals, policymakers, researchers, scholars, academicians, practitioners, instructors, and students.
This book describes the latest methods and tools for the management of information within facility management services and explains how it is possible to collect, organize, and use information over the life cycle of a building in order to optimize the integration of these services and improve the efficiency of processes. The coverage includes presentation and analysis of basic concepts, procedures, and international standards in the development and management of real estate inventories, building registries, and information systems for facility management. Models of strategic management are discussed and the functions and roles of the strategic management center, explained. Detailed attention is also devoted to building information modeling (BIM) for facility management and potential interactions between information systems and BIM applications. Criteria for evaluating information system performance are identified, and guidelines of value in developing technical specifications for facility management services are proposed. The book will aid clients and facility managers in ensuring that information bases are effectively compiled and used in order to enhance building maintenance and facility management.
This book addresses the issue of smart and sustainable development in the Mediterranean (MED) region, a distinct part of the world, full of challenges and risks but also opportunities. Above all, the book focuses on smartening up small and medium-sized cities and insular communities, taking into account their geographical peculiarities, the pattern of MED urban settlements and the abundance of island complexes in the MED Basin. Taking for granted that sustainability in the MED is the overarching policy goal that needs to be served, the book explores different aspects of smartness in support of this goal's achievement. In this respect, evidence from concrete smart developments adopted by forerunners in the MED region is collected and analyzed, coupled with experiences gathered from successful non-MED examples of smart efforts in European countries. More specifically, current research and empirical results from MED urban environments are discussed, as well as findings from or concerning other parts of the world which are of relevance to the MED region. The book's primary goal is to enable policymakers, planners and decision-making bodies to recognize the challenges and options available, and to make more informed policy decisions towards smart, sustainable, inclusive and resilient urban and regional futures in the MED.
This book brings all of the elements of database design together in a single volume, saving the reader the time and expense of making multiple purchases. It consolidates both introductory and advanced topics, thereby covering the gamut of database design methodology: from ER and UML techniques, to conceptual data modeling and table transformation, to storing XML and querying moving objects databases.
This book examines recent developments in semantic systems that can respond to situations, environments, and events. The contributors to this book cover how to design, implement, and utilize disruptive technologies. The editor discusses the two fundamental sets of disruptive technologies: the development of semantic technologies, including description logics, ontologies, and agent frameworks; and the development of semantic information rendering and graphical forms of displays of high-density, time-sensitive data to improve situational awareness. Beyond practical illustrations of emerging technologies, the editor proposes to utilize an incremental development method called knowledge scaffolding, a proven educational-psychology technique for learning a subject matter thoroughly. The goal of this book is to help readers learn about managing information resources from the ground up, reinforcing the learning as they read on.
Forecasting is a crucial function for companies in the fashion industry, but for many real-life forecasting applications in this industry, the data patterns are notoriously volatile and it is very difficult, if not impossible, to analytically learn about the underlying patterns. As a result, many traditional methods (such as pure statistical models) will fail to make a sound prediction. Over the past decade, advances in artificial intelligence and computing technologies have provided an alternative way of generating precise and accurate forecasting results for fashion businesses. Despite being an important and timely topic, there is currently an absence of a comprehensive reference source that provides up-to-date theoretical and applied research findings on the subject of intelligent fashion forecasting systems. This three-part handbook fulfills this need and covers materials ranging from introductory studies and technical reviews, to theoretical modeling research, to intelligent fashion forecasting applications and analysis. This book is suitable for academic researchers, graduate students, senior undergraduate students and practitioners who are interested in the latest research on fashion forecasting.
The three-volume set IFIP AICT 368-370 constitutes the refereed post-conference proceedings of the 5th IFIP TC 5, SIG 5.1 International Conference on Computer and Computing Technologies in Agriculture, CCTA 2011, held in Beijing, China, in October 2011. The 189 revised papers presented were carefully selected from numerous submissions. They cover a wide range of interesting theories and applications of information technology in agriculture, including simulation models and decision-support systems for agricultural production, agricultural product quality testing, traceability and e-commerce technology, the application of information and communication technology in agriculture, and universal information service technology and service systems development in rural areas. The 59 papers included in the third volume focus on simulation, optimization, monitoring, and control technology.
Based on more than 10 years of teaching experience, Blanken and his coeditors have assembled all the topics that should be covered in advanced undergraduate or graduate courses on multimedia retrieval and multimedia databases. The individual chapters of this textbook explain the general architecture of multimedia information retrieval systems and cover various metadata languages such as Dublin Core, RDF, or MPEG. The authors emphasize high-level features and show how these are used in mathematical models to support the retrieval process. For each chapter, there are suggestions for further reading, and additional exercises and teaching material are available online.
I3E 2010 marked the 10th anniversary of the IFIP Conference on e-Business, e-Services, and e-Society, continuing a tradition that began in 1998 with the International Conference on Trends in Electronic Commerce, TrEC 1998, in Hamburg (Germany). Three years later the inaugural I3E 2001 conference was held in Zurich (Switzerland). Since then I3E has made its journey through the world: 2002 Lisbon (Portugal), 2003 Sao Paulo (Brazil), 2004 Toulouse (France), 2005 Poznan (Poland), 2006 Turku (Finland), 2007 Wuhan (China), 2008 Tokyo (Japan), and 2009 Nancy (France). I3E 2010 took place in Buenos Aires (Argentina), November 3-5, 2010. Known as "The Pearl" of South America, Buenos Aires is a cosmopolitan, colorful, and vibrant city, surprising its visitors with a vast variety of cultural and artistic performances, European architecture, and the passion for tango, coffee places, and football discussions. A cultural reference in Latin America, the city hosts 140 museums, 300 theaters, and 27 public libraries, including the National Library. It is also the main educational center in Argentina and home to renowned universities, including the University of Buenos Aires, created in 1821. Besides location, the timing of I3E 2010 is also significant: it coincided with the 200th anniversary celebration of the first local government in Argentina.
Uncertain data is inherent in many important applications, such as environmental surveillance, market analysis, and quantitative economics research. Due to the importance of those applications and rapidly increasing amounts of uncertain data collected and accumulated, analyzing large collections of uncertain data has become an important task. Ranking queries (also known as top-k queries) are often natural and useful in analyzing uncertain data. "Ranking Queries on Uncertain Data" discusses the motivations/applications, challenging problems, the fundamental principles, and the evaluation algorithms of ranking queries on uncertain data. Theoretical and algorithmic results of ranking queries on uncertain data are presented in the last section of this book. "Ranking Queries on Uncertain Data" is the first book to systematically discuss the problem of ranking queries on uncertain data.
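As a toy illustration of one simple ranking semantics on uncertain data, tuples carrying an existence probability can be ordered by expected score (score weighted by probability). The book covers far richer semantics for top-k queries on uncertain data; this sketch, its function name, and its sample sensor data are illustrative assumptions only.

```python
def expected_score_topk(tuples, k):
    """tuples: iterable of (id, score, probability). Return the top-k ids
    ranked by expected score = score * existence probability."""
    ranked = sorted(tuples, key=lambda t: t[1] * t[2], reverse=True)
    return [t[0] for t in ranked[:k]]

# environmental-surveillance-style readings: (id, score, probability)
readings = [
    ("sensor_a", 90.0, 0.4),   # high score, but uncertain (exp. 36.0)
    ("sensor_b", 70.0, 0.9),   # lower score, reliable    (exp. 63.0)
    ("sensor_c", 50.0, 1.0),   #                          (exp. 50.0)
]
print(expected_score_topk(readings, 2))  # -> ['sensor_b', 'sensor_c']
```

Note how the highest-scoring tuple drops out of the top-2 once its low probability is taken into account; handling this interplay between score and probability rigorously is exactly what the book's evaluation algorithms address.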
This book provides comprehensive coverage of the fundamentals of database management systems. It contains a detailed description of Relational Database Management System concepts, along with a variety of solved examples and review questions with solutions. This book is for those who require a better understanding of relational data modeling: its purpose, its nature, and the standards used in creating a relational data model.
Trustworthy Ubiquitous Computing covers aspects of trust in ubiquitous computing environments: context, privacy, reliability, usability, and user experience related to the emerging computing paradigm of ubiquitous computing, which includes pervasive, grid, and peer-to-peer computing as well as sensor networks, with the goal of providing secure computing and communication services anytime and anywhere. Mark Weiser presented his vision of disappearing and ubiquitous computing more than 15 years ago. The picture of the computer introduced into our environment was a major innovation and the starting point for various areas of research. To fully explore the idea of ubiquitous computing, several houses were built, equipped with technology, and used as laboratories in order to find and test appliances that are useful and could be made available in our everyday lives. In recent years industry has picked up the idea of integrating ubiquitous computing, and products such as remote controls for the house have been developed and brought to market. In spite of many applications and projects in the area of ubiquitous and pervasive computing, widespread success is still far away. One of the main reasons is the lack of acceptance of, and confidence in, this technology. Although researchers and industry are working in all of these areas, a forum is needed to elaborate the security, reliability, and privacy issues that result in trustworthy interfaces and computing environments for people interacting within these ubiquitous environments. The user experience factor of trust thus becomes a crucial issue for the success of a UbiComp application. The goal of this book is to provide a state of the art of Trustworthy Ubiquitous Computing, addressing recent research results and presenting and discussing the ideas, theories, technologies, systems, tools, applications, and experiences on all theoretical and practical issues.
In practice, the design and architecture of a cloud varies among cloud providers. We present a generic evaluation framework for the performance, availability and reliability characteristics of various cloud platforms. We describe a generic benchmark architecture for cloud databases, specifically NoSQL database as a service, which measures replication delay and monetary cost. Service Level Agreements (SLAs) represent the contract which captures the agreed-upon guarantees between a service provider and its customers. The specifications of existing SLAs for cloud services are not designed to flexibly handle even relatively straightforward performance and technical requirements of consumer applications. We present a novel approach for SLA-based management of cloud-hosted databases from the consumer perspective, together with an end-to-end framework for consumer-centric SLA management of cloud-hosted databases. The framework facilitates adaptive and dynamic provisioning of the database tier of software applications based on application-defined policies for satisfying their own SLA performance requirements, avoiding the cost of any SLA violation and controlling the monetary cost of the allocated computing resources. In this framework, the SLAs of the consumer applications are declaratively defined in terms of goals which are subject to a number of constraints that are specific to the application requirements. The framework continuously monitors the application-defined SLA and automatically triggers the execution of necessary corrective actions (scaling the database tier out or in) when required. The framework is database platform-agnostic, uses virtualization-based database replication mechanisms, and requires zero source code changes in the cloud-hosted software applications.
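The monitoring-and-correction loop described above can be sketched as a simple decision rule: compare an observed metric against an application-defined SLA goal and trigger a scaling action on the database tier. The function name, thresholds, and decision rule below are illustrative assumptions, not the framework's actual API.

```python
def sla_action(observed_latency_ms, goal_ms, replicas,
               min_replicas=1, max_replicas=8):
    """Return (new_replica_count, action) for the database tier."""
    if observed_latency_ms > goal_ms and replicas < max_replicas:
        return replicas + 1, "scale_out"    # SLA violated: add a replica
    if observed_latency_ms < 0.5 * goal_ms and replicas > min_replicas:
        return replicas - 1, "scale_in"     # ample headroom: cut monetary cost
    return replicas, "hold"

print(sla_action(250, 200, 2))  # latency over goal -> (3, 'scale_out')
print(sla_action(60, 200, 3))   # well under goal   -> (2, 'scale_in')
```

A real controller of this kind would run continuously against monitored metrics and would typically add hysteresis or cool-down periods so that a single noisy measurement does not trigger oscillating scale actions.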
The creation and consumption of content, especially visual content, is ingrained into our modern world. This book contains a collection of texts centered on the evaluation of image retrieval systems. To enable reproducible evaluation we must create standardized benchmarks and evaluation methodologies. The individual chapters in this book highlight major issues and challenges in evaluating image retrieval systems and describe various initiatives that provide researchers with the necessary evaluation resources. In particular they describe activities within ImageCLEF, an initiative to evaluate cross-language image retrieval systems which has been running as part of the Cross Language Evaluation Forum (CLEF) since 2003. To this end, the editors collected contributions from a range of people: those involved directly with ImageCLEF, such as the organizers of specific image retrieval or annotation tasks; participants who have developed techniques to tackle the challenges set forth by the organizers; and people from industry and academia involved with image retrieval and evaluation generally. Mostly written for researchers in academia and industry, the book stresses the importance of combining textual and visual information - a multimodal approach - for effective retrieval. It provides the reader with clear ideas about information retrieval and its evaluation in contexts and domains such as healthcare, robot vision, press photography, and the Web.
New Internet developments pose greater and greater privacy dilemmas. In the Information Society, the need for individuals to protect their autonomy and retain control over their personal information is becoming more and more important. Today, information and communication technologies (and the people responsible for making decisions about them, designing them, and implementing them) scarcely consider those requirements, thereby potentially putting individuals' privacy at risk. The increasingly collaborative character of the Internet enables anyone to compose services and contribute and distribute information. It may become hard for individuals to manage and control information that concerns them, and particularly to eliminate outdated or unwanted personal information, thus leaving personal histories exposed permanently. These activities raise substantial new challenges for personal privacy at the technical, social, ethical, regulatory, and legal levels: How can privacy in emerging Internet applications such as collaborative scenarios and virtual communities be protected? What frameworks and technical tools could be utilized to maintain life-long privacy? During September 3-10, 2009, the IFIP (International Federation for Information Processing) working groups 9.2 (Social Accountability), 9.6/11.7 (IT Misuse and the Law), 11.4 (Network Security) and 11.6 (Identity Management) held their 5th International Summer School in cooperation with the EU FP7 integrated project PrimeLife in Sophia Antipolis and Nice, France. The focus of the event was on privacy and identity management for emerging Internet applications throughout a person's lifetime. The aim of the IFIP Summer Schools has been to encourage young academic and industry entrants to share their own ideas about privacy and identity management and to build up collegial relationships with others. As such, the Summer Schools have been introducing participants to the social implications of information technology through the process of informed discussion.
The recent explosive growth of biological data has led to a rapid increase in the number of molecular biology databases. Because these databases are held in many different locations and often use varying interfaces and non-standard data formats, integrating and comparing data across them can be difficult and time-consuming. This book provides an overview of the key tools currently available for large-scale comparisons of gene sequences and annotations, focusing on the databases and tools from the University of California, Santa Cruz (UCSC), Ensembl, and the National Center for Biotechnology Information (NCBI). Written specifically for biology and bioinformatics students and researchers, it aims to give an appreciation for the methods by which the browsers and their databases are constructed, enabling readers to determine which tool is the most appropriate for their requirements. Each chapter contains a summary and exercises to aid understanding and promote effective use of these important tools.
Information infrastructures are integrated solutions based on the fusion of information and communication technologies. They are characterized by the large amount of data that must be managed accordingly. An information infrastructure requires an efficient and effective information retrieval system to provide access to the items stored in the infrastructure. Terminological Ontologies: Design, Management and Practical Applications presents the main problems that affect the discovery systems of information infrastructures to manage terminological models, and introduces a combination of research tools and applications in Semantic Web technologies. This book specifically analyzes the need to create, relate, and integrate the models required for an infrastructure by elaborating on the problem of accessing these models in an efficient manner via interoperable services and components. Terminological Ontologies: Design, Management and Practical Applications is geared toward information management systems and semantic web professionals working as project managers, application developers, government workers and more. Advanced undergraduate and graduate level students, professors and researchers focusing on computer science will also find this book valuable as a secondary text or reference book.
With the proliferation of social media and online communities in the networked world, a large gamut of data has been collected and stored in databases. The rate at which such data is stored is growing phenomenally, pushing the classical methods of data analysis to their limits. This book presents an integrated framework of recent empirical and theoretical research on social network analysis based on a wide range of techniques from disciplines such as data mining, the social sciences, mathematics, statistics, physics, network science, and machine learning, together with visualization techniques and security. The book illustrates the potential of multi-disciplinary techniques in various real-life problems and aims to motivate researchers in social network analysis to design more effective tools by integrating swarm intelligence and data mining.
This volume presents a collection of carefully selected contributions in the area of social media analysis. Each chapter opens up a number of research directions that have the potential to be taken on further in this rapidly growing area of research. The chapters are diverse enough to serve a number of directions of research with Sentiment Analysis as the dominant topic in the book. The authors have provided a broad range of research achievements from multimodal sentiment identification to emotion detection in a Chinese microblogging website. The book will be useful to research students, academics and practitioners in the area of social media analysis.
Readers will progress from an understanding of what the Internet is now towards an understanding of the motivations and techniques that will drive its future.