This book presents the proceedings of Workshops and Posters at the 13th International Conference on Spatial Information Theory (COSIT 2017), which is concerned with all aspects of space and spatial environments as experienced, represented and elaborated by humans, other animals and artificial agents. Complementing the main conference proceedings, workshop papers and posters investigate specialized research questions or challenges in spatial information theory and closely related topics, including advances in the conceptualization of specific spatio-temporal domains and diverse applications of spatial and temporal information.
This book describes analytical techniques for optimizing knowledge acquisition, processing, and propagation, especially in the contexts of cyber-infrastructure and big data. Further, it presents easy-to-use analytical models of knowledge-related processes and their applications. The need for such methods stems from the fact that, when we have to decide where to place sensors or which algorithm to use for processing the data, we mostly rely on experts' opinions. As a result, the selected knowledge-related methods are often far from ideal. To make better selections, it is necessary to first create easy-to-use models of knowledge-related processes. This is especially important for big data, where traditional numerical methods are unsuitable. The book offers a valuable guide for everyone interested in big data applications: students looking for an overview of related analytical techniques, practitioners interested in applying optimization techniques, and researchers seeking to improve and expand on these techniques.
This book offers a coherent and comprehensive approach to feature subset selection in the scope of classification problems, explaining the foundations, real application problems and the challenges of feature selection for high-dimensional data. The authors first focus on the analysis and synthesis of feature selection algorithms, presenting a comprehensive review of basic concepts and experimental results of the most well-known algorithms. They then address different real scenarios with high-dimensional data, showing the use of feature selection algorithms in different contexts with different requirements and information: microarray data, intrusion detection, tear film lipid layer classification and cost-based features. The book then delves into the scenario of big dimension, paying attention to important problems under high-dimensional spaces, such as scalability, distributed processing and real-time processing, scenarios that open up new and interesting challenges for researchers. The book is useful for practitioners, researchers and graduate students in the areas of machine learning and data mining.
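To make the idea of feature subset selection concrete, here is a minimal sketch of a filter-style selector: rank each feature by the absolute Pearson correlation of its values with the class label and keep the top k. This toy is illustrative only; it is not an algorithm taken from the book, and all names in it are invented for the example.

```python
# Filter-style feature selection sketch: score each feature independently
# against the labels, then keep the k highest-scoring features.

def pearson(xs, ys):
    """Pearson correlation of two equal-length numeric sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    if vx == 0 or vy == 0:
        return 0.0  # constant feature carries no information
    return cov / (vx ** 0.5 * vy ** 0.5)

def select_features(samples, labels, k):
    """samples: list of feature vectors; labels: 0/1 class labels.
    Returns the indices of the k features most correlated with the labels."""
    n_features = len(samples[0])
    scores = []
    for j in range(n_features):
        column = [row[j] for row in samples]
        scores.append((abs(pearson(column, labels)), j))
    scores.sort(reverse=True)
    return [j for _, j in scores[:k]]

# Feature 0 tracks the label perfectly; feature 1 is noise.
X = [[0, 5], [1, 3], [0, 9], [1, 1]]
y = [0, 1, 0, 1]
print(select_features(X, y, 1))  # → [0]
```

Real high-dimensional settings (microarrays, intrusion detection) use more robust scores such as mutual information, but the ranking-and-truncation structure is the same.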
In practice, the design and architecture of a cloud varies among cloud providers. We present a generic evaluation framework for the performance, availability and reliability characteristics of various cloud platforms. We describe a generic benchmark architecture for cloud databases, specifically NoSQL database as a service, which measures replication delay and monetary cost. A Service Level Agreement (SLA) is the contract that captures the agreed-upon guarantees between a service provider and its customers. The specifications of existing SLAs for cloud services are not designed to flexibly handle even relatively straightforward performance and technical requirements of consumer applications. We present a novel approach for SLA-based management of cloud-hosted databases from the consumer perspective and an end-to-end framework for consumer-centric SLA management of cloud-hosted databases. The framework facilitates adaptive and dynamic provisioning of the database tier of software applications based on application-defined policies, satisfying their own SLA performance requirements, avoiding the cost of any SLA violation and controlling the monetary cost of the allocated computing resources. In this framework, the SLAs of consumer applications are declaratively defined in terms of goals that are subject to a number of constraints specific to the application requirements. The framework continuously monitors the application-defined SLA and automatically triggers the execution of necessary corrective actions (scaling the database tier out or in) when required. The framework is database platform-agnostic, uses virtualization-based database replication mechanisms and requires zero source-code changes to the cloud-hosted software applications.
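The monitor-and-react loop described above can be sketched in a few lines: an application declares an SLA goal (here, maximum replication delay) and a monitor decides whether the database tier should scale out, scale in, or stay put. All names, thresholds and the 0.5 scale-in factor below are illustrative assumptions, not the framework's actual API.

```python
# Hedged sketch of consumer-centric SLA monitoring for a replicated
# database tier: compare an observed metric against a declared goal and
# emit a corrective action.

from dataclasses import dataclass

@dataclass
class Sla:
    max_replication_delay_ms: float   # application-defined goal
    min_replicas: int
    max_replicas: int

def corrective_action(sla: Sla, observed_delay_ms: float, replicas: int) -> str:
    """Return 'scale_out', 'scale_in', or 'none' for the database tier."""
    if observed_delay_ms > sla.max_replication_delay_ms and replicas < sla.max_replicas:
        return "scale_out"   # SLA violated: add a replica
    # Scale in only when comfortably under the goal, to avoid oscillation
    # between scaling decisions on every monitoring tick.
    if observed_delay_ms < 0.5 * sla.max_replication_delay_ms and replicas > sla.min_replicas:
        return "scale_in"    # over-provisioned: reduce monetary cost
    return "none"

sla = Sla(max_replication_delay_ms=200.0, min_replicas=1, max_replicas=5)
print(corrective_action(sla, 350.0, 2))  # → scale_out
```

A production controller would, as the blurb notes, also weigh the monetary cost of each action against the cost of the SLA violation before acting.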
The creation and consumption of content, especially visual content, is ingrained into our modern world. This book contains a collection of texts centered on the evaluation of image retrieval systems. To enable reproducible evaluation we must create standardized benchmarks and evaluation methodologies. The individual chapters in this book highlight major issues and challenges in evaluating image retrieval systems and describe various initiatives that provide researchers with the necessary evaluation resources. In particular they describe activities within ImageCLEF, an initiative to evaluate cross-language image retrieval systems which has been running as part of the Cross Language Evaluation Forum (CLEF) since 2003. To this end, the editors collected contributions from a range of people: those involved directly with ImageCLEF, such as the organizers of specific image retrieval or annotation tasks; participants who have developed techniques to tackle the challenges set forth by the organizers; and people from industry and academia involved with image retrieval and evaluation generally. Mostly written for researchers in academia and industry, the book stresses the importance of combining textual and visual information - a multimodal approach - for effective retrieval. It provides the reader with clear ideas about information retrieval and its evaluation in contexts and domains such as healthcare, robot vision, press photography, and the Web.
New Internet developments pose greater and greater privacy dilemmas. In the Information Society, the need for individuals to protect their autonomy and retain control over their personal information is becoming more and more important. Today, information and communication technologies - and the people responsible for making decisions about them, designing, and implementing them - scarcely consider those requirements, thereby potentially putting individuals' privacy at risk. The increasingly collaborative character of the Internet enables anyone to compose services and contribute and distribute information. It may become hard for individuals to manage and control information that concerns them, and particularly how to eliminate outdated or unwanted personal information, thus leaving personal histories exposed permanently. These activities raise substantial new challenges for personal privacy at the technical, social, ethical, regulatory, and legal levels: How can privacy in emerging Internet applications such as collaborative scenarios and virtual communities be protected? What frameworks and technical tools could be utilized to maintain life-long privacy? During September 3-10, 2009, IFIP (International Federation for Information Processing) working groups 9.2 (Social Accountability), 9.6/11.7 (IT Misuse and the Law), 11.4 (Network Security) and 11.6 (Identity Management) held their 5th International Summer School in cooperation with the EU FP7 integrated project PrimeLife in Sophia Antipolis and Nice, France. The focus of the event was on privacy and identity management for emerging Internet applications throughout a person's lifetime. The aim of the IFIP Summer Schools has been to encourage young academic and industry entrants to share their own ideas about privacy and identity management and to build up collegial relationships with others. As such, the Summer Schools have been introducing participants to the social implications of information technology through the process of informed discussion.
Every day millions of people capture, store, transmit, and manipulate digital data. Unfortunately, free-access digital multimedia communication also provides virtually unprecedented opportunities to pirate copyrighted material. Providing the theoretical background needed to develop and implement advanced techniques and algorithms, Digital Watermarking and Steganography: demonstrates how to develop and implement methods to guarantee the authenticity of digital media; explains the categorization of digital watermarking techniques based on characteristics as well as applications; and presents cutting-edge techniques such as the GA-based breaking algorithm on the frequency-domain steganalytic system. The popularity of digital media continues to soar. The theoretical foundation presented within this valuable reference will facilitate the creation of new techniques and algorithms to combat present and potential threats against information security.
The Semantic Web proposes the mark-up of content on the Web using formal ontologies that structure underlying data for the purpose of comprehensive and transportable machine understanding. "Semantic Web Services: Theory, Tools and Applications" brings contributions from researchers and scientists from both industry and academia, and representatives from different communities, to study, understand, and explore the theory, tools, and applications of the Semantic Web. The book binds computing involving the Semantic Web, ontologies, knowledge management, Web services, and Web processes into one fully comprehensive resource, serving as a platform for the exchange of both practical technologies and far-reaching research.
In this volume, Rudi Studer and his team deliver a self-contained compendium about the exciting field of Semantic Web services, starting with the basic standards and technologies and also including advanced applications in eGovernment and eHealth. The contributions provide both the theoretical background and the practical knowledge necessary to understand the essential ideas and to design new cutting-edge applications.
Electrical energy usage is increasing every year due to population growth and new forms of consumption. As such, it is increasingly imperative to research methods of energy control and safe use. Security Solutions and Applied Cryptography in Smart Grid Communications is a pivotal reference source for the latest research on the development of smart grid technology and best practices of utilization. Featuring extensive coverage across a range of relevant perspectives and topics, such as threat detection, authentication, and intrusion detection, this book is ideally designed for academicians, researchers, engineers and students seeking current research on ways in which to implement smart grid platforms all over the globe.
The recent explosive growth of biological data has led to a rapid increase in the number of molecular biology databases. Because these databases are held in many different locations and often use varying interfaces and non-standard data formats, integrating and comparing data across them can be difficult and time-consuming. This book provides an overview of the key tools currently available for large-scale comparisons of gene sequences and annotations, focusing on the databases and tools from the University of California, Santa Cruz (UCSC), Ensembl, and the National Center for Biotechnology Information (NCBI). Written specifically for biology and bioinformatics students and researchers, it aims to give an appreciation for the methods by which the browsers and their databases are constructed, enabling readers to determine which tool is the most appropriate for their requirements. Each chapter contains a summary and exercises to aid understanding and promote effective use of these important tools.
This book is a tribute to Professor Jacek Zurada, who is best known for his contributions to computational intelligence and knowledge-based neurocomputing. It is dedicated to Professor Jacek Zurada, Full Professor at the Computational Intelligence Laboratory, Department of Electrical and Computer Engineering, J.B. Speed School of Engineering, University of Louisville, Kentucky, USA, as a token of appreciation for his scientific and scholarly achievements, and for his longstanding service to many communities, notably the computational intelligence community, in particular neural networks, machine learning, data analysis and data mining, but also the fuzzy logic and evolutionary computation communities, to name but a few. At the same time, the book recognizes and honors Professor Zurada's dedication and service to many scientific, scholarly and professional societies, especially the IEEE (Institute of Electrical and Electronics Engineers), the world's largest technical professional organization dedicated to advancing science and technology in a broad spectrum of areas and fields. The volume is divided into five major parts, the first of which addresses theoretical, algorithmic and implementation problems related to the intelligent use of data, in the sense of how to derive practically useful information and knowledge from data. In turn, Part 2 is devoted to various aspects of neural networks and connectionist systems. Part 3 deals with essential tools and techniques for intelligent technologies in systems modeling and Part 4 focuses on intelligent technologies in decision-making, optimization and control, while Part 5 explores the applications of intelligent technologies.
Provides readers with the methods, algorithms, and means to perform text mining tasks. This book is devoted to the fundamentals of text mining using Perl, an open-source programming tool that is freely available via the Internet (www.perl.org). It covers mining ideas from several perspectives - statistics, data mining, linguistics, and information retrieval - and provides readers with the means to successfully complete text mining tasks on their own. The book begins with an introduction to regular expressions, a text pattern methodology, and quantitative text summaries, all of which are fundamental tools for analyzing text. It then builds upon this foundation to explore:
- Probability and texts, including the bag-of-words model
- Information retrieval techniques such as the TF-IDF similarity measure
- Concordance lines and corpus linguistics
- Multivariate techniques such as correlation, principal components analysis, and clustering
- Perl modules, German, and permutation tests
Each chapter is devoted to a single key topic, and the author carefully and thoughtfully introduces mathematical concepts as they arise, allowing readers to learn as they go without having to refer to additional books. The inclusion of numerous exercises and worked-out examples further complements the book's student-friendly format. Practical Text Mining with Perl is ideal as a textbook for undergraduate and graduate courses in text mining and as a reference for a variety of professionals who are interested in extracting information from text documents.
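The TF-IDF weighting mentioned above can be shown in a few lines - sketched here in Python rather than the book's Perl for brevity: a term is weighted up by its frequency within a document (TF) and down by how many documents in the collection contain it (IDF). This is the standard textbook formulation, not code from the book.

```python
import math

def tf_idf(docs):
    """docs: list of token lists. Returns one {term: weight} dict per doc."""
    n = len(docs)
    # Document frequency: in how many documents does each term appear?
    df = {}
    for doc in docs:
        for term in set(doc):
            df[term] = df.get(term, 0) + 1
    weights = []
    for doc in docs:
        w = {}
        for term in set(doc):
            tf = doc.count(term) / len(doc)      # term frequency in this doc
            idf = math.log(n / df[term])         # rarity across the collection
            w[term] = tf * idf
        weights.append(w)
    return weights

docs = [["the", "cat", "sat"], ["the", "dog", "ran"], ["the", "cat", "ran"]]
w = tf_idf(docs)
print(w[0]["the"])  # → 0.0, since "the" appears in every document
```

A term appearing in every document gets weight zero, which is exactly why TF-IDF works well as a similarity measure: ubiquitous words stop dominating the comparison.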
Digital audio, video, images, and documents are flying through cyberspace to their respective owners. Unfortunately, along the way, individuals may choose to intervene and take this content for themselves. Digital watermarking and steganography technology greatly reduces the instances of this by limiting or eliminating the ability of third parties to decipher the content they have taken. The many techniques of digital watermarking (embedding a code) and steganography (hiding information) continue to evolve as the applications that necessitate them do the same. The authors of this second edition provide an update on the framework for applying these techniques that they provided researchers and professionals in the first, well-received edition. Steganography and steganalysis (the art of detecting hidden information) have been added to a robust treatment of digital watermarking, as many in each field research and deal with the other. New material includes watermarking with side information, QIM, and dirty-paper codes. The revision and inclusion of new material by these influential authors has created a must-own book for anyone in this profession.
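To make the embedding idea concrete, here is least-significant-bit (LSB) steganography, one of the simplest hiding schemes: each message bit replaces the low bit of one cover byte, leaving the cover perceptually almost unchanged. This is a generic textbook illustration, not a technique attributed to this particular book (whose new material covers more advanced schemes such as QIM and dirty-paper codes).

```python
# LSB steganography sketch: hide message bits in the low bits of cover bytes.

def embed(cover: bytes, message: bytes) -> bytes:
    """Hide `message` in the low bits of `cover`, MSB of each byte first."""
    bits = [(byte >> i) & 1 for byte in message for i in range(7, -1, -1)]
    if len(bits) > len(cover):
        raise ValueError("cover too small for message")
    out = bytearray(cover)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit   # overwrite the least-significant bit
    return bytes(out)

def extract(stego: bytes, n_bytes: int) -> bytes:
    """Recover `n_bytes` of hidden message from a stego byte string."""
    bits = [b & 1 for b in stego[: n_bytes * 8]]
    return bytes(
        sum(bit << (7 - i) for i, bit in enumerate(bits[k : k + 8]))
        for k in range(0, len(bits), 8)
    )

cover = bytes(range(64))          # 64 cover bytes can hide 8 message bytes
stego = embed(cover, b"hi")
print(extract(stego, 2))  # → b'hi'
```

Steganalysis of exactly this kind of scheme is straightforward (low-bit statistics of natural media are not uniform), which is why the field has moved to the frequency-domain and side-information techniques the book surveys.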
Information infrastructures are integrated solutions based on the fusion of information and communication technologies. They are characterized by the large amount of data that must be managed accordingly. An information infrastructure requires an efficient and effective information retrieval system to provide access to the items stored in the infrastructure. Terminological Ontologies: Design, Management and Practical Applications presents the main problems that affect the discovery systems of information infrastructures to manage terminological models, and introduces a combination of research tools and applications in Semantic Web technologies. This book specifically analyzes the need to create, relate, and integrate the models required for an infrastructure by elaborating on the problem of accessing these models in an efficient manner via interoperable services and components. Terminological Ontologies: Design, Management and Practical Applications is geared toward information management systems and semantic web professionals working as project managers, application developers, government workers and more. Advanced undergraduate and graduate level students, professors and researchers focusing on computer science will also find this book valuable as a secondary text or reference book.
With the proliferation of social media and online communities in a networked world, a large gamut of data has been collected and stored in databases. The rate at which such data is stored is growing at a phenomenal rate, pushing the classical methods of data analysis to their limits. This book presents an integrated framework of recent empirical and theoretical research on social network analysis based on a wide range of techniques from disciplines such as data mining, the social sciences, mathematics, statistics, physics, network science and machine learning, together with visualization techniques and security. The book illustrates the potential of multi-disciplinary techniques in various real-life problems and intends to motivate researchers in social network analysis to design more effective tools by integrating swarm intelligence and data mining.
This book is designed for professional system administrators who need to securely deploy Microsoft Vista in their networks. Readers will not only learn about the new security features of Vista, but also how to safely integrate Vista with their existing wired and wireless network infrastructure and safely deploy it alongside their existing applications and databases. The book begins with a discussion of Microsoft's Trustworthy Computing Initiative and Vista's development cycle, which was like none other in Microsoft's history. Expert authors separate the hype from the reality of Vista's preparedness to withstand the 24x7 attacks it will face from malicious attackers as the world's #1 desktop operating system. The book has a companion CD which contains hundreds of working scripts and utilities to help administrators secure their environments.
The field of database security has expanded greatly, with the rapid development of global inter-networked infrastructure. Databases are no longer stand-alone systems accessible only to internal users of organizations. Today, businesses must allow selective access from different security domains. New data services emerge every day, bringing complex challenges to those whose job is to protect data security. The Internet and the web offer means for collecting and sharing data with unprecedented flexibility and convenience, presenting threats and challenges of their own. This book identifies and addresses these new challenges and more, offering solid advice for practitioners and researchers in industry.
Based on research and industry experience, this book structures the issues pertaining to grid computing security into three main categories: architecture-related, infrastructure-related, and management-related issues. It discusses all three categories in detail, presents existing solutions, standards, and products, and pinpoints their shortcomings and open questions. Together with a brief introduction into grid computing in general and underlying security technologies, this book offers the first concise and detailed introduction to this important area, targeting professionals in the grid industry as well as students.
Vulnerability analysis, also known as vulnerability assessment, is a process that defines, identifies, and classifies the security holes, or vulnerabilities, in a computer, network, or application. In addition, vulnerability analysis can forecast the effectiveness of proposed countermeasures and evaluate their actual effectiveness after they are put into use. Vulnerability Analysis and Defense for the Internet provides packet captures, flow charts and pseudo code, which enable a user to identify if an application/protocol is vulnerable. This edited volume also includes case studies that discuss the latest exploits.
This volume presents a collection of carefully selected contributions in the area of social media analysis. Each chapter opens up a number of research directions that have the potential to be taken on further in this rapidly growing area of research. The chapters are diverse enough to serve a number of directions of research with Sentiment Analysis as the dominant topic in the book. The authors have provided a broad range of research achievements from multimodal sentiment identification to emotion detection in a Chinese microblogging website. The book will be useful to research students, academics and practitioners in the area of social media analysis.
The main purpose of this book is to sum up the vital and highly topical research issue of knowledge representation on the Web and to discuss novel solutions that combine the benefits of folksonomies and Web 2.0 approaches with ontologies and semantic technologies. The book contains an overview of knowledge representation approaches past, present and future, an introduction to ontologies and Web indexing, and novel approaches to developing ontologies. It combines aspects of knowledge representation for both the Semantic Web (ontologies) and the Web 2.0 (folksonomies); currently no other monograph provides a combined overview of these topics. It focuses on using knowledge representation methods for document indexing purposes, and to this end includes considerations from classical librarian interests in knowledge representation (thesauri, classification schemes, etc.), which are not covered by most other books in the area, whose background lies more strongly in computer science.
This book will help organizations that have implemented, or are considering implementing, Microsoft Dynamics achieve a better result. It presents Regatta Dynamics, a methodology developed by the authors for the structured implementation of Microsoft Dynamics. From A to Z, it details the full implementation process, emphasizing the organizational component of the implementation and its cohesion with the functional and technical processes.
This book focuses on the development of wellness protocols for smart home monitoring, aiming to forecast the wellness of individuals living in ambient assisted living (AAL) environments. It describes in detail the design and implementation of heterogeneous wireless sensors and networks as applied to data mining and machine learning, which the protocols are based on. Further, it shows how these sensor and actuator nodes are deployed in the home environment, generating real-time data on object usage and other movements inside the home, and therefore demonstrates that the protocols have proven to offer a reliable, efficient, flexible, and economical solution for smart home systems. Documenting the approach from sensor to decision making and information generation, the book addresses various issues concerning interference mitigation, errors, security and large data handling. As such, it offers a valuable resource for researchers, students and practitioners interested in interdisciplinary studies at the intersection of wireless sensing processing, radio communication, the Internet of Things and machine learning, and in how they can be applied to smart home monitoring and assisted living environments.