The World Wide Web can be considered a huge library that consequently needs a capable librarian responsible for the classification and retrieval of documents as well as the mediation between library resources and users. Based on this idea, the concept of the "Librarian of the Web" is introduced, which comprises novel, librarian-inspired methods and technical solutions for searching text documents in the web in a decentralised manner using peer-to-peer technology. The concept's implementation in the form of an interactive peer-to-peer client, called "WebEngine", is elaborated on in detail. This software extends and interconnects common web servers, creating a fully integrated, decentralised and self-organising web search system on top of the existing web structure. Thus, the web is turned into its own powerful search engine without the need for any central authority. This book is intended for researchers and practitioners with a solid background in the fields of Information Retrieval and Web Mining.
This book presents the proceedings of Workshops and Posters at the 13th International Conference on Spatial Information Theory (COSIT 2017), which is concerned with all aspects of space and spatial environments as experienced, represented and elaborated by humans, other animals and artificial agents. Complementing the main conference proceedings, workshop papers and posters investigate specialized research questions or challenges in spatial information theory and closely related topics, including advances in the conceptualization of specific spatio-temporal domains and diverse applications of spatial and temporal information.
Hyperspectral Image Fusion is the first text dedicated to fusion techniques for such a huge volume of data, consisting of a very large number of images. This monograph brings out recent advances in research on the visualization of hyperspectral data. It provides a set of pixel-based fusion techniques, each of which is based on a different framework and has its own advantages and disadvantages. The techniques are presented in complete detail so that practitioners can easily implement them. It is also demonstrated how one can select only a few specific bands to speed up the process of fusion by exploiting spatial correlation within successive bands of the hyperspectral data. While techniques for the fusion of hyperspectral images are being developed, it is also important to establish a framework for the objective assessment of such techniques. This monograph has a dedicated chapter describing various fusion performance measures that are applicable to hyperspectral image fusion. It also presents a notion of consistency of a fusion technique, which can be used to verify the suitability and applicability of a technique for fusion of a very large number of images. This book will be a highly useful resource for students, researchers, academics and practitioners in the specific area of hyperspectral image fusion, as well as in generic image fusion.
Based on more than 10 years of teaching experience, Blanken and his coeditors have assembled all the topics that should be covered in advanced undergraduate or graduate courses on multimedia retrieval and multimedia databases. The individual chapters of this textbook explain the general architecture of multimedia information retrieval systems and cover various metadata languages such as Dublin Core, RDF, or MPEG. The authors emphasize high-level features and show how these are used in mathematical models to support the retrieval process. Each chapter includes pointers to further reading, and additional exercises and teaching material are available online.
This book describes analytical techniques for optimizing knowledge acquisition, processing, and propagation, especially in the contexts of cyber-infrastructure and big data. Further, it presents easy-to-use analytical models of knowledge-related processes and their applications. The need for such methods stems from the fact that, when we have to decide where to place sensors or which algorithm to use for processing the data, we mostly rely on experts' opinions. As a result, the selected knowledge-related methods are often far from ideal. To make better selections, it is necessary to first create easy-to-use models of knowledge-related processes. This is especially important for big data, where traditional numerical methods are unsuitable. The book offers a valuable guide for everyone interested in big data applications: students looking for an overview of related analytical techniques, practitioners interested in applying optimization techniques, and researchers seeking to improve and expand on these techniques.
This book describes the latest methods and tools for the management of information within facility management services and explains how it is possible to collect, organize, and use information over the life cycle of a building in order to optimize the integration of these services and improve the efficiency of processes. The coverage includes presentation and analysis of basic concepts, procedures, and international standards in the development and management of real estate inventories, building registries, and information systems for facility management. Models of strategic management are discussed and the functions and roles of the strategic management center, explained. Detailed attention is also devoted to building information modeling (BIM) for facility management and potential interactions between information systems and BIM applications. Criteria for evaluating information system performance are identified, and guidelines of value in developing technical specifications for facility management services are proposed. The book will aid clients and facility managers in ensuring that information bases are effectively compiled and used in order to enhance building maintenance and facility management.
This book offers a coherent and comprehensive approach to feature subset selection in the scope of classification problems, explaining the foundations, real application problems and the challenges of feature selection for high-dimensional data. The authors first focus on the analysis and synthesis of feature selection algorithms, presenting a comprehensive review of basic concepts and experimental results of the most well-known algorithms. They then address different real scenarios with high-dimensional data, showing the use of feature selection algorithms in different contexts with different requirements and information: microarray data, intrusion detection, tear film lipid layer classification and cost-based features. The book then delves into the scenario of big dimension, paying attention to important problems under high-dimensional spaces, such as scalability, distributed processing and real-time processing, scenarios that open up new and interesting challenges for researchers. The book is useful for practitioners, researchers and graduate students in the areas of machine learning and data mining.
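The filter-style feature selection the blurb above describes can be illustrated with a minimal sketch. This is not a method from the book itself, just an assumed, illustrative variance-ranking filter: features whose values barely change across samples carry little information for classification and can be dropped first in high-dimensional settings.

```python
def variance_filter(X, k):
    """Rank features (columns) by sample variance and keep the top k.

    X: list of rows (samples), each a list of numeric feature values.
    Returns the indices of the k highest-variance columns.
    """
    n = len(X)
    scored = []
    for j in range(len(X[0])):
        col = [row[j] for row in X]
        mean = sum(col) / n
        var = sum((x - mean) ** 2 for x in col) / n
        scored.append((var, j))
    scored.sort(reverse=True)  # highest variance first
    return [j for _, j in scored[:k]]

X = [[1.0, 0.0, 10.0],
     [1.0, 0.1, 20.0],
     [1.0, 0.2, 30.0]]
# column 0 is constant (zero variance); column 2 varies most
print(variance_filter(X, 2))  # -> [2, 1]
```

Real microarray or intrusion-detection pipelines would combine such a cheap univariate filter with the multivariate and cost-based criteria the book surveys.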
The creation and consumption of content, especially visual content, is ingrained into our modern world. This book contains a collection of texts centered on the evaluation of image retrieval systems. To enable reproducible evaluation we must create standardized benchmarks and evaluation methodologies. The individual chapters in this book highlight major issues and challenges in evaluating image retrieval systems and describe various initiatives that provide researchers with the necessary evaluation resources. In particular they describe activities within ImageCLEF, an initiative to evaluate cross-language image retrieval systems which has been running as part of the Cross Language Evaluation Forum (CLEF) since 2003. To this end, the editors collected contributions from a range of people: those involved directly with ImageCLEF, such as the organizers of specific image retrieval or annotation tasks; participants who have developed techniques to tackle the challenges set forth by the organizers; and people from industry and academia involved with image retrieval and evaluation generally. Mostly written for researchers in academia and industry, the book stresses the importance of combining textual and visual information - a multimodal approach - for effective retrieval. It provides the reader with clear ideas about information retrieval and its evaluation in contexts and domains such as healthcare, robot vision, press photography, and the Web.
The field of database security has expanded greatly, with the rapid development of global inter-networked infrastructure. Databases are no longer stand-alone systems accessible only to internal users of organizations. Today, businesses must allow selective access from different security domains. New data services emerge every day, bringing complex challenges to those whose job is to protect data security. The Internet and the web offer means for collecting and sharing data with unprecedented flexibility and convenience, presenting threats and challenges of their own. This book identifies and addresses these new challenges and more, offering solid advice for practitioners and researchers in industry.
Provides readers with the methods, algorithms, and means to perform text mining tasks. This book is devoted to the fundamentals of text mining using Perl, an open-source programming tool that is freely available via the Internet (www.perl.org). It covers mining ideas from several perspectives (statistics, data mining, linguistics, and information retrieval) and provides readers with the means to successfully complete text mining tasks on their own. The book begins with an introduction to regular expressions, a text pattern methodology, and quantitative text summaries, all of which are fundamental tools for analyzing text. It then builds upon this foundation to explore:
- Probability and texts, including the bag-of-words model
- Information retrieval techniques such as the TF-IDF similarity measure
- Concordance lines and corpus linguistics
- Multivariate techniques such as correlation, principal components analysis, and clustering
- Perl modules, German, and permutation tests
Each chapter is devoted to a single key topic, and the author carefully and thoughtfully introduces mathematical concepts as they arise, allowing readers to learn as they go without having to refer to additional books. The inclusion of numerous exercises and worked-out examples further complements the book's student-friendly format. Practical Text Mining with Perl is ideal as a textbook for undergraduate and graduate courses in text mining and as a reference for a variety of professionals who are interested in extracting information from text documents.
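The bag-of-words model and TF-IDF similarity mentioned above can be sketched in a few lines. This is an illustrative Python version (the book itself works in Perl), using one common TF-IDF weighting; the tokenization by whitespace and the specific log-scaled weights are assumptions, not the book's definitions.

```python
import math
from collections import Counter

def tf_idf_vectors(docs):
    """Compute sparse TF-IDF weight vectors for a list of tokenized documents."""
    n = len(docs)
    df = Counter()                 # document frequency of each term
    for doc in docs:
        df.update(set(doc))
    vectors = []
    for doc in docs:
        tf = Counter(doc)          # bag-of-words term counts
        vectors.append({t: (1 + math.log(c)) * math.log(n / df[t])
                        for t, c in tf.items()})
    return vectors

def cosine(u, v):
    """Cosine similarity between two sparse vectors stored as dicts."""
    dot = sum(w * v[t] for t, w in u.items() if t in v)
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

docs = ["the cat sat on the mat".split(),
        "the dog sat on the log".split(),
        "perl mines text".split()]
vecs = tf_idf_vectors(docs)
print(cosine(vecs[0], vecs[1]))   # overlapping docs: similarity > 0
print(cosine(vecs[0], vecs[2]))   # no shared terms: 0.0
```

Terms that appear in every document get an IDF of zero and thus contribute nothing, which is exactly the down-weighting of uninformative words that TF-IDF is designed for.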
New Internet developments pose greater and greater privacy dilemmas. In the Information Society, the need for individuals to protect their autonomy and retain control over their personal information is becoming more and more important. Today, information and communication technologies - and the people responsible for making decisions about them, designing, and implementing them - scarcely consider those requirements, thereby potentially putting individuals' privacy at risk. The increasingly collaborative character of the Internet enables anyone to compose services and contribute and distribute information. It may become hard for individuals to manage and control information that concerns them, and particularly to eliminate outdated or unwanted personal information, thus leaving personal histories exposed permanently. These activities raise substantial new challenges for personal privacy at the technical, social, ethical, regulatory, and legal levels: How can privacy in emerging Internet applications such as collaborative scenarios and virtual communities be protected? What frameworks and technical tools could be utilized to maintain life-long privacy? During September 3-10, 2009, IFIP (International Federation for Information Processing) working groups 9.2 (Social Accountability), 9.6/11.7 (IT Misuse and the Law), 11.4 (Network Security) and 11.6 (Identity Management) held their 5th International Summer School in cooperation with the EU FP7 integrated project PrimeLife in Sophia Antipolis and Nice, France. The focus of the event was on privacy and identity management for emerging Internet applications throughout a person's lifetime. The aim of the IFIP Summer Schools has been to encourage young academic and industry entrants to share their own ideas about privacy and identity management and to build up collegial relationships with others. As such, the Summer Schools have been introducing participants to the social implications of information technology through the process of informed discussion.
Every day millions of people capture, store, transmit, and manipulate digital data. Unfortunately, free-access digital multimedia communication also provides virtually unprecedented opportunities to pirate copyrighted material. Providing the theoretical background needed to develop and implement advanced techniques and algorithms, Digital Watermarking and Steganography:
- Demonstrates how to develop and implement methods to guarantee the authenticity of digital media
- Explains the categorization of digital watermarking techniques based on characteristics as well as applications
- Presents cutting-edge techniques such as the GA-based breaking algorithm on the frequency-domain steganalytic system
The popularity of digital media continues to soar. The theoretical foundation presented within this valuable reference will facilitate the creation of new techniques and algorithms to combat present and potential threats against information security.
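To make the watermarking idea concrete, here is a minimal sketch of least-significant-bit (LSB) embedding, one of the simplest spatial-domain techniques. It is an assumed illustration only; the book's GA-based, frequency-domain methods are far more sophisticated and robust.

```python
def embed_lsb(cover, bits):
    """Embed watermark bits into the least significant bit of each cover byte.

    cover: list of ints in 0-255 (e.g. grayscale pixel values).
    bits:  list of 0/1 watermark bits, at most len(cover) long.
    """
    if len(bits) > len(cover):
        raise ValueError("watermark longer than cover signal")
    stego = cover[:]
    for i, b in enumerate(bits):
        stego[i] = (stego[i] & ~1) | b   # clear the LSB, then set it to the bit
    return stego

def extract_lsb(stego, n):
    """Recover the first n embedded bits from the stego signal."""
    return [p & 1 for p in stego[:n]]

pixels = [120, 83, 200, 45, 97, 14, 255, 0]
mark = [1, 0, 1, 1]
stego = embed_lsb(pixels, mark)
print(extract_lsb(stego, 4))  # -> [1, 0, 1, 1]
```

Each embedded bit changes a pixel intensity by at most one level, which is visually imperceptible but also fragile: any recompression destroys the mark, which is precisely why robust schemes move to the frequency domain.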
The Semantic Web proposes the mark-up of content on the Web using formal ontologies that structure underlying data for the purpose of comprehensive and transportable machine understanding. "Semantic Web Services: Theory, Tools and Applications" brings contributions from researchers, scientists from both industry and academia, and representatives from different communities to study, understand, and explore the theory, tools, and applications of the Semantic Web. It binds computing involving the Semantic Web, ontologies, knowledge management, Web services, and Web processes into one fully comprehensive resource, serving as a platform for the exchange of both practical technologies and far-reaching research.
Vulnerability analysis, also known as vulnerability assessment, is a process that defines, identifies, and classifies the security holes, or vulnerabilities, in a computer, network, or application. In addition, vulnerability analysis can forecast the effectiveness of proposed countermeasures and evaluate their actual effectiveness after they are put into use. Vulnerability Analysis and Defense for the Internet provides packet captures, flow charts and pseudo code, which enable a user to identify if an application/protocol is vulnerable. This edited volume also includes case studies that discuss the latest exploits.
This book will help organizations that have implemented or are considering implementing Microsoft Dynamics achieve a better result. It presents Regatta Dynamics, a methodology developed by the authors for the structured implementation of Microsoft Dynamics. From A to Z, it details the full implementation process, emphasizing the organizational component of the implementation and its cohesion with functional and technical processes.
This book is a tribute to Professor Jacek Zurada, who is best known for his contributions to computational intelligence and knowledge-based neurocomputing. It is dedicated to Professor Jacek Zurada, Full Professor at the Computational Intelligence Laboratory, Department of Electrical and Computer Engineering, J.B. Speed School of Engineering, University of Louisville, Kentucky, USA, as a token of appreciation for his scientific and scholarly achievements, and for his longstanding service to many communities, notably the computational intelligence community, in particular neural networks, machine learning, data analysis and data mining, but also the fuzzy logic and evolutionary computation communities, to name but a few. At the same time, the book recognizes and honors Professor Zurada's dedication and service to many scientific, scholarly and professional societies, especially the IEEE (Institute of Electrical and Electronics Engineers), the world's largest technical professional organization dedicated to advancing science and technology in a broad spectrum of areas and fields. The volume is divided into five major parts, the first of which addresses theoretic, algorithmic and implementation problems related to the intelligent use of data, in the sense of how to derive practically useful information and knowledge from data. In turn, Part 2 is devoted to various aspects of neural networks and connectionist systems. Part 3 deals with essential tools and techniques for intelligent technologies in systems modeling and Part 4 focuses on intelligent technologies in decision-making, optimization and control, while Part 5 explores the applications of intelligent technologies.
In this volume, Rudi Studer and his team deliver a self-contained compendium about the exciting field of Semantic Web services, starting with the basic standards and technologies and also including advanced applications in eGovernment and eHealth. The contributions provide both the theoretical background and the practical knowledge necessary to understand the essential ideas and to design new cutting-edge applications.
Information infrastructures are integrated solutions based on the fusion of information and communication technologies. They are characterized by the large amount of data that must be managed accordingly. An information infrastructure requires an efficient and effective information retrieval system to provide access to the items stored in the infrastructure. Terminological Ontologies: Design, Management and Practical Applications presents the main problems that affect the discovery systems of information infrastructures to manage terminological models, and introduces a combination of research tools and applications in Semantic Web technologies. This book specifically analyzes the need to create, relate, and integrate the models required for an infrastructure by elaborating on the problem of accessing these models in an efficient manner via interoperable services and components. Terminological Ontologies: Design, Management and Practical Applications is geared toward information management systems and semantic web professionals working as project managers, application developers, government workers and more. Advanced undergraduate and graduate level students, professors and researchers focusing on computer science will also find this book valuable as a secondary text or reference book.
With the proliferation of social media and online communities in the networked world, a large gamut of data has been collected and stored in databases. The rate at which such data is stored is growing at a phenomenal rate, pushing the classical methods of data analysis to their limits. This book presents an integrated framework of recent empirical and theoretical research on social network analysis based on a wide range of techniques from various disciplines such as data mining, the social sciences, mathematics, statistics, physics, network science and machine learning, together with visualization techniques and security. The book illustrates the potential of multi-disciplinary techniques in various real-life problems and aims to motivate researchers in social network analysis to design more effective tools by integrating swarm intelligence and data mining.
This book is designed for professional system administrators who need to securely deploy Microsoft Vista in their networks. Readers will not only learn about the new security features of Vista, but also how to safely integrate Vista with their existing wired and wireless network infrastructure and deploy it safely alongside their existing applications and databases. The book begins with a discussion of Microsoft's Trustworthy Computing Initiative and Vista's development cycle, which was like none other in Microsoft's history. Expert authors separate the hype from the reality of Vista's preparedness to withstand the 24 x 7 attacks it will face from malicious attackers as the world's #1 desktop operating system. The book has a companion CD containing hundreds of working scripts and utilities to help administrators secure their environments.
This book addresses the challenges of social network and social media analysis in terms of prediction and inference. The chapters collected here tackle these issues by proposing new analysis methods and by examining mining methods for the vast amount of social content produced. Social Networks (SNs) have become an integral part of our lives; they are used for leisure, business, government, medical, educational purposes and have attracted billions of users. The challenges that stem from this wide adoption of SNs are vast. These include generating realistic social network topologies, awareness of user activities, topic and trend generation, estimation of user attributes from their social content, and behavior detection. This text has applications to widely used platforms such as Twitter and Facebook and appeals to students, researchers, and professionals in the field.
This volume presents a collection of carefully selected contributions in the area of social media analysis. Each chapter opens up a number of research directions that have the potential to be taken on further in this rapidly growing area of research. The chapters are diverse enough to serve a number of directions of research with Sentiment Analysis as the dominant topic in the book. The authors have provided a broad range of research achievements from multimodal sentiment identification to emotion detection in a Chinese microblogging website. The book will be useful to research students, academics and practitioners in the area of social media analysis.
It is fitting that there was a World Computer Congress in the 50th anniversary year of IFIP. Within the Learn IT Stream of WCC 2010, the conference Key Competencies in the Knowledge Society (KCKS) brought together some 43 papers from around the world covering many areas of ICT and its role in education. Of the papers presented here, three were selected as key theme papers for the KCKS conference. These papers, by Adams and Tatnall, Tarrago and Wilson, and Diethelm and Dorge, are included in these proceedings. We congratulate these authors for the quality of their work that led to selection. The range of issues covered within this volume is too broad to set out here but covers, amongst other things, e-examination, Twitter, teacher education, school-based learning, methodological frameworks and human development theories. It has been an exciting and rewarding task to put these papers together. They represent a coming together of great minds and cutting-edge research. We thank our contributors and our reviewers for producing such an impressive body of work.