The prevalence of data science has grown exponentially in recent years. Increases in data exchange have created the need for standards and formats for handling data from different sources. Developing Metadata Application Profiles is an innovative reference source that discusses the latest trends and techniques for effectively managing and exchanging metadata. Including a range of perspectives on schemas and application profiles, such as interoperability, ontology-based design, and model-driven approaches, this book is ideally designed for researchers, academics, professionals, graduate students, and practitioners actively engaged in data science.
Information intermediation is the foundation stone of some of the most successful Internet companies, and is perhaps second only to the Internet infrastructure companies. On the heels of information integration and interoperability, this book on information brokering discusses the next step in information interoperability and integration. The emerging Internet economy based on burgeoning B2B and B2C trading will soon demand semantics-based information intermediation for its feasibility and success. B2B ventures are involved in the 'rationalization' of new vertical markets and construction of domain-specific product catalogs. This book provides approaches for re-use of existing vocabularies and domain ontologies as a basis for this rationalization and provides a framework based on inter-ontology interoperation. Infrastructural trade-offs that identify optimizations in performance and scalability of web sites will soon give way to information-based trade-offs as alternate rationalization schemes come into play and the necessity of interoperating across these schemes is realized. The intended readers of Information Brokering Across Heterogeneous Digital Data are researchers, software architects and CTOs, advanced product developers dealing with information intermediation issues in the context of e-commerce (B2B and B2C), information technology professionals in various vertical markets (e.g., geo-spatial information, medicine, auto), and all librarians interested in information brokering.
Security is the science and technology of secure communications and resource protection from security violations such as unauthorized access and modification. Putting proper security in place gives us many advantages. It lets us exchange confidential information and keep it confidential. We can be sure that a piece of information received has not been changed. Nobody can deny sending or receiving a piece of information. We can control which pieces of information can be accessed, and by whom. We can know when a piece of information was accessed, and by whom. Networks and databases are guarded against unauthorized access. We have seen the rapid development of the Internet and also increasing security requirements in information networks, databases, systems, and other information resources. This comprehensive book responds to increasing security needs in the marketplace, and covers networking security and standards. There are three types of readers who are interested in security: non-technical readers, general technical readers who do not implement security, and technical readers who actually implement security. This book serves all three by providing a comprehensive explanation of fundamental issues of networking security, the concepts and principles of security standards, and a description of some emerging security technologies. The approach is to answer the following questions: 1. What are common security problems and how can we address them? 2. What are the algorithms, standards, and technologies that can solve common security problems? 3.
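As a small, concrete illustration of the integrity goal mentioned above (being sure that a piece of information has not been changed), the following sketch uses Python's standard hmac module; it is not an example from the book, and the key and message values are hypothetical:

```python
import hashlib
import hmac

# Hypothetical shared secret known to both sender and receiver.
key = b"shared-secret-key"
message = b"transfer 100 to account 42"

# The sender computes an authentication tag over the message.
tag = hmac.new(key, message, hashlib.sha256).hexdigest()

# The receiver recomputes the tag from the received message and compares it
# in constant time; any modification of the message makes the check fail.
def verify(received_message: bytes, received_tag: str) -> bool:
    expected = hmac.new(key, received_message, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, received_tag)

print(verify(message, tag))                         # True: message is unchanged
print(verify(b"transfer 9999 to account 7", tag))   # False: message was altered
```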
It is over 20 years since the functional data model and functional programming languages were first introduced to the computing community. Although developed by separate research communities, recent work, presented in this book, suggests there is powerful synergy in their integration. As database technology emerges as central to yet more complex and demanding applications in areas such as bioinformatics, national security, criminal investigations and advanced engineering, more sophisticated approaches, like those presented here, are needed. A tutorial introduction by the editors prepares the reader for the chapters that follow, written by leading researchers, including some of the early pioneers. They provide a comprehensive treatment showing how the functional approach provides for modeling, analysis and optimization in databases, and also data integration and interoperation in heterogeneous environments. Several chapters deal with mathematical results on the transformation of expressions, fundamental to the functional approach. The book also aims to show how the approach relates to the Internet and current work on semistructured data, XML and RDF. The book presents a comprehensive view of the functional approach to data management, bringing together important material hitherto widely scattered, some new research, and a comprehensive set of references. It will serve as a valuable resource for researchers, faculty and graduate students, as well as those in industry responsible for new systems development.
This book covers the basic statistical and analytical techniques of computer intrusion detection. It is aimed at both statisticians looking to become involved in the data analysis aspects of computer security and computer scientists looking to expand their toolbox of techniques for detecting intruders. The book is self-contained, assuming no expertise in either computer security or statistics. It begins with a description of the basics of TCP/IP, followed by chapters dealing with network traffic analysis, network monitoring for intrusion detection, host-based intrusion detection, and computer viruses and other malicious code. Each section develops the necessary tools as needed. There is an extensive discussion of visualization as it relates to network data and intrusion detection. The book also contains a large bibliography covering the statistical, machine learning, and pattern recognition literature related to network monitoring and intrusion detection. David Marchette is a scientist at the Naval Surface Warfare Center in Dahlgren, Virginia. He has worked at Navy labs for 15 years, doing research in pattern recognition, computational statistics, and image analysis. He has been a fellow by courtesy in the mathematical sciences department of the Johns Hopkins University since 2000. He has been working in computer intrusion detection for several years, focusing on statistical methods for anomaly detection and visualization. Dr. Marchette received a Master's in Mathematics from the University of California, San Diego in 1982 and a Ph.D. in Computational Sciences and Informatics from George Mason University in 1996.
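As a flavour of the statistical anomaly-detection methods the book covers, the following minimal sketch (not taken from the book; the traffic counts are hypothetical) flags hours whose connection volume deviates strongly from a simple baseline:

```python
import numpy as np

# Hypothetical hourly counts of inbound connections to one monitored host.
counts = np.array([52, 48, 55, 61, 47, 50, 53, 49, 58, 210, 51, 54])

# Fit a simple baseline (mean and standard deviation of past behaviour),
# then flag hours whose z-score exceeds a chosen threshold.
mean, std = counts.mean(), counts.std(ddof=1)
z_scores = (counts - mean) / std
threshold = 3.0
anomalous_hours = np.where(np.abs(z_scores) > threshold)[0]

print(anomalous_hours)  # the hour with 210 connections is flagged as anomalous
```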
There are many invaluable books available on data mining theory and applications. However, in compiling a volume titled DATA MINING: Foundations and Intelligent Paradigms: Volume 3: Medical, Health, Social, Biological and other Applications, we wish to introduce some of the latest developments to a broad audience of both specialists and non-specialists in this field.
This book proposes representations of multicast rate regions in wireless networks based on the mathematical concept of submodular functions, e.g., the submodular cut model and the polymatroid broadcast model. These models subsume and generalize the graph and hypergraph models. The submodular structure facilitates a dual decomposition approach to network utility maximization problems, which exploits the greedy algorithm for linear programming on submodular polyhedra. This approach yields computationally efficient characterizations of inner and outer bounds on the multicast capacity regions for various classes of wireless networks.
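The greedy algorithm for linear programming over submodular polyhedra that underpins this approach can be sketched independently of any wireless model: visit elements in order of decreasing weight and assign each its marginal gain under the submodular function. The toy function and weights below are illustrative only, not a model from the book:

```python
import math

def f(subset):
    # Toy monotone submodular function: a concave function of cardinality.
    return math.sqrt(len(subset))

def greedy_vertex(weights):
    # Edmonds' greedy algorithm: the returned point maximizes
    # sum_i weights[i] * x[i] over the base polytope of f.
    order = sorted(weights, key=weights.get, reverse=True)  # decreasing weight
    x, prefix, prev = {}, set(), f(set())
    for element in order:
        prefix.add(element)
        current = f(prefix)
        x[element] = current - prev  # coordinate = marginal gain of the element
        prev = current
    return x

# Hypothetical weights for three elements.
print(greedy_vertex({"a": 3.0, "b": 2.0, "c": 1.0}))
# {'a': 1.0, 'b': 0.414..., 'c': 0.317...}
```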
This book presents important applications of soft computing and fuzziness to the growing field of web planning. A new method of using fuzzy numbers to model uncertain probabilities and how these can be used to model a fuzzy queuing system is demonstrated, as well as a method of modeling fuzzy queuing systems employing fuzzy arrival rates and fuzzy service rates. All the computations needed to arrive at the fuzzy numbers for system performance are described, starting from the one-server case and progressing to systems with more than three servers. A variety of optimization models are discussed with applications to the average response times, server utilization, server and queue costs, as well as to phenomena identified with web sites such as "burstiness" and "long-tailed distributions".
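The flavour of these computations can be sketched with a plain M/M/1 queue whose arrival and service rates are only known as intervals (for instance, one alpha-cut of fuzzy rates); this is an illustrative simplification, not the book's actual fuzzy formulation, and the rates are hypothetical:

```python
def mm1_interval(lam_lo, lam_hi, mu_lo, mu_hi):
    # Interval bounds on M/M/1 utilization and mean response time when the
    # arrival rate lam and service rate mu are given as intervals.
    # Utilization rho = lam / mu is smallest with few arrivals and fast service,
    # largest with many arrivals and slow service.
    rho_lo, rho_hi = lam_lo / mu_hi, lam_hi / mu_lo
    assert rho_hi < 1.0, "the queue must stay stable for every rate in the interval"
    # Mean response time W = 1 / (mu - lam) increases with lam, decreases with mu.
    w_lo, w_hi = 1.0 / (mu_hi - lam_lo), 1.0 / (mu_lo - lam_hi)
    return (rho_lo, rho_hi), (w_lo, w_hi)

# Hypothetical rates: lam in [8, 10] and mu in [12, 15] requests per second.
print(mm1_interval(8.0, 10.0, 12.0, 15.0))
# utilization roughly in [0.53, 0.83], mean response time in [0.14, 0.50] seconds
```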
This book focuses on new research challenges in intelligent information filtering and retrieval. It collects invited chapters and extended research contributions from DART 2014 (the 8th International Workshop on Information Filtering and Retrieval), held in Pisa (Italy), on December 10, 2014, and co-hosted with the XIII AI*IA Symposium on Artificial Intelligence. The main focus of DART was to discuss and compare suitable novel solutions based on intelligent techniques and applied to real-world contexts. The chapters of this book present a comprehensive review of related works and the current state of the art. The contributions from both practitioners and researchers have been carefully reviewed by experts in the area, who also gave useful suggestions to improve the quality of the book.
Recent technological advancements in data warehousing have been contributing to the emergence of business intelligence useful for managerial decision making. Progressive Methods in Data Warehousing and Business Intelligence: Concepts and Competitive Analytics presents the latest trends, studies, and developments in business intelligence and data warehousing contributed by experts from around the globe. Consisting of four main sections, this book covers crucial topics within the field such as OLAP and patterns, spatio-temporal data warehousing, and benchmarking of the subject.
The book proposes new technologies and discusses future solutions for designing ICT infrastructure. The book contains high-quality submissions presented at the Second International Conference on Information and Communication Technology for Sustainable Development (ICT4SD 2016), held in Goa, India, during 1-2 July 2016. The conference stimulated cutting-edge research discussions among pioneering academic researchers, scientists, industrial engineers, and students from around the world. The topics covered in this book also focus on innovative issues at the international level by bringing together experts from different countries.
A field manual on contextualizing cyber threats, vulnerabilities, and risks to connected cars through penetration testing and risk assessment, Hacking Connected Cars deconstructs the tactics, techniques, and procedures (TTPs) used to hack into connected cars and autonomous vehicles to help you identify and mitigate vulnerabilities affecting cyber-physical vehicles. Written by a veteran of risk management and penetration testing of IoT devices and connected cars, this book provides a detailed account of how to perform penetration testing, threat modeling, and risk assessments of telematics control units and infotainment systems. This book demonstrates how vulnerabilities in wireless networking, Bluetooth, and GSM can be exploited to affect the confidentiality, integrity, and availability of connected cars. Passenger vehicles have experienced a massive increase in connectivity over the past five years, and the trend will only continue to grow with the expansion of the Internet of Things and increasing consumer demand for always-on connectivity. Manufacturers and OEMs need the ability to push updates without requiring service visits, but this leaves the vehicle's systems open to attack. This book examines the issues in depth, providing cutting-edge preventative tactics that security practitioners, researchers, and vendors can use to keep connected cars safe without sacrificing connectivity. It shows readers how to perform penetration testing of infotainment systems and telematics control units through a step-by-step methodical guide; analyze risk levels surrounding vulnerabilities and threats that impact confidentiality, integrity, and availability; and conduct penetration testing using the same tactics, techniques, and procedures used by hackers. From relatively small features such as automatic parallel parking to completely autonomous self-driving cars, all connected systems are vulnerable to attack. As connectivity becomes a way of life, the need for security expertise for in-vehicle systems is becoming increasingly urgent. Hacking Connected Cars provides practical, comprehensive guidance for keeping these vehicles secure.
Parallel and Distributed Information Systems brings together in one place important contributions and up-to-date research results in this fast moving area. Parallel and Distributed Information Systems serves as an excellent reference, providing insight into some of the most challenging research issues in the field.
Learn how to apply the principles of machine learning to time series modeling with this indispensable resource. Machine Learning for Time Series Forecasting with Python is an incisive and straightforward examination of one of the most crucial elements of decision-making in finance, marketing, education, and healthcare: time series modeling. Despite the centrality of time series forecasting, few business analysts are familiar with the power or utility of applying machine learning to time series modeling. Author Francesca Lazzeri, a distinguished machine learning scientist and economist, corrects that deficiency by providing readers with a comprehensive and approachable explanation and treatment of the application of machine learning to time series forecasting. Written for readers who have little to no experience in time series forecasting or machine learning, the book comprehensively covers all the topics necessary to: understand time series forecasting concepts, such as stationarity, horizon, trend, and seasonality; prepare time series data for modeling; evaluate time series forecasting models' performance and accuracy; and understand when to use neural networks instead of traditional time series models. Machine Learning for Time Series Forecasting with Python is full of real-world examples, resources and concrete strategies to help readers explore and transform data and develop usable, practical time series forecasts. Perfect for entry-level data scientists, business analysts, developers, and researchers, this book is an invaluable and indispensable guide to the fundamental and advanced concepts of machine learning applied to time series modeling.
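As a taste of the workflow the book describes (none of the code below is taken from the book; the series is synthetic), a forecast can be evaluated by holding out the final part of the series as the horizon and comparing a simple baseline against it:

```python
import numpy as np
import pandas as pd

# Synthetic monthly series with a linear trend and yearly seasonality.
idx = pd.date_range("2018-01-01", periods=48, freq="MS")
t = np.arange(48)
y = pd.Series(100 + 0.5 * t + 10 * np.sin(2 * np.pi * t / 12), index=idx)

# Hold out the last 12 months as the test set (the forecast horizon).
train, test = y[:-12], y[-12:]

# Seasonal-naive baseline: repeat the last observed yearly cycle.
forecast = pd.Series(train[-12:].to_numpy(), index=test.index)

# Evaluate forecast accuracy with the mean absolute error over the horizon.
mae = (forecast - test).abs().mean()
print(f"MAE of the seasonal-naive baseline: {mae:.2f}")
```

Any machine learning model can then be compared against this baseline on the same held-out horizon.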
Security and privacy are paramount concerns in information processing systems, which are vital to business, government and military operations and, indeed, society itself. Meanwhile, the expansion of the Internet and its convergence with telecommunication networks are providing incredible connectivity, myriad applications and, of course, new threats. Data and Applications Security XVII: Status and Prospects describes original research results, practical experiences and innovative ideas, all focused on maintaining security and privacy in information processing systems and applications that pervade cyberspace. This book is the seventeenth volume in the series produced by the International Federation for Information Processing (IFIP) Working Group 11.3 on Data and Applications Security. It presents a selection of twenty-six updated and edited papers from the Seventeenth Annual IFIP TC11 / WG11.3 Working Conference on Data and Applications Security, held at Estes Park, Colorado, USA in August 2003, together with a report on the conference keynote speech and a summary of the conference panel. The contents demonstrate the richness and vitality of the discipline and point to directions for future research in data and applications security. Data and Applications Security XVII: Status and Prospects is an invaluable resource for information assurance researchers, faculty members and graduate students, as well as for individuals engaged in research and development in the information technology sector.
This book explores all relevant aspects of net scoring, also known as uplift modeling: a data mining approach used to analyze and predict the effects of a given treatment on a desired target variable for an individual observation. After discussing modern net score modeling methods, data preparation, and the assessment of uplift models, the book investigates software implementations and real-world scenarios. Focusing on the application of theoretical results and on practical issues of uplift modeling, it also includes a dedicated chapter on software solutions in SAS, R, Spectrum Miner, and KNIME, which compares the respective tools. This book also presents the applications of net scoring in various contexts, e.g. medical treatment, with a special emphasis on direct marketing and corresponding business cases. The target audience primarily includes data scientists, especially researchers and practitioners in predictive modeling and scoring, mainly, but not exclusively, in the marketing context.
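One widely used construction in uplift modeling is the two-model ("T-learner") approach: fit separate response models for the treated and control groups and score the difference in predicted response probability. The sketch below uses scikit-learn and synthetic campaign data purely for illustration; it is not code from the book or from the tools it compares:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic campaign data: customer features X, a treatment flag (mailing sent
# or not), and the observed response (purchase yes/no).
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 4))
treatment = rng.integers(0, 2, size=1000)
response = (rng.random(1000) < 0.1 + 0.1 * treatment * (X[:, 0] > 0)).astype(int)

# Two-model uplift: one response model per group; uplift is the difference in
# predicted purchase probability (the estimated individual treatment effect).
model_treated = LogisticRegression().fit(X[treatment == 1], response[treatment == 1])
model_control = LogisticRegression().fit(X[treatment == 0], response[treatment == 0])
uplift = model_treated.predict_proba(X)[:, 1] - model_control.predict_proba(X)[:, 1]

# Customers with the highest uplift scores are the ones most worth targeting.
print(uplift[:5])
```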
What will business software look like in the future? And how will it be developed? This book covers the proceedings of the first international conference on Future Business Software - a new think tank discussing the trends in enterprise software with speakers from Europe's most successful software companies and the leading research institutions. The articles focus on two of the most prominent trends in the field: emergent software and agile development processes. "Emergent Software" is a new paradigm of software development that addresses the highly complex requirements of tomorrow's business software and aims at dynamically and flexibly combining a business software solution's different components in order to fulfill customers' needs with a minimum of effort. Agile development processes are the response of software technology to the implementation of diverse and rapidly changing software requirements. A major focus is on the minimization of project risks, e.g. through short, iterative development cycles, test-driven development and an intensive culture of communication.
This informative text/reference presents a detailed review of the state of the art in industrial sensor and control networks. The book examines a broad range of applications, along with their design objectives and technical challenges. The coverage includes fieldbus technologies, wireless communication technologies, network architectures, and resource management and optimization for industrial networks. Discussions are also provided on industrial communication standards for both wired and wireless technologies, as well as for the Industrial Internet of Things (IIoT). Topics and features: describes the FlexRay, CAN, and Modbus fieldbus protocols for industrial control networks, as well as the MIL-STD-1553 standard; proposes a dual fieldbus approach, incorporating both CAN and ModBus fieldbus technologies, for a ship engine distributed control system; reviews a range of industrial wireless sensor network (IWSN) applications, from environmental sensing and condition monitoring, to process automation; examines the wireless networking performance, design requirements, and technical limitations of IWSN applications; presents a survey of IWSN commercial solutions and service providers, and summarizes the emerging trends in this area; discusses the latest technologies and open challenges in realizing the vision of the IIoT, highlighting various applications of the IIoT in industrial domains; introduces a logistics paradigm for adopting IIoT technology on the Physical Internet. This unique work will be of great value to all researchers involved in industrial sensor and control networks, wireless networking, and the Internet of Things.
This book comprises the refereed papers together with the invited keynote papers, presented at the Second International Conference on Enterprise Information Systems. The conference was organised by the School of Computing at Staffordshire University, UK, and the Escola Superior de Tecnologia of Setubal, Portugal, in cooperation with the British Computer Society and the International Federation for Information Processing, Working Group 8.1. The purpose of this 2nd International Conference was to bring together researchers, engineers and practitioners interested in the advances in and business applications of information systems. The papers demonstrate the vitality and vibrancy of the field of Enterprise Information Systems. The research papers included here were selected from among 143 submissions from 32 countries in the following four areas: Enterprise Database Applications, Artificial Intelligence Applications and Decision Support Systems, Systems Analysis and Specification, and Internet and Electronic Commerce. Every paper had at least two reviewers drawn from 10 countries. The papers included in this book were recommended by the reviewers. On behalf of the conference organising committee we would like to thank all the members of the Programme Committee for their work in reviewing and selecting the papers that appear in this volume. We would also like to thank all the authors who submitted their papers to this conference, apologise to those whose papers we were unable to include, and wish them success next year.
Knowledge management (KM) is about managing the lifecycle of knowledge, consisting of creating, storing, sharing and applying knowledge. Two main approaches towards KM are codification and personalization. The first focuses on capturing knowledge using technology and the latter on the process of socializing for sharing and creating knowledge. Social media are becoming very popular as individuals and also organizations learn how to use them. The primary applications of social media in a business context are marketing and recruitment. But there is also a huge potential for knowledge management in these organizations. For example, wikis can be used to collect organizational knowledge, and social networking tools can lead to the exchange of new ideas and to innovation. The interesting part of social media is that, by using them, one immediately starts to generate content that can be useful for the organization. Hence, they naturally combine the codification and personalization approaches to KM. This book aims to provide an overview of new and innovative applications of social media and to report challenges that need to be solved. One example is the watering down of knowledge as a result of the use of organizational social media (Von Krogh, 2012).
Recent years have seen an explosive growth in the use of new database applications such as CAD/CAM systems, spatial information systems, and multimedia information systems. The needs of these applications are far more complex than traditional business applications. They call for support of objects with complex data types, such as images and spatial objects, and for support of objects with wildly varying numbers of index terms, such as documents. Traditional indexing techniques such as the B-tree and its variants do not efficiently support these applications, and so new indexing mechanisms have been developed. As a result of the demand for database support for new applications, there has been a proliferation of new indexing techniques. The need for a book addressing indexing problems in advanced applications is evident. For practitioners and database and application developers, this book explains best practice, guiding the selection of appropriate indexes for each application. For researchers, this book provides a foundation for the development of new and more robust indexes. For newcomers, this book is an overview of the wide range of advanced indexing techniques. Indexing Techniques for Advanced Database Systems is suitable as a secondary text for a graduate level course on indexing techniques, and as a reference for researchers and practitioners in industry.
In recent years, as part of the increasing "informationization" of industry and the economy, enterprises have been accumulating vast amounts of detailed data such as high-frequency transaction data in financial markets and point-of-sale information on individual items in the retail sector. Similarly, vast amounts of data are now available on business networks based on inter-firm transactions and shareholdings. In the past, these types of information were studied only by economists and management scholars. More recently, however, researchers from other fields, such as physics, mathematics, and information sciences, have become interested in this kind of data and, based on novel empirical approaches to searching for regularities and "laws" akin to those in the natural sciences, have produced intriguing results. This book is the proceedings of the international conference THIC-APFA7, titled "New Approaches to the Analysis of Large-Scale Business and Economic Data," held in Tokyo, March 1-5, 2009. The letters THIC denote the Tokyo Tech (Tokyo Institute of Technology)-Hitotsubashi Interdisciplinary Conference. The conference series, titled APFA (Applications of Physics in Financial Analysis), focuses on the analysis of large-scale economic data. It has traditionally brought physicists and economists together to exchange viewpoints and experience (APFA1 in Dublin 1999, APFA2 in Liège 2000, APFA3 in London 2001, APFA4 in Warsaw 2003, APFA5 in Torino 2006, and APFA6 in Lisbon 2007). The aim of the conference is to establish fundamental analytical techniques and data collection methods, taking into account the results from a variety of academic disciplines.
Semistructured Database Design provides an essential reference for anyone interested in the effective management of semistructured data. Since many new and advanced web applications consume a huge amount of such data, there is a growing need to properly design efficient databases. This volume responds to that need by describing a semantically rich data model for semistructured data, called the Object-Relationship-Attribute model for Semistructured data (ORA-SS). Focusing on this new model, the book discusses problems and presents solutions for a number of topics, including schema extraction, the design of non-redundant storage organizations for semistructured data, and physical semistructured database design, among others. Semistructured Database Design presents researchers and professionals with the most complete and up-to-date research in this fast-growing field.
Advances in microelectronic technology have made massively parallel computing a reality and triggered an outburst of research activity in parallel processing architectures and algorithms. Distributed memory multiprocessors - parallel computers that consist of microprocessors connected in a regular topology - are increasingly being used to solve large problems in many application areas. In order to use these computers for a specific application, existing algorithms need to be restructured for the architecture and new algorithms developed. The performance of a computation on a distributed memory multiprocessor is affected by the node and communication architecture, the interconnection network topology, the I/O subsystem, and the parallel algorithm and communication protocols. Each of these parameters is a complex problem, and solutions require an understanding of the interactions among them. This book is based on the papers presented at the NATO Advanced Study Institute held at Bilkent University, Turkey, in July 1991. The book is organized in five parts: Parallel computing structures and communication, Parallel numerical algorithms, Parallel programming, Fault tolerance, and Applications and algorithms.
This book starts with an introduction to process modeling and process paradigms, then explains how to query and analyze process models, and how to analyze the process execution data. In this way, readers receive a comprehensive overview of what is needed to identify, understand and improve business processes. The book chiefly focuses on concepts, techniques and methods. It covers a large body of knowledge on process analytics - including process data querying, analysis, matching and correlating process data and models - to help practitioners and researchers understand the underlying concepts, problems, methods, tools and techniques involved in modern process analytics. Following an introduction to basic business process and process analytics concepts, it describes the state of the art in this area before examining different analytics techniques in detail. In this regard, the book covers analytics over different levels of process abstractions, from process execution data and methods for linking and correlating process execution data, to inferring process models, querying process execution data and process models, and scalable process data analytics methods. In addition, it provides a review of commercial process analytics tools and their practical applications. The book is intended for a broad readership interested in business process management and process analytics. It provides researchers with an introduction to these fields by comprehensively classifying the current state of research, by describing in-depth techniques and methods, and by highlighting future research directions. Lecturers will find a wealth of material to choose from for a variety of courses, ranging from undergraduate courses in business process management to graduate courses in business process analytics. Lastly, it offers professionals a reference guide to the state of the art in commercial tools and techniques, complemented by many real-world use case scenarios.
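As a small illustration of the kind of process-execution analysis described here (not an example from the book; the event log is invented), case cycle times and the directly-follows relation can be derived straight from an event log:

```python
import pandas as pd

# Hypothetical event log: one row per executed activity of a process instance.
log = pd.DataFrame({
    "case_id":  ["c1", "c1", "c1", "c2", "c2", "c2", "c2"],
    "activity": ["receive", "check", "approve",
                 "receive", "check", "rework", "approve"],
    "timestamp": pd.to_datetime([
        "2024-01-01 09:00", "2024-01-01 10:30", "2024-01-01 12:00",
        "2024-01-02 08:00", "2024-01-02 09:15", "2024-01-02 11:00",
        "2024-01-03 16:00",
    ]),
}).sort_values(["case_id", "timestamp"])

# Cycle time per case: time between the first and last recorded event.
durations = log.groupby("case_id")["timestamp"].agg(lambda ts: ts.max() - ts.min())
print(durations)

# Directly-follows relation: how often one activity immediately follows another,
# a basic ingredient for inferring a process model from execution data.
log["next_activity"] = log.groupby("case_id")["activity"].shift(-1)
print(log.dropna(subset=["next_activity"]).value_counts(["activity", "next_activity"]))
```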
You may like...
Fundamentals of Spatial Information… by Robert Laurini, Derek Thompson (Hardcover, R1,451)
Data Analytics for Social Microblogging… by Soumi Dutta, Asit Kumar Das, … (Paperback, R3,335)
Opinion Mining and Text Analytics on… by Pantea Keikhosrokiani, Moussa Pourya Asl (Hardcover, R9,276)