This book surveys big data tools used in macroeconomic forecasting and addresses related econometric issues, including how to capture dynamic relationships among variables; how to select parsimonious models; how to deal with model uncertainty, instability, non-stationarity, and mixed frequency data; and how to evaluate forecasts, among others. Each chapter is self-contained with references, and provides solid background information, while also reviewing the latest advances in the field. Accordingly, the book offers a valuable resource for researchers, professional forecasters, and students of quantitative economics.
This book presents the combined proceedings of the 7th International Conference on Computer Science and its Applications (CSA-15) and the International Conference on Ubiquitous Information Technologies and Applications (CUTE 2015), both held in Cebu, Philippines, December 15-17, 2015. The aim of these two meetings was to promote discussion and interaction among academics, researchers and professionals in the field of computer science, covering topics including mobile computing, security and trust management, multimedia systems and devices, networks and communications, databases and data mining, and ubiquitous computing technologies such as ubiquitous communication and networking, ubiquitous software technology, ubiquitous systems and applications, and security and privacy. These proceedings reflect the state of the art in the development of computational methods, numerical simulations, error and uncertainty analysis and novel applications of new processing techniques in engineering, science, and other disciplines related to computer science.
Whether you are brand new to data mining or working on your tenth predictive analytics project, "Commercial Data Mining" will be there for you as an accessible reference outlining the entire process and related themes. In this book, you'll learn that your organization does not need a huge volume of data or a Fortune 500 budget to generate business value from existing information assets. Expert author David Nettleton guides you through the process from beginning to end, covering everything from business objectives to data sources and selection, to analysis and predictive modeling. "Commercial Data Mining" includes case studies and practical examples from Nettleton's more than 20 years of commercial experience. Real-world cases covering customer loyalty, cross-selling, and audience prediction in industries including insurance, banking, and media illustrate the concepts and techniques explained throughout the book.
This book focuses on a combination of theoretical advances in the Internet of Things, cloud computing and its real-life applications to serve society. The book discusses technological innovations, authentication, mobility support and security, group rekeying schemes and a range of concrete applications. The Internet has restructured not only global interrelations, but also an unbelievable number of personal characteristics. Machines are increasingly able to control innumerable autonomous gadgets via the Internet, creating the Internet of Things, which facilitates intelligent communication between humans and things, and among things. The Internet of Things is an active area of current research, and technological advances have been supported by real-life applications to establish their soundness. The material in this book includes concepts, figures, graphs, and tables to guide researchers through the Internet of Things and its applications for society.
This book mainly focuses on cloud security and high performance computing for cloud auditing. The book discusses emerging challenges and techniques developed for high performance semantic cloud auditing, and presents the state of the art in cloud auditing, computing and security techniques, with a focus on technical aspects and the feasibility of auditing issues in federated cloud computing environments. In summer 2011, the United States Air Force Research Laboratory (AFRL) CyberBAT Cloud Security and Auditing Team initiated the exploration of the cloud security challenges and future cloud auditing research directions that are covered in this book. This work was supported by United States government funds from the Air Force Office of Scientific Research (AFOSR), the AFOSR Summer Faculty Fellowship Program (SFFP), the Air Force Research Laboratory (AFRL) Visiting Faculty Research Program (VFRP), the National Science Foundation (NSF) and the National Institutes of Health (NIH). All chapters were partially supported by the AFOSR Information Operations and Security Program extramural and intramural funds (AFOSR/RSL Program Manager: Dr. Robert Herklotz). Key features:
- Contains surveys of cyber threats and security issues in cloud computing and presents secure cloud architectures
- Presents in-depth cloud auditing techniques, federated cloud security architectures, cloud access control models, and access assured information sharing technologies
- Outlines a wide range of challenges and provides solutions to manage and control very large and complex data sets
This book systematically describes telemetry theory and methods for aircraft in flight test. Test targets of telemetry in flight test include airplanes, helicopters, unmanned aerial vehicles, aerostats, carrier-based aircraft, airborne equipment (systems), weapon systems, (powered) aircraft scale models, aircraft external stores (e.g., nacelles, auxiliary tanks), ejection seats, and so on. The book collects the author's telemetry research work and presents methods that have been verified in real-world tests. The book has eight chapters: the first three discuss the theoretical basis of telemetry, while the other five focus on the methods used in flight tests. Unlike other professional textbooks, this book describes practical telemetry theory and combines theory and engineering practice to offer readers a comprehensive and systematic overview of telemetry in flight test.
The end of the 20th century witnessed an information revolution that introduced a host of new economic efficiencies. This economic change was underpinned by rapidly growing networks of infrastructure that have become increasingly complex. In this new era of global security we are now forced to ask whether our private efficiencies have led to public vulnerabilities, and if so, how we can make ourselves secure without hampering the economy. In order to answer these questions, Sean Gorman provides a framework for how vulnerabilities are identified and cost-effectively mitigated, as well as how resiliency and continuity of infrastructures can be increased. Networks, Security and Complexity goes on to address specific concerns such as determining criticality and interdependency, the most effective means of allocating scarce resources for defense, and whether diversity is a viable strategy. The author provides the economic, policy, and physics background to the issues of infrastructure security, along with tools for taking first steps in tackling these security dilemmas. He includes case studies of infrastructure failures and vulnerabilities, an analysis of threats to US infrastructure, and a review of the economics and geography of agglomeration and efficiency. This critical and controversial book will garner much attention and spark an important dialogue. Policymakers, security professionals, infrastructure operators, academics, and readers following homeland security issues will find this volume of great interest.
This book provides stepwise discussion, an exhaustive literature review, detailed analysis and discussion, rigorous experimentation results (using several analytics tools), and an application-oriented approach to data analytics using artificial intelligence to make systems stronger (i.e., impossible to breach). The recent decade has seen many serious cyber breaches of government databases and of public profiles on online social networks. Today artificial intelligence and machine learning are redefining every aspect of cyber security. From improving organizations' ability to anticipate and thwart breaches and protecting the proliferating number of threat surfaces with Zero Trust Security frameworks, to making passwords obsolete, AI and machine learning are essential to securing the perimeter of any business. The book is useful for researchers, academics, industry players, data engineers, data scientists, governmental organizations, and non-governmental organizations.
This text integrates different mobility data handling processes, from database management to multi-dimensional analysis and mining, into a unified presentation driven by the spectrum of requirements raised by real-world applications. It presents a step-by-step methodology to understand and exploit mobility data: collecting and cleansing data, storage in Moving Object Database (MOD) engines, indexing, processing, analyzing and mining mobility data. Emerging issues, such as semantic and privacy-aware querying and mining as well as distributed data processing, are also covered. Theoretical presentation is smoothly interleaved with hands-on exercises and case studies involving an actual MOD engine. The authors are established experts who address both the theoretical and practical dimensions of the field and also present valuable prototype software. The background context, clear explanations and sample exercises make this an ideal textbook for graduate students studying database management, data mining and geographic information systems.
The book reports on advanced theories and methods in two related engineering fields: electrical and electronic engineering, and communications engineering and computing. It highlights areas of global and growing importance, such as renewable energy, power systems, mobile communications, security and the Internet of Things (IoT). The contributions cover a number of current research issues, including smart grids, photovoltaic systems, wireless power transfer, signal processing, 4G and 5G technologies, IoT applications, mobile cloud computing and many more. Based on the proceedings of the first International Conference on Emerging Trends in Electrical, Electronic and Communications Engineering (ELECOM 2016), held in Voilà Bagatelle, Mauritius from November 25 to 27, 2016, the book provides graduate students, researchers and professionals with a snapshot of the state-of-the-art and a source of new ideas for future research and collaborations.
The Internet of Things (IoT) usually refers to a world-wide network of interconnected heterogeneous objects (sensors, actuators, smart devices, smart objects, RFID, embedded computers, etc.) that are uniquely addressable and based on standard communication protocols. Beyond this definition, a new view of the IoT is emerging: a loosely coupled, decentralized system of cooperating smart objects (SOs). An SO is an autonomous, physical/digital object augmented with sensing/actuating, processing, storing, and networking capabilities. SOs are able to sense/actuate, store, and interpret information created within themselves and in the neighbouring external world where they are situated, act on their own, cooperate with each other, and exchange information with other kinds of electronic devices and human users. However, such an SO-oriented IoT raises many in-the-small and in-the-large issues involving SO programming, IoT system architecture/middleware, and methods/methodologies for the development of SO-based applications. This book specifically focuses on exploring recent advances in architectures, algorithms, and applications for an Internet of Things based on smart objects. Topics covered include, but are not necessarily limited to:
- Methods for SO development
- IoT Networking
- Middleware for SOs
- Data Management for SOs
- Service-oriented SOs
- Agent-oriented SOs
- Applications of SOs in Smart Environments: Smart Cities, Smart Health, Smart Buildings, etc.
- Advanced IoT Projects
This book examines the methodological foundations of the Big Data-driven world, formulates its concept within the frameworks of modern control methods and theories, and approaches the peculiarities of Control Technologies as a specific sphere of the Big Data-driven world, distinguished in the modern Digital Economy. The book studies the genesis of the transition of mathematical and information methods from data analysis and processing to knowledge discovery and predictive analytics in the 21st century. In addition, it analyzes the conditions for developing and implementing Big Data analysis approaches in investigative activities, and determines the role and meaning of global networks as platforms for the establishment of legislation and regulations in the Big Data-driven world. The book examines that world through the prism of legislation issues and substantiates the scientific and methodological approaches to studying modern mechanisms of counteracting terrorism and extremism under the new challenges posed by the dissemination and accessibility of socially dangerous information. It also systematizes successful experiences of implementing Big Data solutions in different countries and analyzes the causal connections of the formation of the Digital Economy from the perspective of new technological challenges. The book's target audience includes scientists, students, and PhD and Master's students who conduct scientific research on the topic of Big Data, not only in the field of IT and data science but also in connection with the legislative regulation aspects of the modern information society. It also includes practitioners and experts, as well as state authorities and representatives of international organizations interested in creating mechanisms for implementing Digital Economy projects in the Big Data-driven world.
While emerging information and internet ubiquitous technologies provide tremendous positive opportunities, there are still numerous vulnerabilities associated with technology. Attacks on computer systems are increasing in sophistication and potential devastation more than ever before. As such, organizations need to stay abreast of the latest protective measures and services to prevent cyber attacks. "The Handbook of Research on Information Security and Assurance" offers comprehensive definitions and explanations on topics such as firewalls, information warfare, encryption standards, and social and ethical concerns in enterprise security. Edited by scholars in information science, this reference provides tools to combat the growing risk associated with technology.
This book is devoted to the modeling and understanding of complex urban systems. This second volume of Understanding Complex Urban Systems focuses on the challenges of the modeling tools, concerning, e.g., the quality and quantity of data and the selection of an appropriate modeling approach. It is meant to support urban decision-makers (including municipal politicians, spatial planners, and citizen groups) in choosing an appropriate modeling approach for their particular modeling requirements. The contributors to this volume are from different disciplines, but all share the same goal: optimizing the representation of complex urban systems. They present and discuss a variety of approaches for dealing with data-availability problems and for finding appropriate modeling approaches, and not only in terms of computer modeling. The selection of articles featured in this volume reflects a broad variety of new and established modeling approaches, such as:
- An argument for using Big Data methods in conjunction with Agent-based Modeling;
- The introduction of a participatory approach involving citizens, in order to utilize an Agent-based Modeling approach to simulate urban-growth scenarios;
- A presentation of semantic modeling to enable a flexible application of modeling methods and a flexible exchange of data;
- An article about a nested-systems approach to analyzing a city's interdependent subsystems (according to these subsystems' different velocities of change);
- An article about methods that use Luhmann's system theory to characterize cities as systems composed of flows;
- An article that demonstrates how the Sen-Nussbaum Capabilities Approach can be used in urban systems to measure household well-being shifts that occur in response to the resettlement of urban households;
- A final article that illustrates how Adaptive Cycles of Complex Adaptive Systems, as well as innovation, can be applied to gain a better understanding of cities and to promote more resilient and more sustainable urban futures.
The objective of this book is to contribute to the development of intelligent information and database systems with the essentials of current knowledge, experience and know-how. The book contains a selection of 40 chapters based on original research presented as posters during the 8th Asian Conference on Intelligent Information and Database Systems (ACIIDS 2016), held on 14-16 March 2016 in Da Nang, Vietnam. The papers to some extent reflect the achievements of scientific teams from 17 countries on five continents. The volume is divided into six parts: (a) Computational Intelligence in Data Mining and Machine Learning, (b) Ontologies, Social Networks and Recommendation Systems, (c) Web Services, Cloud Computing, Security and Intelligent Internet Systems, (d) Knowledge Management and Language Processing, (e) Image, Video, Motion Analysis and Recognition, and (f) Advanced Computing Applications and Technologies. The book is an excellent resource for researchers and practitioners working in artificial intelligence, multimedia, networks and big data technologies, as well as for students interested in computer science and other related fields.
This book presents a comprehensive report on the evolution of Fuzzy Logic since its formulation in Lotfi Zadeh's seminal paper on "fuzzy sets," published in 1965. In addition, it features a stimulating sampling from the broad field of research and development inspired by Zadeh's paper. The chapters, written by pioneers and prominent scholars in the field, show how fuzzy sets have been successfully applied to artificial intelligence, control theory, inference, and reasoning. The book also reports on theoretical issues; features recent applications of Fuzzy Logic in the fields of neural networks, clustering, data mining and software testing; and highlights an important paradigm shift caused by Fuzzy Logic in the area of uncertainty management. Conceived by the editors as an academic celebration of the fiftieth anniversary of the 1965 paper, this work is a must-have for students and researchers wishing to gain an inspiring picture of the potentialities, limitations, achievements and accomplishments of Fuzzy Logic-based systems.
This book describes in detail sampling techniques that can be used for unsupervised and supervised cases, with a focus on sampling techniques for machine learning algorithms. It covers theory and models of sampling methods for managing scalability and the "curse of dimensionality", their implementations, evaluations, and applications. A large part of the book is dedicated to databases comprising standard feature vectors, and a special section is reserved for the handling of more complex objects and dynamic scenarios. The book is ideal for anyone teaching or learning pattern recognition who is interested in the big data challenge. It provides an accessible introduction to the field and discusses the state of the art concerning sampling techniques for supervised and unsupervised tasks. The book provides a comprehensive description of sampling techniques for unsupervised and supervised tasks; describes the implementation and evaluation of algorithms that simultaneously manage scalability problems and the curse of dimensionality; and addresses the role of sampling in dynamic scenarios, sampling when dealing with complex objects, and new challenges arising from big data. "This book represents a timely collection of state-of-the-art research on sampling techniques, suitable for anyone who wants to become more familiar with these helpful techniques for tackling the big data challenge." (M. Emre Celebi, Ph.D., Professor and Chair, Department of Computer Science, University of Central Arkansas) "In science the difficulty is not to have ideas, but it is to make them work." (Carlo Rovelli)
In recent years, searching for source code on the web has become increasingly common among professional software developers and is emerging as an area of academic research. This volume surveys past research and presents the state of the art in the area of "code retrieval on the web." This work is concerned with the algorithms, systems, and tools that allow programmers to search for source code on the web, and with empirical studies of these inventions and practices. It is a label that we apply to a set of related research from software engineering, information retrieval, human-computer interaction, and management, as well as commercial products. The division of code retrieval on the web into snippet remixing and component reuse is driven both by empirical data and by analysis of existing search engines and tools. Contributors include leading researchers from human-computer interaction, software engineering, programming languages, and management. "Finding Source Code on the Web for Remix and Reuse" consists of five parts. Part I, "Programmers and Practices," consists of a retrospective chapter and two empirical studies on how programmers search the web for source code. Part II, "From Data Structures to Infrastructures," covers how the creation of ground-breaking search engines for code retrieval required ingenuity in the adaptation of existing technology and in the creation of new algorithms and data structures. Part III, "Reuse: Components and Projects," focuses on components and projects that are reused with minimal modification. Part IV, "Remix: Snippets and Answers," examines how source code from the web can also be used as solutions to problems and answers to questions. The book concludes with Part V, "Looking Ahead," which looks at the future of programming, the legalities of software reuse and remix, and the implications of current intellectual property law for the future of software development. The story "Richie Boss: Private Investigator Manager" was selected as the winner of a crowdfunded short story contest.
With the proliferation of Software-as-a-Service (SaaS) offerings, it is becoming increasingly important for individual SaaS providers to operate their services at a low cost. This book investigates SaaS from the perspective of the provider and shows how operational costs can be reduced by using "multi-tenancy," a technique for consolidating a large number of customers onto a small number of servers. Specifically, the book addresses multi-tenancy on the database level, focusing on in-memory column databases, which are the backbone of many important new enterprise applications. For efficiently implementing multi-tenancy in a farm of databases, two fundamental challenges must be addressed: (i) workload modeling and (ii) data placement. The first involves estimating the (shared) resource consumption for multi-tenancy on a single in-memory database server. The second consists of assigning tenants to servers in a way that minimizes the number of required servers (and thus costs) based on the assumed workload model. This step also entails replicating tenants for performance and high availability. This book presents novel solutions to both problems.
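To make the data-placement challenge described in this blurb more concrete, the following is a minimal illustrative sketch, not taken from the book, of one simple strategy: treat placement as a bin-packing problem and greedily assign tenants, ordered by estimated resource demand, to the fewest servers whose capacity is not exceeded. The scalar capacity model, the function name, and the example numbers are all assumptions for illustration; the book's actual workload model and placement algorithms are more elaborate (e.g., they also handle replication).

```python
# Illustrative sketch only (not the book's algorithm): greedy first-fit-decreasing
# placement of tenants onto servers, treating placement as one-dimensional bin packing.
# The scalar per-server capacity model and all names/values below are assumptions.

def place_tenants(demands, server_capacity):
    """Assign each tenant (indexed by its position in `demands`, with an estimated
    resource demand) to a server, opening a new server only when no existing
    server has enough remaining capacity."""
    remaining = []   # remaining capacity of each opened server
    placement = {}   # tenant index -> server index
    # Placing larger tenants first usually reduces the number of servers needed.
    for tenant, demand in sorted(enumerate(demands), key=lambda td: -td[1]):
        for server, free in enumerate(remaining):
            if demand <= free:
                remaining[server] -= demand
                placement[tenant] = server
                break
        else:
            remaining.append(server_capacity - demand)
            placement[tenant] = len(remaining) - 1
    return placement, len(remaining)

# Example: five tenants with assumed normalized demands, servers of capacity 1.0.
assignment, servers_used = place_tenants([0.6, 0.3, 0.5, 0.2, 0.4], 1.0)
print(assignment, servers_used)  # two servers suffice for this toy workload
```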
This book offers practical advice on managing enterprise modeling (EM) projects and facilitating participatory EM sessions. Modeling activities often involve groups of people, and models are created in a participatory way. Ensuring that this is done efficiently requires dedicated individuals who know how to organize modeling projects and sessions, how to manage discussions during these sessions, and what aspects influence the success and efficiency of modeling in practice. The book also includes a summary of the theoretical background to EM, although participatory modeling can also be used in conjunction with methods not designed specifically for EM, such as those for goal-oriented requirements engineering and information systems analysis. The first four chapters present an overview of enterprise modeling from various viewpoints (including methods, processes and organizational challenges), providing a background for those who need to refresh their basic knowledge. The next six chapters form the core of the book and detail the roles and competences needed in an EM project, typical stakeholder behaviors and how to handle them, tools and methods for managing participatory modeling and facilitation, and how to train modeling experts in these social aspects of modeling. Lastly, a concluding chapter presents a summary and an outlook on current research in participatory EM. This book is intended for anybody who wants to learn more about how to facilitate participatory modeling in practice and how to set up and carry out EM projects. It does not require any in-depth knowledge about specific EM methods and tools, and can be used by students and lecturers for courses on participatory modeling, and by practitioners wanting to extend their knowledge of social and organizational topics in order to become experienced facilitators and EM project managers.
In this third edition of Vehicle Accident Analysis & Reconstruction Methods, Raymond M. Brach and R. Matthew Brach have expanded and updated their essential work for professionals in the field of accident reconstruction. Most accidents can be reconstructed effectively using calculations together with investigative and experimental data: the authors present the latest scientific, engineering, and mathematical reconstruction methods, providing a firm scientific foundation for practitioners. Accidents that cannot be reconstructed using the methods in this book are rare. In recent decades, the field of crash reconstruction has been transformed through the use of technology. The advent of event data recorders (EDRs) on vehicles signaled the era of modern crash reconstruction, which utilizes the same physical evidence that was previously available as well as electronic data that are measured/captured before, during, and after the collision. There is increased demand for more professional and accurate reconstruction as more crash data become available from vehicle sensors. The third edition of this essential work includes a new chapter on the use of EDRs, as well as examples using EDR data in accident reconstruction. Early chapters feature foundational material that is necessary for the understanding of vehicle collisions and vehicle motion; later chapters present applications of the methods and include example reconstructions. As a result, Vehicle Accident Analysis & Reconstruction Methods remains the definitive resource in accident reconstruction.
Currently there are major challenges in data mining applications in the geosciences. This is due primarily to the fact that there is a wealth of available mining data amid an absence of the knowledge and expertise necessary to analyze and accurately interpret that data. Most geoscientists have no practical knowledge or experience using data mining techniques, and the few that do typically lack expertise in using data mining software and in selecting the most appropriate algorithms for a given application. This leads to a paradoxical scenario of "rich data but poor knowledge." The true solution is to apply data mining techniques to geosciences databases and to modify these techniques for practical applications. Authored by a global thought leader in data mining, "Data Mining and Knowledge Discovery for Geoscientists" addresses these challenges by summarizing the latest developments in geosciences data mining and arming scientists with the ability to apply key concepts to effectively analyze and interpret vast amounts of critical information.
This book is an important outcome of the Fifth World Internet Conference. It provides a comprehensive review of China's Internet development, especially the new practices and achievements of 2018, and offers a systematic account of China's experience in Internet development and governance. This year, the book improves China's Internet Development Index System, optimizes the algorithm model, and enhances data collection in order to assess and reflect Internet development more comprehensively, objectively and scientifically.
This book documents the creation of the Bichitra Online Tagore Variorum, a publicly accessible database of Rabindranath Tagore's complete works in Bengali and English totaling some 140,000 pages of primary material. Chapters cover innovative aspects of the site, all replicable in other projects: a hyperbibliography; a search engine and hyperconcordance working across the database; and a unique collation program comparing variant texts at three levels. There are also chapters on the special problems of processing manuscripts, and on planning the website. Early chapters take readers through the history of the project, an overview of Tagore's works, and the Bengali writing system with the challenges of adapting it to electronic form. The name Bichitra, meaning "various" in Bengali, alludes both to the great variety of Tagore's works and to their various stages of composition. Beyond their literary excellence, they are notable for their sheer quantity, the number of variant forms of a great many items, and their afterlife in translation, often the poet's own. Seldom if ever has the same writer revised his material and recast it across genres on such a scale. Tagore won the Nobel Prize in 1913. By its value-added presentation of this range of material, Bichitra can be a model for future databases covering an author's complete works or other major corpus of texts. It offers vastly expanded access to Tagore's writings, and enables new kinds of research including computational text analysis. The "book of the website" shows in technical and human terms how researchers with interests in art, literature and technology can collaborate on cultural informatics projects.
This book constitutes the refereed post-conference proceedings of the 10th IFIP WG 5.14 International Conference on Computer and Computing Technologies in Agriculture, CCTA 2016, held in Dongying, China, in October 2016. The 55 revised papers presented were carefully reviewed and selected from 128 submissions. They cover a wide range of interesting theories and applications of information technology in agriculture, including intelligent sensing, cloud computing, key technologies of the Internet of Things, and precision agriculture; animal husbandry information technology, including Internet + modern animal husbandry, livestock big data platforms and cloud computing applications, intelligent breeding equipment, and precision production models; and aquatic product networking and big data, including fishery IoT, intelligent aquaculture facilities, and big data applications.