This book develops a crowdsourced sensor-cloud service composition framework that takes spatio-temporal aspects into account. It also opens new horizons for service-oriented computing in the direction of applications based on crowdsourced sensor data, within the broader context of the Internet of Things (IoT). How to effectively and efficiently capture, manage and deliver sensed data as user-desired services remains a major challenge for IoT research. The outcome of this research contributes to answering that question by designing a novel service framework and a set of unique service selection and composition frameworks. The service framework for managing crowdsourced sensor data provides a high-level abstraction (the sensor-cloud service) that models crowdsourced sensor data from functional and non-functional perspectives, seamlessly turning raw data into "ready to go" services. A novel indexing model captures and manages the spatio-temporal dynamism of crowdsourced service providers. The composition frameworks focus on the spatio-temporal composition of crowdsourced sensor-cloud services, new territory for existing service-oriented computing research, and a failure-proof model is designed to prevent composition failures caused by fluctuating QoS. Finally, a spatio-temporal incentive model steers the coverage of crowdsourced providers so that the demanded coverage of crowdsourced sensor-cloud services within a region can be achieved. The outcome of this research is expected to create a sensor-services crowdsourcing market and new commercial opportunities focused on applications built on crowdsourced data. The crowdsourced, community-based approach adds significant value to journey planning and map services, creating a competitive edge for technologically minded companies, incentivizing new start-ups and enabling greater market innovation. The book primarily targets researchers and practitioners working in service-oriented computing, the Internet of Things (IoT), smart cities and spatio-temporal travel planning, as well as advanced-level students studying this field. Small and medium entrepreneurs who invest in crowdsourced IoT services and journey-planning infrastructures will also find it valuable.
The book discusses some key scientific and technological developments in high performance computing, identifies significant trends, and defines desirable research objectives. It covers general concepts and emerging systems, software technology, algorithms and applications. Coverage includes hardware, software tools, networks and numerical methods, new computer architectures, and a discussion of future trends. Beyond purely scientific and engineering computing, the book extends to enterprise-wide commercial applications, including papers on the performance and scalability of database servers and Oracle DBMS systems. Audience: most papers are research level, but some are suitable for computer-literate managers and technicians, making the book useful to users of commercial parallel computers.
This book presents the implementation of novel concepts and solutions that enhance the cyber security of administrative and industrial systems and the resilience of economies and societies to cyber and hybrid threats. This goal can be achieved through rigorous information sharing, enhanced situational awareness, advanced protection of industrial processes and critical infrastructures, and proper account of the human factor, as well as through adequate methods and tools for the analysis of big data, including data from social networks, to find the best ways to counter hybrid influence. The implementation of these methods and tools is examined here as part of the process of digital transformation through the incorporation of advanced information technologies, knowledge management, training and testing environments, and organizational networking. The book is of benefit to practitioners and researchers in the field of cyber security and protection against hybrid threats, as well as to policymakers and senior managers with responsibilities in information and knowledge management, security policies, and human resource management and training.
Certification and Security in Inter-Organizational E-Services presents the proceedings of CSES 2004 - the 2nd International Workshop on Certification and Security in Inter-Organizational E-Services held within IFIP WCC in August 2004 in Toulouse, France. Certification and security share a common technological basis in the reliable and efficient monitoring of executed and running processes; they likewise depend on the same fundamental organizational and economic principles. As the range of services managed and accessed through communication networks grows throughout society, and given the legal value that is often attached to data treated or exchanged, it is critical to be able to certify the network transactions and ensure that the integrity of the involved computer-based systems is maintained. This collection of papers documents several important developments, and offers real-life application experiences, research results and methodological proposals of direct interest to systems experts and users in governmental, industrial and academic communities.
The Web is growing at an astounding pace, having surpassed the 8 billion page mark. However, most pages are still designed for human consumption and cannot be processed by machines. This book provides a well-paced introduction to the Semantic Web. It covers a wide range of topics, from new trends (ontologies, rules) to existing technologies (Web Services and software agents) to more formal aspects (logic and inference). It includes real-world (and complete) examples of the application of Semantic Web concepts, and shows how the technology presented and discussed throughout the book can be extended to other application areas.
This book introduces an efficient resource management approach for future spectrum sharing systems. It focuses on providing an optimal resource allocation framework based on carrier aggregation that allocates the resources of multiple carriers efficiently among mobile users. Furthermore, it provides an optimal traffic-dependent pricing mechanism that network providers can use to charge mobile users for the allocated resources. The book presents and compares different resource allocation with carrier aggregation solutions for different spectrum sharing scenarios. The solutions consider the diverse quality-of-experience requirements of the multiple applications running on a user's equipment, since different applications require different levels of performance. In addition, the book addresses the resource allocation problem for spectrum sharing systems that require user discrimination when allocating network resources.
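As a rough illustration of the kind of utility-based allocation such a framework builds on (not the book's own algorithm), the sketch below pools the capacity of several aggregated carriers and splits it among users by maximizing a weighted sum of logarithmic utilities, which has the closed form x_i = C * w_i / sum(w). The user names, weights and capacities are invented for the example; per-carrier constraints, QoE-aware utilities and pricing are omitted.

```python
# A simplified sketch, not the book's framework: proportional-fair allocation
# of pooled carrier capacity. Maximizing sum_i w_i * log(x_i) subject to
# sum_i x_i = C yields the closed form x_i = C * w_i / sum(w).
def proportional_fair_allocation(carrier_capacities, user_weights):
    total_capacity = sum(carrier_capacities)     # capacity pooled via carrier aggregation
    weight_sum = sum(user_weights.values())
    return {user: total_capacity * w / weight_sum for user, w in user_weights.items()}

carriers = [10.0, 6.0]                           # e.g. Mbps available on each aggregated carrier
weights = {"video_user": 3.0, "voip_user": 1.0, "web_user": 2.0}   # hypothetical QoE weights
print(proportional_fair_allocation(carriers, weights))
# -> video_user gets 8.0, voip_user about 2.67, web_user about 5.33
```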
This book presents different use cases in big data applications and the related practical experiences. Many businesses today are increasingly interested in utilizing big data technologies to support their business intelligence, so it is becoming more and more important to understand the practical issues that arise in different use cases. This book provides clear evidence that big data technologies are playing an increasingly important and critical role in a new cross-disciplinary research area between computer science and business.
This book presents a framework for transformation and explains how business goals can be translated into realistic plans that are tangible and yield real results in terms of the top line and the bottom line. Process Transformation is like a tangram puzzle, which has multiple solutions yet is essentially composed of seven 'tans' that hold it together. Based on practical experience and intensive research into existing material, 'Process Tangram' is a simple yet powerful framework that treats Process Transformation as a program. The seven 'tans' are: the transformation program itself, triggers, goals, tools and techniques, culture, communication and success factors. With its segregation into tans and division into core elements, the framework makes it possible to 'pick and choose' and thereby quickly and easily map an organization's specific requirements. Change management and process modeling are covered in detail. In addition, the book approaches managed services as a model of service delivery, which it explores as a case of process transformation. This book will appeal to anyone engaged in business process transformation, be it business process management professionals, change managers, sponsors, program managers or line managers. The book starts with the basics, making it suitable even for students who want to make a career in business process management.
In this book, the authors first address the research issues by providing a motivating scenario, followed by an exploration of the principles and techniques of the challenging topics. They then resolve the research issues raised by developing a series of methodologies. More specifically, the authors study query optimization and tackle query performance prediction for knowledge retrieval. They also handle unstructured data processing and data clustering for knowledge extraction. To optimize the queries issued through interfaces against knowledge bases, the authors propose a cache-based optimization layer between consumers and the querying interface to facilitate querying and address the latency issue. The cache depends on a novel learning method that considers the querying patterns in individuals' historical queries without requiring knowledge of the systems backing the knowledge base. To predict query performance for appropriate query scheduling, the authors examine the queries' structural and syntactical features and apply multiple widely adopted prediction models. Their feature-modelling approach eschews knowledge requirements on both the querying languages and the system. To extract knowledge from unstructured Web sources, the authors examine two kinds of Web sources containing unstructured data: source code from Web repositories and posts in programming question-answering communities. They use natural language processing techniques to pre-process the source code and obtain its natural language elements, and then apply traditional knowledge extraction techniques to extract knowledge. For the data from programming question-answering communities, the authors make an attempt towards building a programming knowledge base, starting with paraphrase identification problems and developing novel features to accurately identify duplicate posts. For domain-specific knowledge extraction, the authors propose a clustering technique to separate knowledge into different groups. They focus on developing a new clustering algorithm that uses manifold constraints in the optimization task and achieves fast and accurate performance. For each model and approach presented in the book, the authors have conducted extensive experiments to evaluate it using either public datasets or synthetic data they generated.
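As a rough sketch of where a cache-based optimization layer sits between query consumers and a knowledge-base endpoint, the snippet below memoizes results keyed on normalized query text with plain LRU eviction. It is only an illustration of the idea: the authors' cache relies on a learning method over historical query patterns, and `execute_query` here is a hypothetical stand-in for the real querying interface.

```python
# Minimal sketch of a query cache between consumers and a knowledge-base
# endpoint. Eviction here is plain LRU, not the learning-based policy the
# book describes.
from collections import OrderedDict

class QueryCache:
    def __init__(self, capacity=128):
        self.capacity = capacity
        self._store = OrderedDict()            # normalized query -> cached result

    @staticmethod
    def _normalize(query):
        # Collapse whitespace and case so trivially different queries share an entry.
        return " ".join(query.lower().split())

    def get_or_execute(self, query, execute_query):
        key = self._normalize(query)
        if key in self._store:
            self._store.move_to_end(key)       # mark as recently used
            return self._store[key]
        result = execute_query(query)          # cache miss: hit the backend
        self._store[key] = result
        if len(self._store) > self.capacity:
            self._store.popitem(last=False)    # evict the least recently used entry
        return result

# Usage with a dummy backend standing in for the real querying interface:
cache = QueryCache(capacity=2)
backend = lambda q: f"results for: {q}"
print(cache.get_or_execute("SELECT ?s WHERE { ?s a :Person }", backend))
```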
Privacy and security risks arising from the application of different data mining techniques to large institutional data repositories have been investigated by a new research domain, so-called privacy preserving data mining. Association rule hiding is a new technique in data mining that studies the problem of hiding sensitive association rules from within the data. Association Rule Hiding for Data Mining addresses the problem of "hiding" sensitive association rules and introduces a number of heuristic solutions. Exact solutions of increased time complexity that have been proposed recently are presented, as well as a number of computationally efficient (parallel) approaches that alleviate time complexity problems, along with a thorough discussion of closely related problems (inverse frequent itemset mining, data reconstruction approaches, etc.). Unsolved problems, future directions and specific examples are provided throughout the book to help the reader study, assimilate and appreciate the important aspects of this challenging problem. Association Rule Hiding for Data Mining is designed for researchers, professors and advanced-level students in computer science studying privacy preserving data mining, association rule mining, and data mining. The book is also suitable for practitioners working in this industry.
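To give a flavour of support-based hiding, the toy sketch below distorts transactions that support a single sensitive itemset until its support falls below a chosen threshold. The data and threshold are invented, and the book's heuristic and exact algorithms are far more careful about side effects on non-sensitive rules; this is only a minimal illustration of the underlying idea.

```python
# Toy illustration of hiding one sensitive itemset by lowering its support.
# The "victim" item choice is arbitrary here; real algorithms pick it to
# minimize distortion of non-sensitive rules.
def support(transactions, itemset):
    return sum(itemset <= t for t in transactions) / len(transactions)

def hide_itemset(transactions, sensitive, min_support):
    transactions = [set(t) for t in transactions]
    victim = next(iter(sensitive))              # item to drop from supporting transactions
    for t in transactions:
        if support(transactions, sensitive) < min_support:
            break                               # sensitive itemset is now hidden
        if sensitive <= t:
            t.discard(victim)                   # distort this transaction
    return transactions

data = [{"bread", "milk"}, {"bread", "milk", "eggs"}, {"milk"}, {"bread", "milk"}]
hidden = hide_itemset(data, sensitive={"bread", "milk"}, min_support=0.5)
print(support(hidden, {"bread", "milk"}))       # now below the 0.5 threshold
```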
This volume contains the final proceedings of the special stream on security in E-government and E-business. This stream was an integral part of the IFIP World Computer Congress 2002, which took place from 26-29 August 2002 in Montreal, Canada. The stream consisted of three events: one tutorial and two workshops. The tutorial was devoted to the theme "An Architecture for Information Security Management," and was presented by Prof. Dr. Basie von Solms (past chairman of IFIP TC 11) and Prof. Dr. Jan Eloff (past chairman of IFIP TC 11 WG 11.2), both from the Rand Afrikaans University - Standard Bank Academy for Information Technology, Johannesburg, South Africa. The main purpose of the tutorial was to present and discuss an Architecture for Information Security Management, and it was of particular value for people involved in, or wanting to find out more about, the management of information security in a company. It provided a reference framework covering all three of the relevant levels or dimensions of Information Security Management. The theme of the first workshop was "E-Government and Security" and it was chaired by Leon Strous, CISA (De Nederlandsche Bank NV, The Netherlands, and chairman of IFIP TC 11) and by Sabina Posadziejewski, I.S.P., MBA (Alberta Innovation and Science, Edmonton, Canada).
Neal Koblitz is a co-inventor of one of the two most popular forms of encryption and digital signature, and his autobiographical memoirs are collected in this volume. Besides his own personal career in mathematics and cryptography, Koblitz details his travels to the Soviet Union, Latin America, Vietnam and elsewhere; political activism; and academic controversies relating to math education, the C. P. Snow "two-culture" problem, and mistreatment of women in academia. These engaging stories fully capture the experiences of a student and later a scientist caught up in the tumultuous events of his generation.
This book highlights practical quantum key distribution systems and research on the implementations of next-generation quantum communication, as well as photonic quantum device technologies. It discusses how the advances in quantum computing and quantum physics have allowed the building, launching and deploying of space exploration systems that are capable of more and more as they become smaller and lighter. It also presents theoretical and experimental research on the potential and limitations of secure communication and computation with quantum devices, and explores how security can be preserved in the presence of a quantum computer, and how to achieve long-distance quantum communication. The development of a real quantum computer is still in the early stages, but a number of research groups have investigated the theoretical possibilities of such computers.
'Securing Web Services' investigates the security-related specifications that encompass message level security, transactions, and identity management.
The information infrastructure, comprising computers, embedded devices, networks and software systems, is vital to day-to-day operations in every sector: information and telecommunications, banking and finance, energy, chemicals and hazardous materials, agriculture, food, water, public health, emergency services, transportation, postal and shipping, government and defense. Global business and industry, governments, indeed society itself, cannot function effectively if major components of the critical information infrastructure are degraded, disabled or destroyed. Critical Infrastructure Protection V describes original research results and innovative applications in the interdisciplinary field of critical infrastructure protection. It also highlights the importance of weaving science, technology and policy in crafting sophisticated, yet practical, solutions that will help secure information, computer and network assets in the various critical infrastructure sectors. Areas of coverage include: Themes and Issues, Control Systems Security, Infrastructure Security, and Infrastructure Modeling and Simulation. This book is the 5th volume in the annual series produced by the International Federation for Information Processing (IFIP) Working Group 11.10 on Critical Infrastructure Protection, an international community of scientists, engineers, practitioners and policy makers dedicated to advancing research, development and implementation efforts focused on infrastructure protection. The book contains a selection of 14 edited papers from the 5th Annual IFIP WG 11.10 International Conference on Critical Infrastructure Protection, held at Dartmouth College, Hanover, New Hampshire, USA, in the spring of 2011. Critical Infrastructure Protection V is an important resource for researchers, faculty members and graduate students, as well as for policy makers, practitioners and other individuals with interests in homeland security. Jonathan Butts is an Assistant Professor of Computer Science at the Air Force Institute of Technology, Wright-Patterson Air Force Base, Ohio, USA. Sujeet Shenoi is the F.P. Walter Professor of Computer Science at the University of Tulsa, Tulsa, Oklahoma, USA.
The Semantic Web aims at machine agents that thrive on explicitly specified semantics of content in order to search, filter, condense, or negotiate knowledge for their human users. A core technology for making the Semantic Web happen, but also for leveraging application areas like Knowledge Management and E-Business, is the field of Semantic Annotation, which turns human-understandable content into a machine-understandable form. This book reports on the broad range of technologies that are used to achieve this translation and nourish 3rd millennium applications. The book starts with a survey of the oldest semantic annotations, viz. the indexing of publications in libraries. It continues with several techniques for the explicit construction of semantic annotations, including approaches for collaboration and Semantic Web metadata. One of the major means for improving the semantic annotation task is information extraction, and much can be learned from the semantic tagging of linguistic corpora. In particular, information extraction is gaining prominence for automating the formerly purely manual annotation task, at least to some extent. An important subclass of information extraction tasks is the goal-oriented extraction of content from HTML and/or XML resources.
ISGC 2009, the International Symposium on Grid Computing, was held at Academia Sinica, Taipei, Taiwan, in April 2009, bringing together prestigious scientists and engineers worldwide to exchange ideas, present challenges and solutions, and introduce future developments in the field of Grid Computing. Managed Grids and Cloud Systems in the Asia-Pacific Research Community presents the latest achievements in grid technology, including Cloud Computing. The volume also covers international projects in Grid Operation, Grid Middleware, E-Science applications, technical developments in grid operations and management, Security and Networking, Digital Libraries and more. The resources used to support these advances, such as volunteer grids, production managed grids, and cloud systems, are discussed in detail. This book is designed for a professional audience of grid users, developers and researchers working in grid computing. Advanced-level students focusing on computer science and engineering will find this book valuable as a reference or secondary textbook.
Processing data streams has raised new research challenges over the last few years. This book provides the reader with a comprehensive overview of stream data processing, including famous prototype implementations like the Nile system and the TinyOS operating system. Applications in security, the natural sciences, and education are presented. The huge bibliography offers an excellent starting point for further reading and future research.
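As a minimal illustration of the kind of operator stream systems are built on (not tied to Nile or TinyOS specifically), the sketch below computes a count-based sliding-window average over a stream of readings; real engines add time-based windows, out-of-order handling and approximate summaries.

```python
# Minimal sliding-window aggregate over a data stream: a count-based window
# of the last N readings, with the average re-emitted on every arrival.
from collections import deque

def windowed_average(stream, window_size):
    window = deque(maxlen=window_size)   # old readings fall out automatically
    for reading in stream:
        window.append(reading)
        yield sum(window) / len(window)  # aggregate recomputed per arrival

sensor_stream = [21.0, 21.5, 22.0, 25.5, 24.0]   # made-up readings for the demo
for avg in windowed_average(sensor_stream, window_size=3):
    print(round(avg, 2))
```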
Data Mining and Applications in Genomics presents data mining algorithms and their applications in genomics, with frontier case studies based on recent and current work at the University of Hong Kong and the Oxford University Computing Laboratory, University of Oxford. It provides a systematic introduction to the use of data mining algorithms as an investigative tool for applications in genomics. The book offers a state-of-the-art account of the tremendous advances in data mining algorithms and their applications in genomics, and also serves as an excellent reference for researchers and graduate students working on data mining algorithms and applications in genomics.
Mobile communications and ubiquitous computing generate large volumes of data. Mining this data can produce useful knowledge, yet individual privacy is at risk. This book investigates the various scientific and technological issues of mobility data, the open problems, and a roadmap for future research. The editors manage a research project called GeoPKDD (Geographic Privacy-Aware Knowledge Discovery and Delivery), and this book relates their findings in 13 chapters covering all related subjects.
This book presents research on artificial intelligence techniques for the energy transition, outlining several applications including production systems, energy production, energy distribution, energy management, renewable energy production, cyber security, Industry 4.0 and the Internet of Things. The book goes beyond standard applications by placing a specific focus on the use of AI techniques to address the challenges related to the different applications and topics of the energy transition. The contributions are classified according to the market and actor interactions (service providers, manufacturers, customers, integrators, utilities, etc.), the smart grid (SG) architecture model (physical layer, infrastructure layer, and business layer), the digital twin of the SG (business model, operational model, fault/transient model, and asset model), and the application domain (demand-side management, load monitoring, microgrids, energy consulting for residents and utilities, energy saving, dynamic pricing, revenue management, smart meters, etc.).
Communications and Multimedia Security is an essential reference for both academic and professional researchers in the fields of communications and multimedia security. This state-of-the-art volume presents the proceedings of the Eighth Annual IFIP TC-6 TC-11 Conference on Communications and Multimedia Security, held in September 2004 in Windermere, UK. The papers presented here represent the very latest developments in security research from leading people in the field. They explore a wide variety of subjects including privacy protection and trust negotiation, mobile security, applied cryptography, and the security of communication protocols. Of special interest are several papers that address security in the Microsoft .NET architecture and the threats that builders of web service applications need to be aware of; these papers resulted from research sponsored by Microsoft at five European university research centers. This collection will be important not only for multimedia security experts and researchers, but also for all teachers and administrators interested in communications security.
This volume introduces a series of data-driven computational methods for analyzing group processes through didactic and tutorial-based examples. Group processes are of central importance to many sectors of society, including government, the military, health care, and corporations. Computational methods are better suited than traditional methodologies to handle (potentially huge) group process data because of their more flexible assumptions and their capability to handle real-time trace data. Indeed, the use of methods under the name of computational social science has exploded in recent years. However, attention has been focused on original research rather than pedagogy, leaving those interested in obtaining computational skills without a much-needed resource. Although the methods covered here can be applied to wider areas of social science, they are specifically tailored to group process research. A number of data-driven methods adapted to group process research are demonstrated in this volume, including text mining, relational event modeling, social simulation, machine learning, social sequence analysis, and response surface analysis. To help readers take advantage of these new opportunities, the book provides clear examples (including code) of group processes in various contexts, setting guidelines and best practices for future work to build on. This volume will be of great benefit to those willing to learn computational methods, including academics such as graduate students and faculty, multidisciplinary professionals and researchers working on organization and management science, and consultants for various types of organizations and groups.
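As a tiny, illustrative example in the spirit of the social sequence analysis covered in the volume (not code from the book), the sketch below counts how often each coded group interaction act is followed by each other act; the event codes and sequence are invented for the demonstration.

```python
# Count first-order transitions between coded group interaction acts.
from collections import Counter

events = ["propose", "question", "clarify", "agree", "propose", "disagree", "clarify", "agree"]

transitions = Counter(zip(events, events[1:]))   # (previous act, next act) -> frequency
for (prev, nxt), count in transitions.most_common():
    print(f"{prev} -> {nxt}: {count}")
```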
Examples abound in database applications of well-formulated queries running slowly, even if all levels of the database are properly tuned. It is essential to address each level separately by focusing first on underlying principles and root causes, and only then proposing both theoretical and practical solutions. "Database Performance Tuning and Optimization" takes exactly this approach, using Oracle 8i as the example RDBMS. The book combines theory with practical tools (in the form of Oracle and UNIX shell scripts) to address the tuning and optimization issues of DBAs and developers, irrespective of whether they use Oracle. Topics and features:
* An integrated approach to tuning that improves all three levels of a database (conceptual, internal, and external) for optimal performance
* Balances theory with practice, developing underlying principles and then applying them to other RDBMSs, not just Oracle
* Includes a CD-ROM containing all scripts and methods utilized in the book
* Coverage of data warehouses gives readers much-needed principles and tools for tuning large reporting databases
* Coverage of web-based databases
* Appendix B shows how to create an instance, its associated database and all its objects
* Provides useful exercises, references, and Oracle 8i and select 9i examples
Based on nearly two decades of experience as an Oracle developer and DBA, the author delivers comprehensive coverage of the fundamental principles and methodologies of tuning and optimizing database performance. Database professionals and practitioners with some experience developing, implementing, and maintaining relational databases will find the work an essential resource. It is also suitable for professional short courses and self-study.
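As a minimal, self-contained illustration of the internal-level tuning principle the book develops (the book itself works with Oracle 8i and shell scripts, not SQLite), the sketch below shows the optimizer's plan for the same query before and after an index is added on the filter column.

```python
# The same query goes from a full table scan to an index search once an
# index exists on the filtered column. SQLite is used only to keep this
# example runnable without an Oracle installation.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL)")
conn.executemany("INSERT INTO orders (customer_id, total) VALUES (?, ?)",
                 [(i % 100, i * 1.5) for i in range(10_000)])

query = "SELECT total FROM orders WHERE customer_id = 42"

print(conn.execute("EXPLAIN QUERY PLAN " + query).fetchall())  # plan: full table scan

conn.execute("CREATE INDEX idx_orders_customer ON orders (customer_id)")
print(conn.execute("EXPLAIN QUERY PLAN " + query).fetchall())  # plan: search using the index
```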
New approaches are needed to move us towards developing effective systems for problem solving and decision making: systems that can deal with complex and ill-structured situations, function in information-rich environments, cope with imprecise information, and rely on their knowledge and learn from experience - in other words, intelligent systems. One of the main efforts in intelligent systems development is focused on knowledge and information management, which is regarded as the crucial issue in supporting smart decision making. The 13 chapters of this book represent a sample of such effort. The overall aim of the book is to provide guidelines for developing tools for the smart processing of knowledge and information. Still, the guide does not presume to give ultimate answers; rather, it poses ideas and case studies to explore the complexities and challenges of modern knowledge management issues. It also encourages its reader to become aware of the multifaceted interdisciplinary character of such issues. The premise of this book is that its reader will leave it with a heightened ability to think - in different ways - about developing, evaluating, and supporting intelligent knowledge and information management systems in real-life environments.