This book is open access under a CC BY-NC 4.0 license. This volume presents several case studies highlighting the latest findings in Industry 4.0 projects utilizing S-BPM features. Their potential is explored in detail, while the limits of engineering a company from a communication-centred perspective are also discussed. After a general introduction and an overview of the book in chapter 1, chapter 2 starts by condensing the industrial challenges driven by the German "Industry 4.0" trend to form a concrete vision for future production industries. Subsequently, chapter 3 introduces the basic concepts of S-BPM and its capabilities, in particular for supporting the restructuring of processes. The next three chapters then present various case studies, e.g. at an SME offering the production of atypical, unique and special purpose machinery, equipment and technologically complex units particularly useful in the automotive and electronic industries; and at a further SME producing highly customized floor cleaning machines. Rounding out the coverage, the last two chapters summarize the achievements and lessons learned with regard to the road ahead. Overall, the book provides a realistic portrait of the status quo based on current findings, and outlines the future activities to be pursued in order to establish stakeholder-centred digital production systems. As such, developers, educators, and practitioners will find both the conceptual background and results from the field reflecting the state of the art in vertical and horizontal process integration.
This thesis deals with two important and very timely aspects of future power system operation - assessment of demand flexibility and advanced demand side management (DSM) facilitating flexible and secure operation of the power network. It provides a clear and comprehensive literature review in these two areas and states precisely the original contributions of the research. The book first demonstrates the benefits of data mining for a reliable assessment of demand flexibility and its composition, even with very limited observability of the end-users. It then illustrates the importance of accurate load modelling for efficient application of DSM, and considers different criteria in designing DSM programmes to achieve several objectives of network performance simultaneously. Finally, it demonstrates the importance of considering realistic assumptions when planning and estimating the success of DSM programmes. The findings presented here have both scientific and practical significance; they provide a basis for further research, and can be used to guide future applications in industry. The author gained her BSc and MSc degrees in electrical engineering from the University of Belgrade in 2011 and 2012 respectively, and her PhD from the University of Manchester. She has presented at several conferences, winning runner-up prizes in poster presentation at three, and has authored or co-authored more than 40 journal, conference and technical papers.
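As a minimal illustration of the kind of data-mining approach to flexibility assessment described above (not the thesis's own method), the sketch below clusters daily smart-meter load profiles so that each cluster centroid exposes a typical consumption shape; the data and cluster count are hypothetical.

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical smart-meter data: 200 households x 24 hourly readings (kW),
# mixing a midday-peaking group and an evening-peaking group.
rng = np.random.default_rng(0)
profiles = np.vstack([
    1.0 + 0.8 * np.sin(np.linspace(0, np.pi, 24)) + rng.normal(0, 0.1, (100, 24)),
    0.6 + 0.9 * np.exp(-((np.arange(24) - 19) ** 2) / 8) + rng.normal(0, 0.1, (100, 24)),
])

# Group similar daily shapes; each centroid is a typical load pattern whose
# peak timing hints at how much demand could plausibly be shifted.
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(profiles)
for c in km.cluster_centers_:
    print(f"peak hour {int(c.argmax()):2d}, peak {c.max():.2f} kW")
```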
Cryptography has experienced rapid development, with major advances recently in both secret and public key ciphers, cryptographic hash functions, cryptographic algorithms and multiparty protocols, including their software engineering correctness verification, and various methods of cryptanalysis. This textbook introduces the reader to these areas, offering an understanding of the essential, most important, and most interesting ideas, based on the authors' teaching and research experience. After introducing the basic mathematical and computational complexity concepts, and some historical context, including the story of Enigma, the authors explain symmetric and asymmetric cryptography, electronic signatures and hash functions, PGP systems, public key infrastructures, cryptographic protocols, and applications in network security. In each case the text presents the key technologies, algorithms, and protocols, along with methods of design and analysis, while the content is characterized by a visual style and all algorithms are presented in readable pseudocode or using simple graphics and diagrams. The book is suitable for undergraduate and graduate courses in computer science and engineering, particularly in the area of networking, and it is also a suitable reference text for self-study by practitioners and researchers. The authors assume only elementary mathematical experience, since the text covers the necessary foundational mathematics and computational complexity theory.
The first part of this book covers the key concepts of cryptography on an undergraduate level, from encryption and digital signatures to cryptographic protocols. Essential techniques are demonstrated in protocols for key exchange, user identification, electronic elections and digital cash. In the second part, more advanced topics are addressed, such as the bit security of one-way functions and computationally perfect pseudorandom bit generators. The security of cryptographic schemes is a central topic. Typical examples of provably secure encryption and signature schemes and their security proofs are given. Though particular attention is given to the mathematical foundations, no special background in mathematics is presumed. The necessary algebra, number theory and probability theory are included in the appendix. Each chapter closes with a collection of exercises. In the second edition the authors added a complete description of the AES, an extended section on cryptographic hash functions, and new sections on random oracle proofs and public-key encryption schemes that are provably secure against adaptively-chosen-ciphertext attacks. The third edition is a further substantive extension, with new topics added, including: elliptic curve cryptography; Paillier encryption; quantum cryptography; the new SHA-3 standard for cryptographic hash functions; a considerably extended section on electronic elections and Internet voting; mix nets; and zero-knowledge proofs of shuffles. The book is appropriate for undergraduate and graduate students in computer science, mathematics, and engineering.
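To make the public-key concepts such books build on concrete, here is a toy RSA example of encryption and signing; it uses tiny primes and no padding, so it is deliberately insecure and for illustration only.

```python
# Toy RSA: public-key encryption and signing in a few lines.
# Tiny primes, no padding -- never use this for real cryptography.
p, q = 61, 53
n, phi = p * q, (p - 1) * (q - 1)
e = 17
d = pow(e, -1, phi)            # private exponent: e*d = 1 (mod phi)

m = 42
c = pow(m, e, n)               # encrypt with the public key (n, e)
assert pow(c, d, n) == m       # decrypt with the private key d

sig = pow(m, d, n)             # "sign" by exponentiating with d
assert pow(sig, e, n) == m     # anyone can verify with the public key
```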
Through 11 chapters, this book discusses the problems, challenges, and future research directions associated with recent technologies, such as artificial intelligence and the Internet of Things (IoT), that can help the environment and healthcare sectors reduce the impact of COVID-19.
The Digital Humanities have arrived at a moment when digital Big Data is becoming more readily available, opening exciting new avenues of inquiry but also new challenges. This pioneering book describes and demonstrates the ways these data can be explored to construct cultural heritage knowledge, for research and in teaching and learning. It helps humanities scholars to grasp Big Data in order to do their work, whether that means understanding the underlying algorithms at work in search engines, or designing and using their own tools to process large amounts of information. Demonstrating what digital tools have to offer and also what 'digital' does to how we understand the past, the authors introduce the many different tools and developing approaches in Big Data for historical and humanistic scholarship, show how to use them, what to be wary of, and discuss the kinds of questions and new perspectives this new macroscopic perspective opens up. Authored 'live' online with ongoing feedback from the wider digital history community, Exploring Big Historical Data breaks new ground and sets the direction for the conversation into the future. It represents the current state-of-the-art thinking in the field and exemplifies the way that digital work can enhance public engagement in the humanities. Exploring Big Historical Data should be the go-to resource for undergraduate and graduate students confronted by a vast corpus of data, and researchers encountering these methods for the first time. It will also offer a helping hand to the interested individual seeking to make sense of genealogical data or digitized newspapers, and even the local historical society trying to see the value in digitizing its holdings. The companion website to Exploring Big Historical Data can be found at www.themacroscope.org/. On this site you will find code, a discussion forum, essays, and datafiles that accompany this book.
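As a flavour of the simple self-built tools such a book teaches, the sketch below counts term frequencies in a digitized text; the file name is hypothetical.

```python
from collections import Counter
import re

# A first "macroscope" tool: term frequencies across a digitized text.
# The input file name is hypothetical.
with open("newspaper_1887.txt", encoding="utf-8") as f:
    words = re.findall(r"[a-z']+", f.read().lower())

for word, count in Counter(words).most_common(10):
    print(f"{word:15s} {count}")
```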
The proceedings from the eighth KMO conference represent the findings of this international meeting which brought together researchers and developers from industry and the academic world to report on the latest scientific and technical advances on knowledge management in organizations. This conference provided an international forum for authors to present and discuss research focused on the role of knowledge management for innovative services in industries, to shed light on recent advances in social and big data computing for KM as well as to identify future directions for researching the role of knowledge management in service innovation and how cloud computing can be used to address many of the issues currently facing KM in academia and industrial sectors.
This text presents an overview of smart information systems for both the private and public sector, highlighting the research questions that can be studied by applying computational intelligence. The book demonstrates how to transform raw data into effective smart information services, covering the challenges and potential of this approach. Each chapter describes the algorithms, tools, measures and evaluations used to answer important questions. This is then further illustrated by a diverse selection of case studies reflecting genuine problems faced by SMEs, multinational manufacturers, service companies, and the public sector. Features: provides a state-of-the-art introduction to the field, integrating contributions from both academia and industry; reviews novel information aggregation services; discusses personalization and recommendation systems; examines sensor-based knowledge acquisition services, describing how the analysis of sensor data can be used to provide a clear picture of our world.
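As a minimal sketch of the recommendation services mentioned above (not a system from the book), the following user-based collaborative filter scores unrated items for a user from the ratings of similar users; the rating matrix is hypothetical.

```python
import numpy as np

# Hypothetical user-item rating matrix (rows: users, cols: items; 0 = unrated).
R = np.array([[5, 3, 0, 1],
              [4, 0, 0, 1],
              [1, 1, 0, 5],
              [0, 1, 5, 4]], dtype=float)

def cosine(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12)

# Score items for user 0 by averaging all users' ratings, weighted by how
# similar each user's taste is to user 0's (user-based collaborative filtering).
sims = np.array([cosine(R[0], R[j]) for j in range(len(R))])
scores = sims @ R / (sims.sum() + 1e-12)
unrated = np.where(R[0] == 0)[0]
print("recommend item", unrated[scores[unrated].argmax()])
```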
Through interaction with other databases such as social media, geographic information systems have the ability to build and obtain not only statistics on the flows of people, things, and information, but also data on perceptions, impressions, and opinions about specific places, territories, and landscapes. It is thus necessary to systematize, integrate, and coordinate the various sources of data (especially open data) to allow more appropriate and complete analyses, descriptions, and elaborations. Spatial Planning in the Big Data Revolution is a critical scholarly resource that aims to bring together different methodologies that combine the potential of large data analysis with GIS applications in dedicated tools specifically for territorial, social, economic, environmental, transport, energy, real estate, and landscape evaluation. Additionally, the book addresses a number of fundamental objectives, including the application of big data analysis in supporting territorial analysis, validating crowdsourcing and crowdmapping techniques, and disseminating information and community involvement. Urban planners, architects, researchers, academicians, professionals, and practitioners in such fields as computer science, data science, and business intelligence will benefit most from the research contained within this publication.
This book presents innovative research works that demonstrate the potential and the advancements of computing approaches to utilize healthcare-centric and medical datasets in solving complex healthcare problems. Computing techniques are among the key technologies currently used to perform medical diagnostics in the healthcare domain, thanks to the abundance of medical data being generated and collected. Nowadays, medical data is available in many different forms, like MRI images, CT scan images, EHR data, test reports, histopathological data and doctor-patient conversation data. This opens up huge opportunities for the application of computing techniques to derive data-driven models that can be of very high utility in terms of providing effective treatment to patients. Moreover, machine learning algorithms can uncover hidden patterns and relationships present in medical datasets, which would be too complex to uncover without a data-driven approach. With the help of computing systems, today it is possible for researchers to predict an accurate medical diagnosis for new patients, using models built from previous patient data. Apart from automatic diagnostic tasks, computing techniques have also been applied in the process of drug discovery, by which a lot of time and money can be saved. Utilization of genomic data using various computing techniques is another emerging area, which may in fact be the key to fulfilling the dream of personalized medications. Medical prognostics is another area in which machine learning has shown great promise recently, with automatic prognostic models being built that can predict the progress of a disease and suggest potential treatment paths to get ahead of its progression.
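A minimal sketch of the diagnostic workflow described above, using a public benchmark dataset in place of the medical data discussed in the book: fit a classifier on previous patients, then evaluate its predictions on held-out ones.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Public benchmark data standing in for the medical datasets discussed above.
X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Fit a diagnostic model on "previous patients", then predict for "new" ones.
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print(f"held-out accuracy: {accuracy_score(y_te, clf.predict(X_te)):.3f}")
```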
Commercial Web search engines such as Google, Yahoo, and Bing are used every day by millions of people across the globe. With their ever-growing refinement and usage, it has become increasingly difficult for academic researchers to keep up with the collection sizes and other critical research issues related to Web search, which has created a divide between the information retrieval research being done within academia and industry. Such large collections pose a new set of challenges for information retrieval researchers. In this work, Metzler describes highly effective information retrieval models for both smaller, classical data sets and larger Web collections. In a shift away from heuristic, hand-tuned ranking functions and complex probabilistic models, he presents feature-based retrieval models. The Markov random field model he details goes beyond the traditional yet ill-suited bag-of-words assumption in two ways. First, the model can easily exploit various types of dependencies that exist between query terms, eliminating the term independence assumption that often accompanies bag-of-words models. Second, arbitrary textual or non-textual features can be used within the model. As he shows, combining term dependencies and arbitrary features results in a very robust, powerful retrieval model. In addition, he describes several extensions, such as an automatic feature selection algorithm and a query expansion framework. The resulting model and extensions provide a flexible framework for highly effective retrieval across a wide range of tasks and data sets. A Feature-Centric View of Information Retrieval provides graduate students, as well as academic and industrial researchers in the fields of information retrieval and Web search, with a modern perspective on information retrieval modeling.
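The following sketch illustrates the feature-based idea in spirit (it is not Metzler's actual Markov random field formulation): a document is scored as a weighted sum of a unigram-match feature and an ordered-bigram proximity feature, with hypothetical weights that in practice would be learned from relevance judgments.

```python
import math

def score(query, doc, w_unigram=0.8, w_bigram=0.2):
    q, d = query.lower().split(), doc.lower().split()
    q_bigrams = set(zip(q, q[1:]))
    # Unigram feature: independent term matches (the bag-of-words part).
    f_unigram = sum(d.count(t) for t in q)
    # Bigram feature: adjacent query terms appearing adjacently in the doc,
    # a simple term-dependency signal beyond bag-of-words.
    f_bigram = sum(1 for pair in zip(d, d[1:]) if pair in q_bigrams)
    return w_unigram * math.log1p(f_unigram) + w_bigram * math.log1p(f_bigram)

docs = ["information retrieval models for web search",
        "retrieval of information from relational databases"]
for doc in docs:
    print(f"{score('information retrieval', doc):.3f}  {doc}")
```

The first document scores higher because the query terms occur as an ordered pair, exactly the kind of dependency a pure bag-of-words model cannot reward.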
This book presents an overview of a variety of contemporary statistical, mathematical and computer science techniques used to further knowledge in the medical domain. The authors focus on applying data mining to the medical domain, including mining the sets of clinical data typically found in patients' medical records, image mining, medical mining, and data mining and machine learning applied to generic genomic data, and more. This work also introduces modeling of the behavior of cancer cells, multi-scale computational models, and simulations of blood flow through vessels using patient-specific models. The authors cover the different imaging techniques used to generate patient-specific models; these are then used in computational fluid dynamics software to analyze fluid flow. Case studies are provided at the end of each chapter. Professionals and researchers with quantitative backgrounds will find Computational Medicine in Data Mining and Modeling useful as a reference. Advanced-level students studying computer science, mathematics, statistics and biomedicine will also find this book valuable as a reference or secondary textbook.
This book presents a mathematical treatment of radio resource allocation in modern cellular communications systems operating in contested environments. It focuses on fulfilling the quality of service requirements of the applications running on user devices that rely on the cellular system, with attention to elevating the users' quality of experience. The authors also address congestion of the spectrum by allowing sharing with the band incumbents while providing quality-of-service-minded resource allocation in the network. The content is of particular interest to scheduling experts in the telecommunications industry, academics working on communications applications, and graduate students whose research focuses on resource allocation and quality of service.
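As one concrete example of a quality-of-service-minded allocation rule (a standard textbook rule, not necessarily the formulation in this book), the sketch below implements a proportional-fair scheduler: each slot goes to the user with the highest ratio of instantaneous rate to smoothed average throughput.

```python
import random

# Proportional-fair scheduling sketch with simulated channel rates.
random.seed(1)
n_users, beta = 4, 0.05
avg = [1e-6] * n_users                     # smoothed throughput per user

for slot in range(1000):
    # Simulated per-slot achievable rates; later users have better channels.
    rates = [random.uniform(0.1, 1.0 + u) for u in range(n_users)]
    # Serve the user maximizing instantaneous rate / average throughput,
    # which balances total throughput against fairness.
    chosen = max(range(n_users), key=lambda u: rates[u] / avg[u])
    for u in range(n_users):
        served = rates[u] if u == chosen else 0.0
        avg[u] = (1 - beta) * avg[u] + beta * served

print([round(a, 3) for a in avg])          # every user gets nonzero throughput
```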
This monograph describes and implements partially homomorphic encryption functions using a unified notation. After introducing the appropriate mathematical background, the authors offer a systematic examination of the following known algorithms: Rivest-Shamir-Adleman; Goldwasser-Micali; ElGamal; Benaloh; Naccache-Stern; Okamoto-Uchiyama; Paillier; Damgaard-Jurik; Boneh-Goh-Nissim; and Sander-Young-Yung. Over recent years partially and fully homomorphic encryption algorithms have been proposed and researchers have addressed issues related to their formulation, arithmetic, efficiency and security. Formidable efficiency barriers remain, but we now have a variety of algorithms that can be applied to various private computation problems in healthcare, finance and national security, and studying these functions may help us to understand the difficulties ahead. The book is valuable for researchers and graduate students in Computer Science, Engineering, and Mathematics who are engaged with Cryptology.
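Paillier's scheme, one of the algorithms examined, illustrates what "partially homomorphic" means: multiplying two ciphertexts yields an encryption of the sum of the plaintexts. The toy sketch below uses tiny primes for illustration only.

```python
import math

# Toy Paillier parameters (tiny primes for illustration; real use needs ~2048-bit).
p, q = 73, 89
n = p * q
n2 = n * n
g = n + 1                        # standard choice of generator
lam = math.lcm(p - 1, q - 1)     # Carmichael's lambda(n)
mu = pow(lam, -1, n)             # valid simplification when g = n + 1

def L(x):
    return (x - 1) // n

def encrypt(m, r):
    # E(m) = g^m * r^n mod n^2, with random r in Z*_n
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return (L(pow(c, lam, n2)) * mu) % n

c1, c2 = encrypt(20, 123), encrypt(22, 456)
# Additive homomorphism: the product of ciphertexts decrypts to the sum.
assert decrypt((c1 * c2) % n2) == 42
```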
COOP 2012 is the tenth COOP conference, marking twenty years from the first conference in 1992. In this special anniversary edition we asked researchers and practitioners to reflect on what have been the successes and the failures in designing cooperative systems, and what challenges still need to be addressed. We have come a long way in understanding the intricacies of cooperation and in designing systems that support work practices and collective activities. These advances would not have been possible without the concerted effort of contributions from a plethora of domains including CSCW, HCI, Information Systems, Knowledge Engineering, Multi-agent systems, organizational and management sciences, sociology, psychology, anthropology, ergonomics, linguistics, etc. The COOP community is going from strength to strength in developing new technologies, advancing and proposing new methodological approaches, and forging theories.
In many decision support fields, the data being exploited is becoming more and more complex. To take this phenomenon into account, classical architectures of data warehouses and data mining algorithms must be completely re-evaluated. Processing and Managing Complex Data for Decision Support provides readers with an overview of the emerging field of complex data processing by bringing together various research studies and surveys in different subfields, and by highlighting the similarities between the different data, issues, and approaches. This book deals with important topics such as: complex data warehousing, including spatial, XML, and text warehousing; and complex data mining, including distance metrics and similarity measures, pattern management, multimedia, and gene sequence mining.
The Semantic Web is characterized by the existence of a very large number of distributed semantic resources, which together define a network of ontologies. These ontologies in turn are interlinked through a variety of different meta-relationships such as versioning, inclusion, and many more. This scenario is radically different from the relatively narrow contexts in which ontologies have been traditionally developed and applied, and thus calls for new methods and tools to effectively support the development of novel network-oriented semantic applications. This book by Suarez-Figueroa et al. provides the necessary methodological and technological support for the development and use of ontology networks, which ontology developers need in this distributed environment. After an introduction, in its second part the authors describe the NeOn Methodology framework. The book's third part details the key activities relevant to the ontology engineering life cycle. For each activity, a general introduction, methodological guidelines, and practical examples are provided. The fourth part then presents a detailed overview of the NeOn Toolkit and its plug-ins. Lastly, case studies from the pharmaceutical and the fishery domain round out the work. The book primarily addresses two main audiences: students (and their lecturers) who need a textbook for advanced undergraduate or graduate courses on ontology engineering, and practitioners who need to develop ontologies in particular, or Semantic Web-based applications in general. Its educational value is maximized by its structured approach to explaining guidelines and combining them with case studies and numerous examples. The description of the open source NeOn Toolkit provides an additional asset, as it allows readers to easily evaluate and apply the ideas presented.
This book is an authoritative handbook of current topics, technologies and methodological approaches that may be used for the study of scholarly impact. The included methods cover a range of fields such as statistical sciences, scientific visualization, network analysis, text mining, and information retrieval. The techniques and tools enable researchers to investigate metric phenomena and to assess scholarly impact in new ways. Each chapter offers an introduction to the selected topic and outlines how the topic, technology or methodological approach may be applied to metrics-related research. Comprehensive and up-to-date, Measuring Scholarly Impact: Methods and Practice is designed for researchers and scholars interested in informetrics, scientometrics, and text mining. The hands-on perspective is also beneficial to advanced-level students in fields from computer science and statistics to information science.
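One of the simplest such metrics, the h-index, can serve as a worked example: an author has index h if h of their papers have at least h citations each. The citation counts below are hypothetical.

```python
def h_index(citations):
    """Largest h such that the author has h papers with at least h citations."""
    cites = sorted(citations, reverse=True)
    # With counts sorted in descending order, the condition c >= rank holds for
    # an initial run of papers; the length of that run is exactly h.
    return sum(1 for rank, c in enumerate(cites, start=1) if c >= rank)

# Hypothetical citation counts for one author's papers.
print(h_index([25, 8, 5, 3, 3, 1]))  # -> 3: three papers with >= 3 citations each
```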
Information and communication technologies of the 20th century have had a significant impact on our daily lives. They have brought new opportunities as well as new challenges for human development. The philosopher Luciano Floridi claims that these new technologies have led to a revolutionary shift in our understanding of humanity's nature and its role in the universe. Floridi's philosophical analysis of new technologies leads to a novel metaphysical framework in which our understanding of the ultimate nature of reality shifts from a materialist one to an informational one. In this world, all entities, be they natural or artificial, are analyzed as informational entities. This book provides critical reflection on this idea in four different areas: information ethics and the method of levels of abstraction; the information revolution and alternative categorizations of technological advancements; applications in education, the Internet and information science; and epistemic and ontic aspects of the philosophy of information.
This book covers the basics of semantic web technologies and indexing languages, and describes their contribution to improving methods of formal knowledge representation and reasoning. The methodologies included combine the specifics of indexing languages, Web representation languages and intersystem relations, and explain their contribution to search functionalities in information retrieval scenarios. An example-oriented discussion, considering aspects of conceptual and semantic interoperability in processes of subject querying and knowledge exploration, is provided. The book is relevant to information scientists, knowledge workers and indexers, providing a suitable combination of theoretical foundations and practical applications.
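A minimal sketch of an intersystem relation between two indexing languages, expressed as a SKOS mapping with the rdflib library; the concept URIs are hypothetical.

```python
from rdflib import Graph, Namespace, URIRef

# Two hypothetical thesauri linked by a SKOS exactMatch mapping.
SKOS = Namespace("http://www.w3.org/2004/02/skos/core#")
g = Graph()
g.add((URIRef("http://example.org/thesaurusA/Databases"),
       SKOS.exactMatch,
       URIRef("http://example.org/thesaurusB/DB")))

# A subject query against one vocabulary can be expanded via the mapping,
# the kind of interoperability-driven search functionality discussed above.
for a, b in g.subject_objects(SKOS.exactMatch):
    print(f"expand query term {a} with {b}")
```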
This book examines recent methods for data-driven fault diagnosis of multimode continuous processes. It formalizes, generalizes, and systematically presents the main concepts and approaches required to design fault diagnosis methods for multimode continuous processes. The book provides both theoretical and practical tools to help readers address the fault diagnosis problem, drawing data-driven methods from at least three different areas: statistics, unsupervised learning, and supervised learning.
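As a small illustration of the statistical strand of such methods (a standard technique, not the book's specific formulation), the sketch below fits a PCA model to simulated normal-operation data and flags a fault via a Hotelling-T2-style statistic; in practice the alarm threshold would come from a chi-square or F-distribution limit.

```python
import numpy as np

# Simulated normal-operation data: 500 samples of 5 correlated sensors.
rng = np.random.default_rng(0)
normal = rng.normal(0, 1, (500, 5)) @ rng.normal(0, 1, (5, 5))
mean, std = normal.mean(0), normal.std(0)
Z = (normal - mean) / std

# Principal subspace learned from normal-operation data only.
_, S, Vt = np.linalg.svd(Z, full_matrices=False)
P, var = Vt[:2].T, (S[:2] ** 2) / (len(Z) - 1)

def t2(x):
    # Hotelling-T2-style statistic in the retained principal subspace.
    score = ((x - mean) / std) @ P
    return float(score @ (score / var))

print(f"normal sample T2 = {t2(normal[0]):.1f}")
print(f"faulty sample T2 = {t2(normal[0] + 5):.1f}")  # simulated sensor offset
```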
This book explores a wide range of emerging cultural, heritage, and other tourism issues that will shape the future of hospitality and tourism research and practice in the digital and innovation era. It offers stimulating new perspectives in the fields of tourism, travel, hospitality, culture and heritage, leisure, and sports within the context of a knowledge society and smart economy. A central theme is the need to adopt a more holistic approach to tourism development that is aligned with principles of sustainability; at the same time, the book critically reassesses the common emphasis on innovation as a tool for growth-led and market-oriented development. In turn, fresh approaches to innovation practices underpinned by ethics and sustainability are encouraged, and opportunities for the exploration of new research avenues and projects on innovation in tourism are highlighted. Based on the proceedings of the Sixth International Conference of the International Association of Cultural and Digital Tourism (IACuDiT) and edited in collaboration with IACuDiT, the book will appeal to a broad readership encompassing academia, industry, government, and other organizations.
This volume constitutes the refereed and revised post-conference proceedings of the 4th IFIP TC 5 DCITDRR International Conference on Information Technology in Disaster Risk Reduction, ITDRR 2019, held in Kyiv, Ukraine, in October 2019. The 17 full papers and 2 short papers presented were carefully reviewed and selected from 53 submissions. The papers focus on various aspects and challenges of coping with disaster risk reduction. The main topics include areas such as natural disasters, big data, cloud computing, Internet of Things, mobile computing, emergency management, disaster information processing, and disaster risk assessment and management.