The requirements for production systems are constantly changing as a result of shifting competitive conditions. This poses a challenge for manufacturers across the various branches of industry and creates an ever-increasing need for flexibility. Against this background, this book explores current developments and trends as well as their impact on today's production systems, and compares known strategies, concepts and methods for achieving production flexibility. Practical knowledge and current research are drawn upon and subjected to sound scientific analysis, through which the technical and organizational flexibility ranges of a production system can be measured. The usefulness of this concept for manufacturers is substantiated by its implementation in a software tool called ecoFLEX and by its practical application in extensive examples, which illustrate how flexibility flaws can be quickly identified, classified and eliminated using ecoFLEX. The tool helps to close the gap between ERP/PPS systems and digital factory planning tools.
This book is a selection of results obtained within three years of research performed under SYNAT, a nation-wide scientific project aiming to create an infrastructure for scientific content storage and sharing for academia, education and an open knowledge society in Poland. It is intended to be the last in the series related to the SYNAT project; the previous books, titled "Intelligent Tools for Building a Scientific Information Platform" and "Intelligent Tools for Building a Scientific Information Platform: Advanced Architectures and Solutions," were published as volumes 390 and 467 in Springer's Studies in Computational Intelligence. Its contents are based on the SYNAT 2013 Workshop held in Warsaw. The papers included in this volume present an overview of and insight into information retrieval, repository systems, text processing, ontology-based systems, text mining, multimedia data processing and advanced software engineering, addressing the problems of implementing intelligent tools for building a scientific information platform.
This book constitutes the proceedings of the 7th FTRA International Conference on Future Information Technology (FutureTech 2012). The topics of FutureTech 2012 cover today's hot topics that address the world's ever-changing needs. FutureTech 2012 is intended to foster the dissemination of state-of-the-art research in all areas of future information technology, including models, services, and novel applications associated with their utilization, and provides an opportunity for academic and industry professionals to discuss the latest issues and progress in the area. In addition, the conference publishes high-quality papers closely related to the theory, modeling, and practical application of many types of future technology. The main scope of FutureTech 2012 covers: hybrid information technology; cloud and cluster computing; ubiquitous networks and wireless communications; multimedia convergence; intelligent and pervasive applications; security and trust computing; IT management and service; bioinformatics and bio-inspired computing; database and data mining; knowledge systems and intelligent agents; and human-centric computing and social networks. FutureTech is a major forum for scientists, engineers, and practitioners throughout the world to present the latest research results, ideas, developments and applications in all areas of future technologies.
This book discusses advances in artificial intelligence and data science for climate change and shows how climate data can serve as input to artificial intelligence systems. It is a good resource for researchers and professionals who work in the fields of data science, artificial intelligence, and climate change applications.
Business process management is usually treated from two different perspectives: business administration and computer science. While business administration professionals tend to consider information technology a subordinate aspect of business process management best left to experts, computer science professionals often treat business goals and organizational regulations as terms that do not deserve much thought but require the appropriate level of abstraction. Matthias Weske argues that all communities involved need a common understanding of the different aspects of business process management. To this end, he details the complete business process lifecycle from the modeling phase to process enactment and improvement, taking into account all the different stakeholders involved. After starting with a presentation of general foundations and abstraction models, he explains concepts like process orchestrations and choreographies, as well as process properties and data dependencies. Finally, he presents both traditional and advanced business process management architectures, covering, for example, workflow management systems, service-oriented architectures, and data-driven approaches. In addition, he shows how standards like WfMC, SOAP, WSDL, and BPEL fit into the picture. This textbook is ideally suited for classes on business process management, information systems architecture, and workflow management. This 2nd edition contains major updates on BPMN Version 2 process orchestration and process choreographies, and the chapter on BPM methodologies has been completely rewritten. The accompanying website www.bpm-book.com contains further information and additional teaching material.
With the proliferation of huge amounts of (heterogeneous) data on the Web, the importance of information retrieval (IR) has grown considerably over the last few years. Big players in the computer industry, such as Google, Microsoft and Yahoo!, are the primary contributors of technology for fast access to Web-based information; and searching capabilities are now integrated into most information systems, ranging from business management software and customer relationship systems to social networks and mobile phone applications. Ceri and his co-authors aim to take their readers from the foundations of modern information retrieval to the most advanced challenges of Web IR. To this end, their book is divided into three parts. The first part addresses the principles of IR and provides a systematic and compact description of basic information retrieval techniques (including binary, vector space and probabilistic models as well as natural language search processing) before focusing on their application to the Web. Part two addresses the foundational aspects of Web IR by discussing the general architecture of search engines (with a focus on the crawling and indexing processes), describing link analysis methods (specifically PageRank and HITS), addressing recommendation and diversification, and finally presenting advertising in search (the main source of revenue for search engines). The third and final part describes advanced aspects of Web search, each chapter providing a self-contained, up-to-date survey on current Web research directions. Topics in this part include meta-search and multi-domain search, semantic search, search in the context of multimedia data, and crowd search. The book is ideally suited to courses on information retrieval, as it covers all Web-independent foundational aspects. Its presentation is self-contained and does not require prior background knowledge.
It can also be used in the context of classic courses on data management, allowing the instructor to cover both structured and unstructured data in various formats. Its classroom use is facilitated by a set of slides, which can be downloaded from www.search-computing.org.
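The link analysis methods surveyed in part two can be made concrete with a toy example. The sketch below is our own minimal power-iteration PageRank on a three-page graph, not code from the book:

```python
# Minimal PageRank via power iteration on a toy link graph
# (illustrative sketch only; graph and parameters are our own).

def pagerank(links, damping=0.85, iters=100):
    """links: dict mapping each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iters):
        # every page keeps a (1 - damping) baseline, then receives an
        # equal share of each in-neighbor's rank, scaled by the damping
        new = {p: (1.0 - damping) / n for p in pages}
        for p, outs in links.items():
            share = rank[p] / len(outs)
            for q in outs:
                new[q] += damping * share
        rank = new
    return rank

graph = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}
ranks = pagerank(graph)
# "c" is linked to by both "a" and "b", so it ends up with the top rank
```

Because the ranks form a probability distribution, they always sum to 1; with damping 0.85 this converges in a few dozen iterations on small graphs.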
This book presents the best papers from the First Global Conference on Artificial Intelligence and Applications (GCAIA 2020), organized by the University of Engineering & Management, Jaipur, India, during 8-10 September 2020. The proceedings target current research work in the domains of intelligent systems and artificial intelligence.
This book aims to highlight the latest achievements in the use of AI and multimodal artificial intelligence in biomedicine and healthcare. Multimodal AI is a relatively new concept in AI, in which different types of data (e.g. text, image, video, audio, and numerical data) are collected, integrated, and processed through a series of intelligence processing algorithms to improve performance. The edited volume contains selected papers presented at the 2022 Health Intelligence workshop and the associated Data Hackathon/Challenge, co-located with the Thirty-Sixth Association for the Advancement of Artificial Intelligence (AAAI) conference, and presents an overview of the issues, challenges, and potentials in the field, along with new research results. This book provides information for researchers, students, industry professionals, clinicians, and public health agencies interested in the applications of AI and Multimodal AI in public health and medicine.
This open access book presents how cutting-edge digital technologies like Big Data, Machine Learning, Artificial Intelligence (AI), and Blockchain are set to disrupt the financial sector. The book illustrates how recent advances in these technologies enable banks, FinTechs, and financial institutions to collect, process, analyze, and fully leverage the very large amounts of data that are nowadays produced and exchanged in the sector. To this end, the book also describes some of the most popular Big Data, AI and Blockchain applications in the sector, including novel applications in the areas of Know Your Customer (KYC), Personalized Wealth Management and Asset Management, and Portfolio Risk Assessment, as well as a variety of novel Usage-based Insurance applications based on Internet-of-Things data. Most of the presented applications have been developed, deployed and validated in real-life digital finance settings in the context of the European Commission funded INFINITECH project, a flagship innovation initiative for Big Data and AI in digital finance. This book is ideal for researchers and practitioners in Big Data, AI, banking and digital finance.
Fault Covering Problems in Reconfigurable VLSI Systems describes the authors' recent research on reconfiguration problems for fault-tolerance in VLSI and WSI Systems. The book examines solutions to a number of reconfiguration problems. Efficient algorithms are given for tractable covering problems and general techniques are given for dealing with a large number of intractable covering problems. The book begins with an investigation of algorithms for the reconfiguration of large redundant memories. Next, a number of more general covering problems are considered and the complexity of these problems is analyzed. Finally, a general and uniform approach is proposed for solving a wide class of covering problems. The results and techniques described here will be useful to researchers and students working in this area. As such, the book serves as an excellent reference and may be used as the text for an advanced course on the topic.
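As a generic illustration of the kind of covering problem the book studies, and not one of the authors' algorithms, the classic greedy heuristic for set cover can be sketched as follows; the spare/fault framing is our own toy example:

```python
# Classic greedy set-cover heuristic, shown only as a generic
# illustration of "covering" problems; the book's reconfiguration
# algorithms are specialized (e.g., spare rows/columns covering faults).

def greedy_set_cover(universe, subsets):
    """Repeatedly pick the subset covering the most uncovered elements."""
    uncovered = set(universe)
    chosen = []
    while uncovered:
        best = max(subsets, key=lambda s: len(uncovered & set(s)))
        if not uncovered & set(best):
            raise ValueError("universe cannot be covered")
        chosen.append(best)
        uncovered -= set(best)
    return chosen

# hypothetical faults 1..5; each "spare" can repair a subset of faults
spares = [{1, 2, 3}, {2, 4}, {3, 4}, {4, 5}]
cover = greedy_set_cover({1, 2, 3, 4, 5}, spares)
# greedy picks {1, 2, 3} first, then {4, 5}: two spares suffice
```

The greedy heuristic gives a logarithmic approximation guarantee; the intractable covering variants mentioned in the book call for exactly this kind of approximation or for problem-specific structure.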
As organizations continue to develop, there is an increasing need for technological methods that can keep up with the rising amount of data and information being generated. Machine learning has become a powerful tool due to its ability to analyze large amounts of data quickly, and it is one of many technological advancements being implemented in a multitude of specialized fields. An extensive study of the execution of these advancements within professional industries is necessary. Advanced Multi-Industry Applications of Big Data Clustering and Machine Learning is an essential reference source that applies the analytic principles of clustering and machine learning to big data and provides an interface between the main disciplines of engineering/technology and the organizational, administrative, and planning abilities of management. Featuring research on topics such as project management, contextual data modeling, and business information systems, this book is ideally designed for engineers, economists, finance officers, marketers, decision makers, business professionals, industry practitioners, academicians, students, and researchers seeking coverage of the implementation of big data and machine learning within specific professional fields.
This book focuses on the changes that big data brings to human society and to personal thinking models. The author uses the concept of "data civilization" to argue that we have entered a brand-new era at the level of civilization, one that can be observed and understood on three levels: human data civilization, commercial data civilization, and personal data civilization. Data civilization will inevitably have a profound influence on the subversion and reconstruction of human life, including business, society, and thinking models. The book presents a unique perspective for understanding a world that is more and more dominated by data.
"Biomedical Engineering: Health Care Systems, Technology and Techniques" is an edited volume with contributions from world experts. It provides readers with unique contributions related to current research and future healthcare systems. Practitioners and researchers focused on computer science, bioinformatics, engineering and medicine will find this book a valuable reference.
This book is about methodological aspects of uncertainty propagation in data processing. Uncertainty propagation is an important problem: while computer algorithms efficiently process data related to many aspects of our lives, most of these algorithms implicitly assume that the numbers they process are exact. In reality, these numbers come from measurements, and measurements are never 100% exact. Because of this, it makes no sense to translate 61 kg into pounds and get the result, as computers do, with 13-digit accuracy. In many cases, e.g., in celestial mechanics, the state of a system can be described by a few numbers: the values of the corresponding physical quantities. In such cases, for each of these quantities, we know (at least) an upper bound on the measurement error. This bound is either provided by the manufacturer of the measuring instrument or estimated by the user who calibrates it. However, in many other cases, the description of the system is more complex than a few numbers: we need a function to describe a physical field (e.g., an electromagnetic field); we need a vector in Hilbert space to describe a quantum state; we need a pseudo-Riemannian space to describe physical space-time, etc. To describe and process uncertainty in all such cases, this book proposes a general methodology, one that includes intervals as a particular case. The book is recommended to students and researchers interested in challenging aspects of uncertainty analysis and to practitioners who need to handle uncertainty in such unusual situations.
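The kilograms-to-pounds example above can be made concrete with naive interval arithmetic. The error bound of +/- 0.5 kg below is our own assumption for illustration:

```python
# Naive interval propagation for the kilograms-to-pounds example.
# Assumption (ours, for illustration): the scale reads 61 kg with a
# guaranteed error bound of +/- 0.5 kg.

KG_TO_LB = 2.20462  # pounds per kilogram

def to_pounds_interval(kg, err):
    """Propagate the measurement interval [kg - err, kg + err] through
    the conversion; every value in the result interval is possible."""
    lo = (kg - err) * KG_TO_LB
    hi = (kg + err) * KG_TO_LB
    return lo, hi

lo, hi = to_pounds_interval(61.0, 0.5)
width = hi - lo  # about 2.2 lb wide, so quoting 13 digits is meaningless
```

The interval is over two pounds wide, which is the book's point: the digits a computer prints beyond the guaranteed bound carry no information.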
Creating scientific workflow applications is a very challenging task due to the complexity of the distributed computing environments involved, the complex control and data flow requirements of scientific applications, and the lack of high-level languages and tools support. Particularly, sophisticated expertise in distributed computing is commonly required to determine the software entities to perform computations of workflow tasks, the computers on which workflow tasks are to be executed, the actual execution order of workflow tasks, and the data transfer between them. Qin and Fahringer present a novel workflow language called Abstract Workflow Description Language (AWDL) and the corresponding standards-based, knowledge-enabled tool support, which simplifies the development of scientific workflow applications. AWDL is an XML-based language for describing scientific workflow applications at a high level of abstraction. It is designed in a way that allows users to concentrate on specifying such workflow applications without dealing with either the complexity of distributed computing environments or any specific implementation technology. This research monograph is organized into five parts: overview, programming, optimization, synthesis, and conclusion, and is complemented by an appendix and an extensive reference list. The topics covered in this book will be of interest to both computer science researchers (e.g. in distributed programming, grid computing, or large-scale scientific applications) and domain scientists who need to apply workflow technologies in their work, as well as engineers who want to develop distributed and high-throughput workflow applications, languages and tools.
The growth of the Internet and the availability of enormous volumes of data in digital form have generated intense interest in techniques for assisting the user in locating data of interest. The Internet has over 350 million pages of data and is expected to reach over one billion pages by the year 2000. Buried on the Internet are both valuable nuggets for answering questions and large quantities of information the average person does not care about. The Digital Library effort is also progressing, with the goal of migrating from the traditional book environment to a digital library environment. Information Retrieval Systems: Theory and Implementation provides a theoretical and practical explanation of the latest advancements in information retrieval and their application to existing systems. It takes a system approach, discussing all aspects of an Information Retrieval System. The importance of the Internet and its associated hypertext-linked structure is put into perspective as a new type of information retrieval data structure. The total system approach also includes discussion of the human interface and the importance of information visualization for the identification of relevant information. The theoretical metrics used to describe information systems are expanded to discuss their practical application in the uncontrolled environment of real-world systems. Information Retrieval Systems: Theory and Implementation is suitable as a textbook for a graduate-level course on information retrieval, and as a reference for researchers and practitioners in industry.
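Two of the standard theoretical metrics in information retrieval are precision and recall. The toy computation below (our own example, not taken from the book) shows how they follow from a result set:

```python
# Precision and recall, the standard IR effectiveness metrics
# (toy example with made-up document IDs).

def precision_recall(retrieved, relevant):
    """precision = relevant hits / retrieved; recall = hits / relevant."""
    retrieved, relevant = set(retrieved), set(relevant)
    hits = len(retrieved & relevant)
    precision = hits / len(retrieved)
    recall = hits / len(relevant)
    return precision, recall

# the system returned 4 documents; 3 are among the 6 relevant ones
p, r = precision_recall({"d1", "d2", "d3", "d9"},
                        {"d1", "d2", "d3", "d4", "d5", "d6"})
# p = 0.75, r = 0.5
```

The tension between the two, returning more documents raises recall but usually lowers precision, is exactly what makes evaluating real-world systems in uncontrolled environments hard.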
This book presents the accepted research papers of the 1st International Conference on Computational Intelligence and Sustainable Technologies (ICoCIST-2021). The articles relate to artificial intelligence in machine learning, big data analysis, soft computing techniques, pattern recognition, sustainable infrastructural development, sustainable grid computing, innovative technology for societal development, renewable energy, and innovations in the Internet of Things (IoT).
This book highlights the latest advances on the implementation and adaptation of blockchain technologies in real-world scientific, biomedical, and data applications. It presents rapid advancements in life sciences research and development by applying the unique capabilities inherent in distributed ledger technologies. The book unveils the current uses of blockchain in drug discovery, drug and device tracking, real-world data collection, and increased patient engagement used to unlock opportunities to advance life sciences research. This paradigm shift is explored from the perspectives of pharmaceutical professionals, biotechnology start-ups, regulatory agencies, ethical review boards, and blockchain developers. This book enlightens readers about the opportunities to empower and enable data in life sciences.
This book gathers extended versions of papers presented at DoSIER 2021 (the 2021 Third Doctoral Symposium on Intelligence Enabled Research, held at Cooch Behar Government Engineering College, West Bengal, India, during November 12-13, 2021). The papers address the rapidly expanding research area of computational intelligence, which, no longer limited to specific computational fields, has since made inroads in signal processing, smart manufacturing, predictive control, robot navigation, smart cities, and sensor design, to name but a few. Presenting chapters written by experts active in these areas, the book offers a valuable reference guide for researchers and industrial practitioners alike and inspires future studies.
This book provides the reader with a basic understanding of the formal concepts of cluster, clustering, partition, cluster analysis, etc. It explains feature-based, graph-based and spectral clustering methods and discusses their formal similarities and differences. Understanding the related formal concepts is particularly vital in the epoch of Big Data; given the volume and characteristics of the data, it is no longer feasible to rely predominantly on merely viewing the data when facing a clustering problem. Clustering usually involves choosing similar objects and grouping them together. To facilitate the choice of similarity measures for complex and big data, the book describes various measures of object similarity based on quantitative features (like numerical measurement results), on qualitative features (like text), and on combinations of the two, as well as graph-based similarity measures for (hyper)linked objects and measures for multilayered graphs. Numerous variants demonstrating how such similarity measures can be exploited when defining clustering cost functions are also presented. In addition, the book provides an overview of approaches to handling large collections of objects in a reasonable time. In particular, it addresses grid-based methods, sampling methods, parallelization via Map-Reduce, the use of tree structures, random projections and various heuristic approaches, especially those used for community detection.
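A combined similarity measure of the kind described above can be sketched as follows. This is our own minimal construction (function names, weights and the decay form are illustrative, not the book's definitions), mixing a numeric similarity with a Jaccard similarity over word sets:

```python
# Minimal sketch of a combined similarity measure over objects that
# carry one quantitative and one qualitative (text) feature.
import math

def numeric_sim(x, y, scale=1.0):
    """In (0, 1]; equals 1 when x == y and decays with distance."""
    return math.exp(-abs(x - y) / scale)

def jaccard_sim(text_a, text_b):
    """Word-set overlap divided by word-set union."""
    a, b = set(text_a.split()), set(text_b.split())
    return len(a & b) / len(a | b) if a | b else 1.0

def combined_sim(obj_a, obj_b, w=0.5):
    """obj = (numeric feature, text feature); w weights the numeric part."""
    return (w * numeric_sim(obj_a[0], obj_b[0])
            + (1 - w) * jaccard_sim(obj_a[1], obj_b[1]))

s = combined_sim((2.0, "fast red car"), (2.0, "fast blue car"))
# numeric parts match exactly (sim 1.0); word sets overlap 2/4 (sim 0.5)
```

Plugging such a measure into a clustering cost function (e.g., maximizing within-cluster similarity) is exactly the step the book's variants explore.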
Extensive research and development has produced mutation tools for languages such as Fortran, Ada, C, and IDL; empirical evaluations comparing mutation with other test adequacy criteria; empirical evidence and theoretical justification for the coupling effect; and techniques for speeding up mutation testing using various types of high-performance architectures. Mutation has received the attention of software developers and testers in such diverse areas as network protocols and nuclear simulation. Mutation Testing for the New Century brings together cutting-edge research results in mutation testing from a wide range of researchers. This book provides answers to key questions related to mutation and raises questions yet to be answered. It is an excellent resource for researchers, practitioners, and students of software engineering.
This book discusses various open issues in software engineering, such as the efficiency of automated testing techniques, predictions for cost estimation, data processing, and automatic code generation. Many traditional techniques are available for addressing these problems. But, with the rapid changes in software development, they often prove to be outdated or incapable of handling the software's complexity. Hence, many previously used methods are proving insufficient to solve the problems now arising in software development. The book highlights a number of unique problems and effective solutions that reflect the state-of-the-art in software engineering. Deep learning is the latest computing technique, and is now gaining popularity in various fields of software engineering. This book explores new trends and experiments that have yielded promising solutions to current challenges in software engineering. As such, it offers a valuable reference guide for a broad audience including systems analysts, software engineers, researchers, graduate students and professors engaged in teaching software engineering.
This book is a compendium of the International Conference on Big Data and Cloud Computing (ICBDCC 2021). It provides a written record of the synergy that already exists among the research communities and represents a solid framework for the advancement of the big data and cloud computing disciplines, from which new interactions will result in the future. The book includes recent advances in big data analytics, cloud computing, the Internet of nano things, cloud security, data analytics in the cloud, smart cities and grids, etc. It primarily focuses on the application of knowledge that promotes ideas for solving the problems of society through cutting-edge technologies. The articles featured in this book provide novel ideas that contribute to the growth of world-class research and development. The contents of this book are of interest to researchers and professionals alike.
You may like...

- Multibiometric Watermarking with… by Rohit M. Thanki, Vedvyas J. Dwivedi, … (Hardcover, R1,613)
- Handbook of Research on Recent… by Siddhartha Bhattacharyya, Nibaran Das, … (Hardcover, R10,306)
- Services Computing for Language… by Yohei Murakami, Donghui Lin, … (Hardcover)
- Bayesian Natural Language Semantics and… by Henk Zeevat, Hans-Christian Schmitz (Hardcover, R3,638)
- Essentials of Bioinformatics, Volume I… by Noor Ahmad Shaik, Khalid Rehman Hakeem, … (Hardcover, R6,443)