Welcome to Loot.co.za!
Books > Computing & IT > Applications of computing > Databases
Digital Image Processing with C++ presents the theory of digital image processing and implementations of algorithms using a dedicated library. Processing a digital image means transforming its content (denoising, stylizing, etc.) or extracting information to solve a given problem (object recognition, measurement, motion estimation, etc.). This book presents the mathematical theories underlying digital image processing, as well as their practical implementation through example algorithms written in the C++ language, using the free and easy-to-use CImg library. The chapters cover the field of digital image processing broadly and propose practical, working implementations of each method described theoretically. The main topics include filtering in the spatial and frequency domains, mathematical morphology, feature extraction and its applications to segmentation, motion estimation, multispectral image processing, and 3D visualization. Students and developers wishing to discover or specialize in this discipline, as well as teachers and researchers who want to quickly prototype new algorithms or develop courses, will all find material here to discover image processing or deepen their knowledge of the field.
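None of the book's CImg code is reproduced here, but the spatial-domain filtering it covers can be illustrated with a minimal, dependency-free sketch: a 3x3 mean (box) filter applied to a grayscale image stored as a list of rows. The function name and image representation are assumptions for illustration, not the book's.

```python
def box_blur(img):
    """Apply a 3x3 mean (box) filter to a grayscale image.

    `img` is a list of rows of pixel intensities; border pixels are
    averaged over only the neighbours that fall inside the image.
    """
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            acc, n = 0.0, 0
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy < h and 0 <= xx < w:
                        acc += img[yy][xx]
                        n += 1
            out[y][x] = acc / n
    return out
```

Blurring an image containing a single bright pixel spreads its intensity over the 3x3 neighbourhood, the basic smoothing behaviour behind denoising filters.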
Today, cloud computing, big data, and the internet of things (IoT) are becoming indispensable parts of modern information and communication systems. They cover not only information and communication technology but all types of systems in society, including business, finance, industry, manufacturing, and management. It is therefore critical to remain up to date on the latest advancements and applications, as well as on current issues and challenges. The Handbook of Research on Cloud Computing and Big Data Applications in IoT is a pivotal reference source that provides relevant theoretical frameworks and the latest empirical research findings on the principles, challenges, and applications of cloud computing, big data, and IoT. Highlighting topics such as fog computing, language interaction, and scheduling algorithms, this publication is ideally designed for software developers, computer engineers, scientists, professionals, academicians, researchers, and students.
This book is a selection of results obtained within three years of research performed under SYNAT, a nation-wide scientific project aimed at creating an infrastructure for scientific content storage and sharing for academia, education, and an open knowledge society in Poland. The book is intended to be the last in the series related to the SYNAT project. The previous books, titled "Intelligent Tools for Building a Scientific Information Platform" and "Intelligent Tools for Building a Scientific Information Platform: Advanced Architectures and Solutions," were published as volumes 390 and 467 in Springer's Studies in Computational Intelligence. Its contents are based on the SYNAT 2013 Workshop held in Warsaw. The papers included in this volume present an overview of and insight into information retrieval, repository systems, text processing, ontology-based systems, text mining, multimedia data processing, and advanced software engineering, addressing the problems of implementing intelligent tools for building a scientific information platform.
With the ever-increasing volume of data, proper management of data is a challenging proposition for scientists and researchers, and given the vast storage space required, multimedia data is no exception in this regard. Scientists and researchers are investing great effort to discover new space-efficient methods for the storage and archiving of this data. Intelligent Innovations in Multimedia Data Engineering and Management provides emerging research exploring the theoretical and practical aspects of storage systems and computing methods for large forms of data. Featuring coverage of a broad range of topics such as binary images, fuzzy logic, and metaheuristic algorithms, this book is ideally designed for computer engineers, IT professionals, technology developers, academicians, and researchers seeking current research on advancing strategies and computing techniques for various types of data.
This book contains papers from the first and second volumes of the 8th International Conference on the History of Records and Archives (I-CHORA 8). Contributors present articles that propose new solutions and aspirations for a new era in the technology of archives and recordkeeping. Topics include rethinking the role played by archivists and reframing recordkeeping practices to focus on the rights of the subjects of the records. This text appeals to students, researchers, and professionals in the field. Previously published in Archival Science as "Special Issue: Archives in a Changing Climate - Part I" and "Archives in a Changing Climate - Part II". The chapter "'Displaced archives': proposing a research agenda" is available open access under a Creative Commons Attribution 4.0 International License via link.springer.com.
This book provides a thorough summary of the means that artificial intelligence currently offers investigators for making criminal behavior (both individual and collective) foreseeable and for supporting their investigative capacities. The volume's introductory chapters on artificial intelligence and machine learning are suitable for upper-level undergraduates with exposure to mathematics and some programming skill, or for a graduate course. It also brings the latest research in artificial intelligence to life with chapters on fascinating applications in law enforcement, though much is also being accomplished in the fields of medicine and bioengineering. Individuals with a background in artificial intelligence will find the opening chapters an excellent refresher, but the greatest excitement will likely be the law enforcement examples, for little has been done in that area. The editors have chosen to shine a bright light on law enforcement analytics using artificial neural network technology, to encourage other researchers to become involved in this very important and timely field of study.
This important text/reference presents a comprehensive review of techniques for taxonomy matching, discussing matching algorithms, analyzing matching systems, and comparing matching evaluation approaches. Different methods are investigated in accordance with the criteria of the Ontology Alignment Evaluation Initiative (OAEI). The text also highlights promising developments and innovative guidelines, to further motivate researchers and practitioners in the field. Topics and features: discusses the fundamentals and the latest developments in taxonomy matching, including the related fields of ontology matching and schema matching; reviews next-generation matching strategies, matching algorithms, matching systems, and OAEI campaigns, as well as alternative evaluations; examines how the latest techniques make use of different sources of background knowledge to enable precise matching between repositories; describes the theoretical background, state-of-the-art research, and practical real-world applications; covers the fields of dynamic taxonomies, personalized directories, catalog segmentation, and recommender systems. This stimulating book is an essential reference for practitioners engaged in data science and business intelligence, and for researchers specializing in taxonomy matching and semantic similarity assessment. The work is also suitable as a supplementary text for advanced undergraduate and postgraduate courses on information and metadata management.
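To give a flavor of what even the simplest taxonomy matcher does, two concept lists can be aligned by token-overlap similarity between labels. This is an illustrative sketch only; it is not a method from the book or from any OAEI system, and all names and the threshold are assumptions.

```python
def jaccard(a, b):
    """Jaccard similarity between the token sets of two labels."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb)

def match_taxonomies(left, right, threshold=0.5):
    """Greedily pair each left concept with the most similar right
    concept whose label similarity clears the threshold."""
    pairs = []
    for l in left:
        best = max(right, key=lambda r: jaccard(l, r))
        if jaccard(l, best) >= threshold:
            pairs.append((l, best))
    return pairs
```

Real matching systems add background knowledge (synonyms, structure, instances) precisely because label overlap alone misses pairs like "laptop computer" and "notebook computer".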
Fault Covering Problems in Reconfigurable VLSI Systems describes the authors' recent research on reconfiguration problems for fault-tolerance in VLSI and WSI Systems. The book examines solutions to a number of reconfiguration problems. Efficient algorithms are given for tractable covering problems and general techniques are given for dealing with a large number of intractable covering problems. The book begins with an investigation of algorithms for the reconfiguration of large redundant memories. Next, a number of more general covering problems are considered and the complexity of these problems is analyzed. Finally, a general and uniform approach is proposed for solving a wide class of covering problems. The results and techniques described here will be useful to researchers and students working in this area. As such, the book serves as an excellent reference and may be used as the text for an advanced course on the topic.
This Springer book discusses prospective developments and innovative ideas in artificial intelligence and machine learning techniques for the diagnosis of COVID-19. COVID-19 is a huge challenge to humanity and the medical sciences. As of this writing, no medical solution (vaccine) has been found. However, globally, we are still managing our work, communications, analytics, and predictions with the use of advances in data science, communication technologies (5G and the Internet), and AI. Therefore, we might be able to continue and live safely with the use of research advances in data science, AI, machine learning, mobile apps, etc., until a medical solution such as a vaccine can be found. Eleven chapters were selected after a rigorous review process; each demonstrates research contributions and novelty, and each group of authors had to fulfill strict requirements.
This book provides a comprehensive set of characterization, prediction, optimization, evaluation, and evolution techniques for a diagnosis system for fault isolation in large electronic systems. Readers with a background in electronics design or system engineering can use this book as a reference for deriving insightful knowledge from data analysis and using that knowledge as guidance for designing reasoning-based diagnosis systems. Readers with a background in statistics or data analytics can use it as a practical case study for adapting data mining and machine learning techniques to electronic system design and diagnosis. The book identifies the key challenges in reasoning-based, board-level diagnosis system design and presents the solutions and corresponding results that have emerged from leading-edge research in this domain. It covers topics ranging from highly accurate fault isolation and adaptive fault isolation to diagnosis-system robustness assessment, system performance analysis and evaluation, knowledge discovery, and knowledge transfer, providing an in-depth and broad view of reasoning-based fault diagnosis system design.
* Explains and applies optimized techniques from the machine-learning domain to solve the fault diagnosis problem in the realm of electronic system design and manufacturing;
* Demonstrates techniques based on industrial data and feedback from an actual manufacturing line;
* Discusses practical problems, including diagnosis accuracy, diagnosis time cost, evaluation of diagnosis systems, handling of missing syndromes in diagnosis, and the need for fast diagnosis-system development.
This book presents the proceedings of the International Conference on Intelligent Systems and Networks (ICISN 2022), held in Hanoi, Vietnam. It includes peer-reviewed, high-quality articles on intelligent systems and networks, bringing together professionals and researchers in the area and providing a platform for the exchange of ideas and for fostering future collaboration. Topics covered include foundations of computer science; computational intelligence; language and speech processing; software engineering and software development methods; wireless communications and signal processing for communications; and electronics, including IoT, sensor systems, and embedded systems.
The process of developing predictive models includes many stages. Most resources focus on the modeling algorithms but neglect other critical aspects of the modeling process. This book describes techniques for finding the best representations of predictors for modeling and for finding the best subset of predictors for improving model performance. A variety of example data sets are used to illustrate the techniques, along with R programs for reproducing the results.
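As a toy illustration of predictor-subset ideas (a hypothetical sketch in Python rather than the book's R, using a plain correlation filter rather than any technique the book prescribes), predictors can be ranked by how strongly each one tracks the outcome:

```python
from math import sqrt

def pearson(xs, ys):
    """Sample Pearson correlation between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / sqrt(vx * vy)

def rank_predictors(columns, target):
    """Rank predictor columns by |correlation| with the target, best first.

    `columns` maps predictor names to value lists; the ranking is a
    simple univariate filter, the crudest form of subset selection.
    """
    scores = {name: abs(pearson(col, target)) for name, col in columns.items()}
    return sorted(scores, key=scores.get, reverse=True)
```

Univariate filters like this ignore interactions between predictors, which is exactly why more careful subset-search and representation techniques are needed in practice.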
This book constitutes the refereed proceedings of the 22nd International TRIZ Future Conference on Automated Invention for Smart Industries, TFC 2022, which took place in Warsaw, Poland, in September 2022; the event was sponsored by IFIP WG 5.4. The 39 full papers presented were carefully reviewed and selected from 43 submissions. They are organized in the following thematic sections: new perspectives of TRIZ; AI in systematic innovation; systematic innovations supporting IT and AI; TRIZ applications; TRIZ education and ecosystem.
Environmental information systems (EIS) are concerned with the management of data about the soil, the water, the air, and the species in the world around us. This first textbook on the topic gives a conceptual framework for EIS by structuring the data flow into 4 phases: data capture, storage, analysis, and metadata management. This flow corresponds to a complex aggregation process gradually transforming the incoming raw data into concise documents suitable for high-level decision support. All relevant concepts are covered, including statistical classification, data fusion, uncertainty management, knowledge based systems, GIS, spatial databases, multidimensional access methods, object-oriented databases, simulation models, and Internet-based information management. Several case studies present EIS in practice.
This book presents recent advances in knowledge discovery in databases (KDD), with a focus on market basket databases, time-stamped databases, and multiple related databases. Various intelligent algorithms for data mining tasks are reported. A large number of association measures are presented, which play significant roles in decision support applications. The book presents, discusses, and contrasts new developments in mining time-stamped data, time-based data analyses, the identification of temporal patterns, the mining of multiple related databases, and local pattern analysis.
The growth of the Internet and the availability of enormous volumes of data in digital form have generated intense interest in techniques for assisting the user in locating data of interest. The Internet has over 350 million pages of data and is expected to reach over one billion pages by the year 2000. Buried on the Internet are both valuable nuggets for answering questions and large quantities of information the average person does not care about. The Digital Library effort is also progressing, with the goal of migrating from the traditional book environment to a digital library environment. Information Retrieval Systems: Theory and Implementation provides a theoretical and practical explanation of the latest advancements in information retrieval and their application to existing systems. It takes a system approach, discussing all aspects of an information retrieval system. The importance of the Internet and its associated hypertext-linked structure is put into perspective as a new type of information retrieval data structure. The total system approach also includes discussion of the human interface and the importance of information visualization for the identification of relevant information. The theoretical metrics used to describe information systems are expanded to discuss their practical application in the uncontrolled environment of real-world systems. Information Retrieval Systems: Theory and Implementation is suitable as a textbook for a graduate-level course on information retrieval, and as a reference for researchers and practitioners in industry.
Service-oriented computing has become one of the predominant factors in current IT research and development. Web services seem to be the middleware solution of the future for highly interoperable distributed software solutions. In parallel, research on the Semantic Web provides the results required to exploit distributed machine-processable data. To combine these two research lines into industrial-strength applications, a number of research projects have been set up by organizations like the W3C and the EU. Dieter Fensel and his coauthors deliver a profound introduction to one of the most promising approaches: the Web Service Modeling Ontology (WSMO). After a brief presentation of the underlying basic technologies and standards of the World Wide Web, the Semantic Web, and Web services, they detail all the elements of WSMO, from basic concepts to possible applications in e-commerce, e-government, and e-banking, and they also describe its relation to other approaches like OWL-S or WSDL-S. While many of the related technologies and standards are still under development, this book already offers both a broad conceptual introduction and many pointers to future application scenarios for researchers in academia and industry as well as for developers of distributed Web applications.
This book, which has been in the making for some eighteen years, would never have begun were it not for Dr. David Dewhirst in 1976 kindly having shown the author a packet of papers in the archives of the Cambridge Observatories. These letters and miscellaneous papers of Fearon Fallows sparked an interest in the history of the Royal Observatory at the Cape of Good Hope which, after the diversion of producing several books on later phases of the Observatory, has finally resulted in a detailed study of the origin and first years of the Observatory's life. Publication of this book coincides with the 175th anniversary of the founding of the Royal Observatory, C.G.H. Observatories are built for the use of astronomers. They are built through astronomers, architects, engineers and contractors acting in concert (if not always in harmony). They are constructed, with whatever techniques and skills are available, from bricks, stones and mortar; but their construction may take a toll of personal relationships, patience, and flesh and blood.
This book introduces the properties of conservative extensions of First Order Logic (FOL) to new Intensional First Order Logic (IFOL). This extension allows for intensional semantics to be used for concepts, thus affording new and more intelligent IT systems. Insofar as it is conservative, it preserves software applications and constitutes a fundamental advance relative to the current RDB databases, Big Data with NewSQL, Constraint databases, P2P systems, and Semantic Web applications. Moreover, the many-valued version of IFOL can support the AI applications based on many-valued logics.
This book addresses a range of aging intensity functions, which make it possible to measure and compare aging trends for lifetime random variables. Moreover, they can be used for the characterization of lifetime distributions, also with bounded support. Stochastic orders based on the aging intensities, and their connections with some other orders, are also discussed. To demonstrate the applicability of aging intensity in reliability practice, the book analyzes both real and generated data. The estimated, properly chosen, aging intensity function is mainly recommended to identify data's lifetime distribution, and secondly, to estimate some of the parameters of the identified distribution. Both reliability researchers and practitioners will find the book a valuable guide and source of inspiration.
Creating scientific workflow applications is a very challenging task due to the complexity of the distributed computing environments involved, the complex control and data flow requirements of scientific applications, and the lack of high-level languages and tools support. Particularly, sophisticated expertise in distributed computing is commonly required to determine the software entities to perform computations of workflow tasks, the computers on which workflow tasks are to be executed, the actual execution order of workflow tasks, and the data transfer between them. Qin and Fahringer present a novel workflow language called Abstract Workflow Description Language (AWDL) and the corresponding standards-based, knowledge-enabled tool support, which simplifies the development of scientific workflow applications. AWDL is an XML-based language for describing scientific workflow applications at a high level of abstraction. It is designed in a way that allows users to concentrate on specifying such workflow applications without dealing with either the complexity of distributed computing environments or any specific implementation technology. This research monograph is organized into five parts: overview, programming, optimization, synthesis, and conclusion, and is complemented by an appendix and an extensive reference list. The topics covered in this book will be of interest to both computer science researchers (e.g. in distributed programming, grid computing, or large-scale scientific applications) and domain scientists who need to apply workflow technologies in their work, as well as engineers who want to develop distributed and high-throughput workflow applications, languages and tools.
Extensive research and development has produced mutation tools for languages such as Fortran, Ada, C, and IDL; empirical evaluations comparing mutation with other test adequacy criteria; empirical evidence and theoretical justification for the coupling effect; and techniques for speeding up mutation testing using various types of high-performance architectures. Mutation has received the attention of software developers and testers in such diverse areas as network protocols and nuclear simulation. Mutation Testing for the New Century brings together cutting-edge research results in mutation testing from a wide range of researchers. This book provides answers to key questions related to mutation and raises questions yet to be answered. It is an excellent resource for researchers, practitioners, and students of software engineering.
This book examines the field of parallel database management systems and illustrates the great variety of solutions based on shared-storage or shared-nothing architectures. Constantly dropping memory prices and the desire to operate with low-latency responses on large sets of data paved the way for main memory-based parallel database management systems. However, this area is currently dominated by the shared-nothing approach, which preserves the in-memory performance advantage by processing data locally on each server. The main argument this book makes is that such a unilateral development will cease, due to the combination of three trends: a) today's network technology features remote direct memory access (RDMA), which narrows the performance gap between accessing local main memory and the main memory of a remote server to a single order of magnitude, or even less; b) modern storage systems scale gracefully, are elastic, and provide high availability; c) a modern storage system such as Stanford's RAMCloud even keeps all data resident in main memory. Exploiting these characteristics in the context of a main memory-based parallel database management system is desirable. The book demonstrates that the advent of RDMA-enabled network technology makes the creation of a parallel main-memory DBMS based on a shared-storage approach feasible.
Intelligent Integration of Information presents a collection of chapters bringing the science of intelligent integration forward. The focus on integration defines tasks that increase the value of information when information from multiple sources is accessed, related, and combined. This contributed volume has also been published as a special double issue of the Journal of Intelligent Information Systems (JIIS), Volume 6:2/3.