This book gathers state-of-the-art research in computational engineering and bioengineering to facilitate knowledge exchange between various scientific communities. Computational engineering (CE) is a relatively new discipline that addresses the development and application of computational models and simulations, often coupled with high-performance computing, to solve complex physical problems arising in engineering analysis and design in the context of natural phenomena. Bioengineering (BE) is an important aspect of computational biology, which aims to develop and use efficient algorithms, data structures, and visualization and communication tools to model biological systems. Today, engineering approaches are essential for biologists, enabling them to analyse complex physiological processes, as well as for the pharmaceutical industry in supporting drug discovery and development programmes.
This proceedings volume includes cutting-edge research articles from the Fourth International Conference on Signal and Image Processing (ICSIP), organised by Dr. N.G.P. Institute of Technology, Kalapatti, Coimbatore. The conference provides a forum for academia and industry to discuss and present the latest technological advances and research results in theoretical, experimental, and applied signal, image and video processing. The book offers the latest and most informative content from engineers and scientists in signal, image and video processing from around the world, helping the research community to work in a more cohesive and collaborative way.
This book demonstrates the power of neural networks in learning complex behavior from the underlying financial time series data. The results presented also show how neural networks can successfully be applied to volatility modeling, option pricing, and value-at-risk modeling. These features mean that they can be applied to market-risk problems to overcome classic problems associated with statistical models.
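To give a flavour of the kind of model described above (an illustrative sketch, not the book's own code), the following trains a small feed-forward network to predict the next squared return - a crude volatility proxy - from a window of past returns; the data, architecture and hyperparameters are all assumptions made for this example.

```python
# Minimal volatility-modelling sketch: a one-hidden-layer network mapping a
# window of past returns to the next squared return. Synthetic data; all
# hyperparameters are illustrative.
import numpy as np

rng = np.random.default_rng(0)
returns = rng.standard_normal(1000) * 0.01            # synthetic daily returns
window = 10
X = np.array([returns[i:i + window] for i in range(len(returns) - window)])
y = returns[window:] ** 2                             # next-step squared return

W1 = rng.standard_normal((window, 16)) * 0.1          # input -> hidden
b1 = np.zeros(16)
W2 = rng.standard_normal(16) * 0.1                    # hidden -> output
b2, lr = 0.0, 0.01

for _ in range(200):                                  # plain gradient descent
    h = np.tanh(X @ W1 + b1)
    err = (h @ W2 + b2) - y
    g = 2 * err / len(y)                              # dMSE / dprediction
    gh = np.outer(g, W2) * (1 - h ** 2)               # backprop through tanh
    W2 -= lr * h.T @ g
    b2 -= lr * g.sum()
    W1 -= lr * X.T @ gh
    b1 -= lr * gh.sum(axis=0)

h = np.tanh(X @ W1 + b1)
print("training MSE:", np.mean(((h @ W2 + b2) - y) ** 2))
```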
The book presents a snapshot of the state of the art in the field of fully fuzzy linear programming. The main focus is on showing current methods for finding the fuzzy optimal solution of fully fuzzy linear programming problems in which all the parameters and decision variables are represented by non-negative fuzzy numbers. It presents new methods developed by the authors, as well as existing methods developed by others, and their application to real-world problems, including fuzzy transportation problems. Moreover, it compares the outcomes of the different methods and discusses their advantages/disadvantages. As the first work to collect in one place the most important methods for solving fuzzy linear programming problems, the book represents a useful reference guide for students and researchers, providing them with the necessary theoretical and practical knowledge to deal with linear programming problems under uncertainty.
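The arithmetic underneath such methods is easy to sketch. Below is a minimal, illustrative implementation of non-negative triangular fuzzy numbers with the component-wise addition and multiplication commonly used in the fully fuzzy LP literature, plus one standard ranking function (the centroid); none of this is taken from the book itself.

```python
# Triangular fuzzy number (a, b, c) with a <= b <= c; the operations below
# are the usual approximations for non-negative TFNs. Illustrative only.
from dataclasses import dataclass

@dataclass
class TFN:
    a: float  # left endpoint of the support
    b: float  # core, where membership equals 1
    c: float  # right endpoint of the support

    def __add__(self, other):
        return TFN(self.a + other.a, self.b + other.b, self.c + other.c)

    def __mul__(self, other):
        # Component-wise product, valid when both TFNs are non-negative.
        return TFN(self.a * other.a, self.b * other.b, self.c * other.c)

    def rank(self):
        # One common ranking (defuzzification) function: the centroid.
        return (self.a + self.b + self.c) / 3

# Fuzzy objective value c1*x1 + c2*x2 of a candidate fuzzy solution.
c1, c2 = TFN(1, 2, 3), TFN(2, 3, 4)
x1, x2 = TFN(0, 1, 2), TFN(1, 1, 1)
z = c1 * x1 + c2 * x2
print(z, "rank:", z.rank())     # TFN(a=2, b=5, c=10) rank: ~5.67
```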
This congress proceedings volume presents recent research on leading-edge manufacturing processes. The aim of this scientific congress is to work out diverse individual solutions for "production in the border area" and transferable methodological approaches. In addition, guest speakers with different backgrounds give the congress participants food for thought, interpretations, views and suggestions. The manufacturing industry is currently undergoing a profound structural change, which on the one hand produces innovative solutions through the use of high-performance communication and information technology, and on the other hand is driven by new requirements for goods, especially in the mobility and energy sectors. With the social discourse on how we should live and act according to the principles of sustainability, this structural change is gaining momentum. It is essential to translate politically specified sustainability goals into socially accepted and marketable technical solutions. Production research is meeting this challenge and will make important contributions and provide innovative solutions from different perspectives.
This special book is dedicated to the memory of Professor Zdzislaw Pawlak, the father of rough set theory, commemorating both the 10th anniversary of his passing and 35 years of rough set theory. The book consists of 20 chapters divided into four sections, which focus in turn on a historical review of Professor Zdzislaw Pawlak and rough set theory; a review of the theory of rough sets; the state of the art of rough set theory; and major developments in rough set based data mining approaches. Apart from Professor Pawlak's contributions to rough set theory, other areas of his research interest are also included, as are recent theoretical studies and advances in applications. The book will offer a useful guide for researchers in knowledge engineering and data mining by suggesting new approaches to solving the problems they encounter.
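Pawlak's central construction is compact enough to state in a few lines of code. The sketch below (illustrative, not from the book) computes the lower and upper approximations of a target set from a partition of the universe into indiscernibility classes.

```python
# Rough set approximations: given a partition of the universe into
# indiscernibility classes, approximate a target set X from below and above.
def approximations(blocks, X):
    X = set(X)
    lower, upper = set(), set()
    for B in map(set, blocks):
        if B <= X:
            lower |= B          # class entirely inside X: certainly in X
        if B & X:
            upper |= B          # class intersecting X: possibly in X
    return lower, upper

blocks = [{1, 2}, {3}, {4, 5}, {6}]   # equivalence classes of the universe
low, up = approximations(blocks, X={2, 3, 4})
print("lower:", low)                  # {3}
print("upper:", up)                   # {1, 2, 3, 4, 5}
print("boundary region:", up - low)   # elements whose membership is uncertain
```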
This book presents innovative research demonstrating the potential and the advances of computing approaches that use healthcare-centric and medical datasets to solve complex healthcare problems. Computing techniques are among the key technologies currently used to perform medical diagnostics in the healthcare domain, thanks to the abundance of medical data being generated and collected. Nowadays, medical data is available in many different forms, such as MRI images, CT scan images, EHR data, test reports, histopathological data and doctor-patient conversation data. This opens up huge opportunities for applying computing techniques to derive data-driven models of very high utility in providing effective treatment to patients. Moreover, machine learning algorithms can uncover hidden patterns and relationships in medical datasets that would be too complex to uncover without a data-driven approach. With the help of computing systems, it is now possible for researchers to predict an accurate medical diagnosis for new patients using models built from previous patient data. Apart from automatic diagnostic tasks, computing techniques have also been applied in drug discovery, where they can save considerable time and money. Utilizing genomic data with various computing techniques is another emerging area, which may in fact be the key to fulfilling the dream of personalized medications. Medical prognostics is a further area in which machine learning has recently shown great promise: automatic prognostic models are being built that can predict the progress of a disease and suggest potential treatment paths to get ahead of its progression.
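As a toy illustration of the diagnostic-modelling idea described above (not an example from the book; the features and data are entirely synthetic), a classifier can be fitted to previous patients' records and then queried for a new patient:

```python
# Hypothetical diagnostic model: logistic regression on synthetic
# "patient" features. Feature semantics are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 500
X = rng.normal(size=(n, 3))   # e.g. standardized age, lab value, BMI (assumed)
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=n) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_tr, y_tr)
print("held-out accuracy:", model.score(X_te, y_te))

new_patient = [[0.3, -1.2, 0.8]]                 # a new, unseen record
print("predicted diagnosis:", model.predict(new_patient)[0])
```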
This edited monograph includes state-of-the-art contributions on continuous-time dynamical networks with delays. The book is divided into four parts. The first part presents tools and methods for the analysis of time-delay systems, with particular attention to control problems of large-scale or infinite-dimensional systems with delays. The second part is dedicated to the use of time-delay models for the analysis and design of Networked Control Systems. The third part focuses on the analysis and design of systems with asynchronous sampling intervals, as occur in Networked Control Systems. The last part presents several contributions on the design of cooperative control and observation laws for networked control systems. The target audience primarily comprises researchers and experts in the field of control theory, but the book may also be beneficial for graduate students.
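For orientation, the prototypical model class in this literature is the retarded linear time-delay system, often analysed with a Lyapunov-Krasovskii functional; the notation below is standard and not specific to this volume:

```latex
% Retarded linear system with constant delay tau > 0:
\dot{x}(t) = A\,x(t) + A_d\,x(t - \tau)
% Stability is commonly certified via a Lyapunov-Krasovskii functional:
V(x_t) = x(t)^{\top} P\,x(t) + \int_{t-\tau}^{t} x(s)^{\top} Q\,x(s)\,ds,
\qquad P \succ 0,\; Q \succ 0
```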
Over the past decade the field of Human-Computer Interaction has evolved from the study of the usability of interactive products towards a more holistic understanding of how they may mediate desired human experiences. This book identifies the notion of diversity in users' experiences with interactive products and proposes methods and tools for modeling it on two levels: (a) interpersonal diversity in users' responses to early conceptual designs, and (b) the dynamics of users' experiences over time. The Repertory Grid Technique is proposed as an alternative to standardized psychometric scales for modeling interpersonal diversity in users' responses to early concepts in the design process, and new Multi-Dimensional Scaling procedures are introduced for modeling such complex quantitative data. iScale, a tool for the retrospective assessment of users' experiences over time, is proposed as an alternative to longitudinal field studies, and a semi-automated technique for the analysis of the elicited experience narratives is introduced. Through these two methodological contributions, this book argues against averaging in the subjective evaluation of interactive products. It proposes the development of interactive tools that can assist designers in moving across multiple levels of abstraction of empirical data, as design-relevant knowledge might be found at all these levels. Foreword by Jean-Bernard Martens and Closing Note by Marc Hassenzahl.
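For readers unfamiliar with MDS, the classical (baseline) procedure that the book's new methods go beyond can be sketched in a few lines; this is the textbook algorithm, not the authors' procedure:

```python
# Classical multidimensional scaling: recover low-dimensional coordinates
# whose pairwise distances approximate a given distance matrix D.
import numpy as np

def classical_mds(D, dims=2):
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n       # centering matrix
    B = -0.5 * J @ (D ** 2) @ J               # double-centred Gram matrix
    vals, vecs = np.linalg.eigh(B)
    top = np.argsort(vals)[::-1][:dims]       # keep the largest eigenvalues
    return vecs[:, top] * np.sqrt(np.maximum(vals[top], 0))

# Four points on a line at positions 0, 1, 2, 5; MDS recovers the layout
# (up to translation and reflection) from the distances alone.
pos = np.array([0.0, 1.0, 2.0, 5.0])
D = np.abs(pos[:, None] - pos[None, :])
print(classical_mds(D, dims=1).ravel())       # approx. [-2, -1, 0, 3]
```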
This book introduces Document As System (DAS), a new GeoComputation pattern and a new GIS application pattern. It uses the GeoComputation language (G language) to describe and execute complex spatial analysis models in the MS Word environment, which addresses a bottleneck in GIS application, turns GIS from a spatial data visualization tool into a popular tool for spatial data analysis, and plays an important role in the wide application of GIS technology. The book systematically introduces the theory behind the new GeoComputation pattern and its application in the "dual-evaluation" of territorial and spatial planning, and can be used as a learning and reference manual for GIS professionals and for business personnel engaged in the "dual-evaluation" of territorial and spatial planning.
This book aims to bring together researchers and practitioners from diverse disciplines - sociology, biology, physics, and computer science - who share a passion to better understand the interdependencies within and across systems. This volume contains contributions presented at the 11th International Conference on Complex Networks (CompleNet) in Exeter, United Kingdom, 31 March - 3 April 2020. CompleNet is a venue for discussing ideas and findings about all types of networks, from biological, to technological, to informational and social. It is this interdisciplinary nature of complex networks that CompleNet aims to explore and celebrate.
The book conclusively solves problems associated with the control and estimation of nonlinear and chaotic dynamics in financial systems when these are described in the form of nonlinear ordinary differential equations. It then addresses problems associated with the control and estimation of financial systems governed by partial differential equations (e.g. the Black-Scholes partial differential equation (PDE) and its variants). Lastly, it offers an optimal solution to the problem of statistical validation of computational models and tools used to support financial engineers in decision making. The application of state-space models in financial engineering means that the heuristics and empirical methods currently used in decision-making procedures for finance can be eliminated. It also allows methods to be established that assure fault-free performance and optimality in the management of assets and capital, as well as stability in the functioning of financial systems. Covering the following key areas of financial engineering: (i) control and stabilization of financial systems dynamics, (ii) state estimation and forecasting, and (iii) statistical validation of decision-making tools, the book can be used for teaching undergraduate or postgraduate courses in financial engineering. It is also a useful resource for the engineering and computer science community.
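For reference, the Black-Scholes PDE mentioned above has the following standard form (general notation, not reproduced from the book), where V(S, t) is the option value, S the underlying price, sigma the volatility and r the risk-free rate:

```latex
\frac{\partial V}{\partial t}
  + \frac{1}{2}\,\sigma^{2} S^{2}\,\frac{\partial^{2} V}{\partial S^{2}}
  + r S\,\frac{\partial V}{\partial S} - r V = 0
```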
In this book, the editors explain how students enrolled in two digital forensic courses at their institution are exposed to experiential learning opportunities, in which the students acquire knowledge and skills in the subject matter while also learning how to adapt to the ever-changing digital forensic landscape. Their findings (e.g., forensic examination of different IoT devices) are also presented in the book. Digital forensics is a topic of increasing importance as our society becomes "smarter", with more of the "things" around us being internet- and inter-connected (e.g., Internet of Things (IoT) and smart home devices), thus increasing the likelihood that we will need to acquire data from these devices in a forensically sound manner. This book is of interest to digital forensic educators and digital forensic practitioners, as well as students seeking to learn about digital forensics.
These papers on Intelligent Data Analysis and Management (IDAM) examine issues related to the research and applications of Artificial Intelligence techniques in data analysis and management across a variety of disciplines. The papers derive from the 2013 IDAM conference in Kaohsiung, Taiwan. IDAM is an interdisciplinary research field involving academic researchers in information technologies, computer science, public policy, bioinformatics, medical informatics, and social and behavioral studies. The techniques studied include (but are not limited to): data visualization, data pre-processing, data engineering, database mining techniques, tools and applications, evolutionary algorithms, machine learning, neural nets, fuzzy logic, statistical pattern recognition, knowledge filtering, and post-processing.
This book provides an overview of the research on data privacy and privacy-enhancing technologies carried out by the participants of the ARES project. ARES (Advanced Research in Privacy and Security, CSD2007-00004) has been one of the most important research projects funded by the Spanish Government in the fields of computer security and privacy. It is part of the now extinct CONSOLIDER INGENIO 2010 program, a highly competitive program which aimed to advance knowledge and open new research lines among top Spanish research groups. The project started in 2007 and will finish in 2014. Composed of six research groups from six different institutions, it has gathered a substantial number of researchers during its lifetime. Among the work produced by the ARES project, one specific work package has been devoted to privacy. This book gathers works produced by members of the project related to data privacy and privacy-enhancing technologies. The presented works not only summarize important research carried out in the project but also serve as an overview of the state of the art in current research on data privacy and privacy-enhancing technologies.
This authored monograph presents key aspects of signal processing analysis in the biomedical arena. Unlike wireless communication systems, biological entities produce signals with an underlying nonlinear, chaotic nature that eludes classification by the standard signal processing techniques developed over the past several decades primarily for standard communication systems. This book separates what is genuinely random from what appears random yet is truly deterministic. At its core, this work gives the reader a perspective on biomedical signals and the means to classify and process such signals. In particular, a review of random processes, along with means to assess the behavior of random signals, is provided. The book also includes a general discussion of biological signals in order to demonstrate the inefficacy of well-known techniques at correctly extracting meaningful information from such signals. Finally, a thorough discussion of recently proposed signal processing tools and methods for addressing biological signals is included. The target audience primarily comprises researchers and expert practitioners, but the book may also be beneficial for graduate students.
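One simple way to make the deterministic-versus-random distinction concrete (an illustrative sketch, not a method attributed to this monograph) is a nearest-neighbour prediction test: deterministic data with a random appearance, such as the logistic map, is far more predictable from its own past than genuinely random data.

```python
# Nearest-neighbour one-step prediction test on a delay embedding:
# low prediction error suggests hidden determinism.
import numpy as np

def nn_prediction_error(x, dim=3):
    emb = np.array([x[i:i + dim] for i in range(len(x) - dim)])
    nxt = x[dim:]                              # continuation of each window
    errs = []
    for i in range(len(emb)):
        d = np.linalg.norm(emb - emb[i], axis=1)
        d[i] = np.inf                          # exclude the point itself
        j = int(np.argmin(d))                  # most similar history
        errs.append((nxt[j] - nxt[i]) ** 2)    # its continuation as prediction
    return float(np.mean(errs))

chaos = np.empty(500)
chaos[0] = 0.4
for i in range(499):
    chaos[i + 1] = 3.9 * chaos[i] * (1 - chaos[i])   # deterministic chaos
noise = np.random.default_rng(1).uniform(size=500)   # genuinely random

print("logistic map error:", nn_prediction_error(chaos))   # small
print("white noise error:", nn_prediction_error(noise))    # ~1/6, much larger
```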
The book offers a snapshot of the theories and applications of soft computing in the area of complex systems modeling and control. It presents the most important findings discussed during the 5th International Conference on Modelling, Identification and Control, held in Cairo from August 31 to September 2, 2013. The book consists of twenty-nine selected contributions, which have been thoroughly reviewed and extended before their inclusion in the volume. The different chapters, written by active researchers in the field, report on both current theories and important applications of soft computing. Besides providing readers with soft computing fundamentals and soft-computing-based inductive methodologies/algorithms, the book also discusses key industrial soft-computing applications, as well as multidisciplinary solutions developed for a variety of purposes, such as windup control, waste management, security issues and biomedical applications. It is a perfect reference guide for graduate students, researchers and practitioners in the area of soft computing, systems modeling and control.
This book presents recent applications, approaches and challenges in digital forensic science. One of the evolving challenges covered in the book is cloud forensic analysis, which applies digital forensic science to the cloud computing paradigm in order to conduct either live or static investigations within the cloud environment. The book also covers the theme of multimedia forensics and watermarking in the area of information security, including highlights of intelligent techniques designed for detecting significant changes in image and video sequences, and recent robust and computationally efficient digital watermarking techniques. The last part of the book provides several digital forensics related applications, in areas such as evidence acquisition enhancement, evidence evaluation, cryptography, and, finally, live investigation, illustrated by reconstructing a botnet attack scenario to show the malicious activities and files as evidence to be presented in court.
In the fields of data mining and control, the huge amount of unstructured data and the presence of uncertainty in system descriptions have always been critical issues. The book Randomized Algorithms in Automatic Control and Data Mining introduces the reader to the fundamentals of randomized algorithm applications in data mining (especially clustering) and in automatic control synthesis. The methods proposed in this book reduce the computational complexity of classical algorithms and the conservativeness of standard robust control techniques. It is shown that when a problem requires "brute force" in selecting among options, algorithms based on random selection of alternatives offer good results with a certain probability within a restricted time and significantly reduce the volume of operations.
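The flavour of the probabilistic guarantee is easy to demonstrate (a generic sketch of randomized selection with an invented stand-in objective, not the book's algorithms): sampling N alternatives at random lands in the best epsilon-fraction of all options with probability at least 1 - (1 - epsilon)^N, regardless of how many options exist.

```python
# Randomized selection among alternatives: sample size for a probabilistic
# near-optimality guarantee, instead of brute-force enumeration.
import numpy as np

rng = np.random.default_rng(7)
cost = lambda option: float(np.sum((option - 0.5) ** 2))  # stand-in objective

eps, delta = 0.01, 1e-6          # target: top 1% of options, failure risk 1e-6
N = int(np.ceil(np.log(delta) / np.log(1 - eps)))
print("samples needed:", N)      # about 1,374, independent of the option count

candidates = rng.uniform(size=(N, 5))      # N randomly drawn alternatives
best = min(candidates, key=cost)
print("best sampled cost:", cost(best))
```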
This book contains the full papers presented at the MICCAI 2013 workshop Bio-Imaging and Visualization for Patient-Customized Simulations (MWBIVPCS 2013). MWBIVPCS 2013 brought together researchers representing several fields, such as Biomechanics, Engineering, Medicine, Mathematics, Physics and Statistics. The contributions included in this book present and discuss new trends in those fields, using several methods and techniques, including the finite element method, similarity metrics, optimization processes, graphs, hidden Markov models, sensor calibration, fuzzy logic, data mining, cellular automata, active shape models, template matching and level sets. These serve as tools to address more efficiently different and timely applications involving signal and image acquisition, image processing and analysis, image segmentation, image registration and fusion, computer simulation, image-based modelling, simulation and surgical planning, image-guided robot-assisted surgery and image-based diagnosis. This book will appeal to researchers, PhD students and graduate students with multidisciplinary interests related to the areas of medical imaging, image processing and analysis, computer vision, image segmentation, image registration and fusion, scientific data visualization and image-based modeling and simulation.
Biological and other natural processes have always been a source of inspiration for computer science and information technology. Many emerging problem-solving techniques integrate advanced evolution and cooperation strategies, encompassing a range of spatio-temporal scales for visionary conceptualization of evolutionary computation. This book is a collection of research works presented at the VI International Workshop on Nature Inspired Cooperative Strategies for Optimization (NICSO), held in Canterbury, UK. Previous editions of NICSO were held in Granada, Spain (2006 & 2010), Acireale, Italy (2007), Tenerife, Spain (2008), and Cluj-Napoca, Romania (2011). NICSO 2013 and this book provide a forum where state-of-the-art research, the latest ideas and emerging areas of nature-inspired cooperative strategies for problem solving are vigorously discussed and exchanged among the scientific community. The breadth and variety of articles in this book report on nature-inspired methods and applications such as Swarm Intelligence, Hyper-heuristics, Evolutionary Algorithms, Cellular Automata, Artificial Bee Colony, Dynamic Optimization, Support Vector Machines, Multi-Agent Systems, Ant Clustering, Evolutionary Design Optimisation, Game Theory and several other cooperation models.
This volume presents a collection of original research works by leading specialists focusing on novel and promising approaches in which the multi-agent system paradigm is used to support, enhance or replace traditional approaches to solving difficult optimization problems. The editors have invited several well-known specialists to present their solutions, tools, and models falling under the common denominator of agent-based optimization. The book consists of eight chapters covering examples of application of the multi-agent paradigm and respective customized tools to solve difficult optimization problems arising in different areas such as machine learning, scheduling, transportation and, more generally, distributed and cooperative problem solving.
With the increasing advances in hardware technology for data collection, and advances in software technology (databases) for data organization, computer scientists have increasingly participated in the latest advancements of the outlier analysis field. Computer scientists, specifically, approach this field based on their practical experience in managing large amounts of data, and with far fewer assumptions: the data can be of any type, structured or unstructured, and may be extremely large. Outlier Analysis is a comprehensive exposition, as understood by data mining experts, statisticians and computer scientists. The book has been organized carefully, and emphasis was placed on simplifying the content, so that students and practitioners can also benefit. Chapters typically cover one of three areas: methods and techniques commonly used in outlier analysis, such as linear methods, proximity-based methods, subspace methods, and supervised methods; data domains, such as text, categorical, mixed-attribute, time-series, streaming, discrete sequence, spatial and network data; and key applications of these methods in diverse domains such as credit card fraud detection, intrusion detection, medical diagnosis, earth science, web log analytics, and social network analysis.
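As a concrete taste of the proximity-based methods listed above (a generic sketch, not code from the book), one classical scheme scores each point by its distance to its k-th nearest neighbour and flags the largest scores as outliers:

```python
# k-NN distance outlier scoring: points far from their k-th nearest
# neighbour are candidate outliers. Brute-force distances; small data only.
import numpy as np

def knn_outlier_scores(X, k=5):
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)            # a point is not its own neighbour
    return np.sort(d, axis=1)[:, k - 1]    # distance to the k-th neighbour

rng = np.random.default_rng(3)
X = np.vstack([rng.normal(size=(100, 2)), [[8.0, 8.0]]])  # planted outlier
scores = knn_outlier_scores(X)
print("most outlying point:", X[np.argmax(scores)])       # -> [8. 8.]
```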
The aim of this book is to explain to high-performance computing (HPC) developers how to use the Intel® Xeon Phi series products efficiently. To that end, it introduces some computing grammar, programming technology and optimization methods for using many-integrated-core (MIC) platforms, and also offers tips and tricks for actual use based on the authors' first-hand optimization experience. The material is organized in three sections. The first section, Basics of MIC, introduces the fundamentals of MIC architecture and programming, including the specific Intel MIC programming environment. Next, the section on Performance Optimization explains general MIC optimization techniques, which are then illustrated step-by-step using the classical parallel programming example of matrix multiplication. Finally, the section on Project Development presents a set of practical and experience-driven methods for using parallel computing in application projects, including how to determine whether a serial or parallel CPU program is suitable for MIC and how to transplant a program onto MIC. This book appeals to two main audiences: first, software developers of HPC applications, whom it will enable to fully exploit the MIC architecture and thus achieve the extreme performance usually required in biological genetics, medical imaging, aerospace, meteorology and other areas of HPC; and second, students and researchers engaged in parallel and high-performance computing, whom it will guide in pushing the limits of system performance for HPC applications.
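The book's running optimization example is matrix multiplication. As a language-neutral illustration of why that example is chosen (this is not MIC code; it merely contrasts a naive kernel with an optimized one), compare a straightforward triple loop with a call into an optimized BLAS routine:

```python
# Naive O(n^3) matrix multiplication versus an optimized BLAS kernel.
import time
import numpy as np

n = 128
A, B = np.random.rand(n, n), np.random.rand(n, n)

t0 = time.perf_counter()
C = np.zeros((n, n))
for i in range(n):                 # naive triple loop
    for j in range(n):
        s = 0.0
        for k in range(n):
            s += A[i, k] * B[k, j]
        C[i, j] = s
t1 = time.perf_counter()

C_fast = A @ B                     # vectorized, cache- and SIMD-friendly
t2 = time.perf_counter()

print(f"naive: {t1 - t0:.3f}s  optimized: {t2 - t1:.6f}s")
print("results agree:", np.allclose(C, C_fast))
```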
This book contains an edited selection of the papers accepted for presentation and discussion at the first International Symposium on Qualitative Research (ISQR2016), held in Porto, Portugal, July 12th-14th, 2016. The book and the symposium feature four main application fields (Education, Health, Social Sciences, and Engineering and Technology) and seven main subjects: Rationale and Paradigms of Qualitative Research (theoretical studies, critical reflection on epistemological, ontological and axiological dimensions); Systematization of Approaches with Qualitative Studies (literature review, integrating results, aggregation studies, meta-analysis, meta-analysis of qualitative meta-synthesis, meta-ethnography); Qualitative and Mixed Methods Research (emphasis on research processes that build on mixed methodologies with priority given to qualitative approaches); Data Analysis Types (content analysis, discourse analysis, thematic analysis, narrative analysis, etc.); Innovative Processes of Qualitative Data Analysis (design analysis, articulation and triangulation of different sources of data - images, audio, video); Qualitative Research in Web Context (eResearch, virtual ethnography, interaction analysis, latent corpus on the internet, etc.); and Qualitative Analysis with Support of Specific Software (usability studies, user experience, the impact of software on the quality of research).
You may like...
Research Handbook on the Law of… - Woodrow Barfield, Ugo Pagallo (Paperback) - R1,471 (Discovery Miles 14 710)
Competition and Regulation in the Data… - Gintare Surblyte-Namaviciene (Hardcover) - R3,019 (Discovery Miles 30 190)
Advanced Introduction to Law and… - Woodrow Barfield, Ugo Pagallo (Paperback) - R639 (Discovery Miles 6 390)
Technology, Users and Uses - Ethics and… - Joan Casas-Roma (Hardcover)