This book contains papers presented at the 2014 MICCAI Workshop on Computational Diffusion MRI, CDMRI’14. Detailing new computational methods applied to diffusion magnetic resonance imaging data, it offers readers a snapshot of the current state of the art and covers a wide range of topics from fundamental theoretical work on mathematical modeling to the development and evaluation of robust algorithms and applications in neuroscientific studies and clinical practice. Inside, readers will find information on brain network analysis, mathematical modeling for clinical applications, tissue microstructure imaging, super-resolution methods, signal reconstruction, visualization, and more. Contributions include both careful mathematical derivations and a large number of rich full-color visualizations. Computational techniques are key to the continued success and development of diffusion MRI and to its widespread transfer into the clinic. This volume will offer a valuable starting point for anyone interested in learning computational diffusion MRI. It also offers new perspectives and insights on current research challenges for those currently in the field. The book will be of interest to researchers and practitioners in computer science, MR physics, and applied mathematics.
This book is a tribute to 40 years of contributions by Professor Mo Jamshidi, a well-known and respected scholar, researcher, and educator. Mo Jamshidi has spent his professional career formalizing and extending the field of large-scale complex systems (LSS) engineering, educating numerous graduates along the way, particularly ethnic minorities. He has made significant contributions in modeling, optimization, CAD, control and applications of large-scale systems, leading to his current global role in formalizing system of systems engineering (SoSE) as a new field. His books on complex LSS and SoSE have filled a vacuum in the cyber-physical systems literature for the 21st century. His contributions to ethnic minority engineering education began with his work at the University of New Mexico (UNM, a Tier-I Hispanic Serving Institution) in 1980 through a NASA JPL grant. Through several subsequent major federal grants, he formalized a model for educating minorities, called the VI-P Pyramid, in which students from K-12 (the bottom of the pyramid) to doctoral level (the top of the pyramid) form a seamless group working on one project, with upper-level students mentoring lower-level ones on a sequential basis. Since 1980, he has graduated over 114 minority students: 62 Hispanics, 34 African Americans, 15 Native Americans, and 3 Pacific Islanders. This book contains contributed chapters from colleagues and former and current students of Professor Jamshidi. The areas of focus are control systems, energy and system of systems, robotics and soft computing.
This book presents novel and advanced topics in Medical Image Processing and Computational Vision in order to solidify knowledge in the related fields and define their key stakeholders. It contains extended versions of selected papers presented in VipIMAGE 2013 – IV International ECCOMAS Thematic Conference on Computational Vision and Medical Image Processing, which took place in Funchal, Madeira, Portugal, on 14-16 October 2013. The twenty-two chapters were written by invited experts of international recognition and address important issues in medical image processing and computational vision, including: 3D vision, 3D visualization, colour quantisation, continuum mechanics, data fusion, data mining, face recognition, GPU parallelisation, image acquisition and reconstruction, image and video analysis, image clustering, image registration, image restoring, image segmentation, machine learning, modelling and simulation, object detection, object recognition, object tracking, optical flow, pattern recognition, pose estimation, and texture analysis. Different applications are addressed and described throughout the book, comprising: biomechanical studies, bio-structure modelling and simulation, bone characterization, cell tracking, computer-aided diagnosis, dental imaging, face recognition, hand gestures detection and recognition, human motion analysis, human-computer interaction, image and video understanding, image processing, image segmentation, object and scene reconstruction, object recognition and tracking, remote robot control, and surgery planning. This volume is of use to researchers, students, practitioners and manufacturers from several multidisciplinary fields, such as artificial intelligence, bioengineering, biology, biomechanics, computational mechanics, computational vision, computer graphics, computer science, computer vision, human motion, imagiology, machine learning, machine vision, mathematics, medical image, medicine, pattern recognition, and physics.
This book provides embedded software developers with techniques for programming heterogeneous Multi-Processor Systems-on-Chip (MPSoCs), capable of executing multiple applications simultaneously. It describes a set of algorithms and methodologies to narrow the software productivity gap, as well as an in-depth description of the underlying problems and challenges of today's programming practices. The authors present four different tool flows: a parallelism extraction flow for applications written in the C programming language; a mapping and scheduling flow for parallel applications; a special mapping flow for baseband applications in the context of Software Defined Radio (SDR); and a final flow for analyzing multiple applications at design time. The tool flows are evaluated on Virtual Platforms (VPs), which mimic different characteristics of state-of-the-art heterogeneous MPSoCs.
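To give a flavour of the mapping and scheduling problem such tool flows address, the following Python sketch assigns a set of independent tasks to heterogeneous processors with a greedy earliest-finish-time heuristic. The task names, workloads and processor speeds are invented for illustration; real flows also account for dependencies, communication and memory, and this is not the book's own methodology.

```python
# Minimal sketch of mapping independent tasks onto a heterogeneous MPSoC.
# Workloads and per-processor speeds are made-up illustrative values.

tasks = {"fft": 900, "fir": 400, "viterbi": 700, "gui": 200}   # abstract work units
processors = {"risc0": 1.0, "risc1": 1.0, "dsp0": 2.5}          # relative speed

finish_time = {p: 0.0 for p in processors}
mapping = {}

# Greedy heuristic: place the largest task on the processor that finishes it earliest.
for name, work in sorted(tasks.items(), key=lambda kv: -kv[1]):
    best = min(processors, key=lambda p: finish_time[p] + work / processors[p])
    finish_time[best] += work / processors[best]
    mapping[name] = best

print(mapping)                      # e.g. {'fft': 'dsp0', ...}
print(max(finish_time.values()))    # estimated makespan
```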
This monograph focuses on recent advances in smart, multimedia and computer gaming technologies. The contributions include: Smart Gamification and Smart Serious Games; fusion of secure IPsec-based Virtual Private Network, mobile computing and rich multimedia technology; Teaching and Promoting Smart Internet of Things Solutions Using the Serious-game Approach; Evaluation of Student Knowledge using an e-Learning Framework; the iTEC Eduteka; 3D Virtual Worlds as a Fusion of Immersing, Visualizing, Recording, and Replaying Technologies; and fusion of multimedia and mobile technology in audio guides for Museums and Exhibitions, from Bluetooth Push to Web Pull. The book is directed to researchers, students and software developers working in the areas of education and information technologies.
This book provides an introduction to the emerging field of planning and decision making for aerial robots. An aerial robot is the ultimate form of Unmanned Aerial Vehicle: an aircraft endowed with built-in intelligence, requiring no direct human control and able to perform a specific task. It must be able to fly within a partially structured environment, to react and adapt to changing environmental conditions, and to accommodate the uncertainty that exists in the physical world. An aerial robot can be regarded as a physical agent that exists and flies in the real 3D world, and can sense its environment and act on it to achieve specific goals; throughout this book, an aerial robot will therefore also be referred to as an agent. Fundamental problems in aerial robotics include the tasks of spatial motion, spatial sensing and spatial reasoning. Reasoning in complex environments represents a difficult problem. The issues specific to spatial reasoning are planning and decision making. Planning deals with the algorithmic development of trajectories based on the available information, while decision making determines priorities and evaluates potential environmental uncertainties. The issues specific to planning and decision making for aerial robots in their environment are examined in this book and categorized as follows: motion planning, deterministic decision making, decision making under uncertainty and, finally, multi-robot planning. A variety of techniques are presented, and a number of relevant case studies are examined. The topics considered are multidisciplinary in nature and lie at the intersection of Robotics, Control Theory, Operational Research and Artificial Intelligence.
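As a deliberately simplified illustration of the motion-planning task mentioned above, the Python sketch below runs a breadth-first search on a small 2D occupancy grid. The grid, start and goal are invented for illustration; real aerial-robot planners work in 3D and under kinematic and uncertainty constraints, so this is only a toy example, not a technique from the book.

```python
from collections import deque

# Toy 2D occupancy grid: 0 = free, 1 = obstacle (illustrative values only).
grid = [
    [0, 0, 0, 1, 0],
    [1, 1, 0, 1, 0],
    [0, 0, 0, 0, 0],
    [0, 1, 1, 1, 0],
    [0, 0, 0, 0, 0],
]

def plan(start, goal):
    """Breadth-first search returning a shortest obstacle-free path."""
    rows, cols = len(grid), len(grid[0])
    parent = {start: None}
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:
            path = []
            while cell is not None:
                path.append(cell)
                cell = parent[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0 \
                    and (nr, nc) not in parent:
                parent[(nr, nc)] = cell
                queue.append((nr, nc))
    return None  # no obstacle-free path exists

print(plan((0, 0), (4, 4)))
```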
This book addresses the open problem of engineering normative open systems using the multi-agent paradigm. Normative open systems are systems in which heterogeneous and autonomous entities and institutions coexist in a complex social and legal framework that can evolve to address the different and often conflicting objectives of the many stakeholders involved. The book presents a software engineering approach, ROMAS (Regulated Open Multi-Agent Systems), which covers both the analysis and design of these kinds of systems and deals with the open issues in the area. ROMAS defines a specific multi-agent architecture, meta-model, methodology and CASE tool. The CASE tool is based on Model-Driven technology and integrates the graphical design with the formal verification of some properties of these systems by means of model-checking techniques. Tables are used throughout to give readers insight into the most important requirements for designing normative open multi-agent systems, and the book provides a detailed and easy-to-understand description of the ROMAS approach and the advantages of using it. The method is illustrated with case studies, through which the reader can develop a comprehensive understanding of applying ROMAS to a given problem; the case studies are presented with illustrations of the developments. Reading this book will help readers understand the increasing demand for normative open systems and their development requirements; understand how multi-agent systems approaches can be used to deal with the development of systems of this kind; learn an easy-to-use and complete engineering method for large-scale and complex normative systems; and recognize how Model-Driven technology can be used to integrate the analysis, design, verification and implementation of multi-agent systems.
The field of robotic vision has advanced dramatically in recent years with the development of new range sensors. Tremendous progress has been made, with significant impact on areas such as robotic navigation, scene/environment understanding, and visual learning. This edited book provides a solid and diversified reference source for some of the most recent important advances in the field of robotic vision. The book starts with articles that describe new techniques for understanding scenes from 2D/3D data, such as the estimation of planar structures, recognition of multiple objects in the scene using different kinds of features as well as their spatial and semantic relationships, generation of 3D object models, and approaches to recognizing partially occluded objects. Novel techniques are introduced to improve 3D perception accuracy with other sensors such as a gyroscope, positioning accuracy with a visual-servoing-based alignment strategy for microassembly, and object recognition reliability using related manipulation motion models. For autonomous robot navigation, different vision-based localization and tracking strategies and algorithms are discussed. New approaches using probabilistic analysis for robot navigation, online learning of vision-based robot control, and 3D motion estimation via intensity differences from a monocular camera are described. This collection will be beneficial to graduate students, researchers, and professionals working in the area of robotic vision.
This is a comprehensive description of the cryptographic hash function BLAKE, one of the five final contenders in the NIST SHA3 competition, and of BLAKE2, an improved version popular among developers. It describes how BLAKE was designed and why BLAKE2 was developed, and it offers guidelines on implementing and using BLAKE, with a focus on software implementation. In the first two chapters, the authors offer a short introduction to cryptographic hashing, the SHA3 competition and BLAKE. They review applications of cryptographic hashing, they describe some basic notions such as security definitions and state-of-the-art collision search methods and they present SHA1, SHA2 and the SHA3 finalists. In the chapters that follow, the authors give a complete description of the four instances BLAKE-256, BLAKE-512, BLAKE-224 and BLAKE-384; they describe applications of BLAKE, including simple hashing with or without a salt and HMAC and PBKDF2 constructions; they review implementation techniques, from portable C and Python to AVR assembly and vectorized code using SIMD CPU instructions; they describe BLAKE’s properties with respect to hardware design for implementation in ASICs or FPGAs; they explain BLAKE's design rationale in detail, from NIST’s requirements to the choice of internal parameters; they summarize the known security properties of BLAKE and describe the best attacks on reduced or modified variants; and they present BLAKE2, the successor of BLAKE, starting with motivations and also covering its performance and security aspects. The book concludes with detailed test vectors, a reference portable C implementation of BLAKE, and a list of third-party software implementations of BLAKE and BLAKE2. The book is oriented towards practice – engineering and craftsmanship – rather than theory. It is suitable for developers, engineers and security professionals engaged with BLAKE and cryptographic hashing in general and for applied cryptography researchers and students who need a consolidated reference and a detailed description of the design process, or guidelines on how to design a cryptographic algorithm.
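BLAKE2, the successor described above, is available in the Python standard library (hashlib) since Python 3.6, which makes it easy to try the salted and keyed hashing use cases the book discusses. A minimal sketch, with arbitrary message, salt and key values chosen purely for illustration:

```python
import hashlib

message = b"hello, world"

# Plain BLAKE2b hash with a custom digest size.
h = hashlib.blake2b(message, digest_size=32)
print(h.hexdigest())

# Salted hashing: BLAKE2 takes the salt as a parameter instead of
# requiring it to be prepended to the message (salt value is arbitrary).
salted = hashlib.blake2b(message, digest_size=32, salt=b"0123456789abcdef")
print(salted.hexdigest())

# Keyed hashing: BLAKE2's keyed mode can serve as a MAC in place of HMAC.
mac = hashlib.blake2b(message, digest_size=32, key=b"secret key")
print(mac.hexdigest())
```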
Color perception plays an important role in object recognition and scene understanding both for humans and intelligent vision systems. Recent advances in digital color imaging and computer hardware technology have led to an explosion in the use of color images in a variety of applications including medical imaging, content-based image retrieval, biometrics, watermarking, digital inpainting, remote sensing, visual quality inspection, among many others. As a result, automated processing and analysis of color images has become an active area of research, to which the large number of publications of the past two decades bears witness. The multivariate nature of color image data presents new challenges for researchers and practitioners as the numerous methods developed for single channel images are often not directly applicable to multichannel ones. The goal of this volume is to summarize the state-of-the-art in the early stages of the color image processing pipeline.
The success of a BCI system depends as much on the system itself as on the user’s ability to produce distinctive EEG activity. BCI systems can be divided into two groups according to the placement of the electrodes used to detect and measure the firing of neurons in the brain. Invasive systems use electrodes inserted directly into the cortex for single-cell or multi-unit recording, or placed on the surface of the cortex (or dura) for electrocorticography (ECoG); noninvasive systems use sensors placed on the scalp and rely on electroencephalography (EEG) or magnetoencephalography (MEG) to detect neuronal activity. The book is divided into three parts. The first part covers the basic concepts and provides an overview of Brain Computer Interfaces. The second part describes new theoretical developments in BCI systems. The third part covers views on real applications of BCI systems.
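To make the noninvasive case a little more concrete, the sketch below computes the kind of frequency-band power feature commonly used by EEG-based BCIs to distinguish mental states. The synthetic signal, sampling rate and band limits are illustrative assumptions, not material from the book.

```python
import numpy as np

fs = 256                      # sampling rate in Hz (assumed)
t = np.arange(0, 2, 1 / fs)   # two seconds of data

# Synthetic single-channel "EEG": a 10 Hz alpha component plus noise.
signal = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(t.size)

def band_power(x, fs, low, high):
    """Average spectral power of x between low and high Hz."""
    spectrum = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(x.size, d=1 / fs)
    mask = (freqs >= low) & (freqs <= high)
    return spectrum[mask].mean()

# Alpha-band (8-12 Hz) power, a common feature in motor-imagery BCIs.
print(band_power(signal, fs, 8, 12))
```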
This book discusses recent developments and contemporary research in mathematics, statistics and their applications in computing. All contributing authors are eminent academicians, scientists, researchers and scholars in their respective fields, hailing from around the world. The conference has emerged as a powerful forum, offering researchers a venue to discuss, interact and collaborate and stimulating the advancement of mathematics and its applications in computer science. The book will allow aspiring researchers to update their knowledge of cryptography, algebra, frame theory, optimizations, stochastic processes, compressive sensing, functional analysis, complex variables, etc. Educating future consumers, users, producers, developers and researchers in mathematics and computing is a challenging task and essential to the development of modern society. Hence, mathematics and its applications in computer science are of vital importance to a broad range of communities, including mathematicians and computing professionals across different educational levels and disciplines.
When no samples are available to estimate a probability distribution, we have to invite domain experts to evaluate the belief degree that each event will happen. Some may think that belief degrees should be modeled by subjective probability or fuzzy set theory; however, both approaches can lead to counterintuitive results in this case. In order to deal rationally with belief degrees, uncertainty theory was founded in 2007 and has since been studied by many researchers. Nowadays, uncertainty theory has become a branch of axiomatic mathematics for modeling belief degrees. This is an introductory textbook on uncertainty theory, uncertain programming, uncertain statistics, uncertain risk analysis, uncertain reliability analysis, uncertain set, uncertain logic, uncertain inference, uncertain process, uncertain calculus, and uncertain differential equation. The textbook also shows applications of uncertainty theory to scheduling, logistics, networks, data mining, control, and finance.
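For orientation, the axiomatic core of uncertainty theory can be summarized by the properties required of an uncertain measure $\mathcal{M}$ on events $\Lambda$ over a universal set $\Gamma$. The formulation below is a brief sketch stated from memory; the textbook itself gives the precise definitions.

```latex
\begin{align*}
&\mathcal{M}\{\Gamma\} = 1 && \text{(normality)}\\
&\mathcal{M}\{\Lambda\} + \mathcal{M}\{\Lambda^{c}\} = 1 && \text{(duality)}\\
&\mathcal{M}\Bigl\{\bigcup_{i=1}^{\infty}\Lambda_i\Bigr\} \le \sum_{i=1}^{\infty}\mathcal{M}\{\Lambda_i\} && \text{(countable subadditivity)}\\
&\mathcal{M}\Bigl\{\prod_{k=1}^{\infty}\Lambda_k\Bigr\} = \bigwedge_{k=1}^{\infty}\mathcal{M}_k\{\Lambda_k\} && \text{(product axiom)}
\end{align*}
```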
The areas of natural language processing and computational linguistics have continued to grow in recent years, driven by the demand to automatically process text and spoken data. With the processing power and techniques now available, research is scaling up from lab prototypes to real-world, proven applications. This book teaches the principles of natural language processing, first covering practical linguistics issues such as encoding and annotation schemes, defining words, tokens, parts of speech and morphology, as well as key concepts in machine learning, such as entropy, regression and classification, which are used throughout the book. It then details the language-processing functions involved, including part-of-speech tagging using rules and stochastic techniques, using Prolog to write phrase-structure grammars, syntactic formalisms and parsing techniques, semantics, predicate logic and lexical semantics, and the analysis of discourse and applications in dialogue systems. A key feature of the book is the author's hands-on approach throughout, with sample code in Prolog and Perl, extensive exercises, and a detailed introduction to Prolog. The reader is supported by a companion website that contains teaching slides, programs and additional material. The second edition is a complete revision of the techniques presented in the book to reflect advances in the field: the author has redesigned or updated all the chapters, added two new ones and considerably expanded the sections on machine-learning techniques.
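Since entropy is one of the machine-learning concepts singled out above, here is a small self-contained illustration, written in Python rather than the book's Prolog or Perl. The toy corpus is invented; real language models estimate such distributions from far larger samples.

```python
import math
from collections import Counter

# Toy corpus (illustrative only).
tokens = "the cat sat on the mat the cat ran".split()

counts = Counter(tokens)
total = sum(counts.values())

# Shannon entropy H = -sum p(w) * log2 p(w) of the unigram distribution.
entropy = -sum((c / total) * math.log2(c / total) for c in counts.values())
print(f"{entropy:.3f} bits per token")
```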
The volume analyses and develops David Makinson’s efforts to make classical logic useful outside its most obvious application areas. The book contains chapters that analyse, appraise, or reshape Makinson’s work, and chapters that develop themes emerging from his contributions. These are grouped into major areas to which Makinson has made highly influential contributions, and the volume in its entirety is divided into four sections, each devoted to a particular area of logic: belief change, uncertain reasoning, normative systems and the resources of classical logic. Among the contributions included in the volume, one chapter focuses on the “inferential preferential method”, i.e. the combined use of classical logic and mechanisms of preference and choice, and provides examples from Makinson’s work in non-monotonic and defeasible reasoning and belief revision. Another chapter offers a short autobiography by Makinson which details his discovery of modern logic and his travels across continents, and reveals his intellectual encounters and inspirations. The chapter also contains an unusually explicit statement of his views on the (limited but important) role of logic in philosophy.
The book introduces the reader to game-changing ways of building and utilizing Internet-based services related to design and manufacturing activities through the cloud, an approach known as cloud-based design and manufacturing (CBDM). In a broader sense, CBDM refers to a new product realization model that enables collective open innovation and rapid product development with minimum costs through social networking and negotiation platforms between service providers and consumers. It is a type of parallel and distributed system consisting of a collection of interconnected physical and virtualized service pools of design and manufacturing resources, as well as intelligent search capabilities for design and manufacturing solutions. Practicing engineers and decision makers will learn how to strategically position their product development operations for success in a globalized, interconnected world.
This book illustrates the program of Logical-Informational Dynamics. Rational agents exploit the information available in the world in delicate ways, adopt a wide range of epistemic attitudes, and in that process, constantly change the world itself. Logical-Informational Dynamics is about logical systems putting such activities at center stage, focusing on the events by which we acquire information and change attitudes. Its contributions show many current logics of information and change at work, often in multi-agent settings where social behavior is essential, and often stressing Johan van Benthem's pioneering work in establishing this program. However, this is not a Festschrift, but a rich tapestry for a field with a wealth of strands of its own. The reader will see the state of the art in such topics as information update, belief change, preference, learning over time, and strategic interaction in games. Moreover, no tight boundary has been enforced, and some chapters add more general mathematical or philosophical foundations or links to current trends in computer science. The theme of this book lies at the interface of many disciplines. Logic is the main methodology, but the various chapters cross easily between mathematics, computer science, philosophy, linguistics, cognitive and social sciences, while also ranging from pure theory to empirical work. Accordingly, the authors of this book represent a wide variety of original thinkers from different research communities. And their interconnected themes challenge at the same time how we think of logic, philosophy and computation. Thus, very much in line with van Benthem's work over many decades, the volume shows how all these disciplines form a natural unity in the perspective of dynamic logicians (broadly conceived) exploring their new themes today. And at the same time, in doing so, it offers a broader conception of logic with a certain grandeur, moving its horizons beyond the traditional study of consequence relations.
Structures placed on hillsides often present a number of challenges and a limited number of economical choices for site design. An option sometimes employed is to use the building frame as a retaining element, forming a Rigidly Framed Earth Retaining Structure (RFERS). The relationship between temperature and the earth pressure acting on RFERS is explored in this monograph through a 4.5-year monitoring program of a heavily instrumented, in-service structure. The data indicated that the coefficient of earth pressure behind the monitored RFERS had a strong linear correlation with temperature. The study also revealed that thermal cycles, rather than lateral earth pressure, were the cause of failure in many structural elements. The book demonstrates that, depending on the relative stiffness of the retained soil mass and that of the structural frame, the lateral earth pressure developed during thermal expansion can reach magnitudes several times larger than those determined using classical earth pressure theories. Additionally, a nearly perpetual lateral displacement away from the retained soil mass may occur at the free end of the RFERS, leading to unacceptable serviceability problems. These results suggest that reinforced concrete structures designed for the flexural stresses imposed by the backfill soil will be inadequately reinforced to resist stresses produced during the expansion cycles. Parametric studies of single- and multi-story RFERS with varying geometries and properties are also presented to investigate the effects of structural stiffness on the displacement of RFERS and the lateral earth pressure developed in the soil mass. These studies can aid the reader in selecting appropriate values of lateral earth pressure for the design of RFERS. Finally, simplified closed-form equations that can be used to predict the lateral drift of RFERS are presented. KEY WORDS: Earth Pressure; Soil-Structure Interaction; Mechanics; Failure; Distress; Temperature; Thermal Effects; Concrete; Coefficient of Thermal Expansion; Segmental Bridges; Jointless Bridges; Integral Bridges; Geotechnical Instrumentation; Finite Element Modeling; FEM; Numerical Modeling.
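For readers unfamiliar with the "classical earth pressure theories" referred to above, the standard coefficients for a cohesionless backfill give a sense of the baseline against which the monitored pressures are compared. This is a textbook summary, not a result from the monograph; here $\varphi$ denotes the soil's effective friction angle.

```latex
\begin{align*}
K_a &= \frac{1-\sin\varphi}{1+\sin\varphi} = \tan^{2}\!\left(45^{\circ}-\tfrac{\varphi}{2}\right) && \text{(active, Rankine)}\\
K_0 &\approx 1-\sin\varphi && \text{(at rest, Jaky)}\\
K_p &= \frac{1+\sin\varphi}{1-\sin\varphi} = \tan^{2}\!\left(45^{\circ}+\tfrac{\varphi}{2}\right) && \text{(passive, Rankine)}
\end{align*}
```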
The pursuit of artificial intelligence has been a highly active domain of research for decades, yielding exciting scientific insights and productive new technologies. In terms of generating intelligence, however, this pursuit has yielded only limited success. This book explores the hypothesis that adaptive growth is a means of moving forward. By emulating the biological process of development, we can incorporate desirable characteristics of natural neural systems into engineered designs and thus move closer towards the creation of brain-like systems. The particular focus is on how to design artificial neural networks for engineering tasks. The book consists of contributions from 18 researchers, ranging from detailed reviews of recent domains by senior scientists, to exciting new contributions representing the state of the art in machine learning research. The book begins with broad overviews of artificial neurogenesis and bio-inspired machine learning, suitable both as an introduction to the domains and as a reference for experts. Several contributions provide perspectives and future hypotheses on recent highly successful lines of research, including deep learning, the HyperNEAT model of developmental neural network design, and a simulation of the visual cortex. Other contributions cover recent advances in the design of bio-inspired artificial neural networks, including the creation of machines for classification, the behavioural control of virtual agents, the design of virtual multi-component robots and morphologies, and the creation of flexible intelligence. Throughout, the contributors share their vast expertise on the means and benefits of creating brain-like machines. This book is appropriate for advanced students and practitioners of artificial intelligence and machine learning.
The book Soft Computing for Business Intelligence is the remarkable output of a program based on the idea of joint trans-disciplinary research, as supported by the Eureka Iberoamerica Network and the University of Oldenburg. It contains twenty-seven papers allocated to three sections: Soft Computing, Business Intelligence and Knowledge Discovery, and Knowledge Management and Decision Making. Although the contents touch different domains, they are similar in so far as they follow the BI principle of “Observation and Analysis” while keeping a practice-oriented theoretical eye on sound methodologies, such as Fuzzy Logic, Compensatory Fuzzy Logic (CFL), Rough Sets and other soft computing elements. The book breaks with the traditional focus on business and extends Business Intelligence techniques in an impressive way to a broad range of fields, such as medicine, the environment, wind farming, social collaboration and interaction, car sharing and sustainability.
Digital Speech Processing Using Matlab deals with digital speech pattern recognition, speech production model, speech feature extraction, and speech compression. The book is written in a manner that is suitable for beginners pursuing basic research in digital speech processing. Matlab illustrations are provided for most topics to enable better understanding of concepts. This book also deals with the basic pattern recognition techniques (illustrated with speech signals using Matlab) such as PCA, LDA, ICA, SVM, HMM, GMM, BPN, and KSOM.
This book describes the challenges that critical infrastructure systems face and presents state-of-the-art solutions to address them. How can we design intelligent systems or intelligent agents that can make appropriate real-time decisions in the management of such large-scale, complex systems? What are the primary challenges for critical infrastructure systems? The book provides readers with the relevant information to recognize how important infrastructures are and their role in connection with a society’s economy, security and prosperity. It goes on to describe state-of-the-art solutions to address these points, including new methodologies and instrumentation tools (e.g. embedded software and intelligent algorithms) for transforming and optimizing target infrastructures. The book is the most comprehensive resource to date for professionals in both the private and public sectors, while also offering an essential guide for students and researchers in the areas of modeling and analysis of critical infrastructure systems, monitoring, control, risk/impact evaluation, fault diagnosis, fault-tolerant control, and infrastructure dependencies/interdependencies. The importance of the research presented in the book is reflected in the fact that currently, for the first time in human history, more people live in cities than in rural areas, and that, by 2050, roughly 70% of the world’s total population is expected to live in cities.
This book is a collection of papers by leading researchers in computational semantics. It presents a state-of-the-art overview of recent and current research in computational semantics, including descriptions of new methods for constructing and improving resources for semantic computation, such as WordNet, VerbNet, and semantically annotated corpora. It also presents new statistical methods in semantic computation, such as the application of distributional semantics in the compositional calculation of sentence meanings. Computing the meaning of sentences, texts, and spoken or texted dialogue is the ultimate challenge in natural language processing, and the key to a wide range of exciting applications. The breadth and depth of coverage of this book makes it suitable as a reference and overview of the state of the field for researchers in Computational Linguistics, Semantics, Computer Science, Cognitive Science, and Artificial Intelligence.
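As a toy illustration of the distributional, compositional idea mentioned above, sentence vectors can be built from word vectors and compared by cosine similarity. The vectors and the additive composition rule below are illustrative assumptions, not the methods of any particular chapter; real systems learn word vectors from large corpora.

```python
import numpy as np

# Tiny made-up word vectors (illustrative only).
vectors = {
    "dog":   np.array([0.9, 0.1, 0.0]),
    "cat":   np.array([0.8, 0.2, 0.1]),
    "barks": np.array([0.2, 0.9, 0.0]),
    "meows": np.array([0.1, 0.8, 0.2]),
}

def sentence_vector(words):
    """Additive composition: sum the word vectors."""
    return np.sum([vectors[w] for w in words], axis=0)

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

s1 = sentence_vector(["dog", "barks"])
s2 = sentence_vector(["cat", "meows"])
print(cosine(s1, s2))   # high similarity for semantically close sentences
```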
This book brings together a selection of the best papers from the sixteenth edition of the Forum on specification and Design Languages Conference (FDL), which was held in September 2013 in Paris, France. FDL is a well-established international forum devoted to dissemination of research results, practical experiences and new ideas in the application of specification, design and verification languages to the design, modeling and verification of integrated circuits, complex hardware/software embedded systems and mixed-technology systems.
Presenting the concept, design and implementation of configurable intelligent optimization algorithms in manufacturing systems, this book provides a new configuration method to optimize manufacturing processes. It provides a comprehensive elaboration of basic intelligent optimization algorithms and demonstrates how their improvement, hybridization and parallelization can be applied to manufacturing. Furthermore, various applications of these intelligent optimization algorithms are exemplified in detail, chapter by chapter. An intelligent optimization algorithm is not just a single algorithm; rather, it is a general advanced optimization mechanism that is highly scalable, robust and randomized. This book therefore demonstrates the flexibility of these algorithms, as well as their robustness and reusability, in solving complicated problems in manufacturing. Since the genetic algorithm was introduced decades ago, a large number of intelligent optimization algorithms and their improvements have been developed. However, little work has been done to extend their applications and verify their competence in solving complicated problems in manufacturing. This book will provide an invaluable resource to students, researchers, consultants and industry professionals interested in engineering optimization. It will also be particularly useful to three groups of readers: algorithm beginners, optimization engineers and senior algorithm designers. It offers a detailed description of intelligent optimization algorithms to algorithm beginners; recommends new configurable design methods for optimization engineers; and presents future trends and challenges of the new configuration mechanism to senior algorithm designers.
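Since the genetic algorithm is the point of departure mentioned above, the following minimal Python sketch shows the basic mechanism of selection, crossover and mutation on a simple maximization problem. The objective, parameters and operators are illustrative choices, not the book's configurable framework.

```python
import random

random.seed(0)

# Toy objective: maximize the number of ones in a bit string.
def fitness(bits):
    return sum(bits)

def make_individual(n=20):
    return [random.randint(0, 1) for _ in range(n)]

def crossover(a, b):
    point = random.randrange(1, len(a))
    return a[:point] + b[point:]

def mutate(bits, rate=0.05):
    return [1 - b if random.random() < rate else b for b in bits]

population = [make_individual() for _ in range(30)]

for generation in range(50):
    # Tournament selection: keep the fitter of two random individuals.
    def select():
        return max(random.sample(population, 2), key=fitness)
    population = [mutate(crossover(select(), select())) for _ in population]

best = max(population, key=fitness)
print(fitness(best), best)
```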