Nature-inspired algorithms such as cuckoo search and the firefly algorithm have become popular and widely used in recent years in many applications. These algorithms are flexible, efficient and easy to implement. New progress has been made in the last few years, and it is timely to summarize the latest developments of cuckoo search and the firefly algorithm and their diverse applications. This book reviews both theoretical studies and applications, with detailed algorithm analysis, implementation and case studies, so that readers can get the most out of it. Application topics are contributed by many leading experts in the field. Topics include cuckoo search, firefly algorithm, algorithm analysis, feature selection, image processing, the travelling salesman problem, neural networks, GPU optimization, scheduling, queuing, multi-objective manufacturing optimization, semantic web services, shape optimization, and others. This book can serve as an ideal reference for both graduates and researchers in computer science, evolutionary computing, machine learning, computational intelligence, and optimization, as well as engineers in business intelligence, knowledge management and information technology.
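The blurb's claim that these algorithms are easy to implement can be illustrated with a minimal sketch of the firefly algorithm; the function name, box bounds and parameter values below are illustrative assumptions, not taken from the book:

```python
import math
import random

random.seed(0)  # fixed seed so the sketch is reproducible

def firefly_minimize(f, dim=2, n=15, steps=100, alpha=0.2, beta0=1.0, gamma=1.0):
    """Minimize f over the box [-5, 5]^dim with a basic firefly algorithm."""
    pop = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
    for _ in range(steps):
        vals = [f(x) for x in pop]  # brightness = objective value (lower is brighter here)
        for i in range(n):
            for j in range(n):
                if vals[j] < vals[i]:  # firefly j is better, so i moves toward it
                    r2 = sum((a - b) ** 2 for a, b in zip(pop[i], pop[j]))
                    beta = beta0 * math.exp(-gamma * r2)  # attractiveness decays with distance
                    pop[i] = [a + beta * (b - a) + alpha * (random.random() - 0.5)
                              for a, b in zip(pop[i], pop[j])]
    best = min(pop, key=f)
    return best, f(best)

sphere = lambda x: sum(v * v for v in x)  # standard test function, minimum 0 at the origin
best, val = firefly_minimize(sphere)
```

The core of the method is just the attraction-and-randomization update inside the double loop, which is why such algorithms are often praised as easy to implement.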
Towards Solid-State Quantum Repeaters: Ultrafast, Coherent Optical Control and Spin-Photon Entanglement in Charged InAs Quantum Dots summarizes several state-of-the-art coherent spin manipulation experiments in III-V quantum dots. High-fidelity optical manipulation, decoherence due to nuclear spins, and spin coherence extraction are all discussed, as is the generation of entanglement between a single spin qubit and a photonic qubit. The experimental results are analyzed and discussed in the context of future quantum technologies, such as quantum repeaters. Single spins in optically active semiconductor host materials have emerged as leading candidates for quantum information processing (QIP). The quantum nature of the spin allows for encoding of stationary, memory quantum bits (qubits), and the relatively weak interaction with the host material preserves the spin coherence. On the other hand, optically active host materials permit direct interfacing with light, which can be used for all-optical qubit manipulation, and for efficiently mapping matter qubits into photonic qubits that are suited for long-distance quantum communication.
These proceedings are aimed at researchers, industry/market operators and students from different backgrounds (scientific, engineering and humanistic) whose work is either focused on or related to Location Based Services (LBS). It contributes to the following areas: positioning/indoor positioning, smart environments and spatial intelligence, spatiotemporal data acquisition, processing, and analysis, data mining and knowledge discovery, personalization and context-aware adaptation, LBS visualization techniques, novel user interfaces and interaction techniques, smartphone navigation and LBS techniques, three-dimensional visualization in the LBS context, augmented reality in an LBS context, innovative LBS systems and applications, wayfinding/navigation (indoor/outdoor), indoor navigation databases, user studies and evaluations, privacy issues in LBS, usability issues in LBS, legal and business aspects of LBS, LBS and Web 2.0, open source solutions and standards, ubiquitous computing, smart cities and seamless positioning.
By using various data inputs, ubiquitous computing systems detect their current usage context, automatically adapt their services to the user’s situational needs and interact with other services or resources in their environment on an ad-hoc basis. Designing such self-adaptive, context-aware knowledge processing systems is, in itself, a formidable challenge. This book presents core findings from the VENUS project at the Interdisciplinary Research Center for Information System Design (ITeG) at Kassel University, where researchers from different fields, such as computer science, information systems, human-computer interaction and law, together seek to find general principles and guidelines for the design of socially aware ubiquitous computing systems. To this end, system usability, user trust in the technology and adherence to privacy laws and regulations were treated as particularly important criteria in the context of socio-technical system design. During the project, a comprehensive blueprint for systematic, interdisciplinary software development was developed, covering the particular functional and non-functional design aspects of ubiquitous computing at the interface between technology and human beings. The organization of the book reflects the structure of the VENUS work program. After an introductory part I, part II provides the groundwork for VENUS by presenting foundational results from all four disciplines involved. Subsequently, part III focuses on methodological research funneling the development activities into a common framework. Part IV then covers the design of the demonstrators that were built in order to develop and evaluate the VENUS method. Finally, part V is dedicated to the evaluation phase to assess the user acceptance of the new approach and applications. 
The presented findings are especially important for researchers in computer science, information systems, and human-computer interaction, but also for everyone working on the acceptance of new technologies in society in general.
Computer vision is the science and technology of making machines that see. It is concerned with the theory, design and implementation of algorithms that can automatically process visual data to recognize objects, track and recover their shape and spatial layout. The International Computer Vision Summer School - ICVSS was established in 2007 to provide both an objective and clear overview and an in-depth analysis of the state-of-the-art research in Computer Vision. The courses are delivered by world renowned experts in the field, from both academia and industry, and cover both theoretical and practical aspects of real Computer Vision problems. The school is organized every year by the University of Cambridge (Computer Vision and Robotics Group) and the University of Catania (Image Processing Lab). Different topics are covered each year. This edited volume contains a selection of articles covering some of the talks and tutorials held during past editions of the school. The chapters provide an in-depth overview of challenging areas with key references to the existing literature.
This book collects ECM research from the academic discipline of Information Systems and related fields to support academics and practitioners who are interested in understanding the design, use and impact of ECM systems. It also provides a valuable resource for students and lecturers in the field. “Enterprise content management in Information Systems research – Foundations, methods and cases” consolidates our current knowledge on how today’s organizations can manage their digital information assets. The business challenges related to organizational information management include reducing search times, maintaining information quality, and complying with reporting obligations and standards. Many of these challenges are well-known in information management, but because of the vast quantities of information being generated today, they are more difficult to deal with than ever. Many companies use the term “enterprise content management” (ECM) to refer to the management of all forms of information, especially unstructured information. While ECM systems promise to increase and maintain information quality, to streamline content-related business processes, and to track the lifecycle of information, their implementation poses several questions and challenges: Which content objects should be put under the control of the ECM system? Which processes are affected by the implementation? How should outdated technology be replaced? Research is challenged to support practitioners in answering these questions.
The authors give a clear and systematic introduction to mathematical and computer-science modelling and to simulation as a universal methodology. The book deals with classes of models and with the variety of description techniques, but it is always also concerned with how concrete simulation results can be obtained from models. After a compact refresher on the required mathematical apparatus, the concept is put into practice using scenarios from areas such as “Playing – deciding – planning” and “Physics in the computer”.
This research volume is a continuation of our previous volumes on intelligent machines. It is divided into three parts. Part I deals with big data and ontologies, and includes examples related to text mining, rule mining and ontologies. Part II is on knowledge-based systems, and includes context-centered systems, knowledge discovery, interoperability, consistency and systems of systems. The final part is on applications, which involve prediction, decision optimization and assessment. This book is directed at researchers who wish to explore the field of knowledge engineering further.
This book discusses the semantic foundations of concurrent systems with nondeterministic and probabilistic behaviour. Particular attention is given to clarifying the relationship between testing and simulation semantics and characterising bisimulations from metric, logical, and algorithmic perspectives. Besides presenting recent research outcomes in probabilistic concurrency theory, the book exemplifies the use of many mathematical techniques to solve problems in computer science, and is intended to be accessible to postgraduate students in Computer Science and Mathematics. It can also be used by researchers and practitioners either for advanced study or for technical reference.
This book presents a study in knowledge discovery in data with knowledge understood as a set of relations among objects and their properties. Relations in this case are implicative decision rules and the paradigm in which they are induced is that of computing with granules defined by rough inclusions, the latter introduced and studied within rough mereology, the fuzzified version of mereology. In this book basic classes of rough inclusions are defined and, based on them, methods for inducing granular structures from data are highlighted. The resulting granular structures are subjected to classifying algorithms, notably k-nearest neighbors and Bayesian classifiers. Experimental results are given in detail, in both tabular and visualized form, for fourteen data sets from the UCI data repository. A striking feature of the granular classifiers obtained by this approach is that, while preserving the classification accuracy achieved on the original data, they substantially reduce the size of the granulated data set as well as the set of granular decision rules. This feature makes the presented approach attractive in cases where a small number of rules providing high classification accuracy is desirable. As the basic algorithms used throughout the text are explained and illustrated with worked examples, the book may also serve as a textbook.
This book provides readers with a snapshot of the state-of-the art in fuzzy logic. Throughout the chapters, key theories developed in the last fifty years as well as important applications to practical problems are presented and discussed from different perspectives, as the authors hail from different disciplines and therefore use fuzzy logic for different purposes. The book aims at showing how fuzzy logic has evolved since the first theory formulation by Lotfi A. Zadeh in his seminal paper on Fuzzy Sets in 1965. Fuzzy theories and implementation grew at an impressive speed and achieved significant results, especially on the applicative side. The study of fuzzy logic and its practice spread all over the world, from Europe to Asia, America and Oceania. The editors believe that, thanks to the drive of young researchers, fuzzy logic will be able to face the challenging goals posed by computing with words. New frontiers of knowledge are waiting to be explored. In order to motivate young people to engage in the future development of fuzzy logic, fuzzy methodologies, fuzzy applications, etc., the editors invited a team of internationally respected experts to write the present collection of papers, which shows the present and future potentials of fuzzy logic from different disciplinary perspectives and personal standpoints.
This book is intended as an introduction to fuzzy algebraic hyperstructures. As the first in its genre, it includes a number of topics, most of which reflect the authors’ past research and thus provides a starting point for future research directions. The book is organized in five chapters. The first chapter introduces readers to the basic notions of algebraic structures and hyperstructures. The second covers fuzzy sets, fuzzy groups and fuzzy polygroups. The following two chapters are concerned with the theory of fuzzy Hv-structures: while the third chapter presents the concept of fuzzy Hv-subgroup of Hv-groups, the fourth covers the theory of fuzzy Hv-ideals of Hv-rings. The final chapter discusses several connections between hypergroups and fuzzy sets, and includes a study on the association between hypergroupoids and fuzzy sets endowed with two membership functions. In addition to providing a reference guide to researchers, the book is also intended as textbook for undergraduate and graduate students.
Intelligent information and database systems are two closely related subfields of modern computer science which have been known for over thirty years. They focus on the integration of artificial intelligence and classic database technologies to create the class of next generation information systems. The book focuses on new trends in intelligent information and database systems and discusses topics addressing the foundations and principles of data, information, and knowledge models, methodologies for intelligent information and database systems analysis, design, and implementation, and their validation, maintenance and evolution. These topics cover a broad spectrum of research discussed from both the practical and theoretical points of view, such as: intelligent information retrieval, natural language processing, semantic web, social networks, machine learning, knowledge discovery, data mining, uncertainty management and reasoning under uncertainty, intelligent optimization techniques in information systems, security in database systems, and multimedia data analysis. Intelligent information systems and their applications in business, medicine and industry, database systems applications, and intelligent internet systems are also presented and discussed in the book. The book consists of 38 chapters based on original works presented during the 7th Asian Conference on Intelligent Information and Database Systems (ACIIDS 2015) held on 23–25 March 2015 in Bali, Indonesia. The book is divided into six parts related to Advanced Machine Learning and Data Mining, Intelligent Computational Methods in Information Systems, Semantic Web, Social Networks and Recommendation Systems, Cloud Computing and Intelligent Internet Systems, Knowledge and Language Processing, and Intelligent Information and Database Systems: Applications.
The main focus of this edited volume is on three major areas of statistical quality control: statistical process control (SPC), acceptance sampling and design of experiments. The majority of the papers deal with statistical process control, while acceptance sampling and design of experiments are also treated to a lesser extent. The book is organized into four thematic parts, with Part I addressing statistical process control. Part II is devoted to acceptance sampling. Part III covers the design of experiments, while Part IV discusses related fields. The twenty-three papers in this volume stem from The 11th International Workshop on Intelligent Statistical Quality Control, which was held in Sydney, Australia from August 20 to August 23, 2013. The event was hosted by Professor Ross Sparks, CSIRO Mathematics, Informatics and Statistics, North Ryde, Australia and was jointly organized by Professors S. Knoth, W. Schmid and Ross Sparks. The papers presented here were carefully selected and reviewed by the scientific program committee, before being revised and adapted for this volume.
This volume focuses on uncovering the fundamental forces underlying dynamic decision making among multiple interacting, imperfect and selfish decision makers. The chapters are written by leading experts from different disciplines, all considering the many sources of imperfection in decision making, and always with an eye to decreasing the myriad discrepancies between theory and real world human decision making. Topics addressed include uncertainty, deliberation cost and the complexity arising from the inherent large computational scale of decision making in these systems. In particular, analyses and experiments are presented which concern:
• task allocation to maximize “the wisdom of the crowd”;
• design of a society of “edutainment” robots who account for one another's emotional states;
• recognizing and counteracting seemingly non-rational human decision making;
• coping with extreme scale when learning causality in networks;
• efficiently incorporating expert knowledge in personalized medicine;
• the effects of personality on risky decision making.
The volume is a valuable source for researchers, graduate students and practitioners in machine learning, stochastic control, robotics, and economics, among other fields.
IT is changing everyday life, especially in education and medicine. The goal of ITME 2014 is to further explore the theoretical and practical issues of Ubiquitous Computing Application and Wireless Sensor Network. It also aims to foster new ideas and collaboration between researchers and practitioners. The organizing committee is soliciting unpublished papers for the main conference and its special tracks.
This book presents a comprehensive approach to protecting sensitive information when large data collections are released by their owners. It addresses three key requirements of data privacy: the protection of data explicitly released, the protection of information not explicitly released but potentially vulnerable due to a release of other data, and the enforcement of owner-defined access restrictions to the released data. It is also the first book with a complete examination of how to enforce dynamic read and write access authorizations on released data, applicable to the emerging data outsourcing and cloud computing situations. Private companies, public organizations and end users are releasing, sharing, and disseminating their data to take reciprocal advantage of the great benefits of making their data available to others. This book weighs these benefits against the potential privacy risks. A detailed analysis of recent techniques for privacy protection in data release and case studies illustrate crucial scenarios. Protecting Privacy in Data Release targets researchers, professionals and government employees working in security and privacy. Advanced-level students in computer science and electrical engineering will also find this book useful as a secondary text or reference.
As interactive systems are quickly becoming integral to our everyday lives, this book investigates how we can make these systems, from desktop and mobile apps to more wearable and immersive applications, more usable and maintainable by using HCI design patterns. It also examines how we can facilitate the reuse of design practices in the development lifecycle of multi-device, multi-platform and multi-context user interfaces. Effective design tools are provided for combining HCI design patterns and User Interface (UI) driven engineering to enhance design whilst differentiating between the UI and the underlying system features. Several examples are used to demonstrate how HCI design patterns can support this decoupling by providing an architectural framework for pattern-oriented and model-driven engineering of multi-platform and multi-device user interfaces. Patterns of HCI Design and HCI Design of Patterns is for students, academics and industry specialists who are concerned with user interfaces and usability within the software development community.
The book is a collection of peer-reviewed scientific papers submitted by active researchers in the 37th National System Conference (NSC 2013). NSC is an annual event of the Systems Society of India (SSI), primarily oriented to strengthen the systems movement and its applications for the welfare of humanity. A galaxy of academicians, professionals, scientists, statesmen and researchers from different parts of the country and abroad are invited to attend the conference. The book presents research articles in the areas of systems modelling, complex network modelling, cyber security, sustainable systems design, health care systems, socio-economic systems, and clean and green technologies. The book can be used as a tool for further research.
Computational intelligence techniques have enjoyed growing interest in recent decades among the earth and environmental science research communities for their powerful ability to solve and understand various complex problems and develop novel approaches toward a sustainable earth. This book compiles a collection of recent developments and rigorous applications of computational intelligence in these disciplines. Techniques covered include artificial neural networks, support vector machines, fuzzy logic, decision-making algorithms, supervised and unsupervised classification algorithms, probabilistic computing, hybrid methods and morphic computing. Further topics given treatment in this volume include remote sensing, meteorology, atmospheric and oceanic modeling, climate change, environmental engineering and management, catastrophic natural hazards, air and environmental pollution and water quality. By linking computational intelligence techniques with earth and environmental science oriented problems, this book promotes synergistic activities among scientists and technicians working in areas such as data mining and machine learning. We believe that a diverse group of academics, scientists, environmentalists, meteorologists and computing experts with a common interest in computational intelligence techniques within the earth and environmental sciences will find this book to be of great value.
This textbook introduces a concise approach to the design of molecular algorithms for students and researchers who are interested in tackling complex problems. Through numerous examples and exercises, readers will understand the main differences between molecular circuits and traditional digital circuits when handling the same problem, and will also learn how to design a molecular algorithm for solving a problem from start to finish. The book starts with an introduction to the computational aspects of digital computers and molecular computing, data representation in molecular computing, molecular operations, and number representation in molecular computing. It then provides many molecular algorithms: to construct the parity generator and parity checker of error-detection codes used in digital communication; to encode integers of different formats, including single-precision and double-precision floating-point numbers; to implement addition and subtraction of unsigned integers; to construct logic operations including NOT, OR, AND, NOR, NAND, Exclusive-OR (XOR) and Exclusive-NOR (XNOR); to implement comparators, shifters, and increment and decrement operations; and to complete two specific operations, finding the maximum number of “1”s and finding the minimum number of “1”s. The book is also a useful reference source for people new to the field of molecular computing.
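The parity-based error detection the book implements with molecular operations can be illustrated in ordinary Python; the function names below are assumptions for illustration, not the book's notation:

```python
# Even-parity generator and checker for a bit list, mirroring (in software
# rather than molecular operations) the classic error-detection scheme.

def parity_bit(bits):
    """Return the even-parity bit: 1 if the number of 1s is odd, else 0."""
    p = 0
    for b in bits:
        p ^= b  # XOR accumulates the parity, like a chain of XOR gates
    return p

def check_even_parity(word):
    """True if the word (data bits plus parity bit) has an even number of 1s."""
    return parity_bit(word) == 0

data = [1, 0, 1, 1]                 # three 1s, so the parity bit is 1
word = data + [parity_bit(data)]
assert check_even_parity(word)                  # transmitted word passes
assert not check_even_parity(word[:-1] + [0])   # a flipped bit is detected
```

A single-bit error flips the parity of the whole word, which is exactly what the checker detects; the book's contribution is realizing this chain of XOR gates with molecular operations instead.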
Since the second half of the 20th century machine computations have played a critical role in science and engineering. Computer-based techniques have become especially important in molecular biology, since they often represent the only viable way to gain insights into the behavior of a biological system as a whole. The complexity of biological systems, which usually needs to be analyzed on different time- and size-scales and with different levels of accuracy, requires the application of different approaches, ranging from comparative analysis of sequences and structural databases, to the analysis of networks of interdependence between cell components and processes, through coarse-grained modeling to atomically detailed simulations, and finally to molecular quantum mechanics. This book provides a comprehensive overview of modern computer-based techniques for computing the structure, properties and dynamics of biomolecules and biomolecular processes. The twenty-two chapters, written by scientists from all over the world, address the theory and practice of computer simulation techniques in the study of biological phenomena. The chapters are grouped into four thematic sections dealing with the following topics: the methodology of molecular simulations; applications of molecular simulations; bioinformatics methods and use of experimental information in molecular simulations; and selected applications of molecular quantum mechanics. The book includes an introductory chapter written by Harold A. Scheraga, one of the true pioneers in simulation studies of biomacromolecules.
This book presents and develops new reinforcement learning methods that enable fast and robust learning on robots in real-time. Robots have the potential to solve many problems in society, because of their ability to work in dangerous places doing necessary jobs that no one wants or is able to do. One barrier to their widespread deployment is that they are mainly limited to tasks where it is possible to hand-program behaviors for every situation that may be encountered. For robots to meet their potential, they need methods that enable them to learn and adapt to novel situations that they were not programmed for. Reinforcement learning (RL) is a paradigm for learning sequential decision making processes and could solve the problems of learning and adaptation on robots. This book identifies four key challenges that must be addressed for an RL algorithm to be practical for robotic control tasks. These RL for Robotics Challenges are: 1) it must learn in very few samples; 2) it must learn in domains with continuous state features; 3) it must handle sensor and/or actuator delays; and 4) it should continually select actions in real time. This book focuses on addressing all four of these challenges. In particular, this book is focused on time-constrained domains where the first challenge is critically important. In these domains, the agent’s lifetime is not long enough for it to explore the domains thoroughly, and it must learn in very few samples.
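As background for the sequential decision-making paradigm described above, a minimal tabular Q-learning sketch on a toy chain environment may help; the environment, reward and parameter values are illustrative assumptions, not the book's sample-efficient algorithms:

```python
import random

# Toy chain: states 0..5, reaching state 5 yields reward 1 and ends the episode.
N = 6
ACTIONS = (-1, +1)                    # step left or step right along the chain
Q = [[0.0, 0.0] for _ in range(N)]    # Q[state][action] value table
alpha, gamma, eps = 0.5, 0.95, 0.1
random.seed(0)                        # fixed seed so the sketch is reproducible

def greedy(s):
    """Greedy action with random tie-breaking."""
    if Q[s][0] == Q[s][1]:
        return random.randrange(2)
    return 0 if Q[s][0] > Q[s][1] else 1

for episode in range(500):
    s = 0
    while s != N - 1:
        a = random.randrange(2) if random.random() < eps else greedy(s)
        s2 = min(max(s + ACTIONS[a], 0), N - 1)   # walls at both ends
        r = 1.0 if s2 == N - 1 else 0.0
        # Q-learning update: move Q(s,a) toward r + gamma * max_a' Q(s',a')
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
        s = s2

# The learned greedy policy should walk right from every non-terminal state.
policy = [greedy(s) for s in range(N)]
```

This sketch also hints at why the book's first challenge matters: even on a six-state chain the agent needs many exploratory episodes, and a physical robot cannot afford samples on that scale.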
This book is instrumental in building a bridge between scientists and clinicians in the field of spine imaging by introducing state-of-the-art computational methods in the context of clinical applications. Spine imaging via computed tomography, magnetic resonance imaging, and other radiologic imaging modalities is essential for noninvasively visualizing and assessing spinal pathology. Computational methods support and enhance the physician's ability to utilize these imaging techniques for diagnosis, non-invasive treatment, and intervention in clinical practice. Chapters cover a broad range of topics encompassing radiological imaging modalities, clinical imaging applications for common spine diseases, image processing, computer-aided diagnosis, quantitative analysis, data reconstruction and visualization, statistical modeling, image-guided spine intervention, and robotic surgery. This volume serves a broad audience: contributions were written by both clinicians and researchers, reflecting the intended readership of spine-related clinicians, technicians, scientists, and graduate students.