This handbook presents state-of-the-art research in reinforcement learning, focusing on its applications in the control and game theory of dynamic systems and future directions for related research and technology. The contributions gathered in this book deal with challenges faced when using learning and adaptation methods to solve academic and industrial problems, such as optimization in dynamic environments with single and multiple agents, convergence and performance analysis, and online implementation. They explore means by which these difficulties can be solved, and cover a wide range of related topics including: deep learning; artificial intelligence; applications of game theory; mixed modality learning; and multi-agent reinforcement learning. Practicing engineers and scholars in the fields of machine learning, game theory, and autonomous control will find the Handbook of Reinforcement Learning and Control to be thought-provoking, instructive, and informative.
Cooperative Control of Multi-Agent Systems extends optimal control and adaptive control design methods to multi-agent systems on communication graphs. It develops Riccati design techniques for general linear dynamics for cooperative state feedback design, cooperative observer design, and cooperative dynamic output feedback design. Both continuous-time and discrete-time dynamical multi-agent systems are treated. Optimal cooperative control is introduced, and neural adaptive design techniques for multi-agent nonlinear systems with unknown dynamics, which are rarely treated in the literature, are developed. Results spanning systems with first-, second-, and general higher-order nonlinear dynamics are presented. Each control methodology proposed is developed with rigorous proofs, and all algorithms are justified by simulation examples. The text is self-contained and will serve as an excellent comprehensive source of information for researchers and graduate students working with multi-agent systems.
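As a rough illustration of the cooperative-control setting described above (not code from the book), the Python sketch below simulates the standard graph-Laplacian consensus protocol for four single-integrator agents on a directed ring; the graph, initial states, and step size are all assumed example values.

```python
# Illustrative sketch: cooperative state agreement for single-integrator agents
# on a fixed communication graph, using the standard Laplacian consensus
# protocol u_i = -sum_j a_ij (x_i - x_j). All values are made-up examples.
import numpy as np

# Adjacency matrix of a 4-agent directed ring (a_ij = 1 if agent i hears agent j)
A = np.array([[0, 1, 0, 0],
              [0, 0, 1, 0],
              [0, 0, 0, 1],
              [1, 0, 0, 0]], dtype=float)
D = np.diag(A.sum(axis=1))       # in-degree matrix
L = D - A                        # graph Laplacian

x = np.array([4.0, -1.0, 2.5, 0.0])   # initial scalar agent states
dt, steps = 0.01, 2000
for _ in range(steps):
    x = x + dt * (-L @ x)        # Euler step of x_dot = -L x

print("states after consensus protocol:", x)   # states converge toward a common value
```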
Manufacturing Systems Control Design details a matrix-based approach to the real-time application of control in discrete-event systems and flexible manufacturing systems (FMS) in particular. The "and/or" algebra in which matrix operations are carried out enables fast and efficient calculations with a minimum of computing power. In addition, the method uses standard task-sequencing and resource-requirements matrices which, if not in use already, can be easily derived with the help of this text. Matrix-based techniques are compared with Petri net and max-plus algebra ideas. Virtual modeling of complex physical systems has brought a new perspective to the investigation of phenomena in FMS. The software discussed in this book (and downloadable from the authors' website at http://flrcg.rasip.fer.hr/) supplies the reader with a graphical user interface that makes the design and control of FMS easier. The examples presented herein tackle the real-world problems faced by engineers trying to put into practice methods developed in academia, bringing together broad experience of sensors, control systems, robotics, industrial automation, simulation, agile assembly and supply chains. Common concerns confronted include: predictability, addressing issues of control system modeling and analysis; producibility, through the design and synthesis of cellular workcells; and productivity, in terms of dynamic sensing and control. Covering all the steps from identification of operations and resources, through modeling of the system and simulation of its dynamics in a virtual environment, to the transformation of those models into real-world algorithms, this monograph is a sound practical basis for the design of controllers for manufacturing systems. It will interest both the academic and practising control or manufacturing engineer wishing to enhance the control of flexible systems, and operations researchers looking at manufacturing performance. The end-of-chapter exercises provided and the easy-to-read introduction to the subject will also suit the final-year undergraduate and the beginning graduate in these disciplines.
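The following Python fragment is only a simplified, hypothetical illustration of the flavour of a matrix-based "and/or" check: a task is declared enabled when every job condition and every resource it requires is currently available. The matrices and their entries are invented for the example and do not reproduce the book's exact formulation or notation.

```python
# Simplified sketch of a matrix-based enabling check for discrete-event control:
# AND over the required entries of each task row, OR nothing is missing.
# Matrices and names are hypothetical, not the book's formulation.
import numpy as np

# Rows = tasks, columns = job preconditions / resources required (1 = needed).
Fv = np.array([[1, 0, 0],      # task 0 needs job condition v0
               [0, 1, 1]])     # task 1 needs job conditions v1 and v2
Fr = np.array([[1, 0],         # task 0 needs resource r0
               [0, 1]])        # task 1 needs resource r1

v = np.array([1, 1, 0])        # job conditions currently satisfied
r = np.array([1, 1])           # resources currently free

# Per-task check: every required condition/resource must be available.
enabled = np.all(Fv <= v, axis=1) & np.all(Fr <= r, axis=1)
print("enabled tasks:", np.where(enabled)[0])   # -> task 0 only
```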
A complete reference to adaptive control of systems with nonsmooth industrial nonlinearities such as backlash, dead-zones, component failure, friction, hysteresis, saturation, and time delays. These nonlinearities in industrial actuators cause severe problems in the motion control of industrial processes, particularly in view of modern requirements of speed and precision of movement such as occur in semiconductor manufacturing, precision machining, and elsewhere. Actuator nonlinearities are ubiquitous in engineering practice and limit control system performance. While standard feedback control alone cannot handle these nonsmooth nonlinearities effectively, this book, with unified and systematic adaptive design methods developed in 16 chapters, shows how such nonlinear characteristics can be effectively compensated for by using adaptive and intelligent control techniques. This allows desired system performance to be achieved in the presence of uncertain nonlinearities. With extensive surveys of the literature and comprehensive summaries of various design methods, the authors of the book chapters, who are experts in their areas of interest, present new solutions to some important issues in adaptive control of systems with various sorts of nonsmooth nonlinearities. In addition to providing solutions, the book also aims to motivate further research in the important field of adaptive control of nonsmooth nonlinear industrial systems by formulating several challenging open problems in related areas.
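As a small, hedged illustration of the compensation idea (with invented parameters, not taken from the book), the sketch below shows how a known dead-zone can be cancelled by pre-distorting the commanded signal with its inverse; the adaptive schemes surveyed in the book instead estimate such parameters online.

```python
# Toy sketch with assumed dead-zone parameters: pre-distorting the command with
# the dead-zone inverse recovers the desired actuator output exactly when the
# parameters are known. Adaptive versions estimate br, bl, m online.
def dead_zone(v, br=0.4, bl=-0.3, m=1.0):
    """Actuator nonlinearity: no output inside [bl, br]."""
    if v >= br:
        return m * (v - br)
    if v <= bl:
        return m * (v - bl)
    return 0.0

def dead_zone_inverse(u_des, br=0.4, bl=-0.3, m=1.0):
    """Pre-distortion so that dead_zone(dead_zone_inverse(u)) == u."""
    if u_des > 0:
        return u_des / m + br
    if u_des < 0:
        return u_des / m + bl
    return 0.0

for u_des in (-1.0, -0.2, 0.0, 0.5, 1.3):
    v = dead_zone_inverse(u_des)
    print(f"desired {u_des:+.2f} -> actuator output {dead_zone(v):+.2f}")
```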
It has long been the goal of engineers to develop tools that enhance our ability to do work, increase our quality of life, or perform tasks that are either beyond our ability, too hazardous, or too tedious to be left to human efforts. Autonomous mobile robots are the culmination of decades of research and development, and their potential is seemingly unlimited. A roadmap to the future: serving as the first comprehensive reference on this interdisciplinary technology, Autonomous Mobile Robots: Sensing, Control, Decision Making, and Applications authoritatively addresses the theoretical, technical, and practical aspects of the field. The book examines in detail the key components that form an autonomous mobile robot, from sensors and sensor fusion to modeling and control, map building and path planning, and decision making and autonomy, through to the final integration of these components for diversified applications. Trusted guidance: a duo of accomplished experts leads a team of renowned international researchers and professionals who provide detailed technical reviews and the latest solutions to a variety of important problems. They share hard-won insight into the practical implementation and integration issues involved in developing autonomous and open robotic systems, along with in-depth examples, current and future applications, and extensive illustrations. For anyone involved in researching, designing, or deploying autonomous robotic systems, Autonomous Mobile Robots is the perfect resource.
The authors present algorithms for H2 and H-infinity design for nonlinear systems that provide solution techniques which can be implemented in real systems; neural networks are used to solve the nonlinear control design equations. Constraints on the control actuator inputs are dealt with. Results are proven to give confidence and performance guarantees, and the algorithms can be used to obtain practical controllers. Nearly optimal applications to constrained-state and minimum-time problems are discussed, as is discrete-time design for digital controllers. Nonlinear H2/H-infinity Constrained Feedback Control is of importance to control designers working in a variety of industrial systems. Case studies are given, and the design of nonlinear control systems of the same caliber as those obtained in recent years using linear optimal and bounded-norm designs is explained. The book will also be of interest to academics and graduate students in control systems as a complete foundation for H2 and H-infinity design.
Apply sliding mode theory to solve control problems. Interest in sliding mode control (SMC) has grown rapidly since the first edition of this book was published. This second edition includes new results that have been achieved in SMC throughout the past decade, relating to both control design methodology and applications. In that time, SMC has continued to gain importance as a universal design tool for the robust control of linear and nonlinear electro-mechanical systems. Its strengths result from its simple, flexible, and highly cost-effective approach to design and implementation. Most importantly, SMC promotes inherent order reduction and allows for the direct incorporation of robustness against system uncertainties and disturbances. These qualities lead to dramatic improvements in stability and help enable the design of high-performance control systems at low cost. Written by three of the most respected experts in the field, including one of its originators, this updated edition of Sliding Mode Control in Electro-Mechanical Systems reflects developments in the field over the past decade. It builds on the solid fundamentals presented in the first edition to promote a deeper understanding of the conventional SMC methodology, and it examines new design principles in order to broaden the application potential of SMC. SMC is particularly useful for the design of electromechanical systems because of its discontinuous structure. In fact, where the hardware of many electromechanical systems (such as electric motors) prescribes discontinuous inputs, SMC becomes the natural choice for direct implementation. This book provides a unique combination of theory, implementation issues, and examples of real-life applications reflective of the authors' own industry-leading work in the development of robotics, automobiles, and other technological breakthroughs.
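A minimal, assumed example of the basic idea (not drawn from the book): a first-order sliding mode controller for a double integrator with a bounded matched disturbance, using the sliding variable s = c*e + e_dot and a switching term whose gain dominates the disturbance bound.

```python
# Minimal SMC sketch for x_ddot = u + d with |d| <= 0.5. The control is the
# equivalent term -c*e_dot plus a switching term -k*sign(s) with k > 0.5.
# Gains, initial state, and disturbance are illustrative choices only.
import numpy as np

c, k = 2.0, 1.0                        # surface slope; switching gain above the disturbance bound
x, xdot = 1.0, 0.5                     # regulate the state to the origin
dt = 1e-3
for i in range(20000):                 # 20 s of simulated time
    t = i * dt
    d = 0.5 * np.sin(2.0 * t)          # unknown bounded disturbance
    e, edot = x, xdot                  # error w.r.t. the zero reference
    s = c * e + edot                   # sliding variable
    u = -c * edot - k * np.sign(s)     # equivalent control + discontinuous switching
    x += dt * xdot
    xdot += dt * (u + d)

print(f"final state: x={x:.4f}, xdot={xdot:.4f}")   # near the origin, with small chattering
```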
Unique in scope, Optimal Control: Weakly Coupled Systems and Applications provides complete coverage of modern linear, bilinear, and nonlinear optimal control algorithms for both continuous-time and discrete-time weakly coupled systems, using deterministic as well as stochastic formulations. This book presents numerous applications to real world systems from various industries, including aerospace, and discusses the design of subsystem-level optimal filters. Organized into independent chapters for easy access to the material, this text also contains several case studies, examples, exercises, computer assignments, and formulations of research problems to help instructors and students.
Bipedal locomotion is among the most difficult challenges in control engineering. Most books treat the subject from a quasi-static perspective, overlooking the hybrid nature of bipedal mechanics. Feedback Control of Dynamic Bipedal Robot Locomotion is the first book to present a comprehensive and mathematically sound treatment of feedback design for achieving stable, agile, and efficient locomotion in bipedal robots. In this unique and groundbreaking treatise, expert authors lead you systematically through every step of the process, including: mathematical modeling of walking and running gaits in planar robots; analysis of periodic orbits in hybrid systems; design and analysis of feedback systems for achieving stable periodic motions; algorithms for synthesizing feedback controllers; detailed simulation examples; and experimental implementations on two bipedal test beds. The elegance of the authors' approach is evident in the marriage of control theory and mechanics, uniting control-based presentation and mathematical custom with a mechanics-based approach to the problem and computational rendering. Concrete examples and numerous illustrations complement and clarify the mathematical discussion. A supporting Web site offers links to videos of several experiments along with MATLAB® code for several of the models. This one-of-a-kind book builds a solid understanding of the theoretical and practical aspects of truly dynamic locomotion in planar bipedal robots.
More than a decade ago, world-renowned control systems authority Frank L. Lewis introduced what would become a standard textbook on estimation, under the title Optimal Estimation, used in top universities throughout the world. The time has come for a new edition of this classic text, and Lewis enlisted the aid of two accomplished experts to bring the book completely up to date with the estimation methods driving today's high-performance systems. A classic revisited: Optimal and Robust Estimation: With an Introduction to Stochastic Control Theory, Second Edition reflects new developments in estimation theory and design techniques. As the title suggests, the major feature of this edition is the inclusion of robust methods. Three new chapters cover the robust Kalman filter, H-infinity filtering, and H-infinity filtering of discrete-time systems. Modern tools for tomorrow's engineers: this text overflows with examples that highlight practical applications of the theory and concepts. Design algorithms appear conveniently in tables, allowing students quick reference, easy implementation into software, and intuitive comparisons for selecting the best algorithm for a given application. In addition, downloadable MATLAB® code allows students to gain hands-on experience with industry-standard software tools for a wide variety of applications. This cutting-edge and highly interactive text makes teaching, and learning, estimation methods easier and more modern than ever.
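As a hands-on taste of the estimation algorithms the text tabulates (a toy Python sketch rather than the book's MATLAB material), the following scalar Kalman filter estimates a constant from noisy measurements.

```python
# Minimal discrete-time Kalman filter for a scalar constant observed in noise.
# Model, noise levels, and the true value are assumed example quantities.
import numpy as np

rng = np.random.default_rng(0)
a, q, r = 1.0, 1e-4, 0.25             # state transition, process noise, measurement noise
x_true = 2.0                          # constant to be estimated
xhat, p = 0.0, 1.0                    # initial estimate and error covariance

for _ in range(200):
    z = x_true + rng.normal(scale=np.sqrt(r))   # noisy measurement
    # time update (prediction)
    xhat_pred = a * xhat
    p_pred = a * p * a + q
    # measurement update (correction)
    K = p_pred / (p_pred + r)                   # Kalman gain
    xhat = xhat_pred + K * (z - xhat_pred)
    p = (1.0 - K) * p_pred

print(f"estimate after 200 measurements: {xhat:.3f} (true value 2.0)")
```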
Robot Manipulator Control offers a complete survey of control systems for serial-link robot arms and acknowledges how robotic device performance hinges upon a well-developed control system. Containing over 750 essential equations, this thoroughly up-to-date Second Edition explicates theoretical and mathematical requisites for controls design and summarizes current techniques in computer simulation and implementation of controllers. It also addresses procedures and issues in computed-torque, robust, adaptive, neural network, and force control. New chapters relay practical information on commercial robot manipulators and devices and cutting-edge methods in neural network control.
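A brief, assumed single-link example (not from the book) of the computed-torque idea mentioned above: inverting the manipulator dynamics inside the control law reduces trajectory tracking to a linear error equation shaped by PD gains.

```python
# Computed-torque sketch for a single point-mass link: M(q) qddot + G(q) = tau.
# The control tau = M*(qd_ddot + Kd*edot + Kp*e) + G yields e_ddot + Kd e_dot + Kp e = 0.
# Link parameters, gains, and the reference trajectory are assumed example values.
import numpy as np

m, l, g = 1.0, 0.5, 9.81             # mass, link length, gravity
Kp, Kd = 25.0, 10.0
qd = lambda t: np.sin(t)             # desired joint trajectory
qd_dot = lambda t: np.cos(t)
qd_ddot = lambda t: -np.sin(t)

q, qdot, dt = 0.5, 0.0, 1e-3
for i in range(10000):               # 10 s of simulated time
    t = i * dt
    e, edot = qd(t) - q, qd_dot(t) - qdot
    v = qd_ddot(t) + Kd * edot + Kp * e            # outer-loop PD plus feedforward
    tau = m * l**2 * v + m * g * l * np.cos(q)     # inverse dynamics (computed torque)
    qddot = (tau - m * g * l * np.cos(q)) / (m * l**2)   # plant response
    q += dt * qdot
    qdot += dt * qddot

print(f"tracking error after 10 s: {qd(10.0) - q:.4f}")
```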
In an era of intense competition where plant operating efficiencies must be maximized, downtime due to machinery failure has become more costly. To cut operating costs and increase revenues, industries have an urgent need to predict fault progression and the remaining lifespan of industrial machines, processes, and systems. An engineer who mounts an acoustic sensor onto a spindle motor wants to know when the ball bearings will wear out without having to halt the ongoing milling processes. A scientist working on sensor networks wants to know which sensors are redundant and can be pruned off to save operational and computational overheads. These scenarios illustrate a need for new and unified perspectives in system analysis and design for engineering applications. Intelligent Diagnosis and Prognosis of Industrial Networked Systems proposes linear mathematical tool sets that can be applied to realistic engineering systems. The book offers an overview of the fundamentals of vectors, matrices, and linear systems theory required for intelligent diagnosis and prognosis of industrial networked systems. Building on this theory, it then develops automated mathematical machineries and formal decision software tools for real-world applications. The book includes portable tool sets for many industrial applications, including: forecasting machine tool wear in industrial cutting machines; reduction of sensors and features for industrial fault detection and isolation (FDI); identification of critical resonant modes in mechatronic systems for R&D system design; probabilistic small-signal stability in large-scale interconnected power systems; and discrete event command and control for military applications. The book also proposes future directions for intelligent diagnosis and prognosis in energy-efficient manufacturing, life cycle assessment, and systems-of-systems architecture. Written in a concise and accessible style, it presents tools that are mathematically rigorous yet not overly involved. Bridging academia, research, and industry, this reference supplies the know-how for engineers and managers making decisions about equipment maintenance, as well as researchers and students in the field.
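As a hedged sketch of one theme above, reducing redundant sensors before fault detection and isolation, the Python fragment below applies a plain principal-component analysis to synthetic readings from six sensors that actually observe only two underlying phenomena; it illustrates the general technique, not the book's specific tool sets.

```python
# PCA-style redundancy check on synthetic sensor data: six channels driven by
# two underlying physical phenomena plus small noise, so most channels are redundant.
import numpy as np

rng = np.random.default_rng(1)
t = rng.normal(size=(500, 2))                     # two underlying phenomena
mix = rng.normal(size=(2, 6))                     # six sensors observing linear mixtures
X = t @ mix + 0.01 * rng.normal(size=(500, 6))    # sensor data matrix (samples x sensors)

Xc = X - X.mean(axis=0)                           # center each sensor channel
_, s, _ = np.linalg.svd(Xc, full_matrices=False)
explained = s**2 / np.sum(s**2)
print("variance explained per component:", np.round(explained, 3))
# Roughly two components carry the variance, so several of the six sensors are redundant.
```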
This book covers all the steps from identification of operations and resources to the transformation of virtual models into real-world algorithms. The matrix-based approach presented here is a solution to the real-time application of control in discrete event systems and flexible manufacturing systems (FMS), and offers a sound practical basis for the design of controllers for manufacturing systems.
This book provides, for the first time, techniques to produce robust, stable, and usable solutions to problems of H-infinity and H2 control in high-performance, non-linear systems. The book is of importance to control designers working in a variety of industrial systems. Case studies are given, and the design of nonlinear control systems of the same caliber as those obtained in recent years using linear optimal and bounded-norm designs is explained.
Deterministic Learning Theory for Identification, Recognition, and Control presents a unified conceptual framework for knowledge acquisition, representation, and utilization in uncertain dynamic environments. It provides systematic design approaches for identification, recognition, and control of nonlinear uncertain systems. Unlike many books currently available that focus on statistical principles, this book stresses learning through closed-loop neural control and the effective representation and recognition of temporal patterns in a deterministic way. A deterministic view of learning in dynamic environments: the authors begin with an introduction to the concepts of deterministic learning theory, followed by a discussion of the persistent excitation property of RBF networks. They describe the elements of deterministic learning, and address dynamical pattern recognition and pattern-based control processes. The results are applicable to areas such as detection and isolation of oscillation faults, ECG/EEG pattern recognition, robot learning and control, and security analysis and control of power systems. A new model of information processing: this book elucidates a learning theory developed using concepts and tools from the discipline of systems and control. Fundamental knowledge about system dynamics is obtained from dynamical processes and is then utilized to achieve rapid recognition of dynamical patterns and pattern-based closed-loop control via the so-called internal and dynamical matching of system dynamics. This represents a new model of information processing, i.e. a model of dynamical parallel distributed processing (DPDP).
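The following toy Python sketch, with assumed dynamics, gains, and a known recurrent input, conveys the flavour of deterministic learning: a Gaussian RBF network is adapted online so that it locally reproduces an unknown nonlinearity along the system's own trajectory. It is an illustration only, not the book's algorithms or proofs.

```python
# Toy online identifier: the unknown f(x) in xdot = f(x) + u(t) is approximated
# by an RBF network W'S(x) adapted along a recurrent trajectory. All dynamics,
# gains, and the regressor layout are assumed example choices.
import numpy as np

centers = np.linspace(-2.0, 2.0, 41)                     # RBF centers on the operating region
width = 0.2
S = lambda x: np.exp(-((x - centers) ** 2) / (2 * width**2))

f = lambda x: -x**3 + 0.5 * np.sin(2 * x)                # "unknown" dynamics component
u = lambda t: 1.5 * np.cos(t)                            # known input keeping the orbit recurrent

W = np.zeros_like(centers)                               # RBF weights
k, gamma, dt = 5.0, 20.0, 1e-3                           # observer gain, adaptation gain, step size
x, xhat, t = 0.5, 0.0, 0.0

for _ in range(300000):                                  # 300 s of simulated time
    e = xhat - x
    xdot = f(x) + u(t)                                   # true system
    xhatdot = -k * e + W @ S(x) + u(t)                   # identifier with RBF estimate of f
    W = W - dt * gamma * e * S(x)                        # Lyapunov-style weight adaptation
    x, xhat, t = x + dt * xdot, xhat + dt * xhatdot, t + dt

print(f"at x = {x:.2f}: true f = {f(x):+.3f}, learned estimate = {W @ S(x):+.3f}")
```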
Many of the non-smooth, non-linear phenomena covered in this well-balanced book are of vital importance in almost any field of engineering. Contributors from all over the world ensure that no one area's slant on the subjects predominates.
Adaptive controllers and optimal controllers are two distinct methods for the design of automatic control systems. Adaptive controllers learn online, in real time, how to control systems but do not yield optimal performance, whereas optimal controllers must be designed offline using full knowledge of the system dynamics. This book shows how approximate dynamic programming, a reinforcement learning technique motivated by learning mechanisms in biological and animal systems, can be used to design a family of adaptive optimal control algorithms that converge in real time to optimal control solutions by measuring data along the system trajectories. The book also describes how to use approximate dynamic programming methods to solve multiplayer differential games online. Differential games have been shown to be important in H-infinity robust control for disturbance rejection, and in coordinating activities among multiple agents in networked teams. The focus of this book is on continuous-time systems, whose dynamical models can be derived directly from physical principles based on Hamiltonian or Lagrangian dynamics. Simulation examples are given throughout the book, and several methods are described that do not require full knowledge of the state dynamics. Optimal Adaptive Control and Differential Games by Reinforcement Learning Principles is an essential addition to the bookshelves of mechanical, electrical, and aerospace engineers working in feedback control systems design.
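As a hedged point of reference for the adaptive optimal control algorithms described above, the sketch below runs the classical model-based counterpart, Kleinman's policy iteration for the continuous-time LQR problem: repeated policy evaluation via a Lyapunov equation followed by policy improvement converges to the Riccati solution. The system matrices and weights are assumed toy values, and unlike the book's online methods this version uses full model knowledge offline.

```python
# Kleinman policy iteration for continuous-time LQR (offline, model-based).
# A, B, Q, R are made-up example values; the iteration converges to the ARE solution.
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, solve_continuous_are

A = np.array([[0.0, 1.0], [-1.0, -0.5]])
B = np.array([[0.0], [1.0]])
Q, R = np.eye(2), np.array([[1.0]])

K = np.zeros((1, 2))                      # initial stabilizing gain (A itself is stable here)
for _ in range(10):
    Acl = A - B @ K                       # closed-loop matrix under the current policy
    # policy evaluation: solve Acl' P + P Acl + Q + K' R K = 0
    P = solve_continuous_lyapunov(Acl.T, -(Q + K.T @ R @ K))
    # policy improvement
    K = np.linalg.solve(R, B.T @ P)

P_are = solve_continuous_are(A, B, Q, R)  # direct Riccati solution for comparison
print("max |P - P_are| =", np.abs(P - P_are).max())
```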