The book you hold in your hands is the outcome of "ISCS 2013: Interdisciplinary Symposium on Complex Systems", held in the historical capital of Bohemia as a continuation of our series of symposia on the science of complex systems. Prague, one of the most beautiful European cities, has its own genius loci. Here, a great number of important discoveries were made, and many important scientists spent fruitful and creative years, leaving unforgettable traces. Perhaps the most significant period was the time of Rudolf II, a great supporter of the arts and sciences who attracted a great number of prominent minds to Prague. This trend continued: Tycho Brahe, Niels Henrik Abel, Johannes Kepler, Bernard Bolzano, Augustin-Louis Cauchy, Christian Doppler, Ernst Mach, Albert Einstein and many others followed, developing or expanding fundamental mathematical and physical theories. Thus, at the beginning of the 17th century, Kepler formulated here the first two of his three laws of planetary motion on the basis of Tycho Brahe's observations. In the 19th century, nowhere-differentiable continuous functions (of a fractal character) were constructed here by Bolzano, along with a treatise on infinite sets, "Paradoxes of the Infinite" (1851); Weierstrass would later publish a similar function in 1872. In 1842, Doppler, then a professor of mathematics at the Technical University of Prague, first lectured here on the physical effect that would later bear his name. And the epoch-making physicist Albert Einstein, while chaired professor of theoretical physics at the German University of Prague, took the decisive steps toward his later-completed theory of general relativity during the years 1911-1912. Many famous philosophers and writers also completed their works in Prague; for instance, the playwright Karel Čapek coined the word "robot" here ("robot" comes from the Czech word "robota", which means "forced labor").
The aim of this book is to furnish the reader with a rigorous and detailed exposition of the concepts of control parametrization and the time-scaling transformation. It presents computational solution techniques for a special class of constrained optimal control problems, as well as applications to practical examples. The book may be considered an extension of the 1991 monograph A Unified Computational Approach to Optimal Control Problems by K.L. Teo, C.J. Goh, and K.H. Wong. It discusses the development of new theory and computational methods for solving various optimal control problems numerically and in a unified fashion. To keep the book accessible and uniform, it includes those results developed by the authors, their students, and their past and present collaborators. A brief review of methods not covered in this exposition is also included. Knowledge gained from this book may inspire the development of new techniques for solving the complex problems that arise in the future. This book is intended as a reference for researchers in mathematics, engineering, and other sciences, and for graduate students and practitioners who apply optimal control methods in their work. It is also suitable reading for a graduate-level seminar or as a text for a course on optimal control.
This proceedings volume contains carefully selected papers presented during the 17th IFIP Conference on System Modelling and Optimization. Optimization theory and practice, optimal control, system modelling, stochastic optimization, and technical and non-technical applications of the existing theory are among the areas most frequently addressed in the included papers. The main directions are treated, along with several survey papers based on invited presentations by leading specialists in their respective fields. The publication presents the state of the art in system theory and optimization and points out several new areas (e.g., fuzzy sets, neural nets) where classical optimization topics intersect with computer science methodology.
Creative Space summarizes and integrates the various up-to-date approaches of computational intelligence to knowledge and technology creation including the specific novel feature of utilizing the creative abilities of the human mind, such as tacit knowledge, emotions and instincts, and intuition. It analyzes several important approaches of this new paradigm such as the Shinayakana Systems Approach, the organizational knowledge creation theory, in particular SECI Spiral, and the Rational Theory of Intuition - resulting in the concept of Creative Space. This monograph presents and analyzes in detail this new concept together with its ontology - the list and meanings of the analyzed nodes of this space and of the character of transitions linking these nodes.
This book focuses on solving different types of time-varying problems. It presents various Zhang dynamics (ZD) models by defining various Zhang functions (ZFs) in real and complex domains. It then provides theoretical analyses of such ZD models and illustrates their results. It also uses simulations to substantiate their efficacy and show the feasibility of the presented ZD approach (i.e., different ZFs leading to different ZD models), which is further applied to the repetitive motion planning (RMP) of redundant robots, showing its application potential.
The book outlines selected projects conducted under the supervision of the author. Moreover, it discusses significant relations between Interactive Granular Computing (IGrC) and numerous dynamically developing scientific domains worldwide, along with features characteristic of the author's approach to IGrC. The results presented are a continuation and elaboration of various aspects of Wisdom Technology, initiated and developed in cooperation with Professor Andrzej Skowron. Based on the empirical findings from these projects, the author explores the following areas: (a) understanding the causes of the theory and practice gap problem (TPGP) in complex systems engineering (CSE); (b) generalizing computing models of complex adaptive systems (CAS) (in particular, natural computing models) by constructing an IGrC model of networks of interrelated, interacting complex granules (c-granules), belonging to a single agent and/or to a group of agents; (c) developing methodologies based on the IGrC model to minimize the negative consequences of the TPGP. The book introduces approaches to the above issues using the proposed IGrC model. In particular, the IGrC model refers to the key mechanisms used to control the processes related to the implementation of CSE projects. One of the main aims was to develop a mechanism of IGrC control over computations that model a project's implementation processes, so as to maximize the chances of its success while minimizing the emerging risks. In this regard, the IGrC control is usually performed by means of properly selected and enforced (among project participants) project principles. These principles constitute examples of c-granules, expressed by complex vague concepts (themselves represented by c-granules). The c-granules evolve with time (in particular, the meaning of the concepts is also subject to change).
This methodology is illustrated using project principles applied by the author during the implementation of the POLTAX, AlgoTradix, Merix, and Excavio projects outlined in the book.
Our world is composed of systems within systems: the machines we build, the information we share, the organizations we form, and the elements of nature that surround us. Nearly every field of study and practice therefore embodies behaviors stemming from system dynamics. Yet the study of systems has remained somewhat fragmented across philosophies, methodologies, and intentions. Many methodologies for analyzing complex systems extend far beyond the traditional framework of deductive evaluation and may thus appear mysterious to the uninitiated. This book seeks to dispel the mysteries of systems analysis by holistically explaining the philosophies, methodologies, and intentions in the context of understanding how all types of systems in our world form and how these systems break. The presentation is at the level of conceptual understanding, with plenty of figures but no mathematical formulas, for beginning students and interested readers new to studying systems. Through the conceptual understanding provided, students gain a powerful capability to see the hidden behaviors and unexplained consequences in the world around us.
This three-volume work presents a coherent description of the theoretical and practical aspects of coloured Petri nets (CP-nets). The second volume contains a detailed presentation of the analysis methods for CP-nets. They allow the modeller to investigate dynamic properties of CP-nets.
This book presents the findings of a comparative study of three European metropolitan regions: Vienna, Barcelona and Stockholm. The heart of the work consists of empirical studies carefully designed and developed to identify the main actors and mechanisms supporting technological innovation in each of the metropolitan regions. The authors also highlight the similarities and differences across regions and countries, investigating how these came to be and discussing the possible implications. The introductory and concluding chapters were written by Manfred M. Fischer who, assisted by Attila Varga, was also responsible for Chapter 2 on the Metropolitan Region of Vienna. Javier Revilla Diez contributed Chapter 3 on the Barcelona Metropolitan Region. Folke Snickars provided Chapter 4, which examines the Metropolitan Region of Stockholm. All authors have reviewed and commented on the whole contents, so the volume represents a collective endeavour rendered as homogeneous as possible. A particular effort has been made to ensure that the study is based on a common conceptual framework.
This book develops applications of novel generalizations of fuzzy information measures in the field of pattern recognition, medical diagnosis, multi-criteria and multi-attribute decision making and suitability in linguistic variables. The focus of this presentation lies on introducing consistently strong and efficient generalizations of information and information-theoretic divergence measures in fuzzy and intuitionistic fuzzy environment covering different practical examples. The target audience comprises primarily researchers and practitioners in the involved fields but the book may also be beneficial for graduate students.
Modeling and Simulation: Theory and Practice provides a comprehensive review of both methodologies and applications of simulation and modeling. The methodology section includes such topics as the philosophy of simulation, inverse problems in simulation, simulation model compilers, treatment of ill-defined systems, and a survey of simulation languages. The application section covers a wide range of topics, including applications to environmental management, biology and medicine, neural networks, collaborative visualization and intelligent interfaces. The book consists of 13 invited chapters written by former colleagues and students of Professor Karplus. Also included are several short 'reminiscences' describing Professor Karplus' impact on the professional careers of former colleagues and students who worked closely with him over the years.
The underlying technologies enabling the realization of recent advances in areas like mobile and enterprise computing are artificial intelligence (AI), modeling and simulation, and software engineering. A disciplined, multifaceted, and unified approach to modeling and simulation is now essential in new frontiers, such as Simulation Based Acquisition. This volume is an edited survey of international scientists, academicians, and professionals who present their latest research findings in the various fields of AI; collaborative/distributed computing; and modeling, simulation, and their integration. Whereas some of these areas continue to seek answers to fundamental scientific inquiries, new questions have emerged only recently due to advances in computing infrastructures, technologies, and tools. The book's principal goal is to provide a unifying forum for developing postmodern, AI-based modeling and simulation environments and their utilization in both traditional and modern application domains.
Features and topics:
* Blends comprehensive, advanced modeling and simulation theories and methodologies in a presentation founded on formal, system-theoretic and AI-based approaches
* Uses detailed, real-world examples to illustrate key concepts in systems theory, modeling, simulation, object orientation, and intelligent systems
* Addresses a broad range of critical topics in the areas of modeling frameworks, distributed and high-performance object-oriented simulation approaches, as well as robotics, learning, multi-scale and multi-resolution models, and multi-agent systems
* Includes new results pertaining to intelligent and agent-based modeling, the relationship between AI-based reasoning and Discrete-Event System Specification, and large-scale distributed modeling and simulation frameworks
* Provides cross-disciplinary insight into how computer science, computer engineering, and systems engineering can collectively provide a rich set of theories and methods enabling contemporary modeling and simulation

This state-of-the-art survey on collaborative/distributed modeling and simulation computing environments is an essential resource on the latest developments and tools in the field for all computer scientists, systems engineers, and software engineers. Professionals, practitioners, and graduate students will find this reference invaluable to their work involving computer simulation, distributed modeling, discrete-event systems, AI, and software engineering.
The biennial conferences of the Society for Underwater Technology have earned an excellent reputation for the quality of their presentations, which cover topics of the most acute current interest as well as those at the forefront of research and development. The 1994 conference on Subsea Control and Data Acquisition was no exception, covering subjects at the cutting edge of modern technology. It is a matter of increasing concern that products are becoming overspecified, resulting in excessive costs and longer development schedules while not conferring an equivalent benefit in the reliability of the finished product. Subsea Control and Data Acquisition is vital reading for all subsea control system designers, manufacturers and operators, equipment consultants, application engineers, academics in the subsea engineering field, and all subsea engineers.
The theory of finite fields, whose origins can be traced back to the works of Gauss and Galois, has played a part in various branches of mathematics. In recent years there has been a resurgence of interest in finite fields, due in part to important applications in coding theory and cryptography. Applications of Finite Fields introduces some of these recent developments, focusing on specific advances in the theory and applications of finite fields. While the topics selected are treated in some depth, the book does not attempt to be encyclopedic. Among the topics studied are different methods of representing the elements of a finite field (including normal bases and optimal normal bases), algorithms for factoring polynomials over finite fields, methods for constructing irreducible polynomials, the discrete logarithm problem and its implications for cryptography, the use of elliptic curves in constructing public-key cryptosystems, and the uses of algebraic geometry in constructing good error-correcting codes. The book developed from a seminar held at the University of Waterloo, whose purpose was to bridge the knowledge of the participants, whose expertise and interests ranged from the purely theoretical to the applied. As a result, this book will be of interest to a wide range of students, researchers and practitioners in the disciplines of computer science, engineering and mathematics. Applications of Finite Fields is an excellent reference and may be used as a text for a course on the subject.
The Second Edition of Quantum Information Processing, Quantum Computing, and Quantum Error Correction: An Engineering Approach presents a self-contained introduction to all aspects of the area, teaching the essentials such as state vectors, operators, density operators, measurements, and the dynamics of a quantum system. In addition to the fundamental principles of quantum computation, basic quantum gates, basic quantum algorithms, and quantum information processing, this edition has been brought fully up to date, outlining the latest research trends. Key topics include:
* Quantum error correction codes (QECCs), including stabilizer codes, Calderbank-Shor-Steane (CSS) codes, quantum low-density parity-check (LDPC) codes, entanglement-assisted QECCs, topological codes, and surface codes
* Quantum information theory and quantum key distribution (QKD)
* Fault-tolerant information processing and fault-tolerant quantum error correction, together with a chapter on quantum machine learning
* Both quantum circuit- and measurement-based quantum computational models
* Physical realizations of quantum computers, encoders, and decoders, including photonic realizations, cavity quantum electrodynamics, and ion traps
* In-depth analysis of the design and realization of quantum information processing and quantum error correction circuits

This fully up-to-date new edition will be of use to engineers, computer scientists, optical engineers, physicists and mathematicians.
1.1 Introduction. This book is written in two major parts. The first part comprises the introductory chapters, Chapters 1 through 6. In part two, Chapters 7-26, we present the applications. This book continues our research into simulating fuzzy systems. We started by investigating the simulation of discrete event fuzzy systems ([7],[13],[14]). These systems can usually be described as queuing networks. Items (transactions) arrive at various points in the system and go into a queue waiting for service. The service stations, each preceded by a queue, are connected, forming a network of queues and service, until the transaction finally exits the system. Examples considered included machine shops, emergency rooms, project networks, bus routes, etc. Analysis of all of these systems depends on parameters like arrival rates and service rates. These parameters are usually estimated from historical data. These estimators are generally point estimators. The point estimators are put into the model to compute system descriptors like the mean time an item spends in the system, or the expected number of transactions leaving the system per unit time. We argued that these point estimators contain uncertainty not shown in the calculations. Our estimators of these parameters become fuzzy numbers, constructed by placing a set of confidence intervals one on top of another. Using fuzzy number parameters in the model makes it a fuzzy system. The system descriptors we want (time in system, number leaving per unit time) will then be fuzzy numbers.
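The stacked-confidence-interval construction described above can be sketched in a few lines. The following is a minimal illustration, not the authors' code: it assumes an approximately normal point estimator with known standard error, and the function name fuzzy_mean_alpha_cuts is hypothetical. Each alpha-cut of the fuzzy estimator is taken to be the (1 - alpha)*100% two-sided confidence interval, so the intervals nest and stack into a fuzzy number whose core is the point estimate.

```python
from statistics import NormalDist

def fuzzy_mean_alpha_cuts(point_est, std_err, alphas):
    """Stack confidence intervals into the alpha-cuts of a fuzzy number.

    The alpha-cut at level a is the (1 - a)*100% two-sided confidence
    interval for the parameter; at a = 1 the cut collapses to the point
    estimate (the core of the fuzzy number).
    """
    nd = NormalDist()
    cuts = {}
    for a in alphas:
        z = nd.inv_cdf(1 - a / 2)  # two-sided normal critical value
        cuts[a] = (point_est - z * std_err, point_est + z * std_err)
    return cuts

# Example: a fuzzy arrival rate from a point estimate of 10.0 with
# standard error 2.0; smaller alpha gives a wider (nested) interval.
cuts = fuzzy_mean_alpha_cuts(10.0, 2.0, [0.05, 0.5, 1.0])
```

Propagating such fuzzy parameters through the queueing model (for instance, by interval arithmetic on each alpha-cut) then yields fuzzy system descriptors such as mean time in system.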
Driven by the demand for increased productivity, flexibility, and competitiveness, modern civilization has increasingly created high-performance discrete event dynamic systems (DEDSs). These systems exhibit concurrent, sequential, and competitive activities among their components. They are often complex and large in scale, and necessarily flexible and thus highly capital-intensive. Examples of such systems are manufacturing systems, communication networks, traffic and logistics systems, and military command and control systems. Modeling and performance evaluation play a vital role in the design and operation of such high-performance DEDSs and have thus received widespread attention from researchers over the past two decades. One methodology resulting from this effort is based on timed Petri nets and related graphical and mathematical tools. The popularity that Petri nets have been gaining in the modeling of DEDSs is due to their powerful ability to represent concurrency and synchronization; these properties of DEDSs cannot be expressed easily in traditional formalisms developed for the analysis of 'classical' systems with sequential behaviors. This book systematically introduces the theories and applications of timed Petri nets. It presents many practical applications in addition to theoretical developments, together with the latest research results and industrial applications of timed Petri nets. Timed Petri Nets: Theory and Application is intended for use by researchers and practitioners in the area of Discrete Event Dynamic Systems.
Experimental Econophysics describes the method of controlled human experiments, developed by physicists to study problems in economics and finance, namely stylized facts, fluctuation phenomena, herd behavior, contrarian behavior, hedge behavior, cooperation, business cycles, partial information, risk management, and stock prediction. Experimental econophysics and empirical econophysics are the two branches of the field of econophysics. The latter has been extensively discussed in existing books, while the former has seldom been touched; in this book, the author focuses on the experimental branch. Empirical econophysics is based on the analysis of data from real markets using statistical tools borrowed from traditional statistical physics. By contrast, inspired by the role of controlled experiments and system modelling (for computer simulations and/or analytical theory) in the development of modern physics, experimental econophysics relies on controlled human experiments in the laboratory (producing data for analysis) together with agent-based modelling, with the aim of revealing the general cause-effect relationship between specific parameters and emergent properties of real economic/financial markets. This book covers the basic concepts, experimental methods, modelling approaches, and latest progress in the field of experimental econophysics.
This book highlights current research into virtual tutoring software and presents a case study of the design and application of a social tutor for children with autism. Best practice guidelines for developing software-based educational interventions are discussed, with a major emphasis on facilitating the generalisation of skills to contexts outside of the software itself, and on maintaining these skills over time. Further, the book presents the software solution Thinking Head Whiteboard, which provides a framework for families and educators to create unique educational activities utilising virtual character technology and customised to match learners' needs and interests. In turn, the book describes the development and evaluation of a social tutor incorporating multiple life-like virtual humans, leading to an exploration of the lessons learned and recommendations for the future development of related technologies.
In the context of life sciences, we are constantly confronted with information that possesses precise semantic values and appears essentially immersed in a specific evolutionary trend. In such a framework, Nature appears, in Monod's words, as a tinkerer characterized by the presence of precise principles of self-organization. However, while Monod was obliged to incorporate his brilliant intuitions into the framework of first-order cybernetics and a theory of information with an exclusively syntactic character such as that defined by Shannon, research advances in recent decades have led not only to the definition of a second-order cybernetics but also to an exploration of the boundaries of semantic information. As H. Atlan states, on a biological level "the function self-organizes together with its meaning". Hence the need to refer to a conceptual theory of complexity and to a theory of self-organization characterized in an intentional sense. There is also a need to introduce, at the genetic level, a distinction between coder and ruler as well as the opportunity to define a real software space for natural evolution. The recourse to non-standard model theory, the opening to a new general semantics, and the innovative definition of the relationship between coder and ruler can be considered, today, among the most powerful theoretical tools at our disposal in order to correctly define the contours of that new conceptual revolution increasingly referred to as metabiology. This book focuses on identifying and investigating the role played by these particular theoretical tools in the development of this new scientific paradigm. Nature "speaks" by means of mathematical forms: we can observe these forms, but they are, at the same time, inside us as they populate our organs of cognition. In this context, the volume highlights how metabiology appears primarily to refer to the growth itself of our instruments of participatory knowledge of the world.
Intelligent technical systems, which combine mechanical, electrical and software engineering with control engineering and advanced mathematics, go far beyond the state of the art in mechatronics and open up fascinating perspectives. Among these systems are so-called self-optimizing systems, which are able to adapt their behavior autonomously and flexibly to changing operating conditions. Self-optimizing systems create high value, for example in terms of energy and resource efficiency as well as reliability. The Collaborative Research Center 614 "Self-optimizing Concepts and Structures in Mechanical Engineering" pursued the long-term aim of opening up the active paradigm of self-optimization for mechanical engineering and enabling others to develop self-optimizing systems. This book is directed to researchers and practitioners alike. It provides a design methodology for the development of self-optimizing systems consisting of a reference process, methods, and tools. The reference process is divided into two phases: the domain-spanning conceptual design, and the domain-specific design and development. For the conceptual design a holistic approach is provided. Domain-specific methods and tools developed especially for the design and development of self-optimizing systems are described and illustrated by application examples. This book will enable the reader to identify the potential for self-optimization and to develop self-optimizing systems independently.
This book compiles recent developments in sliding mode control theory and its applications. Each chapter proposes a new dimension in sliding mode control theory, such as higher-order sliding mode control, event-triggered sliding mode control, networked control, higher-order discrete-time sliding mode control, or sliding mode control for multi-agent systems. Special emphasis is given to practical design solutions involving new types of sliding mode control. This book is a reference guide for graduate students and researchers working on the design of sliding mode controllers, and is also useful to professional engineers who need to design robust controllers for various applications.
Stabilization of Navier-Stokes Flows presents recent notable progress in the mathematical theory of stabilization of Newtonian fluid flows. Finite-dimensional feedback controllers are used to stabilize exponentially the equilibrium solutions of the Navier-Stokes equations, reducing or eliminating turbulence. Stochastic stabilization and robustness of stabilizable feedback are also discussed. The analysis developed here provides a rigorous pattern for the design of efficient stabilizable feedback controllers to meet the needs of practical problems, and the conceptual controllers actually detailed will render the reader's task of application easier still. Stabilization of Navier-Stokes Flows avoids the tedious and technical details often present in mathematical treatments of control and the Navier-Stokes equations, and will appeal to a sizeable audience of researchers and graduate students interested in the mathematics of flow and turbulence control, and in the Navier-Stokes equations in particular.
The study of infinite-dimensional systems is now an established area of research. Given the recent trend in systems theory and in applications towards a synthesis of time- and frequency-domain methods, there is a need for an introductory text that treats both state-space and frequency-domain aspects in an integrated fashion. The authors' primary aim is to provide an introductory textbook for a course on infinite-dimensional linear systems. An important consideration is that the book should be accessible to graduate engineers and mathematicians with a minimal background in functional analysis; consequently, all the mathematical background is summarized in an extensive appendix. For the majority of students, this will be their only acquaintance with infinite-dimensional systems.