Analysis and Control of Boolean Networks presents a systematic new approach to the investigation of Boolean control networks. The fundamental tool in this approach is a novel matrix product called the semi-tensor product (STP). Using the STP, a logical function can be expressed as a conventional discrete-time linear system. In the light of this linear expression, certain major issues concerning Boolean network topology - fixed points, cycles, transient times and basins of attraction - can be easily revealed by a set of formulae. This framework renders the state-space approach to dynamic control systems applicable to Boolean control networks. The bilinear-systemic representation of a Boolean control network makes it possible to investigate basic control problems, including controllability, observability, stabilization, and disturbance decoupling.
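The semi-tensor product the blurb refers to has a compact definition: for A of size m x n and B of size p x q, A |x| B = (A ⊗ I_{t/n})(B ⊗ I_{t/p}) with t = lcm(n, p), which reduces to the ordinary matrix product when n = p. A minimal NumPy sketch of how it turns a logical operation into linear algebra (the vector encoding of truth values and the structure matrix of conjunction follow the common convention in this literature; this is an illustration, not the book's own code):

```python
import numpy as np
from math import lcm

def stp(A, B):
    """Left semi-tensor product: (A kron I_{t/n}) @ (B kron I_{t/p}), t = lcm(n, p)."""
    A, B = np.atleast_2d(A), np.atleast_2d(B)
    n, p = A.shape[1], B.shape[0]
    t = lcm(n, p)
    return np.kron(A, np.eye(t // n)) @ np.kron(B, np.eye(t // p))

# Encode truth values as vectors: True = [1, 0]^T, False = [0, 1]^T.
T = np.array([[1], [0]])
F = np.array([[0], [1]])

# Structure matrix of conjunction: column k gives AND of the k-th input pair.
M_and = np.array([[1, 0, 0, 0],
                  [0, 1, 1, 1]])

# x AND y is computed linearly as M_and |x| x |x| y.
print(stp(stp(M_and, T), F).ravel())  # → [0. 1.], the encoding of False
```

With this encoding, True AND False evaluates to the vector for False purely by matrix multiplication, which is the linearization step the book builds its state-space framework on.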
This book presents intellectual, innovative, information technologies (I3-technologies) based on logical and probabilistic (LP) risk models. The technologies presented here consider such models for structurally complex systems and processes with logical links and with random events in economics and technology. A number of applications are given to show the effectiveness of risk management technologies. In addition, topics of lectures and practical computer exercises intended for a two-semester course on risk management technologies are suggested.
This collaborative book presents recent trends in the study of sequences, including combinatorics on words and symbolic dynamics, and new interdisciplinary links to group theory and number theory. Other chapters branch out from those areas into subfields of theoretical computer science, such as complexity theory and theory of automata. The book is built around four general themes: number theory and sequences, word combinatorics, normal numbers, and group theory. Those topics are rounded out by investigations into automatic and regular sequences, tilings and theory of computation, discrete dynamical systems, ergodic theory, numeration systems, automaton semigroups, and amenable groups. This volume is intended for use by graduate students or research mathematicians, as well as computer scientists who are working in automata theory and formal language theory. With its organization around unified themes, it would also be appropriate as a supplemental text for graduate-level courses.
This book is a comprehensive, systematic survey of the synthesis problem, and of region theory which underlies its solution, covering the related theory, algorithms, and applications. The authors focus on safe Petri nets and place/transition nets (P/T-nets), treating synthesis as an automated process which, given behavioural specifications or partial specifications of a system to be realized, decides whether the specifications are feasible, and then produces a Petri net realizing them exactly, or if this is not possible produces a Petri net realizing an optimal approximation of the specifications. In Part I the authors introduce elementary net synthesis. In Part II they explain variations of elementary net synthesis and the unified theory of net synthesis. The first three chapters of Part III address the linear algebraic structure of regions, synthesis of P/T-nets from finite initialized transition systems, and the synthesis of unbounded P/T-nets. Finally, the last chapter in Part III and the chapters in Part IV cover more advanced topics and applications: P/T-nets with the step firing rule, extracting concurrency from transition systems, process discovery, supervisory control, and the design of speed-independent circuits. Most chapters conclude with exercises, and the book is a valuable reference for graduate students of computer science and electrical engineering, as well as researchers and engineers in this domain.
This book is devoted to Professor Jürgen Lehn, who passed away on September 29, 2008, at the age of 67. It contains invited papers that were presented at the Workshop on Recent Developments in Applied Probability and Statistics Dedicated to the Memory of Professor Jürgen Lehn, Middle East Technical University (METU), Ankara, April 23-24, 2009, which was jointly organized by the Technische Universität Darmstadt (TUD) and METU. The papers present surveys on recent developments in the area of applied probability and statistics. In addition, papers from the Panel Discussion: Impact of Mathematics in Science, Technology and Economics are included. Jürgen Lehn was born on the 28th of April, 1941 in Karlsruhe. From 1961 to 1968 he studied mathematics in Freiburg and Karlsruhe, and obtained a Diploma in Mathematics from the University of Karlsruhe in 1968. He obtained his Ph.D. at the University of Regensburg in 1972, and his Habilitation at the University of Karlsruhe in 1978. Later in 1978, he became a C3 level professor of Mathematical Statistics at the University of Marburg. In 1980 he was promoted to a C4 level professorship in mathematics at the TUD, where he was a researcher until his death.
This book considers logical proof systems from the point of view of their space complexity. After an introduction to propositional proof complexity the author structures the book into three main parts. Part I contains two chapters on resolution, one containing results already known in the literature before this work and one focused on space in resolution, and the author then moves on to polynomial calculus and its space complexity with a focus on the combinatorial technique to prove monomial space lower bounds. The first chapter in Part II addresses the proof complexity and space complexity of the pigeonhole principles. Then there is an interlude on a new type of game, defined on bipartite graphs, essentially independent of the rest of the book, collecting some results in graph theory. Finally, Part III analyzes the size of resolution proofs in connection with the Strong Exponential Time Hypothesis (SETH) in complexity theory. The book is appropriate for researchers in theoretical computer science, in particular computational complexity.
This book is open access under a CC BY 4.0 license. This easy-to-read book introduces the basics of solving partial differential equations by means of finite difference methods. Unlike many of the traditional academic works on the topic, this book was written for practitioners. Accordingly, it especially addresses: the construction of finite difference schemes, formulation and implementation of algorithms, verification of implementations, analyses of physical behavior as implied by the numerical solutions, and how to apply the methods and software to solve problems in the fields of physics and biology.
This book presents a collection of research papers that address the challenge of how to develop software in a principled way that, in particular, enables reasoning. The individual papers approach this challenge from various perspectives including programming languages, program verification, and the systematic variation of software. Topics covered include programming abstractions for concurrent and distributed software, specification and verification techniques for imperative programs, and development techniques for software product lines. With this book the editors and authors wish to acknowledge - on the occasion of his 60th birthday - the work of Arnd Poetzsch-Heffter, who has made major contributions to software technology throughout his career. It features articles on Arnd's broad research interests including, among others, the implementation of programming languages, formal semantics, specification and verification of object-oriented and concurrent programs, programming language design, distributed systems, software modeling, and software product lines. All contributing authors are leading experts in programming languages and software engineering who have collaborated with Arnd in the course of his career. Overall, the book offers a collection of high-quality articles, presenting original research results, major case studies, and inspiring visions. Some of the work included here was presented at a symposium in honor of Arnd Poetzsch-Heffter, held in Kaiserslautern, Germany, in November 2018.
Approximation theory and numerical analysis are central to the creation of accurate computer simulations and mathematical models. Research in these areas can influence the computational techniques used in a variety of mathematical and computational sciences. This collection of contributed chapters, dedicated to renowned mathematician Gradimir V. Milovanović, represents the recent work of experts in the fields of approximation theory and numerical analysis. These invited contributions describe new trends in these important areas of research, including theoretic developments, new computational algorithms, and multidisciplinary applications. Special features of this volume: - Presents results and approximation methods in various computational settings, including polynomial and orthogonal systems, analytic functions, and differential equations; - Provides a historical overview of approximation theory and many of its subdisciplines; - Contains new results from diverse areas of research spanning mathematics, engineering, and the computational sciences. "Approximation and Computation" is intended for mathematicians and researchers focusing on approximation theory and numerical analysis, but can also be a valuable resource to students and researchers in the computational and applied sciences.
This book focuses on the different representations and cryptographic properties of Boolean functions, and presents constructions of Boolean functions with good cryptographic properties. More specifically, Walsh spectrum descriptions of the traditional cryptographic properties of Boolean functions, including linear structure, propagation criterion, nonlinearity, and correlation immunity, are presented. Constructions of symmetric Boolean functions and of Boolean permutations with good cryptographic properties are specifically studied. This book is not meant to be comprehensive, but rather focuses on some of the authors' own original research. To be self-contained, some basic concepts and properties are introduced. This book can serve as a reference for cryptographic algorithm designers, particularly designers of stream ciphers and block ciphers, and for academics with an interest in the cryptographic properties of Boolean functions.
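The Walsh spectrum mentioned above is the central computational tool in this area: W_f(a) = sum_x (-1)^{f(x) XOR a·x}, and the nonlinearity of f equals 2^{n-1} - max_a |W_f(a)|/2. A small sketch using the fast Walsh-Hadamard transform (the example function is my own illustration, not one from the book):

```python
import numpy as np
from itertools import product

def walsh_spectrum(tt):
    """Walsh spectrum of a Boolean function given by its truth table tt
    (length 2^n), via the in-place fast Walsh-Hadamard transform."""
    s = np.array([(-1) ** b for b in tt], dtype=int)  # sign form of f
    h = 1
    while h < len(s):
        for i in range(0, len(s), 2 * h):
            a = s[i:i + h].copy()
            b = s[i + h:i + 2 * h].copy()
            s[i:i + h], s[i + h:i + 2 * h] = a + b, a - b
        h *= 2
    return s

# Example: f(x1, x2, x3) = x1*x2 XOR x3 (a quadratic function on 3 variables).
tt = [(x1 & x2) ^ x3 for x1, x2, x3 in product([0, 1], repeat=3)]
W = walsh_spectrum(tt)
nl = 2 ** 2 - int(np.max(np.abs(W))) // 2  # nonlinearity = 2^{n-1} - max|W|/2
print(nl)  # → 2
```

For this f the spectrum takes values 0 and ±4, so the nonlinearity is 4 - 4/2 = 2, the maximum possible for three variables.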
Images are all around us. The proliferation of low-cost, high-quality imaging devices has led to an explosion in acquired images. When these images are acquired from a microscope, telescope, satellite, or medical imaging device, there is a statistical image processing task: the inference of something--an artery, a road, a DNA marker, an oil spill--from imagery, possibly noisy, blurry, or incomplete. A great many textbooks have been written on image processing. However, this book does not so much focus on images per se, but rather on spatial data sets, with one or more measurements taken over a two- or higher-dimensional space, to which standard image-processing algorithms may not apply. This text develops many important data analysis methods for such statistical image problems. Examples abound throughout remote sensing (satellite data mapping, data assimilation, climate-change studies, land use), medical imaging (organ segmentation, anomaly detection), computer vision (image classification, segmentation), and other 2D/3D problems (biological imaging, porous media). The goal, then, of this text is to address methods for solving multidimensional statistical problems. The text strikes a balance between mathematics and theory on the one hand and applications and algorithms on the other, by deliberately developing the basic theory (Part I), the mathematical modeling (Part II), and the algorithmic and numerical methods (Part III) of solving a given problem. The particular emphases of the book include inverse problems, multidimensional modeling, random fields, and hierarchical methods.
This unique textbook/reference presents unified coverage of bioinformatics topics relating to both biological sequences and biological networks, providing an in-depth analysis of cutting-edge distributed algorithms, as well as of relevant sequential algorithms. In addition to introducing the latest algorithms in this area, the book also proposes more than fifteen new distributed algorithms. Topics and features: reviews a range of open challenges in biological sequences and networks; describes in detail both sequential and parallel/distributed algorithms for each problem; suggests approaches for distributed algorithms as possible extensions to sequential algorithms, when the distributed algorithms for the topic are scarce; proposes a number of new distributed algorithms in each chapter, to serve as potential starting points for further research; concludes each chapter with self-test exercises, a summary of the key points, a comparison of the algorithms described, and a literature review.
In this essay collection, leading physicists, philosophers, and historians attempt to fill the empty theoretical ground in the foundations of information and address the related question of the limits to our knowledge of the world. Over recent decades, our practical approach to information and its exploitation has radically outpaced our theoretical understanding - to such a degree that reflection on the foundations may seem futile. But it is exactly fields such as quantum information, which are shifting the boundaries of the physically possible, that make a foundational understanding of information increasingly important. One of the recurring themes of the book is the claim by Eddington and Wheeler that information involves interaction and puts agents or observers centre stage. Thus, physical reality, in their view, is shaped by the questions we choose to put to it and is built up from the information residing at its core. This is the root of Wheeler's famous phrase "it from bit." After reading the stimulating essays collected in this volume, readers will be in a good position to decide whether they agree with this view.
The latest work by the world's leading authorities on the use of formal methods in computer science is presented in this volume, based on the 1995 International Summer School in Marktoberdorf, Germany. Logic is of special importance in computer science, since it provides the basis for giving correct semantics of programs, for specification and verification of software, and for program synthesis. The lectures presented here provide the basic knowledge a researcher in this area should have and give excellent starting points for exploring the literature. Topics covered include semantics and category theory, machine-based theorem proving, logic programming, bounded arithmetic, proof theory, algebraic specifications and rewriting, algebraic algorithms, and type theory.
The second volume of this two-volume book is dedicated to various extensions and generalizations of dyadic (Walsh) analysis and related applications. Considered are dyadic derivatives on Vilenkin groups and various other Abelian and finite non-Abelian groups. Since some important results were developed in the former Soviet Union and China, we provide overviews of earlier work in these countries, and present translations of three papers that were initially published in Chinese. The presentation continues with chapters written by experts in the area discussing applications of these results to specific tasks in signal processing and system theory. Efficient computing of related differential operators on contemporary hardware, including graphics processing units, is also considered, which makes the methods and techniques of dyadic analysis and its generalizations computationally feasible. Volume 2 ends with a chapter presenting open problems pointed out by several experts in the area.
This monograph proposes a new way of implementing interaction in logic. It also provides an elementary introduction to Constructive Type Theory (CTT). The authors equally emphasize basic ideas and finer technical details. In addition, many worked-out exercises and examples will help readers to better understand the concepts under discussion. One of the chief ideas animating this study is that the dialogical understanding of definitional equality and its execution provide both a simple and a direct way of implementing the CTT approach within a game-theoretical conception of meaning. In addition, the importance of the play level over the strategy level is stressed, binding together the matter of execution with that of equality and the finitary perspective on games constituting meaning. According to this perspective, the games in which concepts emerge are not only games of giving and asking for reasons (games involving Why-questions); they are also games that include moves establishing how it is that the reasons brought forward accomplish their explicative task. Thus, immanent reasoning games are dialogical games of Why and How.
Improved geospatial instrumentation and technology, such as laser scanning, now result in the collection of millions of data points, e.g., point clouds. Recognizing that such huge amounts of data require efficient and robust mathematical solutions, this third edition of the book extends the second edition by introducing three new chapters: Robust parameter estimation, Multiobjective optimization, and Symbolic regression. Furthermore, the linear homotopy chapter is expanded to include nonlinear homotopy. These disciplines are discussed first in the theoretical part of the book, before their geospatial applications are illustrated in the applications chapters, where numerous numerical examples are presented. The renewed electronic supplement contains these new theoretical and practical topics, with the corresponding Mathematica statements and functions supporting their computations introduced and applied. This third edition is renamed in light of these technological advancements.
This book offers a comprehensive and accessible exposition of Euclidean Distance Matrices (EDMs) and the rigidity theory of bar-and-joint frameworks. It is based on the one-to-one correspondence between EDMs and projected Gram matrices. Accordingly, the machinery of semidefinite programming is a common thread that runs throughout the book. As a result, two parallel approaches to rigidity theory are presented. The first is the traditional, more intuitive approach, based on a vector representation of the point configuration. The second is based on a Gram matrix representation of the point configuration. Euclidean Distance Matrices and Their Applications in Rigidity Theory begins by establishing the necessary background for the rest of the book. The focus of Chapter 1 is on pertinent results from matrix theory, graph theory and convexity theory, while Chapter 2 is devoted to positive semidefinite (PSD) matrices due to the key role these matrices play in our approach. Chapters 3 to 7 provide detailed studies of EDMs, and in particular their various characterizations, classes, eigenvalues and geometry. Chapter 8 serves as a transitional chapter between EDMs and rigidity theory. Chapters 9 and 10 cover local and universal rigidities of bar-and-joint frameworks. This book is self-contained and should be accessible to a wide audience, including students and researchers in statistics, operations research, computational biochemistry, engineering, computer science and mathematics.
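The EDM-Gram correspondence the blurb builds on can be stated in one line: if G = P P^T is the Gram matrix of a point configuration P, then the matrix of squared pairwise distances is D_ij = G_ii + G_jj - 2 G_ij. A quick NumPy check (the random points are my own choice for illustration):

```python
import numpy as np

# Link between a squared Euclidean distance matrix D and the Gram matrix
# G = P P^T of the point configuration:  D_ij = G_ii + G_jj - 2 G_ij.
rng = np.random.default_rng(0)
P = rng.standard_normal((5, 3))          # 5 points in R^3, one per row
G = P @ P.T
d = np.diag(G)
D = d[:, None] + d[None, :] - 2 * G      # squared EDM from the Gram matrix

# sanity check against the direct pairwise computation
D_direct = np.sum((P[:, None, :] - P[None, :, :]) ** 2, axis=2)
print(np.allclose(D, D_direct))          # → True
```

Because G is positive semidefinite, this identity is what lets EDM questions be recast as semidefinite programs, the common thread the book mentions.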
Initial training in pure and applied sciences tends to present problem-solving as the process of elaborating explicit closed-form solutions from basic principles, and then using these solutions in numerical applications. This approach is only applicable to very limited classes of problems that are simple enough for such closed-form solutions to exist. Unfortunately, most real-life problems are too complex to be amenable to this type of treatment. "Numerical Methods: A Consumer Guide" presents methods for dealing with them. Shifting the paradigm from formal calculus to numerical computation, the text makes it possible for the reader to: discover how to escape the dictatorship of those particular cases that are simple enough to receive a closed-form solution, and thus gain the ability to solve complex, real-life problems; understand the principles behind recognized algorithms used in state-of-the-art numerical software; learn the advantages and limitations of these algorithms, to facilitate the choice of which pre-existing bricks to assemble for solving a given problem; and acquire methods that allow a critical assessment of numerical results. "Numerical Methods: A Consumer Guide" will be of interest to engineers and researchers who solve problems numerically with computers or supervise people doing so, and to students of both engineering and applied mathematics.
This book presents a systematic exposition of the main ideas and methods in treating inverse problems for PDEs arising in basic mathematical models, though it makes no claim to being exhaustive. Mathematical models of most physical phenomena are governed by initial and boundary value problems for PDEs, and inverse problems governed by these equations arise naturally in nearly all branches of science and engineering. The book's content, especially in the Introduction and Part I, is self-contained and is intended to also be accessible for beginning graduate students, whose mathematical background includes only basic courses in advanced calculus, PDEs and functional analysis. Further, the book can be used as the backbone for a lecture course on inverse and ill-posed problems for partial differential equations. In turn, the second part of the book consists of six nearly-independent chapters. The choice of these chapters was motivated by the fact that the inverse coefficient and source problems considered here are based on the basic and commonly used mathematical models governed by PDEs. These chapters describe not only these inverse problems, but also main inversion methods and techniques. Since the most distinctive features of any inverse problems related to PDEs are hidden in the properties of the corresponding solutions to direct problems, special attention is paid to the investigation of these properties. For the second edition, the authors have added two new chapters focusing on real-world applications of inverse problems arising in wave and vibration phenomena. They have also revised the whole text of the first edition.
This book focuses on three core knowledge requirements for effective and thorough data analysis for solving business problems. These are a foundational understanding of: 1. statistical, econometric, and machine learning techniques; 2. data handling capabilities; 3. at least one programming language. Practical in orientation, the volume offers illustrative case studies throughout and examples using Python in the context of Jupyter notebooks. Covered topics include demand measurement and forecasting, predictive modeling, pricing analytics, customer satisfaction assessment, market and advertising research, and new product development and research. This volume will be useful to business data analysts, data scientists, and market research professionals, as well as aspiring practitioners in business data analytics. It can also be used in colleges and universities offering courses and certifications in business data analytics, data science, and market research.
Meshfree methods are a modern alternative to classical mesh-based discretization techniques such as finite differences or finite element methods. Especially in a time-dependent setting or in the treatment of problems with strongly singular solutions their independence of a mesh makes these methods highly attractive. This volume collects selected papers presented at the Sixth International Workshop on Meshfree Methods held in Bonn, Germany in October 2011. They address various aspects of this very active research field and cover topics from applied mathematics, physics and engineering.
From the reviews of the previous editions: "... The book is a first class textbook and seems to be indispensable for everybody who has to teach combinatorial optimization. It is very helpful for students, teachers, and researchers in this area. The author finds a striking synthesis of nice and interesting mathematical results and practical applications. ... the author pays much attention to the inclusion of well-chosen exercises. The reader does not remain helpless; solutions or at least hints are given in the appendix. Except for some small basic mathematical and algorithmic knowledge the book is self-contained. ..." K. Engel, Mathematical Reviews 2002 "The substantial development effort of this text, involving multiple editions and trialling in the context of various workshops, university courses and seminar series, clearly shows through in this new edition with its clear writing, good organisation, comprehensive coverage of essential theory, and well-chosen applications. The proofs of important results and the representation of key algorithms in a Pascal-like notation allow this book to be used in a high-level undergraduate or low-level graduate course on graph theory, combinatorial optimization or computer science algorithms. The well-worked solutions to exercises are a real bonus for self study by students. The book is highly recommended." P. B. Gibbons, Zentralblatt für Mathematik 2005 Once again, the new edition has been thoroughly revised. In particular, some further material has been added: more on NP-completeness (especially on dominating sets), a section on the Gallai-Edmonds structure theory for matchings, and about a dozen additional exercises (as always, with solutions). Moreover, the section on the 1-factor theorem has been completely rewritten: it now presents a short direct proof of the more general Berge-Tutte formula. Several recent research developments are discussed and quite a few references have been added.
These are the proceedings of the 20th international conference on domain decomposition methods in science and engineering. Domain decomposition methods are iterative methods for solving the often very large linear or nonlinear systems of algebraic equations that arise when various problems in continuum mechanics are discretized using finite elements. They are designed for massively parallel computers and take the memory hierarchy of such systems into account. This is essential for approaching peak floating point performance. There is an increasingly well-developed theory which is having a direct impact on the development and improvement of these algorithms.
This book presents selected peer-reviewed contributions from the International Work-Conference on Time Series, ITISE 2017, held in Granada, Spain, September 18-20, 2017. It discusses topics in time series analysis and forecasting, including advanced mathematical methodology, computational intelligence methods for time series, dimensionality reduction and similarity measures, econometric models, energy time series forecasting, forecasting in real problems, online learning in time series as well as high-dimensional and complex/big data time series. The series of ITISE conferences provides a forum for scientists, engineers, educators and students to discuss the latest ideas and implementations in the foundations, theory, models and applications in the field of time series analysis and forecasting. It focuses on interdisciplinary and multidisciplinary research encompassing computer science, mathematics, statistics and econometrics.