This volume contains 27 contributions to the Fourth Russian-German Advanced Research Workshop on Computational Science and High Performance Computing, presented in October 2009 in Freiburg, Germany. The workshop was organized jointly by the High Performance Computing Center Stuttgart (HLRS), the Institute of Computational Technologies of the Siberian Branch of the Russian Academy of Sciences (ICT SB RAS) and the Section of Applied Mathematics of the University of Freiburg (IAM Freiburg). The contributions range from computer science, mathematics and high performance computing to applications in mechanical and aerospace engineering. They show a wealth of theoretical work and simulation experience, with the potential to bring together theoretical mathematical modelling and the use of high performance computing systems, and they present the state of the art of computational technologies.
When no samples are available to estimate a probability distribution, we have to invite some domain experts to evaluate the belief degree that each event will happen. Perhaps some people think that the belief degree should be modeled by subjective probability or fuzzy set theory. However, it is usually inappropriate because both of them may lead to counterintuitive results in this case. In order to rationally deal with belief degrees, uncertainty theory was founded in 2007 and subsequently studied by many researchers. Nowadays, uncertainty theory has become a branch of axiomatic mathematics for modeling belief degrees. This is an introductory textbook on uncertainty theory, uncertain programming, uncertain statistics, uncertain risk analysis, uncertain reliability analysis, uncertain set, uncertain logic, uncertain inference, uncertain process, uncertain calculus, and uncertain differential equation. This textbook also shows applications of uncertainty theory to scheduling, logistics, networks, data mining, control, and finance.
This book brings together historical notes, reviews of research developments, fresh ideas on how to make VC (Vapnik-Chervonenkis) guarantees tighter, and new technical contributions in the areas of machine learning, statistical inference, classification, algorithmic statistics, and pattern recognition. The contributors are leading scientists in domains such as statistics, mathematics, and theoretical computer science, and the book will be of interest to researchers and graduate students in these domains.
This book discusses major milestones in Rohit Jivanlal Parikh's scholarly work. Highlighting the transition in Parikh's interest from formal languages to natural languages, and how he approached Wittgenstein's philosophy of language, it traces the academic trajectory of a brilliant scholar whose work opened up various new avenues in research. This volume is part of Springer's book series Outstanding Contributions to Logic, and honours Rohit Parikh and his works in many ways. Parikh is a leader in the realm of ideas, offering concepts and definitions that enrich the field and lead to new research directions. Parikh has contributed to a variety of areas in logic, computer science and game theory. In mathematical logic his contributions have been in recursive function theory, proof theory and non-standard analysis; in computer science, in the areas of modal, temporal and dynamic logics of programs and semantics of programs, as well as logics of knowledge; in artificial intelligence in the area of belief revision; and in game theory in the formal analysis of social procedures, with a strong undercurrent of philosophy running through all his work. This is not a collection of articles limited to one theme, or even directly connected to specific works by Parikh; rather, all papers are inspired and influenced by Parikh in some way, adding structures to and enriching "Parikh-land". The book presents a brochure-like overview of Parikh-land before providing an "introductory video" on the sights and sounds that you experience when reading the book.
Graphs are widely used to represent structural information in the form of objects and connections between them. Graph transformation is the rule-based manipulation of graphs, an increasingly important concept in computer science and related fields. This is the first textbook treatment of the algebraic approach to graph transformation, based on algebraic structures and category theory. Part I is an introduction to the classical case of graph and typed graph transformation. In Part II basic and advanced results are first shown for an abstract form of replacement systems, so-called adhesive high-level replacement systems based on category theory, and are then instantiated to several forms of graph and Petri net transformation systems. Part III develops typed attributed graph transformation, a technique of key relevance in the modeling of visual languages and in model transformation. Part IV contains a practical case study on model transformation and a presentation of the AGG (attributed graph grammar) tool environment. Finally the appendix covers the basics of category theory, signatures and algebras. The book addresses both research scientists and graduate students in computer science, mathematics and engineering.
This text centers on three main subjects. The first is the concept of modularity and independence in classical logic, nonmonotonic logic, and other nonclassical logics, and its consequences for syntactic and semantic interpolation and language change. In particular, the authors show the connection between interpolation for nonmonotonic logic and the manipulation of an abstract notion of size. Modularity is essentially the ability to put partial results achieved independently together into a global result. The second aspect of the book is the authors' uniform picture of conditionals, including many-valued logics and structures on the language elements themselves and on the truth value set. The third topic explained by the authors is neighbourhood semantics, their connection to independence, and their common points and differences for various logics, e.g., for defaults and deontic logic, for the limit version of preferential logics, and for general approximation. The book will be of value to researchers and graduate students in logic and theoretical computer science.
Computer simulation and mathematical modelling are the most important approaches in the quantitative analysis of the diffusive processes fundamental to many physical, chemical, biological, and geological systems. This comprehensive text/reference addresses the key issues in the "Modelling and Simulation of Diffusive Processes" across a broad range of application areas. Applying a holistic approach, the book presents illuminating viewpoints drawn from an international selection of experts across a wide spectrum of disciplines, from computer science, mathematics and engineering, to natural resource management, environmental sciences, applied geo-sciences, agricultural sciences, and theoretical medicine. Topics and features: presents a detailed introduction to diffusive processes and modelling; discusses diffusion and molecular transport in living cells, and suspended sediment in open channels; examines the mathematical modelling of peristaltic transport of nanofluids, and isotachophoretic separation of ionic samples in microfluidics; reviews thermal characterization of non-homogeneous media, and scale-dependent porous dispersion resulting from velocity fluctuations; describes the modelling of nitrogen fate and transport at the sediment-water interface, and groundwater flow in unconfined aquifers; investigates two-dimensional solute transport from a varying pulse type point source, and futile cycles in metabolic flux modelling; studies contaminant concentration prediction along unsteady groundwater flow, and modelling synovial fluid flow in human joints; explores the modelling of soil organic carbon, and crop growth simulation. This interdisciplinary volume will be invaluable to researchers, lecturers and graduate students from such diverse fields as computer science, mathematics, hydrology, agriculture and biology.
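As a minimal, illustrative sketch of the kind of diffusive-process model such books build on (not code taken from this volume), the following explicit finite-difference scheme integrates the one-dimensional diffusion equation u_t = D u_xx; the grid, diffusion coefficient, and boundary treatment are assumptions chosen for illustration.

```python
import numpy as np

def diffuse_1d(u0, D, dx, dt, steps):
    """Explicit (FTCS) finite-difference solver for u_t = D u_xx with fixed
    boundary values. The scheme is stable only if D*dt/dx**2 <= 0.5."""
    r = D * dt / dx**2
    assert r <= 0.5, "explicit scheme is unstable for D*dt/dx**2 > 0.5"
    u = u0.copy()
    for _ in range(steps):
        u[1:-1] = u[1:-1] + r * (u[2:] - 2 * u[1:-1] + u[:-2])
    return u

# Example: an initial concentration spike spreading out on the unit interval.
x = np.linspace(0.0, 1.0, 101)
u0 = np.exp(-((x - 0.5) ** 2) / 0.001)
u = diffuse_1d(u0, D=1e-3, dx=x[1] - x[0], dt=0.04, steps=250)
```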
The past decades have seen significant improvements in 3D imaging, where the related techniques and technologies have advanced to a mature state. These exciting developments have sparked increasing interest in the challenges and opportunities afforded by 3D sensing. As a consequence, the emerging area of safety- and security-related imaging incorporates these important new technologies beyond the limitations of 2D image processing. This book presents the thoroughly revised versions of lectures given by leading researchers during the Workshop on Advanced 3D Imaging for Safety and Security, held in conjunction with the International Conference on Computer Vision and Pattern Recognition (CVPR 2005) in San Diego, CA, USA, in June 2005. It covers the current state of the art in 3D imaging for safety and security.
The growing demand for speed, accuracy, and reliability in scientific and engineering computing has been accelerating the merging of symbolic and numeric computations. These two types of computation coexist in mathematics, yet have traditionally been separated in research on mathematical computation. This book presents 27 research articles on the integration and interaction of symbolic and numeric computation.
This book presents the theory of continuum mechanics for mechanical, thermodynamical, and electrodynamical systems. It shows how to obtain governing equations and applies them by computing real engineering problems. It uses only open-source codes developed under the FEniCS project and includes codes for 20 engineering applications from mechanics, fluid dynamics, applied thermodynamics, and electromagnetism. Moreover, it derives and utilizes the constitutive equations, including coupling terms, which make it possible to compute multiphysics problems by incorporating interactions between the primitive variables, namely motion, temperature, and electromagnetic fields. An engineering system is described by the primitive variables satisfying field equations that are partial differential equations in space and time. The field equations are mostly coupled and nonlinear, in other words, difficult to solve. In order to solve this coupled, nonlinear system of partial differential equations, the book uses a novel collection of open-source packages developed under the FEniCS project. All primitive variables are solved at once in a fully coupled fashion, using the finite difference method in time and the finite element method in space.
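As a concrete flavour of the FEniCS workflow the blurb refers to (not one of the book's own 20 application codes), here is a minimal sketch that solves a steady heat-conduction (Poisson) problem with the legacy `fenics` Python interface; mesh size, conductivity, source term, and boundary data are illustrative assumptions.

```python
from fenics import *  # legacy FEniCS (dolfin) Python interface

# Weak form of -div(k grad(T)) = f on the unit square, with T = 0 on the boundary.
mesh = UnitSquareMesh(32, 32)
V = FunctionSpace(mesh, "P", 1)

T = TrialFunction(V)
v = TestFunction(V)
k = Constant(1.0)          # thermal conductivity (illustrative)
f = Constant(1.0)          # volumetric heat source (illustrative)

a = k * dot(grad(T), grad(v)) * dx
L = f * v * dx
bc = DirichletBC(V, Constant(0.0), "on_boundary")

T_h = Function(V)
solve(a == L, T_h, bc)     # assemble and solve the linear system
```

In the multiphysics setting the blurb describes, the same pattern is extended: several primitive variables share one variational form and are solved monolithically, with finite differences supplying the time discretization.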
This book is an outgrowth of ten years of research at the University of Florida Computational NeuroEngineering Laboratory (CNEL) in the general area of statistical signal processing and machine learning. One of the goals of writing the book is exactly to bridge the two fields that share so many common problems and techniques but are not yet effectively collaborating. Unlike other books that cover the state of the art in a given field, this book cuts across engineering (signal processing) and statistics (machine learning) with a common theme: learning seen from the point of view of information theory with an emphasis on Renyi's definition of information. The basic approach is to utilize the information theory descriptors of entropy and divergence as nonparametric cost functions for the design of adaptive systems in unsupervised or supervised training modes. Hence the title: Information-Theoretic Learning (ITL). In the course of these studies, we discovered that the main idea enabling a synergistic view as well as algorithmic implementations does not involve the conventional central moments of the data (mean and covariance). Rather, the core concept is the alpha-norm of the PDF, in particular its expected value (alpha = 2), which we call the information potential. This operator and related nonparametric estimators link information theory, optimization of adaptive systems, and reproducing kernel Hilbert spaces in a simple and unconventional way.
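The "information potential" mentioned here has a simple plug-in estimator: with a Gaussian (Parzen) kernel, the quadratic (alpha = 2) information potential of a sample is the average of all pairwise kernel evaluations, and Renyi's quadratic entropy is its negative logarithm. The sketch below is an illustrative NumPy implementation, not code from the book; the kernel bandwidth is an assumed parameter.

```python
import numpy as np

def information_potential(x, sigma=0.5):
    """Quadratic information potential V(X) = (1/N^2) * sum_ij G(x_i - x_j),
    where G is a Gaussian kernel. Convolving two Parzen kernels of width sigma
    gives an effective kernel variance of 2 * sigma**2."""
    x = np.asarray(x, dtype=float).reshape(-1)
    d = x[:, None] - x[None, :]                  # all pairwise differences
    s2 = 2.0 * sigma**2                          # variance of the convolved kernel
    g = np.exp(-d**2 / (2.0 * s2)) / np.sqrt(2.0 * np.pi * s2)
    return g.mean()

def renyi_quadratic_entropy(x, sigma=0.5):
    """Renyi's quadratic entropy estimate H2(X) = -log V(X)."""
    return -np.log(information_potential(x, sigma))

sample = np.random.default_rng(0).normal(size=500)
print(renyi_quadratic_entropy(sample))
```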
Digital arithmetic plays an important role in the design of general-purpose digital processors and of embedded systems for signal processing, graphics, and communications. In spite of a mature body of knowledge in digital arithmetic, each new generation of processors or digital systems creates new arithmetic design problems. Designers, researchers, and graduate students will find solid solutions to these problems in this comprehensive, state-of-the-art exposition of digital arithmetic.
Future Data and Knowledge Base Systems will require new functionalities: richer data modelling capabilities, more powerful query languages, and new concepts of query answers. Future query languages will include functionalities such as hypothetical reasoning, abductive reasoning, modal reasoning, and metareasoning, involving knowledge and belief. Intentional answers will lead to cooperative query answering, in which the answer to a query takes the user's expectations into consideration. Non-classical logic plays an important role in this book for the formalization of new queries and new answers. It is shown how logic permits precise definitions for concepts like cooperative answers, subjective queries, or reliable sources of information, and gives a precise framework for reasoning about these complex concepts. It is worth noting that advances in knowledge management are not just an application domain for existing results in logic, but also require new developments in logic. The book is organized into 10 chapters which cover the areas of cooperative query answering (in the first three chapters), metareasoning and abductive reasoning (chapters 5 to 7), and, finally, hypothetical and subjunctive reasoning (last three chapters).
"Set Theory for Computing" provides a comprehensive account of set-oriented symbolic manipulation methods suitable for automated reasoning. Its main objective is twofold: 1) to provide a flexible formalization for a variety of set languages, and 2) to clarify the semantics of set constructs firmly established in modern specification languages and in the programming practice. Topics include: semantic unification, decision algorithms, modal logics, declarative programming, tableau-based proof techniques, and theory-based theorem proving. The style of presentation is self-contained, rigorous and accurate. Some familiarity with symbolic logic is helpful but not a requirement. This book is a useful resource for all advanced students, professionals, and researchers in computing sciences, artificial intelligence, automated reasoning, logic, and computational mathematics. It will serve to complement their intuitive understanding of set concepts with the ability to master them by symbolic and logically based algorithmic methods and deductive techniques.
This book presents the refereed proceedings of the Twelfth International Conference on Monte Carlo and Quasi-Monte Carlo Methods in Scientific Computing, held at Stanford University (California) in August 2016. These biennial conferences are major events for Monte Carlo and quasi-Monte Carlo researchers. The proceedings include articles based on invited lectures as well as carefully selected contributed papers on all theoretical aspects and applications of Monte Carlo and quasi-Monte Carlo methods. Offering information on the latest developments in these very active areas, this book is an excellent reference resource for theoreticians and practitioners interested in solving high-dimensional computational problems arising, in particular, in finance, statistics, computer graphics and the solution of PDEs.
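To illustrate the difference between the two families of methods the conference covers, the sketch below compares a plain Monte Carlo estimate with a quasi-Monte Carlo (scrambled Sobol) estimate of a simple d-dimensional integral using SciPy's `scipy.stats.qmc` module; the integrand and sample sizes are illustrative assumptions, not examples from the proceedings.

```python
import numpy as np
from scipy.stats import qmc

def integrand(x):
    """f(x) = prod_j (1 + (x_j - 0.5)) over [0,1]^d; the exact integral is 1."""
    return np.prod(1.0 + (x - 0.5), axis=1)

d, n = 8, 2**12
rng = np.random.default_rng(0)

mc_points = rng.random((n, d))                                 # pseudo-random points
qmc_points = qmc.Sobol(d=d, scramble=True, seed=0).random(n)   # low-discrepancy points

mc_estimate = integrand(mc_points).mean()
qmc_estimate = integrand(qmc_points).mean()
print(mc_estimate, qmc_estimate)   # both approach 1; QMC typically converges faster
```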
This volume, the 6th volume in the DRUMS Handbook series, is part of the aftermath of the successful ESPRIT project DRUMS (Defeasible Reasoning and Uncertainty Management Systems), which took place in two stages from 1989 to 1996. In the second stage (1993-1996) a work package was introduced devoted to the topics Reasoning and Dynamics, covering both the topic of 'Dynamics of Reasoning', where reasoning is viewed as a process, and 'Reasoning about Dynamics', which must be understood as pertaining to how both designers of and agents within dynamic systems may reason about these systems. The present volume presents work done in this context. This work has an emphasis on modelling and formal techniques in the investigation of the topic "Reasoning and Dynamics", but it is not mere theory that occupied us. Rather, research was aimed at bridging the gap between theory and practice. Therefore real-life applications of the modelling techniques were also considered, and we hope this shows in this volume, which is focused on the dynamics of reasoning processes. In order to give the book a broader perspective, we have invited a number of well-known researchers outside the project but working on similar topics to contribute as well. We have very pleasant recollections of the project, with its lively workshops and other meetings, and with the many sites and researchers involved, both within and outside our own work package.
This book discusses recent developments and contemporary research in mathematics, statistics and their applications in computing. All contributing authors are eminent academicians, scientists, researchers and scholars in their respective fields, hailing from around the world. The conference has emerged as a powerful forum, offering researchers a venue to discuss, interact and collaborate and stimulating the advancement of mathematics and its applications in computer science. The book will allow aspiring researchers to update their knowledge of cryptography, algebra, frame theory, optimizations, stochastic processes, compressive sensing, functional analysis, complex variables, etc. Educating future consumers, users, producers, developers and researchers in mathematics and computing is a challenging task and essential to the development of modern society. Hence, mathematics and its applications in computer science are of vital importance to a broad range of communities, including mathematicians and computing professionals across different educational levels and disciplines.
Refinement is one of the cornerstones of the formal approach to software engineering, and its use in various domains has led to research on new applications and generalisation. This book brings together this important research in one volume, with the addition of examples drawn from different application areas. It covers four main themes: data refinement and its application to Z; generalisations of refinement that change the interface and atomicity of operations; refinement in Object-Z; and modelling state and behaviour by combining Object-Z with CSP. Refinement in Z and Object-Z: Foundations and Advanced Applications provides an invaluable overview of recent research for academic and industrial researchers, lecturers teaching formal specification and development, industrial practitioners using formal methods in their work, and postgraduate and advanced undergraduate students. This second edition is a comprehensive update to the first and includes the following new material: early chapters have been extended to also include trace refinement, based directly on partial relations rather than through totalisation; an updated discussion on divergence, non-atomic refinements and approximate refinement; a discussion of the differing semantics of operations and outputs and how they affect the abstraction of models written using Object-Z and CSP; a fuller account of the relationship between relational refinement and various models of refinement in CSP; and bibliographic notes at the end of each chapter extended with the most up-to-date citations and research.
Integral equations have wide applications in various fields, including continuum mechanics, potential theory, geophysics, electricity and magnetism, kinetic theory of gases, hereditary phenomena in physics and biology, renewal theory, quantum mechanics, radiation, optimization, optimal control systems, communication theory, mathematical economics, population genetics, queueing theory, and medicine. Computational Methods for Linear Integral Equations presents basic theoretical material that deals with numerical analysis, convergence, error estimates, and accuracy. The unique computational aspect leads the reader from theoretical and practical problems all the way through to computation with hands-on guidance for input files and the execution of computer programs. Features: * Offers all supporting Mathematica(R) files related to the book via the Internet at the authors' Web sites: www.math.uno.edu/fac/pkythe.html or www.math.uno.edu/fac/ppuri.html * Contains identification codes for problems, related methods, and computer programs that are cross-referenced throughout the book to make the connections easy to understand * Illustrates a how-to approach to computational work in the development of algorithms, construction of input files, timing, and accuracy analysis * Covers linear integral equations of Fredholm and Volterra types of the first and second kinds as well as associated singular integral equations, integro-differential equations, and eigenvalue problems * Provides clear, step-by-step guidelines for solving difficult and complex computational problems This book is an essential reference and authoritative resource for all professionals, graduate students, and researchers in mathematics, physical sciences, and engineering. Researchers interested in the numerical solution of integral equations will find its practical problem-solving style both accessible and useful for their work.
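As an illustration of the kind of computation such a book covers (not one of its Mathematica programs), the following sketch applies the Nystrom method to a Fredholm integral equation of the second kind, phi(x) - lambda * int_0^1 K(x,t) phi(t) dt = f(x), by replacing the integral with a quadrature rule and solving the resulting linear system; the kernel, lambda, and right-hand side are illustrative assumptions.

```python
import numpy as np

def nystrom_fredholm2(kernel, f, lam, n=40):
    """Solve phi(x) - lam * int_0^1 K(x,t) phi(t) dt = f(x) on [0,1]
    with the Nystrom method and Gauss-Legendre quadrature."""
    t, w = np.polynomial.legendre.leggauss(n)
    t = 0.5 * (t + 1.0)          # map nodes from [-1, 1] to [0, 1]
    w = 0.5 * w                  # rescale the weights accordingly
    A = np.eye(n) - lam * kernel(t[:, None], t[None, :]) * w[None, :]
    phi = np.linalg.solve(A, f(t))
    return t, phi

# Example: K(x,t) = x*t, f(x) = x, lam = 0.5; the exact solution is phi(x) = 1.2*x.
t, phi = nystrom_fredholm2(lambda x, tt: x * tt, lambda x: x, lam=0.5)
print(np.max(np.abs(phi - 1.2 * t)))   # error should be near machine precision
```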
The topic of level sets is currently very timely and useful for creating realistic 3-D images and animations. Level set methods are powerful numerical techniques for analyzing and computing interface motion in a host of application settings. In computer vision, they have been applied to stereo and segmentation, whereas in graphics they have been applied to the postproduction processes of in-painting and 3-D model construction. Osher is co-inventor of the level set method, a pioneering framework introduced jointly with James Sethian of the University of California, Berkeley, in 1988. This methodology has since been used to provide solutions in a wide range of applications including, but not limited to, image processing, computer vision, robotics, fluid mechanics, crystallography, lithography, and computer graphics. The topic is of great interest to advanced students, professors, and R&D professionals working in the areas of graphics (post-production), video-based surveillance, visual inspection, augmented reality, document image processing, and medical image processing. These techniques are already employed to provide solutions and products in industry (Cognitech, Siemens, Philips, Focus Imaging). This essential compilation of survey chapters from leading researchers in the field emphasizes the applications of the methods, and the book is also suitable for a short professional course on the processing of visual information.
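A minimal sketch of the interface-motion computation described above, assuming a simple first-order upwind (Godunov) scheme rather than any method from the book's chapters: the zero level set of a signed-distance function phi is advanced outward at unit speed by integrating phi_t + F * |grad(phi)| = 0. Grid size, time step, and initial shape are illustrative assumptions.

```python
import numpy as np

# Signed distance to a circle of radius 0.2 centred in the unit square
# (negative inside the circle, positive outside).
n = 128
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]
X, Y = np.meshgrid(x, x, indexing="ij")
phi = np.sqrt((X - 0.5) ** 2 + (Y - 0.5) ** 2) - 0.2

F = 1.0                 # outward normal speed
dt = 0.5 * h            # time step respecting the CFL condition dt <= h / F
for _ in range(50):
    # One-sided differences (np.roll wraps at the boundary, which is harmless
    # here because the front stays in the interior of the domain).
    dxm = (phi - np.roll(phi, 1, axis=0)) / h
    dxp = (np.roll(phi, -1, axis=0) - phi) / h
    dym = (phi - np.roll(phi, 1, axis=1)) / h
    dyp = (np.roll(phi, -1, axis=1) - phi) / h
    # Godunov upwind approximation of |grad(phi)| for F > 0.
    grad = np.sqrt(np.maximum(dxm, 0) ** 2 + np.minimum(dxp, 0) ** 2
                   + np.maximum(dym, 0) ** 2 + np.minimum(dyp, 0) ** 2)
    phi = phi - dt * F * grad
# The zero level set (the circle) has now expanded by roughly F * 50 * dt, about 0.2.
```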
Fundamental concepts of mathematical modeling. Modeling is one of the most effective, commonly used tools in engineering and the applied sciences. In this book, the authors deal with mathematical programming models, both linear and nonlinear, across a wide range of practical applications. Whereas other books concentrate on standard methods of analysis, the authors focus on the power of modeling methods for solving practical problems, clearly showing the connection between physical and mathematical realities, while also describing and exploring the main concepts and tools at work. The coverage throughout is highly computational.
Building and Solving Mathematical Programming Models in Engineering and Science is well suited for use as a professional reference for mathematicians, engineers, and applied or industrial scientists, while being tutorial and illustrative enough for advanced students in mathematics or engineering.
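As a small, self-contained illustration of the kind of mathematical programming model the book is about (not an example taken from it), the sketch below solves a two-variable production-planning linear program with `scipy.optimize.linprog`; all coefficients are invented for illustration.

```python
from scipy.optimize import linprog

# Maximize profit 3*x1 + 5*x2 subject to resource limits
#   2*x1 + 1*x2 <= 100   (machine hours)
#   1*x1 + 3*x2 <= 90    (labour hours)
#   x1, x2 >= 0
# linprog minimizes, so the objective is negated.
c = [-3.0, -5.0]
A_ub = [[2.0, 1.0], [1.0, 3.0]]
b_ub = [100.0, 90.0]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.x, -res.fun)   # optimal production plan and the corresponding profit
```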
This book describes a novel methodology for studying algorithmic skills, intended as cognitive activities related to rule-based symbolic transformation, and argues that some human computational abilities may be interpreted and analyzed as genuine examples of extended cognition. It shows that the performance of these abilities relies not only on innate neurocognitive systems or language-related skills, but also on external tools and general agent-environment interactions. Further, it asserts that a low-level analysis, based on a set of core neurocognitive systems linking numbers and language, is not sufficient to explain some specific forms of high-level numerical skills, like those involved in algorithm execution. To this end, it reports on the design of a cognitive architecture for modeling all the relevant features involved in the execution of algorithmic strategies, including external tools, such as paper and pencils. The first part of the book discusses the philosophical premises for endorsing and justifying a position in philosophy of mind that links a modified form of computationalism with some recent theoretical and scientific developments, like those introduced by the so-called dynamical approach to cognition. The second part is dedicated to the description of a Turing-machine-inspired cognitive architecture, expressly designed to formalize all kinds of algorithmic strategies.
All over the world sport plays a prominent role in society: as a leisure activity for many, as an ingredient of culture, as a business, and as a matter of national prestige in such major events as the World Cup in soccer or the Olympic Games. Hence, it is not surprising that science has entered the realm of sports and, in particular, that computer simulation has become highly relevant in recent years. This is explored in this book by choosing five different sports as examples, demonstrating that computational science and engineering (CSE) can make essential contributions to research on sports topics both at the fundamental level and, eventually, in supporting athletes' performance.
The related fields of fractal image encoding and fractal image analysis have blossomed in recent years. This book, which originated from a NATO Advanced Study Institute held in 1995, presents work by leading researchers. It develops the subjects at an introductory level, but it also includes recent and exciting results in both fields.
New Approaches to Circle Packing into the Square is devoted to the most recent results on the densest packing of equal circles in a square. In the last few decades, many articles have considered this question, which has been an object of interest because it is a hard challenge both in discrete geometry and in mathematical programming. The authors have studied this geometrical optimization problem for a long time, and they have developed several new algorithms to solve it. The book completely covers the investigations on this topic.
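A minimal computational sketch of the problem the book studies (not one of the authors' algorithms): packing n equal circles in the unit square is equivalent to spreading n points in the square so that their minimum pairwise distance d is as large as possible, with the optimal radius then given by r = d / (2 * (1 + d)). The random-perturbation hill-climbing below is an illustrative heuristic only.

```python
import numpy as np

def min_pairwise_distance(points):
    """Smallest distance between any two of the points."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    return d.min()

def spread_points(n, iters=20000, seed=0):
    """Naive random-perturbation search for n well-spread points in the unit square."""
    rng = np.random.default_rng(seed)
    pts = rng.random((n, 2))
    best = min_pairwise_distance(pts)
    step = 0.1
    for _ in range(iters):
        cand = np.clip(pts + rng.normal(scale=step, size=pts.shape), 0.0, 1.0)
        d = min_pairwise_distance(cand)
        if d > best:
            pts, best = cand, d
        step *= 0.9998              # slowly shrink the perturbation size
    return pts, best

pts, d = spread_points(n=9)
r = d / (2.0 * (1.0 + d))           # radius of the packed circles in the unit square
print(d, r)   # known optimum for n = 9: d = 0.5 (a 3x3 grid), r = 1/6; the heuristic only approximates it
```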
You may like...
Mem-elements for Neuromorphic Circuits… by Christos Volos, Viet-Thanh Pham (Paperback, R3,613)
Albert Gallatin's Vision of Democratic… by Louis B. Kuppenheimer (Hardcover, R2,042)
No Code Required - Giving Users Tools to… by Allen Cypher, Mira Dontcheva, … (Paperback, R1,151)
Emotion in Video Game Soundtracking by Duncan Williams, Newton Lee (Hardcover, R3,332)
Public Governance Paradigms - Competing… by Jacob Torfing, Lotte Bogh Andersen, … (Paperback, R956)