This book introduces Intensional First Order Logic (IFOL), a conservative extension of First Order Logic (FOL). The extension allows intensional semantics to be used for concepts, affording new and more intelligent IT systems. Because it is conservative, it preserves existing software applications and constitutes a fundamental advance relative to current RDB databases, Big Data with NewSQL, constraint databases, P2P systems, and Semantic Web applications. Moreover, the many-valued version of IFOL can support AI applications based on many-valued logics.
This book is particularly concerned with heuristic state-space search for combinatorial optimization. Its two central themes are the average-case complexity of state-space search algorithms and the application of the results, notably to branch-and-bound techniques. Primarily written for researchers in computer science, the book presupposes a basic familiarity with complexity theory and with the basic concepts of random variables and recursive functions. Two successful applications are presented in depth: one is a set of state-space transformation methods that can be used to find approximate solutions quickly, and the second is forward estimation for constructing more informative evaluation functions.
Designed for a proof-based course on linear algebra, this rigorous and concise textbook intentionally introduces vector spaces, inner products, and vector and matrix norms before Gaussian elimination and eigenvalues so students can quickly discover the singular value decomposition (SVD), arguably the most enlightening and useful of all matrix factorizations. Gaussian elimination is then introduced after the SVD and the four fundamental subspaces and is presented in the context of vector spaces rather than as a computational recipe. This allows the authors to use linear independence, spanning sets and bases, and the four fundamental subspaces to explain and exploit Gaussian elimination and the LU factorization, as well as the solution of overdetermined linear systems in the least squares sense and eigenvalues and eigenvectors. This unique textbook also includes examples and problems focused on concepts rather than the mechanics of linear algebra. The problems at the end of each chapter and on an associated website encourage readers to explore how to use the notions introduced in the chapter in a variety of ways. Additional problems, quizzes, and exams will be posted on an accompanying website and updated regularly. The Less Is More Linear Algebra of Vector Spaces and Matrices is for students and researchers interested in learning linear algebra who have the mathematical maturity to appreciate abstract concepts that generalize intuitive ideas. The early introduction of the SVD makes the book particularly useful for those interested in using linear algebra in applications such as scientific computing and data science. It is appropriate for a first proof-based course in linear algebra.
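As a quick illustration of why the SVD is singled out above (a sketch of ours, not an example from the book), a few lines of NumPy expose the numerical rank and two of the four fundamental subspaces of a matrix:

```python
# A = U @ diag(s) @ Vt. Columns of U up to the rank span the column space;
# rows of Vt beyond the rank span the null space. Matrix and tolerance are
# illustrative choices.
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0]])        # rank-1 matrix: row 2 = 2 * row 1

U, s, Vt = np.linalg.svd(A)
rank = int(np.sum(s > 1e-12))          # numerical rank from singular values
col_space = U[:, :rank]                # orthonormal basis for the column space
null_space = Vt[rank:].T               # orthonormal basis for the null space

assert rank == 1
assert np.allclose(A @ null_space, 0)  # null-space vectors are annihilated by A
```

The other two subspaces, the row space and the left null space, come from the leading rows of Vt and the trailing columns of U in the same way.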
Combinatorial optimization algorithms are used in many applications including the design, management, and operations of communication networks. The objective of this book is to advance and promote the theory and applications of combinatorial optimization in communication networks. Each chapter of the book is written by an expert dealing with theoretical, computational, or applied aspects of combinatorial optimization. Topics covered in the book include the combinatorial optimization problems arising in optical networks, wireless ad hoc networks, sensor networks, mobile communication systems, and satellite networks. A variety of problems are addressed using combinatorial optimization techniques, ranging from routing and resource allocation to QoS provisioning.
The fourteenth volume of the Second Edition covers central topics in philosophical logic that have been studied for thousands of years, since Aristotle: Inconsistency, Causality, Conditionals, and Quantifiers. These topics are central to many applications of logic across disciplines, and this book is indispensable to any advanced student or researcher using logic in these areas. The chapters are comprehensive and written by major figures in the field.
This text explains the fundamental principles of algorithms available for performing arithmetic operations on digital computers. These include basic arithmetic operations like addition, subtraction, multiplication, and division in fixed-point and floating-point number systems as well as more complex operations such as square root extraction and evaluation of exponential, logarithmic, and trigonometric functions. The algorithms described are independent of the particular technology employed for their implementation.
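As a flavor of the technology-independent algorithms the text describes (this sketch is ours, not taken from the book), here is classical shift-and-add multiplication, the basis of many hardware integer multipliers:

```python
# Shift-and-add multiplication for unsigned integers: scan the multiplier's
# bits from least to most significant, adding a shifted copy of the
# multiplicand for each set bit.
def shift_add_multiply(a: int, b: int) -> int:
    product = 0
    while b:
        if b & 1:            # add the multiplicand when the low bit is set
            product += a
        a <<= 1              # shift the multiplicand left by one bit
        b >>= 1              # consume one bit of the multiplier
    return product

assert shift_add_multiply(13, 11) == 143
```

The same loop structure, run in reverse with subtractions and comparisons, underlies restoring division.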
This is the first book on cut-elimination in first-order predicate logic from an algorithmic point of view. Instead of just proving the existence of cut-free proofs, it focuses on the algorithmic methods transforming proofs with arbitrary cuts to proofs with only atomic cuts (atomic cut normal forms, so-called ACNFs). The first part investigates traditional reductive methods from the point of view of proof rewriting. Within this general framework, generalizations of Gentzen's and Schütte-Tait's cut-elimination methods are defined and shown terminating with ACNFs of the original proof. Moreover, a complexity theoretic comparison of Gentzen's and Tait's methods is given. The core of the book centers around the cut-elimination method CERES (cut elimination by resolution) developed by the authors. CERES is based on the resolution calculus and radically differs from the reductive cut-elimination methods. The book shows that CERES asymptotically outperforms all reductive methods based on Gentzen's cut-reduction rules. It obtains this result by heavy use of subsumption theorems in clause logic. Moreover, several applications of CERES are given (to interpolation, complexity analysis of cut-elimination, generalization of proofs, and to the analysis of real mathematical proofs). Lastly, the book demonstrates that CERES can be extended to nonclassical logics, in particular to finitely-valued logics and to Gödel logic.
This volume of lecture notes briefly introduces the basic concepts needed in any computational physics course: software and hardware, programming skills, linear algebra, and differential calculus. It then presents more advanced numerical methods to tackle the quantum many-body problem: it reviews the numerical renormalization group and then focuses on tensor network methods, from basic concepts to gauge invariant ones. Finally, in the last part, the author presents some applications of tensor network methods to equilibrium and out-of-equilibrium correlated quantum matter. The book can be used for a graduate computational physics course. After successfully completing such a course, a student should be able to write a tensor network program and can begin to explore the physics of many-body quantum systems. The book can also serve as a reference for researchers working or starting out in the field.
Encompassing all the major topics students will encounter in courses on the subject, the authors teach both the underlying mathematical foundations and how these ideas are implemented in practice. They illustrate all the concepts with both worked examples and plenty of exercises, and, in addition, provide software so that students can try out numerical methods and so hone their skills in interpreting the results. As a result, this will make an ideal textbook for all those coming to the subject for the first time. Authors' note: A problem recently found with the software is due to a bug in Formula One, the third-party commercial software package that was used for the development of the interface. It occurs when the date, currency, etc. format is set to a non-United States version. Please try setting your computer's date/currency format to the United States option. The new version of Formula One, when ready, will be posted on the web.
Functions as a self-study guide and textbook containing over 110 examples and 165 problem sets with answers, a comprehensive solutions manual, and computer programs that clarify arithmetic concepts; ideal for a two-semester course in structural dynamics, analysis and design of seismic structures, matrix methods of structural analysis, numerical methods in structural engineering, and advanced structural mechanics and design. This book uses state-of-the-art computer technology to formulate the displacement method with matrix algebra, facilitating analysis of structural dynamics and applications to earthquake engineering and the UBC and IBC seismic building codes. It links code provisions to analytical derivations and compares individual specifications across codes, including the IBC-2000. With 3700 equations and 660 drawings and tables, Matrix Analysis of Structural Dynamics: Applications and Earthquake Engineering examines vibration of trusses, rigid and elastic frames, plane grid systems, and 3-D building systems with slabs, walls, bracings, beam-columns, and rigid zones; presents single and multiple degree-of-freedom systems and various response behaviors for different types of time-dependent excitations; outlines determinant, iteration, Jacobi, Choleski decomposition, and Sturm sequence eigensolution methods; details proportional and nonproportional damping, steady-state vibration for undamped harmonic excitation, and transient vibration for general forcing functions; includes P-Δ effects, elastic media, coupling vibrations, Timoshenko theory, and geometric and material nonlinearity; illustrates free and forced vibrations of frameworks and plates, stressing isoparametric finite element formulation; offers several numerical integration methods with solution criteria for error and stability behavior; and details models and computer calculations for bracings, RC beams and columns, coupling bending, and shear of low-rise walls, and more.
This monograph develops techniques for equational reasoning in higher-order logic. Due to its expressiveness, higher-order logic is used for specification and verification of hardware, software, and mathematics. In these applications, higher-order logic provides the necessary level of abstraction for concise and natural formulations. The main assets of higher-order logic are quantification over functions or predicates and its abstraction mechanism. These allow one to represent quantification in formulas and other variable-binding constructs. In this book, we focus on equational logic as a fundamental and natural concept in computer science and mathematics. We present calculi for equational reasoning modulo higher-order equations presented as rewrite rules. This is followed by a systematic development from general equational reasoning towards effective calculi for declarative programming in higher-order logic and λ-calculus. This aims at integrating and generalizing declarative programming models such as functional and logic programming. In these two prominent declarative computation models we can view a program as a logical theory and a computation as a deduction.
This book addresses the challenging tasks of verifying and debugging structurally complex multipliers. In the area of verification, the authors first investigate the challenges of Symbolic Computer Algebra (SCA)-based verification, when it comes to proving the correctness of multipliers. They then describe three techniques to improve and extend SCA: vanishing monomials removal, reverse engineering, and dynamic backward rewriting. This enables readers to verify a wide variety of multipliers, including highly complex and optimized industrial benchmarks. The authors also describe a complete debugging flow, including bug localization and fixing, to find the location of bugs in structurally complex multipliers and make corrections.
This volume contains papers which are based primarily on talks given at an international conference on Algorithmic Problems in Groups and Semigroups held at the University of Nebraska-Lincoln from May 11-May 16, 1998. The conference coincided with the Centennial Celebration of the Department of Mathematics and Statistics at the University of Nebraska-Lincoln on the occasion of the one hundredth anniversary of the granting of the first Ph.D. by the department. Funding was provided by the US National Science Foundation, the Department of Mathematics and Statistics, and the College of Arts and Sciences at the University of Nebraska-Lincoln, through the College's focus program in Discrete, Experimental and Applied Mathematics. The purpose of the conference was to bring together researchers with interests in algorithmic problems in group theory, semigroup theory and computer science. A particularly useful feature of this conference was that it provided a framework for the exchange of ideas between the research communities in semigroup theory and group theory, and several of the papers collected here reflect this interaction of ideas. The papers collected in this volume represent a cross section of the results and ideas that were discussed at the conference. They reflect a synthesis of overlapping ideas and techniques stimulated by problems concerning finite monoids, finitely presented monoids, finitely presented groups and free groups.
This book constitutes the refereed proceedings of the 22nd International TRIZ Future Conference on Automated Invention for Smart Industries, TFC 2022, which took place in Warsaw, Poland, in September 2022; the event was sponsored by IFIP WG 5.4. The 39 full papers presented were carefully reviewed and selected from 43 submissions. They are organized in the following thematic sections: New perspectives of TRIZ; AI in systematic innovation; systematic innovations supporting IT and AI; TRIZ applications; TRIZ education and ecosystem.
The book focuses on advanced computer algebra methods and special functions that have striking applications in the context of quantum field theory. It presents the state of the art and new methods for (infinite) multiple sums, multiple integrals, in particular Feynman integrals, difference and differential equations in the format of survey articles. The presented techniques emerge from interdisciplinary fields: mathematics, computer science and theoretical physics; the articles are written by mathematicians and physicists with the goal that both groups can learn from the other field, including most recent developments. Besides that, the collection of articles also serves as an up-to-date handbook of available algorithms/software that are commonly used or might be useful in the fields of mathematics, physics or other sciences.
This book discusses state estimation of stochastic dynamic systems from noisy measurements, specifically sequential Bayesian estimation and nonlinear or stochastic filtering. The class of solutions presented in this book is based on the Monte Carlo statistical method. Although the resulting algorithms, known as particle filters, have been around for more than a decade, the recent theoretical developments of sequential Bayesian estimation in the framework of random set theory have provided new opportunities which are not widely known and are covered in this book. This book is ideal for graduate students, researchers, scientists and engineers interested in Bayesian estimation.
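A minimal bootstrap particle filter conveys the flavor of the Monte Carlo approach the book covers. The sketch below is ours, not the book's: a 1-D random-walk state observed in Gaussian noise, with model, noise levels, and particle count chosen purely for illustration.

```python
# Bootstrap particle filter: propagate particles through the state model,
# weight them by the measurement likelihood, estimate by the weighted mean,
# then resample to avoid weight degeneracy.
import numpy as np

rng = np.random.default_rng(0)
n_particles, n_steps = 500, 20
q, r = 0.1, 0.5                       # process / measurement noise std devs

true_x = 0.0
particles = rng.normal(0.0, 1.0, n_particles)
estimates = []
for _ in range(n_steps):
    true_x += rng.normal(0.0, q)                   # simulate the hidden state
    y = true_x + rng.normal(0.0, r)                # noisy measurement
    particles += rng.normal(0.0, q, n_particles)   # propagate particles
    w = np.exp(-0.5 * ((y - particles) / r) ** 2)  # Gaussian likelihood weights
    w /= w.sum()
    estimates.append(np.dot(w, particles))         # weighted posterior mean
    idx = rng.choice(n_particles, n_particles, p=w)
    particles = particles[idx]                     # multinomial resampling

assert abs(estimates[-1] - true_x) < 1.0           # estimate tracks the state
```

Real implementations refine each step (e.g. resampling only when the effective sample size drops), but the predict-weight-resample cycle above is the core of every particle filter.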
Proceedings of the second conference on Applied Mathematics and Scientific Computing, held June 4-9, 2001 in Dubrovnik, Croatia. The main idea of the conference was to bring together applied mathematicians from academia as well as experts from other areas (engineering, applied sciences) whose work involves advanced mathematical techniques. The meeting featured a complete mini-course, invited presentations, contributed talks, and software presentations. The mini-course Schwarz Methods for Partial Differential Equations was given by Prof. Marcus Sarkis (Worcester Polytechnic Institute, USA), and invited presentations were given by active researchers from the fields of numerical linear algebra, computational fluid dynamics, matrix theory and mathematical physics (fluid mechanics and elasticity). This volume contains the mini-course and review papers by invited speakers (Part I), as well as selected contributed presentations from the fields of analysis, numerical mathematics, and engineering applications.
The nationwide research project 'Deduktion', funded by the Deutsche Forschungsgemeinschaft (DFG) for a period of six years, brought together almost all research groups within Germany engaged in the field of automated reasoning. Intensive cooperation and exchange of ideas led to considerable progress both in the theoretical foundations and in the application of deductive knowledge. This three-volume book covers these original contributions moulded into the state of the art of automated deduction. The three volumes are intended to document and advance a development in the field of automated deduction that can now be observed all over the world. Rather than restricting the interest to purely academic research, the focus now is on the investigation of problems derived from realistic applications. In fact, industrial applications are already pursued on a trial basis. In consequence, the emphasis of the volumes is not on the presentation of the theoretical foundations of logical deduction as such, as in a handbook; rather, the books present the concepts and methods now available in automated deduction in a form which can be easily accessed by scientists working in applications outside of the field of deduction. This reflects the strong conviction that automated deduction is on the verge of being fully included in the evolution of technology. Volume I focuses on basic research in deduction and on the knowledge on which modern deductive systems are based. Volume II presents techniques of implementation and details about system building. Volume III deals with applications of deductive techniques mainly, but not exclusively, to mathematics and the verification of software. Each chapter was read by two referees, one an international expert from abroad and the other a knowledgeable participant in the national project, and was accepted for inclusion on the basis of these review reports.
Audience: Researchers and developers in software engineering, formal methods, certification, verification, validation, specification of complex systems and software, expert systems, natural language processing.
A strong and fluent competency in mathematics is a necessary condition for scientific, technological and economic progress. However, it is widely recognized that problem solving, reasoning, and thinking processes are critical areas in which students' performance lags far behind what should be expected and desired. Mathematics is indeed an important subject, but it is also important to be able to use it in extra-mathematical contexts. Thinking strictly in terms of mathematics or thinking in terms of its relations with the real world involve quite different processes and issues. This book includes the revised papers presented at the NATO ARW "Information Technology and Mathematical Problem Solving Research," held in April 1991, in Viana do Castelo, Portugal, which focused on the implications of computerized learning environments and cognitive psychology research for these mathematical activities. In recent years, several committees, professional associations, and distinguished individuals throughout the world have put forward proposals to renew mathematics curricula, all emphasizing the importance of problem solving. In order to be successful, these reforming intentions require a theory-driven research base. But mathematics problem solving may be considered a "chaotic field" in which progress has been quite slow.
This book introduces the quantum mechanical framework to information retrieval scientists seeking a new perspective on foundational problems. As such, it concentrates on the main notions of the quantum mechanical framework and describes an innovative range of concepts and tools for modeling information representation and retrieval processes. The book is divided into four chapters. Chapter 1 illustrates the main modeling concepts for information retrieval (including Boolean logic, vector spaces, probabilistic models, and machine-learning based approaches), which will be examined further in subsequent chapters. Next, chapter 2 briefly explains the main concepts of the quantum mechanical framework, focusing on approaches linked to information retrieval such as interference, superposition and entanglement. Chapter 3 then reviews the research conducted at the intersection between information retrieval and the quantum mechanical framework. The chapter is subdivided into a number of topics, and each description ends with a section suggesting the most important reference resources. Lastly, chapter 4 offers suggestions for future research, briefly outlining the most essential and promising research directions to fully leverage the quantum mechanical framework for effective and efficient information retrieval systems. This book is especially intended for researchers working in information retrieval, database systems and machine learning who want to acquire a clear picture of the potential offered by the quantum mechanical framework in their own research area. Above all, the book offers clear guidance on whether, why and when to effectively use the mathematical formalism and the concepts of the quantum mechanical framework to address various foundational issues in information retrieval.
This is a thorough introduction to the fundamental concepts of functional programming. KEY TOPICS: The book clearly expounds the construction of functional programming as a process of mathematical calculation, but restricts itself to the mathematics relevant to actual program construction. It covers simple and abstract datatypes, numbers, lists, examples, trees, and efficiency. It includes a simple yet coherent treatment of the Haskell class system; a calculus of time complexity; and new coverage of monadic input-output. MARKET: For anyone interested in the theory and practice of functional programming.
This book is the first easy-to-read text on nonsmooth optimization (NSO, not necessarily differentiable optimization). Solving these kinds of problems plays a critical role in many industrial applications and real-world modeling systems, for example in the context of image denoising, optimal control, neural network training, data mining, economics, and computational chemistry and physics. The book covers both the theory and the numerical methods used in NSO and provides an overview of the different problems arising in the field. It is organized into three parts: 1. convex and nonconvex analysis and the theory of NSO; 2. test problems and practical applications; 3. a guide to NSO software. The book is ideal for anyone teaching or attending NSO courses. As an accessible introduction to the field, it is also well suited as an independent learning guide for practitioners already familiar with the basics of optimization.
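The defining feature of NSO is that gradients may not exist everywhere, so methods work with subgradients instead. A tiny sketch (our illustrative example, not from the book) minimizes the nonsmooth function f(x) = |x - 3| with the subgradient method and a diminishing step size:

```python
# Subgradient method on f(x) = |x - 3|, which is not differentiable at its
# minimizer x = 3. The 1/k step-size rule guarantees convergence for convex
# nonsmooth functions.
def subgradient(x: float) -> float:
    # any element of the subdifferential of |x - 3| at x
    return 1.0 if x > 3 else (-1.0 if x < 3 else 0.0)

x = 0.0
for k in range(1, 1001):
    x -= (1.0 / k) * subgradient(x)   # diminishing steps damp the oscillation

assert abs(x - 3.0) < 0.1
```

Unlike gradient descent, the iterates oscillate around the kink at x = 3 rather than settling exactly on it; the shrinking steps are what force convergence.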
The aim of this textbook is to present an account of the theory of computation. After introducing the concept of a model of computation and presenting various examples, the author explores the limitations of effective computation via basic recursion theory. Self-reference and other methods are introduced as fundamental and basic tools for constructing and manipulating algorithms. From there the book considers the complexity of computations and the notion of a complexity measure is introduced. Finally, the book culminates in considering time and space measures and in classifying computable functions as being either feasible or not. The author assumes only a basic familiarity with discrete mathematics and computing, making this textbook ideal for a graduate-level introductory course. It is based on many such courses presented by the author and so numerous exercises are included. In addition, the solutions to most of these exercises are provided.
In recent years, deep learning has fundamentally changed the landscapes of a number of areas in artificial intelligence, including speech, vision, natural language, robotics, and game playing. In particular, the striking success of deep learning in a wide variety of natural language processing (NLP) applications has served as a benchmark for the advances in one of the most important tasks in artificial intelligence. This book reviews the state of the art of deep learning research and its successful applications to major NLP tasks, including speech recognition and understanding, dialogue systems, lexical analysis, parsing, knowledge graphs, machine translation, question answering, sentiment analysis, social computing, and natural language generation from images. Outlining and analyzing various research frontiers of NLP in the deep learning era, it features self-contained, comprehensive chapters written by leading researchers in the field. A glossary of technical terms and commonly used acronyms in the intersection of deep learning and NLP is also provided. The book appeals to advanced undergraduate and graduate students, post-doctoral researchers, lecturers and industrial researchers, as well as anyone interested in deep learning and natural language processing.
In 1994 Peter Shor [65] published a factoring algorithm for a quantum computer that finds the prime factors of a composite integer N more efficiently than is possible with the known algorithms for a classical computer. Since the difficulty of the factoring problem is crucial for the security of public key encryption systems, interest (and funding) in quantum computing and quantum computation suddenly blossomed. Quantum computing had arrived. The study of the role of quantum mechanics in the theory of computation seems to have begun in the early 1980s with the publications of Paul Benioff [6], [7], who considered a quantum mechanical model of computers and the computation process. A related question was discussed shortly thereafter by Richard Feynman [35], who began from a different perspective by asking what kind of computer should be used to simulate physics. His analysis led him to the belief that with a suitable class of "quantum machines" one could imitate any quantum system.
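The structure of Shor's algorithm can be glimpsed classically: factoring N reduces to finding the multiplicative order r of some a modulo N, after which gcd(a^(r/2) ± 1, N) yields nontrivial factors. In the sketch below (ours, for illustration) the order is found by brute force, which is exactly the exponential step the quantum subroutine replaces:

```python
# Classical reduction of factoring to order finding, the backbone of Shor's
# algorithm. Only the order() routine is exponential; Shor's quantum circuit
# computes it in polynomial time.
from math import gcd

def order(a: int, n: int) -> int:
    """Smallest r > 0 with a**r == 1 (mod n), for a coprime to n."""
    r, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

N, a = 15, 7                 # a must share no factor with N
r = order(a, N)              # r = 4 here; this is the quantum-accelerated step
assert r % 2 == 0            # the algorithm retries with a new a if r is odd
p = gcd(pow(a, r // 2) - 1, N)
q = gcd(pow(a, r // 2) + 1, N)
assert p * q == N            # gcd(48, 15) = 3 and gcd(50, 15) = 5
```

Everything here runs on a classical computer; the entire quantum speed-up of Shor's algorithm lives inside the order-finding step.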