This special conference volume will be of immense use to researchers and academics. The conference gives academics, technocrats, and researchers an opportunity to interact with eminent figures in the field of Applied Mathematics and Scientific Computing, and the topics covered are comprehensive enough to develop an understanding of new developments and emerging trends in this area. High-Performance Computing (HPC) systems have gone through many changes in their architectural design during the past two decades to satisfy increasingly large-scale scientific computing demands. Accurate, fast, and scalable performance models and simulation tools are essential for evaluating alternative architecture design decisions for massive-scale computing systems. This volume recounts some of the influential work in modeling and simulation for HPC systems and applications, identifies some of the major challenges, and outlines future research directions that the contributors believe are critical to the HPC modeling and simulation community.
This monograph treats comprehensively the central aspects of string rewriting systems in the form of semi-Thue systems. These are so general as to enable the discussion of all the basic notions and questions that arise in arbitrary replacement systems as used in various areas of computer science. The Church-Rosser property is used in its original meaning, and the existence of complete monoid and group presentations is the central point of discussion. Decidability problems and their complexity are surveyed, and congruential languages, including the deterministic context-free NTS languages, are discussed. The book contains a number of generalizations of results published elsewhere, e.g., the uniqueness of complete string rewriting systems with respect to the underlying order. Completely new and unpublished results, which serve as an exposition of techniques and new methods, are discussed in detail. With the help of semi-Thue systems it is shown in which situations the famous Knuth-Bendix completion method fails to terminate and why, and that complete replacement systems cannot always be used as algorithms to solve the word problem. It is suggested how these situations can be handled by imposing a certain control under which the rewriting is performed. This monograph is a reference for graduate students and active researchers in theoretical computer science. The reader is led to the forefront of current research in the area of string rewriting and monoid presentations.
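The word-problem algorithm mentioned in this blurb is easy to see in miniature: given a complete (terminating and confluent) string rewriting system, two words denote the same monoid element exactly when they reduce to the same normal form. The sketch below is an illustrative assumption, not code from the book; the one-rule system {"ba" -> "ab"} is a toy complete presentation of the free commutative monoid on {a, b}.

```python
# A minimal sketch: deciding the word problem by reduction to normal
# form under a complete string rewriting system. The rule set below is
# an illustrative toy, not an example taken from the book.

RULES = [("ba", "ab")]

def normal_form(word: str) -> str:
    """Rewrite at the leftmost redex until no rule applies."""
    changed = True
    while changed:
        changed = False
        for lhs, rhs in RULES:
            i = word.find(lhs)
            if i != -1:
                word = word[:i] + rhs + word[i + len(lhs):]
                changed = True
                break
    return word

def equal_in_monoid(u: str, v: str) -> bool:
    # For a complete system, words are equal in the presented monoid
    # exactly when their normal forms coincide.
    return normal_form(u) == normal_form(v)

print(normal_form("babba"))            # 'aabbb'
print(equal_in_monoid("ab", "ba"))     # True: same element
print(equal_in_monoid("ab", "abb"))    # False
```

Termination of the loop is exactly the property the Knuth-Bendix discussion in the book is about: for a non-terminating system, normal_form need not halt.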
The sequential quadratic hamiltonian (SQH) method is a novel numerical optimization procedure for solving optimal control problems governed by differential models. It is based on the characterisation of optimal controls in the framework of the Pontryagin maximum principle (PMP). The SQH method is a powerful computational methodology that is capable of development in many directions. The Sequential Quadratic Hamiltonian Method: Solving Optimal Control Problems discusses its analysis and use in solving nonsmooth ODE control problems, relaxed ODE control problems, stochastic control problems, mixed-integer control problems, PDE control problems, inverse PDE problems, differential Nash game problems, and problems related to residual neural networks. This book may serve as a textbook for undergraduate and graduate students, and as an introduction for researchers in sciences and engineering who intend to further develop the SQH method or wish to use it as a numerical tool for solving challenging optimal control problems and for investigating the Pontryagin maximum principle on new optimisation problems. Features: provides insight into mathematical and computational issues concerning optimal control problems, while discussing many differential models of interest in different disciplines; suitable for undergraduate and graduate students and as an introduction for researchers in sciences and engineering; accompanied by codes which allow the reader to apply the SQH method to solve many different optimal control and optimisation problems.
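To give a feel for the iteration pattern described above, here is a minimal SQH-style sketch for a scalar linear-quadratic problem. Everything here is an illustrative assumption rather than the book's own codes: the dynamics x' = x + u, the cost ∫(x² + αu²)dt, the Euler time stepping, and the step constants. The generic pattern is the SQH one: forward state solve, backward adjoint solve, pointwise minimisation of an ε-augmented Hamiltonian, and an acceptance test on the cost.

```python
import numpy as np

# A minimal SQH-style sketch (all problem data are illustrative
# assumptions): minimise J = ∫ x^2 + alpha*u^2 dt subject to
# x' = x + u, x(0) = 1, on [0, 1], with explicit Euler stepping.

alpha, T, N = 0.1, 1.0, 200
dt = T / N

def forward(u):
    x = np.empty(N + 1)
    x[0] = 1.0
    for k in range(N):
        x[k + 1] = x[k] + dt * (x[k] + u[k])
    return x

def backward(x):
    p = np.zeros(N + 1)                   # terminal condition p(T) = 0
    for k in range(N, 0, -1):             # adjoint equation p' = -(2x + p)
        p[k - 1] = p[k] + dt * (2.0 * x[k] + p[k])
    return p

def cost(x, u):
    return dt * np.sum(x[:-1] ** 2 + alpha * u ** 2)

u = np.zeros(N)                           # initial control guess
x = forward(u)
J = cost(x, u)
eps = 1.0                                 # quadratic penalty weight
for _ in range(100):
    p = backward(x)
    # pointwise minimiser of the augmented Hamiltonian
    #   alpha*v^2 + p*v + eps*(v - u)^2   (v-independent terms dropped)
    u_new = (2.0 * eps * u - p[:-1]) / (2.0 * (alpha + eps))
    x_new = forward(u_new)
    J_new = cost(x_new, u_new)
    if J_new <= J - 1e-8 * np.sum((u_new - u) ** 2):
        u, x, J, eps = u_new, x_new, J_new, 0.8 * eps   # accept step
    else:
        eps *= 2.0                        # reject; penalise larger updates
print(f"final cost J = {J:.6f}")
```

The adaptive ε is the characteristic SQH ingredient: it enforces a sufficient decrease of the cost without requiring differentiability of the Hamiltonian minimisation step.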
This book is intended as a study aid for the finite element method. Based on the free computer algebra system Maxima, we offer routines to symbolically or numerically solve two-dimensional problems. For this rather advanced topic, classical 'hand calculations' are difficult to perform, and incorporating a computer algebra system is a convenient way to handle, for example, larger matrix operations. The mechanical theories focus on the classical two-dimensional structural elements, i.e., plane elements, thin or classical plates, and thick or shear-deformable plate elements. The use of a computer algebra system and its built-in functions, e.g., for matrix operations, allows the reader to focus more on the methodology of the finite element method and less on standard procedures. Furthermore, we offer a graphical user interface (GUI) to facilitate the model definition. Thus, the user may enter the required definitions in a source code manner directly in wxMaxima, or use the GUI, which executes wxMaxima to perform the calculations.
For some years, the specification of software and hardware systems has been influenced not only by algebraic methods but also by new developments in logic. These new developments in logic are partly based on the use of algorithmic techniques in deduction and proving methods, but are also due to new theoretical advances, to a great extent stimulated by computer science, which have led to new types of logic and new logical calculi. The new techniques, methods and tools from logic, combined with algebra-based ones, offer very powerful and useful tools for the computer scientist, which may soon become practical for commercial use, where, in particular, more powerful specification tools are needed for concurrent and distributed systems. This volume contains papers based on lectures by leading researchers which were originally given at an international summer school held in Marktoberdorf in 1991. The papers aim to give a foundation for combining logic and algebra for the purposes of specification under the aspects of automated deduction, proving techniques, concurrency and logic, abstract data types and operational semantics, and constructive methods.
This book provides an introduction to decision making in a distributed computational framework. Classical detection theory assumes a centralized configuration. All observations are processed by a central processor to produce the decision. In the decentralized detection system, distributed detectors generate decisions based on locally available observations; these decisions are then conveyed to the fusion center that makes the global decision. Using numerous examples throughout the book, the author discusses such distributed detection processes under several different formulations and in a wide variety of network topologies.
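The contrast between the two configurations is easy to simulate. The sketch below uses purely illustrative assumptions (five sensors, a unit mean shift in unit-variance Gaussian noise, equiprobable hypotheses, a simple majority fusion rule) to compare a decentralized detector, where only one-bit local decisions reach the fusion center, against a centralized detector that sees all raw observations.

```python
import numpy as np

# A minimal sketch of decentralized detection with majority-rule fusion.
# All parameters (K, mu, thresholds, priors) are illustrative assumptions.

rng = np.random.default_rng(0)
K, trials, mu = 5, 100_000, 1.0        # sensors, Monte Carlo runs, mean shift

signal_present = rng.random(trials) < 0.5            # equiprobable hypotheses
obs = rng.normal(0.0, 1.0, (trials, K)) + mu * signal_present[:, None]

local_bits = obs > mu / 2.0                          # one-bit local decisions
fused = local_bits.sum(axis=1) > K // 2              # majority-rule fusion

centralized = obs.mean(axis=1) > mu / 2.0            # all raw data centralized

print("decentralized accuracy:", np.mean(fused == signal_present))
print("centralized accuracy:  ", np.mean(centralized == signal_present))
```

Under these assumptions the one-bit scheme gives up a few points of accuracy relative to the centralized detector; quantifying and optimizing that trade-off across formulations and network topologies is the kind of question the book treats.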
Presenting a strong and clear relationship between theory and practice, Linear and Integer Optimization: Theory and Practice is divided into two main parts. The first covers the theory of linear and integer optimization, including both basic and advanced topics. Dantzig's simplex algorithm, duality, sensitivity analysis, integer optimization models, and network models are introduced. More advanced topics are also presented, including interior point algorithms, the branch-and-bound algorithm, cutting planes, complexity, standard combinatorial optimization models, the assignment problem, minimum cost flow, and the maximum flow/minimum cut theorem. The second part applies theory through real-world case studies. The authors discuss advanced techniques such as column generation, multiobjective optimization, dynamic optimization, machine learning (support vector machines), combinatorial optimization, approximation algorithms, and game theory. Besides a fresh new layout and completely redesigned figures, this new edition incorporates modern examples and applications of linear optimization. The book now includes computer code in the form of models in the GNU Mathematical Programming Language (GMPL). The models and corresponding data files are available for download and can be readily solved using the provided online solver. This new edition also contains appendices covering mathematical proofs, linear algebra, graph theory, convexity, and nonlinear optimization. All chapters contain extensive examples and exercises. This textbook is ideal for courses for advanced undergraduate and graduate students in various fields, including mathematics, computer science, industrial engineering, operations research, and management science.
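Readers without the GMPL toolchain can still get a feel for the basic models with any LP solver. The sketch below poses a small, made-up two-variable linear program (the problem data are illustrative assumptions, not taken from the book) and solves it with SciPy's linprog, whose default HiGHS backend implements modern simplex and interior-point methods.

```python
from scipy.optimize import linprog

# A small illustrative LP (data are assumptions, not from the book):
#   maximize 3x + 5y
#   subject to  x + 2y <= 14,  3x - y >= 0,  x - y <= 2,  x, y >= 0.
# linprog minimizes, so the objective is negated.

c = [-3.0, -5.0]                      # minimize -(3x + 5y)
A_ub = [[1.0, 2.0],                   # x + 2y <= 14
        [-3.0, 1.0],                  # 3x - y >= 0  rewritten as -3x + y <= 0
        [1.0, -1.0]]                  # x - y <= 2
b_ub = [14.0, 0.0, 2.0]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.x, -res.fun)                # optimal point and objective value
```

For this data the solver reports the vertex x = 6, y = 4 with objective value 38, where the first and third constraints are binding, which is precisely the geometric picture behind the simplex algorithm covered in the first part of the book.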
This book organizes principles and methods of signal processing and machine learning into the framework of coherence. The book contains a wealth of classical and modern methods of inference, some reported here for the first time. General results are applied to problems in communications, cognitive radio, passive and active radar and sonar, multi-sensor array processing, spectrum analysis, hyperspectral imaging, subspace clustering, and related fields. The reader will find new results for model fitting; for dimension reduction in models and ambient spaces; for detection, estimation, and space-time series analysis; for subspace averaging; and for uncertainty quantification. Throughout, the transformation invariances of statistics are clarified, geometries are illuminated, and null distributions are given where tractable. Stochastic representations are emphasized, as these are central to Monte Carlo simulations. The appendices contain a comprehensive account of matrix theory, the SVD, the multivariate normal distribution, and many of the important distributions for coherence statistics. The book begins with a review of classical results in the physical and engineering sciences where coherence plays a fundamental role. Then least squares theory and the theory of minimum mean-squared error estimation are developed, with special attention paid to statistics that may be interpreted as coherence statistics. A chapter on classical hypothesis tests for covariance structure introduces the next three chapters on matched and adaptive subspace detectors. These detectors are derived from likelihood reasoning, but it is their geometries and invariances that qualify them as coherence statistics. A chapter on independence testing in space-time data sets leads to a definition of broadband coherence, and contains novel applications to cognitive radio and the analysis of cyclostationarity. The chapter on subspace averaging reviews basic results and derives an order-fitting rule for determining the dimension of an average subspace. These results are used to enumerate sources of acoustic and electromagnetic radiation and to cluster subspaces into similarity classes. The chapter on performance bounds and uncertainty quantification emphasizes the geometry of the Cramér-Rao bound and its related information geometry.
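As a small taste of the central notion, the sketch below estimates the magnitude-squared coherence between two noisy channels that share a common 50 Hz tone, using SciPy's Welch-based coherence estimator. All signal parameters (sampling rate, tone frequency, noise levels, segment length) are illustrative assumptions.

```python
import numpy as np
from scipy.signal import coherence

# Magnitude-squared coherence between two channels sharing a 50 Hz
# component in independent Gaussian noise (parameters are illustrative).

rng = np.random.default_rng(1)
fs, n = 1024.0, 8192
t = np.arange(n) / fs
common = np.sin(2 * np.pi * 50.0 * t)      # component shared by both channels
x = common + rng.normal(0.0, 1.0, n)
y = common + rng.normal(0.0, 1.0, n)

f, Cxy = coherence(x, y, fs=fs, nperseg=512)
print("coherence near 50 Hz: ", Cxy[np.argmin(np.abs(f - 50.0))])
print("median off-band value:", np.median(Cxy))
```

The estimate sits near one at the shared frequency and near zero elsewhere, which is the basic detection-by-coherence picture the book develops into full statistical theory.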
With the development of Big Data platforms for managing massive amounts of data and the wide availability of tools for processing these data, the biggest limitation is the lack of trained experts who are qualified to process and interpret the results. This textbook is intended for graduate students and for experts using methods of cluster analysis and its applications in various fields. Suitable for an introductory course on cluster analysis or data mining, with an in-depth mathematical treatment that includes discussions of different measures, primitives (points, lines, etc.) and optimization-based clustering methods, Cluster Analysis and Applications also covers deep learning based clustering methods. With clear explanations of ideas and precise definitions of concepts, accompanied by numerous examples and exercises together with Mathematica programs and modules, Cluster Analysis and Applications may be used by students and researchers in various disciplines working in data analysis or data science.
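The book's own programs are in Mathematica, but the optimization-based clustering it describes is easy to sketch in a few lines of Python. Below is a minimal Lloyd's k-means on synthetic two-dimensional data; the data, the choice k = 3, and the iteration count are all illustrative assumptions.

```python
import numpy as np

# A minimal Lloyd's k-means sketch (synthetic data and parameters are
# illustrative assumptions; the book itself works in Mathematica).

rng = np.random.default_rng(2)
X = np.vstack([rng.normal(c, 0.3, (100, 2)) for c in ((0, 0), (3, 0), (0, 3))])

def kmeans(X, k, iters=50):
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # assignment step: nearest center for every point
        labels = np.argmin(((X[:, None, :] - centers) ** 2).sum(axis=2), axis=1)
        # update step: move each center to the mean of its points
        centers = np.array([X[labels == j].mean(axis=0) if (labels == j).any()
                            else centers[j] for j in range(k)])
    return centers, labels

centers, labels = kmeans(X, k=3)
print(np.round(centers, 2))      # close to the three true cluster means
```

The two alternating steps each decrease the within-cluster sum of squares, which is what makes k-means the prototypical optimization-based clustering method.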
Three approaches can be applied to determine the performance of parallel and distributed computer systems: measurement, simulation, and mathematical methods. This book introduces various network architectures for parallel and distributed systems as well as for systems-on-chips, and presents a strategy for developing a generator for automatic model derivation. It will appeal to researchers and students in network architecture design and performance analysis.
In this book we develop powerful techniques based on formal methods for the verification of correctness, consistency and safety properties related to dynamic reconfiguration and communication in complex distributed systems. In particular, static analysis techniques based on types and type systems are an adequate methodology, considering their success in guaranteeing not only basic safety properties but also more sophisticated ones, such as deadlock or lock freedom, in concurrent settings. The main contributions of this book are twofold. i) We design a type system for a concurrent object-oriented calculus to statically ensure consistency of dynamic reconfigurations. ii) We define an encoding of the session pi-calculus, which models communication in distributed systems, into the standard typed pi-calculus. We use this encoding to derive properties such as type safety and progress in the session pi-calculus by exploiting the corresponding properties in the standard typed pi-calculus.
This book provides an advanced understanding of cyber threats as well as of the risks companies are facing. It includes a detailed analysis of many technologies and approaches important to decreasing, mitigating or remediating those threats and risks. The cyber security technologies discussed in this book are both current and emerging. Advanced security topics such as secure remote work, data security, network security, application and device security, cloud security, and cyber risk and privacy are presented. At the end of every chapter, an evaluation of the topic from a CISO's perspective is provided. This book also addresses quantum computing, artificial intelligence and machine learning for cyber security. The opening chapters describe the power and danger of quantum computing, proposing two solutions for protection from probable quantum computer attacks: the tactical enhancement of existing algorithms to make them quantum-resistant, and the strategic implementation of quantum-safe algorithms and cryptosystems. The following chapters make the case for using supervised and unsupervised AI/ML to develop predictive, prescriptive, cognitive and auto-reactive threat detection, mitigation, and remediation capabilities against advanced attacks perpetrated by sophisticated threat actors, APTs and polymorphic/metamorphic malware. CISOs must be concerned about ongoing sophisticated cyber-attacks, and can address them with advanced security measures. The latter half of this book discusses some current sophisticated cyber-attacks and available protective measures enabled by the advancement of cybersecurity capabilities in various IT domains. Chapters 6-10 discuss secure remote work; chapters 11-17, advanced data security paradigms; chapters 18-28, network security; chapters 29-35, application and device security; chapters 36-39, cloud security; and chapters 40-46, organizational cyber risk measurement and event probability. Security and IT engineers, administrators and developers, CIOs, CTOs, CISOs, and CFOs will want to purchase this book. Risk personnel, CROs, IT and security auditors, as well as security researchers and journalists, will also find it useful.
This advanced textbook presents a broad and up-to-date view of the computational complexity theory of Boolean circuits. It combines the algorithmic and the computability-based approach, and includes extensive discussion of the literature to facilitate further study. It begins with efficient Boolean circuits for problems with high practical relevance, e.g., arithmetic operations, sorting, and transitive closure, then compares the computational model of Boolean circuits with other models such as Turing machines and parallel machines. Examination of the complexity of specific problems leads to the definition of complexity classes. The theory of circuit complexity classes is then thoroughly developed, including the theory of lower bounds and advanced topics such as connections to algebraic structures and to finite model theory.
This classroom-tested and clearly-written textbook presents a focused guide to the conceptual foundations of compilation, explaining the fundamental principles and algorithms used for defining the syntax of languages, and for implementing simple translators. This significantly updated and expanded third edition has been enhanced with additional coverage of regular expressions, visibly pushdown languages, bottom-up and top-down deterministic parsing algorithms, and new grammar models. Topics and features: describes the principles and methods used in designing syntax-directed applications such as parsing and regular expression matching; covers translations, semantic functions (attribute grammars), and static program analysis by data flow equations; introduces an efficient method for string matching and parsing suitable for ambiguous regular expressions (NEW); presents a focus on extended BNF grammars with their general parser and with LR(1) and LL(1) parsers (NEW); introduces a parallel parsing algorithm that exploits multiple processing threads to speed up syntax analysis of large files; discusses recent formal models of input-driven automata and languages (NEW); includes extensive use of theoretical models of automata, transducers and formal grammars, and describes all algorithms in pseudocode; contains numerous illustrative examples, and supplies a large set of exercises with solutions at an associated website. Advanced undergraduate and graduate students of computer science will find this reader-friendly textbook to be an invaluable guide to the essential concepts of syntax-directed compilation. The fundamental paradigms of language structures are elegantly explained in terms of the underlying theory, without requiring the use of software tools or knowledge of implementation, and through algorithms simple enough to be practiced with paper and pencil.
This textbook grew out of notes for the ECE143 Programming for Data Analysis class that the author has been teaching at the University of California, San Diego, where it is a requirement for both graduate and undergraduate degrees in Machine Learning and Data Science. This book is ideal for readers with some Python programming experience. It covers key language concepts that must be understood to program effectively, especially for data analysis applications. Certain low-level language features are discussed in detail, especially Python memory management and data structures. Using Python effectively means taking advantage of its vast ecosystem. The book discusses Python package management and how to use third-party modules, as well as how to structure your own Python modules. The section on object-oriented programming explains features of the language that facilitate common programming patterns. After developing the key Python language features, the book moves on to third-party modules that are foundational for effective data analysis, starting with Numpy. The book develops key Numpy concepts and discusses internal Numpy array data structures and memory usage. Then, the author moves on to Pandas and details its many features for data processing and alignment. Because strong visualizations are important for communicating data analysis, key modules such as Matplotlib are developed in detail, along with web-based options such as Bokeh, Holoviews, Altair, and Plotly. The text is sprinkled with many tricks of the trade that help avoid common pitfalls. The author explains the internal logic embodied in the Python language so that readers can get into the Python mindset and make better design choices in their code, which is especially helpful for newcomers to both Python and data analysis. To get the most out of this book, open a Python interpreter and type along with the many code samples.
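A one-minute illustration of the Numpy memory model mentioned in this blurb: slicing creates a view with new strides over the same underlying buffer, not a copy. This is a generic Numpy fact rather than an excerpt from the book.

```python
import numpy as np

# Slices are views: new shape and strides, same memory buffer.

a = np.arange(12, dtype=np.int64).reshape(3, 4)
print(a.strides)           # (32, 8): bytes to step one row / one column

b = a[:, ::2]              # every other column: a view, no data copied
print(b.base is a)         # True -> b shares a's buffer
b[0, 0] = 99
print(a[0, 0])             # 99: the change is visible through a

c = a[:, ::2].copy()       # an explicit copy decouples the memory
```

Knowing when an operation returns a view versus a copy is exactly the kind of low-level detail that prevents both subtle aliasing bugs and unnecessary memory traffic in data analysis code.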
BASIC Microcomputing and Biostatistics is designed as the first practical "how to" guide to both computer programming in BASIC and the statistical data processing techniques needed to analyze experimental, clinical, and other numerical data. It provides a small vocabulary of essential computer statements and shows how they are used to solve problems in the biological, physical, and medical sciences. No mathematical background beyond algebra and an inkling of the principles of calculus is assumed. All more advanced mathematical techniques are developed from "scratch" before they are used. The computing language is BASIC, a high-level language that is easy to learn and widely available using time-sharing computer systems and personal microcomputers. The strategy of the book is to present computer programming at the outset and to use it throughout. BASIC is developed in a way reminiscent of graded readers used in human languages; the first programs are so simple that they can be read almost without an introduction to the language. Each program thereafter contains new vocabulary and one or more concepts, explained in the text, not used in the previous ones. By gradual stages, the reader can progress from programs that do nothing more than count from one to ten to sophisticated programs for nonlinear curve fitting, matrix algebra, and multiple regression. There are 33 working programs and, except for the introductory ones, each performs a useful function in everyday data processing problems encountered by the experimentalist in many diverse fields.
AI Metaheuristics for Information Security in Digital Media examines the latest developments in AI-based metaheuristic algorithms with applications in information security for digital media. It highlights the importance of several security parameters, their analysis, and their validation for different practical applications. Drawing on multidisciplinary research including computer vision, machine learning, artificial intelligence, and modified or newly developed metaheuristic algorithms, it will enhance information security for society. It includes state-of-the-art research with illustrations and exercises throughout.
Starting with an introduction to the numerous features of Mathematica®, this book continues with more complex material. It provides the reader with many examples and illustrations of how Mathematica® can be used to advantage. Composed of eleven chapters, it includes the following: a chapter on several sorting algorithms; functions (planar and solid) with many interesting examples; ordinary differential equations; the advantages of Mathematica® in dealing with the number π; and the power of Mathematica® in working with optimal control problems. Introduction to Mathematica® with Applications will appeal to researchers, professors, and students requiring a computational tool.
This volume contains refereed papers and extended abstracts of papers presented at the NATO Advanced Research Workshop entitled 'Numerical Integration: Recent Developments, Software and Applications', held at Dalhousie University, Halifax, Canada, August 11-15, 1986. The Workshop was attended by thirty-six scientists from eleven NATO countries. Thirteen invited lectures and twenty-two contributed lectures were presented, of which twenty-five appear in full in this volume, together with extended abstracts of the remaining ten. It is more than ten years since the last workshop of this nature was held, in Los Alamos in 1975. Many developments have occurred in quadrature in the intervening years, and it seemed an opportune time to bring together again researchers in this area. The development of QUADPACK by Piessens, de Doncker, Überhuber and Kahaner has changed the focus of research in the area of one dimensional quadrature from the construction of new rules to an emphasis on reliable robust software. There has been a dramatic growth in interest in the testing and evaluation of software, stimulated by the work of Lyness and Kaganove, Einarsson, and Piessens. The earlier research of Patterson into Kronrod extensions of Gauss rules, followed by the work of Monegato, and Piessens and Branders, has greatly increased interest in Gauss-based formulas for one-dimensional integration.
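QUADPACK's influence is still directly visible today: SciPy's quad routine wraps it and returns both an estimate and an error estimate from adaptive, Gauss-Kronrod based quadrature of exactly the kind this workshop discussed. The integrand below is an arbitrary illustrative choice.

```python
import numpy as np
from scipy.integrate import quad

# scipy.integrate.quad wraps the QUADPACK routines mentioned above.
# Integrate exp(-x^2) over [0, inf); the exact value is sqrt(pi)/2.

value, err = quad(lambda x: np.exp(-x**2), 0.0, np.inf)
print(value, err)                 # estimate and its reported error estimate
print(np.sqrt(np.pi) / 2.0)       # exact value for comparison
```

The returned error estimate, rather than just the integral value, reflects the shift this volume describes: from constructing new rules to building reliable, self-assessing software.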
This volume represents the refereed proceedings of the Eighth International Conference on Monte Carlo and Quasi-Monte Carlo Methods in Scientific Computing, which was held at the University of Montreal, 6-11 July 2008. It contains a limited selection of articles based on presentations made at the conference. The program was arranged with the help of an international committee consisting of: Ronald Cools, Katholieke Universiteit Leuven; Luc Devroye, McGill University; Henri Faure, CNRS Marseille; Paul Glasserman, Columbia University; Peter W. Glynn, Stanford University; Stefan Heinrich, University of Kaiserslautern; Fred J. Hickernell, Illinois Institute of Technology; Aneta Karaivanova, Bulgarian Academy of Science; Alexander Keller, mental images GmbH, Berlin; Adam Kolkiewicz, University of Waterloo; Frances Y. Kuo, University of New South Wales; Christian Lécot, Université de Savoie, Chambéry; Pierre L'Ecuyer, Université de Montréal (Chair and organizer); Jun Liu, Harvard University; Peter Mathé, Weierstrass Institute Berlin; Makoto Matsumoto, Hiroshima University; Thomas Müller-Gronbach, Otto von Guericke Universität; Harald Niederreiter, National University of Singapore; Art B. Owen, Stanford University; Gilles Pagès, Université Pierre et Marie Curie (Paris 6); Klaus Ritter, TU Darmstadt; Karl Sabelfeld, Weierstrass Institute Berlin; Wolfgang Ch. Schmid, University of Salzburg; Ian H. Sloan, University of New South Wales; Jerome Spanier, University of California, Irvine; Bruno Tuffin, IRISA-INRIA, Rennes; and Henryk Woźniakowski, Columbia University. The local arrangements (program production, publicity, web site, registration, social events, etc.
Computers are essential for the functioning of our society. Despite the incredible power of existing computers, computing technology is progressing beyond today's conventional models. Quantum computing (QC) is surfacing as a promising disruptive technology. QC is built on the principles of quantum mechanics. QC can run algorithms that are not trivial to run on digital computers. QC systems are being developed for the discovery of new materials and drugs, and for improved methods of encoding information for secure communication over the Internet. Unprecedented new uses for this technology are bound to emerge from ongoing research. The development of conventional digital computing technology for the arts and humanities has been progressing in tandem with the evolution of computers since the 1950s. Today, computers are absolutely essential for the arts and humanities. Therefore, future developments in QC are most likely to impact the way in which artists will create and perform, and how research in the humanities will be conducted. This book presents a comprehensive collection of chapters by pioneers of emerging interdisciplinary research at the crossroads of quantum computing and the arts and humanities, from philosophy and social sciences to visual arts and music. Prof. Eduardo Reck Miranda is a composer and a professor in Computer Music at Plymouth University, UK, where he is a director of the Interdisciplinary Centre for Computer Music Research (ICCMR). His previous publications include the Springer titles Handbook of Artificial Intelligence for Music, Guide to Unconventional Computing for Music, Guide to Brain-Computer Music Interfacing and Guide to Computing for Expressive Music Performance.
This book reviews evidence for the existence of information-storing states in specific material systems called topological materials. It discusses how quantum computation, a possible technology for the future, demands unique paradigms in which the information-storing states are not disturbed by classical forces. They are protected from environmental disturbance, suggesting that whatever information is stored in such states would be safe forever. The authors explain how the topological aspect arises from the configuration or the shape of energy space. They further explain that the existence of the related topological states has not been conclusively established, in spite of significant experimental effort over the past decade. The book thus illustrates the necessity for such investigations, as well as the application of topological states to new computational technologies. The scope of coverage includes all the necessary mathematical and physics preliminaries (starting at the undergraduate level), enabling researchers to quickly understand the state-of-the-art literature.
This book comprises selected peer-reviewed papers presented at the 7th Topical Conference of the Indian Society of Atomic and Molecular Physics, jointly held at IISER Tirupati and IIT Tirupati, India. The contributions address current topics of interest in atomic and molecular physics, from both theoretical and experimental perspectives. The major focus areas include quantum collisions, spectroscopy of atomic and molecular clusters, photoionization, Wigner time delay in collisions, laser cooling, Bose-Einstein condensates, atomic clocks, quantum computing, and the trapping and manipulation of quantum systems. The book also discusses emerging topics such as ultrafast quantum processes, including those at the attosecond time-scale. This book will prove to be a valuable reference for students and researchers working in the field of atomic and molecular physics.
This book provides readers with a guide to both ordinal analysis and proof theory. It mainly focuses on ordinal analysis, a research topic in proof theory that is concerned with the ordinal-theoretic content of formal theories. However, the book also covers basic material in the proof theory of first-order and omega logic, presenting some new results and new proofs of known ones. Primarily intended for graduate students and researchers in mathematics, especially in mathematical logic, the book also includes numerous exercises, with answers for selected exercises, designed to help readers grasp and apply the main results and techniques discussed.
In the last decades, various mathematical problems have been solved by computer-assisted proofs, among them the Kepler conjecture, the existence of chaos, the existence of the Lorenz attractor, the famous four-color problem, and more. In many cases, computer-assisted proofs have the remarkable advantage (compared with a "theoretical" proof) of additionally providing accurate quantitative information. The authors have been working for more than a quarter century to establish methods for the verified computation of solutions of partial differential equations, mainly for nonlinear elliptic problems of the form −Δu = f(x, u, ∇u) with Dirichlet boundary conditions. Here, "verified computation" means a computer-assisted numerical approach to proving the existence of a solution in a close and explicit neighborhood of an approximate solution. The quantitative information provided by these techniques is also significant from the viewpoint of a posteriori error estimates for approximate solutions of the concerned partial differential equations in a mathematically rigorous sense. In this monograph, the authors give a detailed description of the verified computations and computer-assisted proofs for partial differential equations that they developed. In Part I, the methods mainly studied by the authors Nakao and Watanabe are presented. These methods are based on a finite dimensional projection and constructive a priori error estimates for finite element approximations of the Poisson equation. In Part II, the computer-assisted approaches via eigenvalue bounds developed by the author Plum are explained in detail. The main task of this method consists of establishing eigenvalue bounds for the linearization of the corresponding nonlinear problem at the computed approximate solution. Some brief remarks on other approaches are also given in Part III. Each method in Parts I and II is accompanied by appropriate numerical examples that confirm the actual usefulness of the authors' methods. In some examples, practical computer algorithms are also supplied so that readers can easily implement the verification programs by themselves.
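The flavour of such a computer-assisted existence proof can be conveyed by a one-dimensional toy: an interval Newton step N(X) = m − f(m)/f′(X) which, whenever N(X) lands strictly inside X, certifies a unique zero of f in N(X). The sketch below is only an illustration of the idea, not the authors' method: it uses f(x) = x² − 2 as a stand-in for a PDE, and plain floating point, so it omits the outward rounding that a genuine verified computation requires.

```python
# A toy interval Newton step N(X) = m - f(m)/f'(X): if N(X) is strictly
# inside X, a unique zero of f exists in N(X). Real verified computations
# additionally need outwardly rounded interval arithmetic, omitted here.

def f(x):
    return x * x - 2.0            # approximate zero near 1.414

def fprime_interval(lo, hi):
    # interval extension of f'(x) = 2x on X = [lo, hi], assuming 0 < lo
    return 2.0 * lo, 2.0 * hi

lo, hi = 1.3, 1.5                 # candidate enclosure X
m = 0.5 * (lo + hi)               # midpoint, the "approximate solution"
d_lo, d_hi = fprime_interval(lo, hi)
q_lo, q_hi = sorted((f(m) / d_lo, f(m) / d_hi))   # f(m)/f'(X); 0 not in f'(X)
n_lo, n_hi = m - q_hi, m - q_lo                   # N(X)

print("N(X) =", (n_lo, n_hi))
if lo < n_lo and n_hi < hi:
    print("N(X) is strictly inside X: a unique root of f lies in N(X).")
```

Here N(X) ≈ [1.4133, 1.4154], a tight, explicit enclosure of √2 inside the original interval, which mirrors the "existence of a solution in a close and explicit neighborhood of an approximate solution" that the book establishes for PDEs.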
You may like...
Machine Learning with Quantum Computers (Maria Schuld, Francesco Petruccione)
Logic and Implication - An Introduction… (Petr Cintula, Carles Noguera)
Hajnal Andreka and Istvan Nemeti on… (Judit Madarasz, Gergely Szekely)
Reality and Measurement in Algebraic… (Masanao Ozawa, Jeremy Butterfield, …)
Numerical Time-Dependent Partial… (Moysey Brio, Gary M. Webb, …)
Voronoi Diagrams And Delaunay… (Franz Aurenhammer, Rolf Klein, …)
Analytic Combinatorics for Multiple… (Roy Streit, Robert Blair Angle, …)