This book highlights the different types of data architecture and illustrates the many possibilities hidden behind the term "Big Data", from the use of NoSQL databases to the deployment of stream analytics architecture, machine learning, and governance. Scalable Big Data Architecture covers real-world, concrete industry use cases that leverage complex distributed applications, which involve web applications, RESTful APIs, and high throughput of large amounts of data stored in highly scalable NoSQL data stores such as Couchbase and Elasticsearch. This book demonstrates how data processing can be done at scale, from the use of NoSQL datastores to the combination of Big Data distributions. When the data processing is too complex and involves different processing topologies such as long-running jobs, stream processing, multiple data source correlation, and machine learning, it is often necessary to delegate the load to Hadoop or Spark and use the NoSQL store to serve processed data in real time. This book shows you how to choose a relevant combination of Big Data technologies available within the Hadoop ecosystem. It focuses on processing long jobs, architecture, stream data patterns, log analysis, and real-time analytics. Every pattern is illustrated with practical examples, which use different open source projects such as Logstash, Spark, Kafka, and so on. Traditional data infrastructures are built for digesting and rendering data synthesis and analytics from large amounts of data. This book helps you to understand why you should consider using machine learning algorithms early on in the project, before being overwhelmed by constraints imposed by dealing with the high throughput of Big Data. Scalable Big Data Architecture is for developers, data architects, and data scientists looking for a better understanding of how to choose the most relevant pattern for a Big Data project and which tools to integrate into that pattern.
This book presents computer programming as a key method for solving mathematical problems. There are two versions of the book, one for MATLAB and one for Python. The book was inspired by the Springer book TCSE 6: A Primer on Scientific Programming with Python (by Langtangen), but the style is more accessible and concise, in keeping with the needs of engineering students. The book outlines the shortest possible path from no previous experience with programming to a set of skills that allows the students to write simple programs for solving common mathematical problems with numerical methods in engineering and science courses. The emphasis is on generic algorithms, clean design of programs, use of functions, and automatic tests for verification.
This volume is a comprehensive collection of extended contributions from the Workshop on Computational Optimization 2015, presenting recent advances in computational optimization. The volume includes important real-life problems such as parameter settings for controlling processes in a bioreactor, control of ethanol production, the minimal convex hull with application in routing algorithms, graph coloring, flow design in photonic data transport systems, predicting indoor temperature, crisis control center monitoring, fuel consumption of helicopters, portfolio selection, GPS surveying, and so on. It shows how to develop algorithms for these problems based on new metaheuristic methods such as evolutionary computation, ant colony optimization, constraint programming and others. This research demonstrates how some real-world problems arising in engineering, economics, medicine and other domains can be formulated as optimization problems.
Gerhard Gentzen has been described as logic's lost genius, whom Goedel called a better logician than himself. This work comprises articles by leading proof theorists, attesting to Gentzen's enduring legacy to mathematical logic and beyond. The contributions range from philosophical reflections and re-evaluations of Gentzen's original consistency proofs to the most recent developments in proof theory. Gentzen founded modern proof theory. His sequent calculus and natural deduction system beautifully explain the deep symmetries of logic. They underlie modern developments in computer science such as automated theorem proving and type theory.
Marking the 30th anniversary of the European Conference on Modelling and Simulation (ECMS), this inspirational text/reference reviews significant advances in the field of modelling and simulation, as well as key applications of simulation in other disciplines. The broad-ranging volume presents contributions from a varied selection of distinguished experts chosen from high-impact keynote speakers and best paper winners from the conference, including a Nobel Prize recipient, and the first president of the European Council for Modelling and Simulation (also abbreviated to ECMS). This authoritative book will be of great value to all researchers working in the field of modelling and simulation, in addition to scientists from other disciplines who make use of modelling and simulation approaches in their work.
In this volume, different aspects of logics for dependence and independence are discussed, including both the logical and computational aspects of dependence logic, and also applications in a number of areas, such as statistics, social choice theory, databases, and computer security. The contributing authors represent leading experts in this relatively new field, each of whom was invited to write a chapter based on talks given at seminars held at the Schloss Dagstuhl Leibniz Center for Informatics in Wadern, Germany (in February 2013 and June 2015) and an Academy Colloquium at the Royal Netherlands Academy of Arts and Sciences (March 2014). Altogether, these chapters provide the most up-to-date look at this developing and highly interdisciplinary field and will be of interest to a broad group of logicians, mathematicians, statisticians, philosophers, and scientists. Topics covered include a comprehensive survey of many propositional, modal, and first-order variants of dependence logic; new results concerning expressive power of several variants of dependence logic with different sets of logical connectives and generalized dependence atoms; connections between inclusion logic and the least-fixed point logic; an overview of dependencies in databases by addressing the relationships between implication problems for fragments of statistical conditional independencies, embedded multivalued dependencies, and propositional logic; various Markovian models used to characterize dependencies and causality among variables in multivariate systems; applications of dependence logic in social choice theory; and an introduction to the theory of secret sharing, pointing out connections to dependence and independence logic.
Get started with MATLAB for deep learning and AI with this in-depth primer. In this book, you start with machine learning fundamentals, then move on to neural networks, deep learning, and then convolutional neural networks. In a blend of fundamentals and applications, MATLAB Deep Learning employs MATLAB as the underlying programming language and tool for the examples and case studies. With this book, you'll be able to tackle some of today's real-world big data, smart bot, and other complex data problems. You'll see how deep learning is a complex and more intelligent aspect of machine learning for modern smart data analysis and usage. What you'll learn: use MATLAB for deep learning; discover neural networks and multi-layer neural networks; work with convolution and pooling layers; and build an MNIST example with these layers. Who this book is for: those who want to learn deep learning using MATLAB. Some MATLAB experience may be useful.
This book constitutes the refereed proceedings of the 22nd International Static Analysis Symposium, SAS 2015, held in Saint-Malo, France, in September 2015. The 18 papers presented in this volume were carefully reviewed and selected from 44 submissions. All fields of static analysis as a fundamental tool for program verification, bug detection, compiler optimization, program understanding, and software maintenance are addressed, featuring theoretical, practical, and application advances in the area.
In his master's thesis, Vladimir Herdt presents a novel approach, called complete symbolic simulation, for a more efficient verification of much larger (non-terminating) SystemC programs. The approach combines symbolic simulation with stateful model checking and makes it possible to verify safety properties in (cyclic) finite state spaces by exhaustive exploration of all possible inputs and process schedulings. The state explosion problem is alleviated by integrating two complementary reduction techniques. Compared to existing approaches, the complete symbolic simulation works more efficiently and can therefore provide correctness proofs for larger systems, which is one of the most challenging tasks due to their ever-increasing complexity.
This volume presents 38 classic texts in formal epistemology, and strengthens the ties between research into this area of philosophy and its neighbouring intellectual disciplines. The editors provide introductions to five subsections: Bayesian Epistemology, Belief Change, Decision Theory, Interactive Epistemology and Epistemic Logic. 'Formal epistemology' is a term coined in the late 1990s for a new constellation of interests in philosophy, the origins of which are found in earlier works of epistemologists, philosophers of science and logicians. It addresses a growing agenda of problems concerning knowledge, belief, certainty, rationality, deliberation, decision, strategy, action and agent interaction - and it does so using methods from logic, probability, computability, decision and game theory. The volume also includes a thorough index and suggestions for further reading, and thus offers a complete teaching and research package for students as well as research scholars of formal epistemology, philosophy, logic, computer science, theoretical economics and cognitive psychology.
Refinement is one of the cornerstones of the formal approach to software engineering, and its use in various domains has led to research on new applications and generalisation. This book brings together this important research in one volume, with the addition of examples drawn from different application areas. It covers four main themes: data refinement and its application to Z; generalisations of refinement that change the interface and atomicity of operations; refinement in Object-Z; and modelling state and behaviour by combining Object-Z with CSP. Refinement in Z and Object-Z: Foundations and Advanced Applications provides an invaluable overview of recent research for academic and industrial researchers, lecturers teaching formal specification and development, industrial practitioners using formal methods in their work, and postgraduate and advanced undergraduate students. This second edition is a comprehensive update to the first and includes the following new material: the early chapters have been extended to also include trace refinement, based directly on partial relations rather than through totalisation; an updated discussion of divergence, non-atomic refinements and approximate refinement; a discussion of the differing semantics of operations and outputs and how they affect the abstraction of models written using Object-Z and CSP; a fuller account of the relationship between relational refinement and various models of refinement in CSP; and bibliographic notes at the end of each chapter extended with the most up-to-date citations and research.
This book gathers threads that have evolved across different mathematical disciplines into a seamless narrative. It deals with condition as a main aspect in the understanding of the performance, regarding both stability and complexity, of numerical algorithms. While the role of condition was shaped in the last half-century, so far there has not been a monograph treating this subject in a uniform and systematic way. The book puts special emphasis on the probabilistic analysis of numerical algorithms via the analysis of the corresponding condition. The level of exposition increases throughout the book, starting in the context of linear algebra at an undergraduate level and reaching, in its third part, the recent developments and partial solutions for Smale's 17th problem, which can be explained within a graduate course. Its middle part contains a condition-based course on linear programming that fills a gap between the current elementary expositions of the subject based on the simplex method and those focusing on convex programming.
Given the extensive application of random walks in virtually every science related discipline, we may be at the threshold of yet another problem solving paradigm with the advent of quantum walks. Over the past decade, quantum walks have been explored for their non-intuitive dynamics, which may hold the key to radically new quantum algorithms. This growing interest has been paralleled by a flurry of research into how one can implement quantum walks in laboratories. This book presents numerous proposals as well as actual experiments for such a physical realization, underpinned by a wide range of quantum, classical and hybrid technologies.
This volume presents selected peer-reviewed contributions from The International Work-Conference on Time Series, ITISE 2015, held in Granada, Spain, July 1-3, 2015. It discusses topics in time series analysis and forecasting, advanced methods and online learning in time series, high-dimensional and complex/big data time series as well as forecasting in real problems. The International Work-Conferences on Time Series (ITISE) provide a forum for scientists, engineers, educators and students to discuss the latest ideas and implementations in the foundations, theory, models and applications in the field of time series analysis and forecasting. It focuses on interdisciplinary and multidisciplinary research encompassing the disciplines of computer science, mathematics, statistics and econometrics.
This two volume set LNCS 9234 and 9235 constitutes the refereed conference proceedings of the 40th International Symposium on Mathematical Foundations of Computer Science, MFCS 2015, held in Milan, Italy, in August 2015. The 82 revised full papers presented together with 5 invited talks were carefully selected from 201 submissions. The papers feature high-quality research in all branches of theoretical computer science. They have been organized in the following topical main sections: logic, semantics, automata, and theory of programming (volume 1) and algorithms, complexity, and games (volume 2).
Universal codes efficiently compress sequences generated by stationary and ergodic sources with unknown statistics, and they were originally designed for lossless data compression. In the meantime, it was realized that they can be used for solving important problems of prediction and statistical analysis of time series, and this book describes recent results in this area. The first chapter introduces and describes the application of universal codes to prediction and the statistical analysis of time series; the second chapter describes applications of selected statistical methods to cryptography, including attacks on block ciphers; and the third chapter describes a homogeneity test used to determine authorship of literary texts. The book will be useful for researchers and advanced students in information theory, mathematical statistics, time-series analysis, and cryptography. It is assumed that the reader has some grounding in statistics and in information theory.
This book discusses examples in parametric inference with R. Combining basic theory with modern approaches, it presents the latest developments and trends in statistical inference for students who do not have an advanced mathematical and statistical background. The topics discussed in the book are fundamental and common to many fields of statistical inference and thus serve as a point of departure for in-depth study. The book is divided into eight chapters: Chapter 1 provides an overview of topics on sufficiency and completeness, while Chapter 2 briefly discusses unbiased estimation. Chapter 3 focuses on the study of moments and maximum likelihood estimators, and Chapter 4 presents bounds for the variance. In Chapter 5, topics on consistent estimators are discussed. Chapter 6 discusses Bayes estimation, while Chapter 7 studies some more powerful tests. Lastly, Chapter 8 examines unbiased and other tests. Senior undergraduate and graduate students in statistics and mathematics, and those who have taken an introductory course in probability, will greatly benefit from this book. Students are expected to know matrix algebra, calculus, probability and distribution theory before beginning this course. Presenting a wealth of relevant solved and unsolved problems, the book offers an excellent tool for teachers and instructors, who can assign homework problems from the exercises, and students will find the solved examples hugely beneficial in solving the exercise problems.
In this monograph we introduce and examine four new temporal logic formalisms that can be used as specification languages for the automated verification of the reliability of hardware and software designs with respect to a desired behavior. The work is organized in two parts. In the first part, two logics for computations, graded computation tree logic and computation tree logic with minimal model quantifiers, are discussed. These have proved to be useful in describing correct executions of monolithic closed systems. The second part focuses on logics for strategies, strategy logic and memoryful alternating-time temporal logic, which have been successfully applied to formalize several properties of interactive plays in multi-entity systems modeled as multi-agent games.
This volume considers the computational complexity of determining whether a system of equations over a fixed algebra A has a solution. It examines in detail the two problems this leads to: SysTermSat(A) and SysPolSat(A), in which equations are built out of terms or polynomials, respectively. The book characterizes those algebras for which SysPolSat can be solved in polynomial time. So far, studies and their outcomes have not covered algebras that generate a variety admitting type 1 in the sense of Tame Congruence Theory. Since unary algebras admit only type 1, this book focuses on these algebras to tackle the main problem. It discusses several aspects of unary algebras and proves that the Constraint Satisfaction Problem for relational structures is polynomially equivalent to SysTermSat over unary algebras. The book's final chapters discuss partial characterizations, present conclusions, and describe the problems that are still open.
This textbook provides concise coverage of the basics of linear and integer programming which, with megatrends toward optimization, machine learning, big data, etc., are becoming fundamental toolkits for data and information science and technology. The authors' approach is accessible to students from almost all fields of engineering, including operations research, statistics, machine learning, control system design, scheduling, formal verification and computer vision. The presentation provides the basis for numerous approaches to solving hard combinatorial optimization problems through randomization and approximation. Readers will learn to cast various problems that may arise in their research as optimization problems, understand the cases where the optimization problem will be linear, choose appropriate solution methods and interpret results appropriately.
Focusing on the mathematics that lies at the intersection of probability theory, statistical physics, combinatorics and computer science, this volume collects together lecture notes on recent developments in the area. The common ground of these subjects is perhaps best described by the three terms in the title: Random Walks, Random Fields and Disordered Systems. The specific topics covered include a study of Branching Brownian Motion from the perspective of disordered (spin-glass) systems, a detailed analysis of weakly self-avoiding random walks in four spatial dimensions via methods of field theory and the renormalization group, a study of phase transitions in disordered discrete structures using a rigorous version of the cavity method, a survey of recent work on interacting polymers in the ballisticity regime and, finally, a treatise on two-dimensional loop-soup models and their connection to conformally invariant systems and the Gaussian Free Field. The notes are aimed at early graduate students with a modest background in probability and mathematical physics, although they could also be enjoyed by seasoned researchers interested in learning about recent advances in the above fields.
With the proliferation of huge amounts of (heterogeneous) data on the Web, the importance of information retrieval (IR) has grown considerably over the last few years. Big players in the computer industry, such as Google, Microsoft and Yahoo!, are the primary contributors of technology for fast access to Web-based information; and searching capabilities are now integrated into most information systems, ranging from business management software and customer relationship systems to social networks and mobile phone applications. Ceri and his co-authors aim to take their readers from the foundations of modern information retrieval to the most advanced challenges of Web IR. To this end, their book is divided into three parts. The first part addresses the principles of IR and provides a systematic and compact description of basic information retrieval techniques (including binary, vector space and probabilistic models as well as natural language search processing) before focusing on its application to the Web. The second part addresses the foundational aspects of Web IR by discussing the general architecture of search engines (with a focus on the crawling and indexing processes), describing link analysis methods (specifically PageRank and HITS), addressing recommendation and diversification, and finally presenting advertising in search (the main source of revenue for search engines). The third and final part describes advanced aspects of Web search, each chapter providing a self-contained, up-to-date survey on current Web research directions. Topics in this part include meta-search and multi-domain search, semantic search, search in the context of multimedia data, and crowd search. The book is ideally suited to courses on information retrieval, as it covers all Web-independent foundational aspects. Its presentation is self-contained and does not require prior background knowledge. It can also be used in the context of classic courses on data management, allowing the instructor to cover both structured and unstructured data in various formats. Its classroom use is facilitated by a set of slides, which can be downloaded from www.search-computing.org.
At the intersection of mathematics, engineering, and computer science sits the thriving field of compressive sensing. Based on the premise that data acquisition and compression can be performed simultaneously, compressive sensing finds applications in imaging, signal processing, and many other domains. In the areas of applied mathematics, electrical engineering, and theoretical computer science, an explosion of research activity has already followed the theoretical results that highlighted the efficiency of the basic principles. The elegant ideas behind these principles are also of independent interest to pure mathematicians. A Mathematical Introduction to Compressive Sensing gives a detailed account of the core theory upon which the field is built. With only moderate prerequisites, it is an excellent textbook for graduate courses in mathematics, engineering, and computer science. It also serves as a reliable resource for practitioners and researchers in these disciplines who want to acquire a careful understanding of the subject. A Mathematical Introduction to Compressive Sensing uses a mathematical perspective to present the core of the theory underlying compressive sensing.
Fundamentals of Matrix-Analytic Methods targets advanced-level students in mathematics, engineering and computer science. It focuses on the fundamental parts of matrix-analytic methods: phase-type distributions, Markovian arrival processes, structured Markov chains, and matrix-geometric solutions. New materials and techniques are presented for the first time in research and engineering design. This book emphasizes stochastic modeling by offering probabilistic interpretations and constructive proofs for matrix-analytic methods. Such an approach is especially useful for engineering analysis and design. Exercises and examples are provided throughout the book.
Numerical computation, knowledge discovery and statistical data analysis integrated with powerful 2D and 3D graphics for visualization are the key topics of this book. The Python code examples, powered by the Java platform, can easily be transformed into other programming languages, such as Java, Groovy, Ruby and BeanShell. This book equips the reader with a computational platform which, unlike other statistical programs, is not limited to a single programming language. The author focuses on practical programming aspects and covers a broad range of topics, from a basic introduction to the Python language on the Java platform (Jython), to descriptive statistics, symbolic calculations, neural networks, non-linear regression analysis and many other data-mining topics. He discusses how to find regularities in real-world data, how to classify data, and how to process data for knowledge discovery. The code snippets are so short that they easily fit into single pages. Numeric Computation and Statistical Data Analysis on the Java Platform is a great choice for those who want to learn how statistical data analysis can be done using popular programming languages, who want to integrate data analysis algorithms in full-scale applications, or who want to deploy such calculations on web pages or computational servers regardless of their operating system. It is an excellent reference for scientific computations to solve real-world problems using a comprehensive stack of open-source Java libraries included in the DataMelt (DMelt) project, and will be appreciated by many data-analysis scientists, engineers and students.