This book explores the most significant computational methods and the history of their development. It begins with the earliest mathematical and numerical achievements of the Babylonians and the Greeks, followed by the period beginning in the 16th century. For several centuries the main scientific challenge concerned the mechanics of planetary dynamics, and the book describes the basic numerical methods of that time. At the end of the Second World War, scientific computing took a giant step forward with the advent of electronic computers, which greatly accelerated the development of numerical methods. As a result, scientific computing became established as a third scientific method in addition to the two traditional branches: theory and experimentation. The book traces numerical methods back to their origins and to the people who invented them, while also briefly examining the development of electronic computers over the years. Featuring 163 references and more than 100 figures, many of them portraits or photos of key historical figures, the book provides a unique historical perspective on the general field of scientific computing, making it a valuable resource for students and professionals interested in the history of numerical analysis and computing, as well as for a broader readership.
The book contains a selection of high-quality papers, chosen from among the best presentations at the International Conference on Spectral and High-Order Methods (2012), and provides an overview of the depth and breadth of the activities within this important research area. The carefully reviewed selection of papers will provide the reader with a snapshot of the state of the art and help initiate new research directions through the extensive bibliography.
Philosophers of science have produced a variety of definitions for the notion of one sentence, theory or hypothesis being closer to the truth, more verisimilar, or more truthlike than another one. The definitions put forward by philosophers presuppose, at least implicitly, that the subject matter with which the compared sentences, theories or hypotheses are concerned has been specified, and the property of closeness to the truth, verisimilitude or truthlikeness appearing in such definitions should be understood as closeness to informative truth about that subject matter. This monograph is concerned with a special case of the problem of defining verisimilitude, a case in which this subject matter is of a rather restricted kind. Below, I shall suppose that there is a finite number of interrelated quantities which are used for characterizing the state of some system. Scientists might arrive at different hypotheses concerning the values of such quantities in a variety of ways. There might be various theories that give different predictions (whose informativeness might differ, too) on which combinations of the values of these quantities are possible. Scientists might also have measured all or some of the quantities in question with some accuracy. Finally, they might also have combined these two methods of forming hypotheses about their values by first measuring some of the quantities and then deducing the values of some others from the combination of a theory and the measurement results.
The aim of this book is to provide insight into Data Science and Artificial Intelligence techniques based on Industry 4.0, and to convey how Machine Learning and Data Science are becoming an essential part of industrial and academic research. From healthcare to social networking, hybrid models for Data Science, AI, and Machine Learning are used everywhere. The book describes different theoretical and practical aspects and highlights how new systems are being developed. Along with focusing on the research trends, challenges and future of AI in Data Science, the book explores the potential for integration of advanced AI algorithms, addresses the challenges of Data Science for Industry 4.0, covers different security issues, includes qualitative and quantitative research, and offers case studies with working models. It also provides an overview of AI and Data Science algorithms for readers who do not have a strong mathematical background. Undergraduates, postgraduates, academicians, researchers, and industry professionals will benefit from this book and can use it as a guide.
This book contains more than 15 essays that explore issues in truth, existence, and explanation. It features cutting-edge research in the philosophy of mathematics and logic. Renowned philosophers, mathematicians, and younger scholars provide an insightful contribution to the lively debate in this interdisciplinary field of inquiry. The essays look at realism vs. anti-realism as well as inflationary vs. deflationary theories of truth. The contributors also consider mathematical fictionalism, structuralism, the nature and role of axioms, constructive existence, and generality. In addition, coverage looks at the explanatory role of mathematics and the philosophical relevance of mathematical explanation. The book will appeal to a broad mathematical and philosophical audience. It contains work from FilMat, the Italian Network for the Philosophy of Mathematics; the papers collected here were also presented at the network's second international conference, held at the University of Chieti-Pescara in May 2016.
This book explores minimum divergence methods of statistical machine learning for estimation, regression, prediction, and so forth, in which information geometry is used to elucidate the intrinsic properties of the corresponding loss functions, learning algorithms, and statistical models. One of the most elementary examples is Gauss's least squares estimator in a linear regression model, where the estimator is given by minimizing the sum of squared differences between a response vector and a vector in the linear subspace spanned by the explanatory vectors. This is extended to Fisher's maximum likelihood estimator (MLE) for an exponential model, where the estimator is obtained by minimizing an empirical analogue of the Kullback-Leibler (KL) divergence between a data distribution and a parametric distribution of the exponential model. Such minimization procedures admit a geometric interpretation in which a right triangle satisfies a Pythagorean identity in the sense of the KL divergence. This understanding reveals a dualistic interplay between a statistical estimation method and a statistical model, which requires dual geodesic paths, called m-geodesic and e-geodesic paths, in the framework of information geometry.

This dualistic structure of the MLE and the exponential model is extended to that of the minimum divergence estimator and the maximum entropy model, with applications to robust statistics, maximum entropy, density estimation, principal component analysis, independent component analysis, regression analysis, manifold learning, boosting algorithms, clustering, dynamic treatment regimes, and so forth. The book considers a variety of information divergence measures, typically including the KL divergence, to express the departure of one probability distribution from another. An information divergence decomposes into the cross-entropy and the (diagonal) entropy: the entropy is associated with a generative model, as a family of maximum entropy distributions, while the cross-entropy is associated with a statistical estimation method via minimization of its empirical analogue based on given data. Thus any statistical divergence defines an intrinsic pairing between a generative model and an estimation method. Typically, the KL divergence leads to the exponential model and maximum likelihood estimation. It is shown that any information divergence induces a Riemannian metric and a pair of linear connections in the framework of information geometry.

The book focuses on a class of information divergences generated by an increasing and convex function U, called the U-divergence. Any generator function U yields a U-entropy and a U-divergence, and there is a dualistic structure between the minimum U-divergence method and the maximum U-entropy model. A specific choice of U leads to a robust statistical procedure via the minimum U-divergence method. If U is the exponential function, the corresponding U-entropy and U-divergence reduce to the Boltzmann-Shannon entropy and the KL divergence, and the minimum U-divergence estimator is equivalent to the MLE. For robust supervised learning, where the goal is to predict a class label, the U-boosting algorithm performs well under contamination by mislabeled examples if U is appropriately selected. The book presents such maximum U-entropy and minimum U-divergence methods, in particular selecting a power function as U to provide flexible performance in statistical machine learning.
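As a minimal illustration of the decomposition described above (stated here in generic notation, not the book's own), the KL divergence between a data distribution p and a model q splits into a cross-entropy term and an entropy term, and minimizing the empirical cross-entropy over an exponential family is exactly maximum likelihood estimation:

```latex
D_{\mathrm{KL}}(p \,\|\, q_\theta)
  = \underbrace{-\int p(x)\,\log q_\theta(x)\,dx}_{\text{cross-entropy } H(p,\,q_\theta)}
  \; - \; \underbrace{\Bigl(-\int p(x)\,\log p(x)\,dx\Bigr)}_{\text{entropy } H(p)},
\qquad
\hat{\theta}_{\mathrm{MLE}} \;=\; \arg\min_{\theta}\; -\frac{1}{n}\sum_{i=1}^{n}\log q_\theta(x_i).
```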
This book presents selected peer-reviewed contributions from the International Conference on Time Series and Forecasting, ITISE 2018, held in Granada, Spain, on September 19-21, 2018. The first three parts of the book focus on the theory of time series analysis and forecasting, and discuss statistical methods, modern computational intelligence methodologies, econometric models, financial forecasting, and risk analysis. In turn, the last three parts are dedicated to applied topics and include papers on time series analysis in the earth sciences, energy time series forecasting, and time series analysis and prediction in other real-world problems. The book offers readers valuable insights into the different aspects of time series analysis and forecasting, allowing them to benefit both from its sophisticated and powerful theory, and from its practical applications, which address real-world problems in a range of disciplines. The ITISE conference series provides a valuable forum for scientists, engineers, educators and students to discuss the latest advances and implementations in the field of time series analysis and forecasting. It focuses on interdisciplinary and multidisciplinary research encompassing computer science, mathematics, statistics and econometrics.
This collection brings together exciting new works that address today's key challenges for a feminist power-sensitive approach to knowledge and scientific practice. Taking up such issues as the role of contextualism in epistemology, democracy and dissent in knowledge practices, and epistemic agency under conditions of oppression, the essays build upon well-established work in feminist epistemology and philosophy of science such as standpoint theory and contextual empiricism, offering new interpretations and applications. Many contributions capture the current engagement of feminist epistemologists with the insights and programs of nonfeminist epistemologists, while others focus on the intersections between feminist epistemology and other fields of feminist inquiry such as feminist ethics and metaphysics.
Biometrics, the science of using physical traits to identify individuals, is playing an increasing role in our security-conscious society across the globe. Biometric authentication, or bioauthentication, systems are being used to secure everything from amusement parks to bank accounts to military installations. Yet developments in this field have not been matched by an equivalent improvement in the statistical methods for evaluating these systems. Addressing this need, this unique text/reference provides a basic statistical methodology for practitioners and testers of bioauthentication devices, supplying a set of rigorous statistical methods for evaluating biometric authentication systems. This framework of methods can be extended and generalized for a wide range of applications and tests. This is the first single resource on statistical methods for estimating and comparing the performance of biometric authentication systems. The book focuses on six common performance metrics: for each metric, statistical methods are derived for a single system, incorporating confidence intervals, hypothesis tests, sample size calculations, power calculations and prediction intervals. These methods are also extended to allow for the statistical comparison and evaluation of multiple systems, for both independent and paired data. Topics and features:
* Provides a statistical methodology for the most common biometric performance metrics: failure to enroll (FTE), failure to acquire (FTA), false non-match rate (FNMR), false match rate (FMR), and receiver operating characteristic (ROC) curves
* Presents methods for the comparison of two or more biometric performance metrics
* Introduces a new bootstrap methodology for FMR and ROC curve estimation
* Supplies more than 120 examples, using publicly available biometric data where possible
* Discusses the addition of prediction intervals to the bioauthentication statistical toolset
* Describes sample-size and power calculations for FTE, FTA, FNMR and FMR
Researchers, managers and decision makers needing to compare biometric systems across a variety of metrics will find within this reference an invaluable set of statistical tools. Written for an upper-level undergraduate or master's level audience with a quantitative background, readers are also expected to have an understanding of the topics in a typical undergraduate statistics course. Dr. Michael E. Schuckers is Associate Professor of Statistics at St. Lawrence University, Canton, NY, and a member of the Center for Identification Technology Research.
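As a hedged illustration of the kind of quantity the book works with (not its actual methodology, which includes bootstrap intervals that account for correlated attempts), the sketch below estimates an error rate such as FNMR or FMR from hypothetical counts and attaches a simple Wald confidence interval:

```python
import numpy as np

def rate_with_wald_ci(errors, n, z=1.96):
    """Point estimate and Wald confidence interval for an error rate.

    errors: number of observed errors (e.g. false matches)
    n:      number of comparison attempts
    This is a textbook-style interval for illustration only; it ignores the
    correlation between attempts that the book's methods are designed to handle.
    """
    p_hat = errors / n
    se = np.sqrt(p_hat * (1.0 - p_hat) / n)
    return p_hat, (max(0.0, p_hat - z * se), min(1.0, p_hat + z * se))

# Hypothetical error counts at a fixed decision threshold.
fnmr, fnmr_ci = rate_with_wald_ci(errors=12, n=1000)   # false non-match rate
fmr, fmr_ci = rate_with_wald_ci(errors=3, n=5000)      # false match rate
print(f"FNMR = {fnmr:.4f}, 95% CI = {fnmr_ci}")
print(f"FMR  = {fmr:.4f}, 95% CI = {fmr_ci}")
```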
The first edition of this award-winning book attracted a wide audience. This second edition is both a joy to read and a useful classroom tool. Unlike traditional textbooks, it requires no mathematical prerequisites and can be read around the mathematics presented. If used as a textbook, the mathematics can be prioritized, resulting in a book both students and instructors will enjoy reading. Secret History: The Story of Cryptology, Second Edition incorporates new material concerning various eras in the long history of cryptology. Much has happened concerning the political aspects of cryptology since the first edition appeared, and the still-unfolding story is updated here. The first edition contained chapters devoted to the cracking of German and Japanese systems during World War II. Now the other side of this cipher war is also told: how the United States was able to come up with systems that were never broken. The text is in two parts. Part I presents classic cryptology from ancient times through World War II. Part II examines modern computer cryptology. With numerous real-world examples and extensive references, the author skillfully balances the history with mathematical details, providing readers with a sound foundation in this dynamic field. Features:
* Presents a chronological development of key concepts
* Includes the Vigenère cipher, the one-time pad, transposition ciphers, Jefferson's wheel cipher, the Playfair cipher, ADFGX, matrix encryption, Enigma, Purple, and other classic methods
* Looks at the work of Claude Shannon, the origin of the National Security Agency, elliptic curve cryptography, the Data Encryption Standard, the Advanced Encryption Standard, public-key cryptography, and many other topics
* New chapters detail SIGABA and SIGSALY, successful systems used during World War II for text and speech, respectively
* Includes quantum cryptography and the impact of quantum computers
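For readers unfamiliar with the classical methods listed above, here is a brief self-contained sketch of one of them, the Vigenère cipher; the plaintext and key are the standard textbook example rather than material taken from the book:

```python
def vigenere(text: str, key: str, decrypt: bool = False) -> str:
    """Encrypt or decrypt A-Z text with the classical Vigenere cipher."""
    out = []
    key = key.upper()
    sign = -1 if decrypt else 1
    for i, ch in enumerate(text.upper()):
        shift = ord(key[i % len(key)]) - ord('A')
        out.append(chr((ord(ch) - ord('A') + sign * shift) % 26 + ord('A')))
    return ''.join(out)

cipher = vigenere("ATTACKATDAWN", "LEMON")
print(cipher)                                    # LXFOPVEFRNHR
print(vigenere(cipher, "LEMON", decrypt=True))   # ATTACKATDAWN
```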
This book develops a novel approach to perturbative quantum field theory: starting with a perturbative formulation of classical field theory, quantization is achieved by means of deformation quantization of the underlying free theory and by applying the principle that as much of the classical structure as possible should be maintained. The resulting formulation of perturbative quantum field theory is a version of the Epstein-Glaser renormalization that is conceptually clear, mathematically rigorous and pragmatically useful for physicists. The connection to traditional formulations of perturbative quantum field theory is also elaborated on, and the formalism is illustrated in a wealth of examples and exercises.
This book is intended to introduce coding theory and information theory to undergraduate students of mathematics and computer science. It begins with a review of probability theory as applied to finite sample spaces and a general introduction to the nature and types of codes. The two subsequent chapters discuss information theory: efficiency of codes, the entropy of information sources, and Shannon's Noiseless Coding Theorem. The remaining three chapters deal with coding theory: communication channels, decoding in the presence of errors, the general theory of linear codes, and specific codes such as Hamming codes, the simplex codes, and many others.
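As a small generic example of the entropy and noiseless-coding ideas mentioned above (not drawn from the book itself), the sketch below computes the entropy of a hypothetical four-symbol source and checks that a prefix code attains it:

```python
import math

def entropy(probs):
    """Shannon entropy H = -sum(p * log2 p) of a discrete source, in bits/symbol."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A hypothetical four-symbol source.
source = {"a": 0.5, "b": 0.25, "c": 0.125, "d": 0.125}
H = entropy(source.values())
print(f"H = {H} bits/symbol")                           # 1.75

# Shannon's Noiseless Coding Theorem: no uniquely decodable binary code has an
# average length below H; the prefix code below attains the bound for this source.
code = {"a": "0", "b": "10", "c": "110", "d": "111"}
avg_len = sum(p * len(code[s]) for s, p in source.items())
print(f"average code length = {avg_len} bits/symbol")   # 1.75
```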
Physically Unclonable Functions (PUFs) translate unavoidable variations in certain parameters of materials, waves, or devices into random and unique signals. They have found many applications in the Internet of Things (IoT), authentication systems, the FPGA industry, several other areas in communications and related technologies, and many commercial products. Statistical Trend Analysis of Physically Unclonable Functions first presents a review of cryptographic hardware and hardware-assisted cryptography. The review highlights PUFs as a mega trend in research on cryptographic hardware design. Afterwards, the authors present a combined survey and research work on PUFs using a systematic approach. As part of the survey, a state-of-the-art analysis is presented as well as a taxonomy of PUFs, a life cycle, and an established ecosystem for the technology. In another part of the survey, the evolutionary history of PUFs is examined, and strategies for further research in this area are suggested. On the research side, the book presents a novel approach for trend analysis that can be applied to any technology or research area. In this method, a text-mining tool is used to extract 1020 keywords from the titles of the sample papers. A classifying tool then groups the keywords into 295 meaningful research topics. The popularity of each topic is then numerically measured and analyzed over the course of time, through a statistical analysis of the number of research papers related to the topic as well as the number of their citations. The authors identify the most popular topics in four different domains: over the history of PUFs, during recent years, in top conferences, and in top journals. The results are used to present an evolution study as well as a trend analysis, and to develop a roadmap for future research in this area. This method gives an automatic, popularity-based statistical trend analysis which eliminates the need for personal judgments about the direction of trends and provides concrete evidence of the future direction of research on PUFs. Another advantage of this method is the possibility of studying a large body of existing research works (more than 700 in this book). This book will appeal to researchers in text mining, cryptography, hardware security, and IoT.
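A toy sketch of the counting idea behind such a popularity-based trend analysis is given below; the paper titles and the keyword-to-topic map are invented for illustration and are not taken from the book's dataset or tools:

```python
from collections import Counter, defaultdict

# Hypothetical (year, title) records standing in for the surveyed papers.
papers = [
    (2015, "A lightweight arbiter PUF for IoT authentication"),
    (2018, "Machine learning attacks on ring oscillator PUFs"),
    (2018, "SRAM PUF reliability under temperature variation"),
    (2021, "Deep learning modeling attacks on strong PUFs"),
]

# Hypothetical keyword-to-topic map (the book builds one with 295 topics).
topics = {
    "arbiter": "delay-based PUFs", "ring oscillator": "delay-based PUFs",
    "sram": "memory-based PUFs", "machine learning": "modeling attacks",
    "deep learning": "modeling attacks", "iot": "IoT applications",
}

# Count, per topic, how many titles mention one of its keywords in each year.
popularity = defaultdict(Counter)
for year, title in papers:
    lowered = title.lower()
    for keyword, topic in topics.items():
        if keyword in lowered:
            popularity[topic][year] += 1

for topic, by_year in popularity.items():
    print(topic, dict(sorted(by_year.items())))
```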
Reduced order modeling is an important, growing field in computational science and engineering, and this is the first book to address the subject in relation to computational fluid dynamics. It focuses on complex parametrization of shapes for their optimization and includes recent developments in advanced topics such as turbulence, stability of flows, inverse problems, optimization, and flow control, as well as applications. This book will be of interest to researchers and graduate students in the field of reduced order modeling.
This book offers a comprehensive and systematic review of the latest research findings in the area of intuitionistic fuzzy calculus. After introducing the intuitionistic fuzzy numbers' operational laws and their geometrical and algebraic properties, the book defines the concept of intuitionistic fuzzy functions and presents the research on the derivative, differential, indefinite integral and definite integral of intuitionistic fuzzy functions. It also discusses some of the methods that have been successfully used to deal with continuous intuitionistic fuzzy information or data, which are different from the previous aggregation operators focusing on discrete information or data. Mainly intended for engineers and researchers in the fields of fuzzy mathematics, operations research, information science and management science, this book is also a valuable textbook for postgraduate and advanced undergraduate students alike.
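As a hedged sketch, using the operational laws most commonly cited for intuitionistic fuzzy numbers (the book's own definitions should be consulted), the functions below implement addition, multiplication, and scalar multiplication of pairs (mu, nu) of membership and non-membership degrees:

```python
# An intuitionistic fuzzy number (IFN) is represented as a pair (mu, nu)
# with 0 <= mu + nu <= 1. The laws below are the commonly used ones.

def ifn_add(a, b):
    """(mu_a, nu_a) + (mu_b, nu_b) = (mu_a + mu_b - mu_a*mu_b, nu_a*nu_b)."""
    (ma, na), (mb, nb) = a, b
    return (ma + mb - ma * mb, na * nb)

def ifn_mul(a, b):
    """(mu_a, nu_a) * (mu_b, nu_b) = (mu_a*mu_b, nu_a + nu_b - nu_a*nu_b)."""
    (ma, na), (mb, nb) = a, b
    return (ma * mb, na + nb - na * nb)

def ifn_scale(lam, a):
    """lam * (mu, nu) = (1 - (1 - mu)**lam, nu**lam) for lam > 0."""
    ma, na = a
    return (1 - (1 - ma) ** lam, na ** lam)

alpha, beta = (0.6, 0.3), (0.5, 0.4)
print(ifn_add(alpha, beta))    # approximately (0.8, 0.12)
print(ifn_mul(alpha, beta))    # approximately (0.3, 0.58)
print(ifn_scale(2, alpha))     # approximately (0.84, 0.09)
```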
This comprehensive book provides an adequate framework for establishing various calculi of logical inference. Being an 'enriched' system of natural deduction, it helps to formulate logical calculi in an operational manner. By uncovering a certain harmony between a functional calculus on the labels and a logical calculus on the formulas, it provides mathematical foundations for systems of logic presentation designed to handle meta-level features at the object level via a labelling mechanism, such as D. Gabbay's Labelled Deductive Systems. The book demonstrates that the introduction of 'labels' is useful both for understanding the proof calculus itself and for clarifying its connections with model-theoretic interpretations.
Proof techniques in cryptography are very difficult to understand, even for students and researchers who major in cryptography. In addition, in contrast to the heavy emphasis on the security proofs of cryptographic schemes, their practical aspects have received comparatively little attention. This book addresses both issues by providing detailed, structured proofs and by demonstrating examples, applications and implementations of the schemes, so that students and practitioners may obtain a practical view of the schemes. Seong Oun Hwang is a professor in the Department of Computer Engineering and director of the Artificial Intelligence Security Research Center, Gachon University, Korea. He received his Ph.D. in computer science from the Korea Advanced Institute of Science and Technology (KAIST), Korea. His research interests include cryptography, cybersecurity, networks, and machine learning. Intae Kim is an associate research fellow at the Institute of Cybersecurity and Cryptology, University of Wollongong, Australia. He received his Ph.D. in electronics and computer engineering from Hongik University, Korea. His research interests include cryptography, cybersecurity, and networks. Wai Kong Lee is an assistant professor at UTAR (University Tunku Abdul Rahman), Malaysia. He received his Ph.D. in engineering from UTAR, Malaysia. Between 2009 and 2012, he served as an R&D engineer at several multinational companies, including Agilent Technologies (now known as Keysight) in Malaysia. His research interests include cryptography engineering, GPU computing, numerical algorithms, the Internet of Things (IoT) and energy harvesting.
Stochastic hydrogeology, which emerged as a research area in the late 1970s, involves the study of the effects of subsurface geological variability on flow and transport processes and the interpretation of observations using existing theories. What has been lacking, however, is a rational framework for modeling the impact of the processes that take place in heterogeneous media and for incorporating it into predictions and decision-making. This book provides this important framework. It covers the fundamental and practical aspects of stochastic hydrogeology, coupling theoretical aspects with examples, case studies, and guidelines for applications.
This book constitutes a collection of extended versions of papers presented at the 23rd IFIP TC7 Conference on System Modeling and Optimization, which was held in Cracow, Poland, on July 23-27, 2007. It contains 7 plenary and 22 contributed articles, the latter selected via a peer-reviewing process. Most of the papers are concerned with optimization and optimal control. Some of them deal with practical issues, e.g., performance-based design for seismic risk reduction, or evolutionary optimization in structural engineering. Many contributions concern optimization of infinite-dimensional systems, ranging from a general overview of variational analysis, through optimization and sensitivity analysis of PDE systems, to optimal control of neutral systems. A significant group of papers is devoted to shape analysis and optimization. Sufficient optimality conditions for ODE problems, and stochastic control methods applied to mathematical finance, are also investigated. The remaining papers are on mathematical programming, modeling, and information technology. The conference was the 23rd event in the series of such meetings, organized biennially under the auspices of the Seventh Technical Committee "Systems Modeling and Optimization" of the International Federation for Information Processing (IFIP TC7).
This book focuses on lattice-based cryptosystems, widely considered to be among the most promising post-quantum cryptosystems, and provides fundamental insights into how to construct provably secure cryptosystems from hard lattice problems. The concept of provable security is used to inform the choice of lattice tool for designing cryptosystems, including public-key encryption, identity-based encryption, attribute-based encryption, key exchange and digital signatures. Given its depth of coverage, the book especially appeals to graduate students and young researchers who plan to enter this research area.
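As a toy, intentionally insecure illustration of building encryption from a hard lattice problem (a Regev-style learning-with-errors scheme; this is a generic sketch, not one of the book's constructions, and the parameters are far too small for real security):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, q = 8, 20, 97            # toy LWE parameters (insecure, for illustration)

# Key generation: secret s, public key (A, b = A*s + e mod q) with small error e.
s = rng.integers(0, q, n)
A = rng.integers(0, q, (m, n))
e = rng.integers(-1, 2, m)     # small errors in {-1, 0, 1}
b = (A @ s + e) % q

def encrypt(bit):
    """Encrypt one bit by combining a random subset of the LWE samples."""
    r = rng.integers(0, 2, m)
    c1 = (r @ A) % q
    c2 = (int(r @ b) + bit * (q // 2)) % q
    return c1, c2

def decrypt(c1, c2):
    """Recover the bit: the noise term r*e is small, so v is near 0 or q/2."""
    v = (c2 - int(c1 @ s)) % q
    return 1 if min(v, q - v) > q // 4 else 0

for bit in (0, 1):
    assert decrypt(*encrypt(bit)) == bit
print("toy LWE encryption round-trip OK")
```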
An Image Processing Tour of College Mathematics aims to provide meaningful context for reviewing key topics of the college mathematics curriculum, to help students gain confidence in using concepts and techniques of applied mathematics, to increase student awareness of recent developments in mathematical sciences, and to help students prepare for graduate studies. The topics covered include a library of elementary functions, basic concepts of descriptive statistics, probability distributions of functions of random variables, definitions and concepts behind first- and second-order derivatives, most concepts and techniques of traditional linear algebra courses, an introduction to Fourier analysis, and a variety of discrete wavelet transforms - all of that in the context of digital image processing. Features:
* Pre-calculus material and basic concepts of descriptive statistics are reviewed in the context of image processing in the spatial domain.
* Key concepts of linear algebra are reviewed both in the context of fundamental operations with digital images and in the more advanced context of discrete wavelet transforms.
* Some of the key concepts of probability theory are reviewed in the context of image equalization and histogram matching.
* The convolution operation is introduced painlessly and naturally in the context of naive filtering for denoising and is subsequently used for edge detection and image restoration.
* An accessible elementary introduction to Fourier analysis is provided in the context of image restoration.
* Discrete wavelet transforms are introduced in the context of image compression, and the readers become more aware of some of the recent developments in applied mathematics.
This text helps students of mathematics ease their way into mastering the basics of scientific computer programming.
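The convolution operation highlighted in the features can be sketched in a few lines; the tiny synthetic image, the Sobel kernel, and the function below are generic illustrations rather than material from the book:

```python
import numpy as np

def convolve2d(image, kernel):
    """Naive 'valid' 2D convolution, the operation used for filtering and edge detection."""
    kh, kw = kernel.shape
    h, w = image.shape
    flipped = kernel[::-1, ::-1]          # convolution flips the kernel
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * flipped)
    return out

# A tiny synthetic 'image' with a vertical edge, and a Sobel kernel to detect it.
img = np.array([[0, 0, 0, 10, 10, 10]] * 6, dtype=float)
sobel_x = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
print(convolve2d(img, sobel_x))   # large magnitudes in the columns near the jump
```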
This book provides a clear understanding of the fundamentals of matrices and determinants, from an introduction through to their real-life applications. The topic is considered one of the most important mathematical tools used in mathematical modelling. Matrix and Determinant: Fundamentals and Applications is a short, self-explanatory and well-organized book that provides an introduction to the basics along with well-explained applications. The theories in the book are covered along with their definitions, notations, and examples. Illustrative examples are listed at the end of each covered topic, along with unsolved comprehension questions and real-life applications. This book provides a concise understanding of matrices and determinants which will be useful to students as well as researchers.
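As a small generic illustration of the determinant (not an excerpt from the book), the sketch below computes determinants by cofactor expansion along the first row:

```python
def det(m):
    """Determinant of a square matrix (list of lists) by cofactor expansion along row 0."""
    n = len(m)
    if n == 1:
        return m[0][0]
    total = 0
    for j in range(n):
        minor = [row[:j] + row[j + 1:] for row in m[1:]]   # remove row 0 and column j
        total += (-1) ** j * m[0][j] * det(minor)
    return total

a = [[2, 1], [5, 3]]
b = [[1, 2, 3], [4, 5, 6], [7, 8, 10]]
print(det(a))   # 2*3 - 1*5 = 1
print(det(b))   # 1*(50-48) - 2*(40-42) + 3*(32-35) = -3
```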
Optimization is the act of obtaining the "best" result under given circumstances. In the design, construction, and maintenance of any engineering system, engineers must make technological and managerial decisions to minimize either the effort or cost required or to maximize benefits. There is no single method available for solving all optimization problems efficiently; several optimization methods have been developed for different types of problems. The optimum-seeking methods are mathematical programming techniques (specifically, nonlinear programming techniques). Nonlinear Optimization: Models and Applications presents the concepts in several ways to foster understanding.
* Geometric interpretation: used to reinforce the concepts and to foster understanding of the mathematical procedures. The student sees that many problems can be analyzed, and approximate solutions found, before analytical solution techniques are applied.
* Numerical approximations: early on, the student is exposed to numerical techniques. These numerical procedures are algorithmic and iterative. Worksheets are provided in Excel, MATLAB (R), and Maple (TM) to facilitate the procedure.
* Algorithms: all algorithms are provided in a step-by-step format. Examples follow each summary to illustrate its use and application.
Nonlinear Optimization: Models and Applications:
* Emphasizes process and interpretation throughout
* Presents a general classification of optimization problems
* Addresses situations that lead to models illustrating many types of optimization problems
* Emphasizes model formulations
* Addresses a special class of problems that can be solved using only elementary calculus
* Emphasizes model solution and model sensitivity analysis
About the author: William P. Fox is an emeritus professor in the Department of Defense Analysis at the Naval Postgraduate School. He received his Ph.D. at Clemson University and has taught at the United States Military Academy and at Francis Marion University, where he was the chair of mathematics. He has written many publications, including over 20 books and over 150 journal articles. Currently, he is an adjunct professor in the Department of Mathematics at the College of William and Mary. He is the emeritus director of both the High School Mathematical Contest in Modeling and the Mathematical Contest in Modeling.
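The iterative numerical procedures described above can be illustrated with a minimal gradient-descent sketch; the objective function, starting point, and step size are illustrative choices, not examples from the book:

```python
def grad_descent(grad, x0, step=0.1, tol=1e-8, max_iter=10_000):
    """Minimize a differentiable function of one variable given its derivative."""
    x = x0
    for _ in range(max_iter):
        x_new = x - step * grad(x)     # move against the gradient
        if abs(x_new - x) < tol:       # stop when the iterates settle down
            break
        x = x_new
    return x

# f(x) = (x - 3)^2 + 1 has its minimum at x = 3, with f'(x) = 2*(x - 3).
x_star = grad_descent(lambda x: 2 * (x - 3), x0=0.0)
print(round(x_star, 4))   # 3.0
```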
Discrete Mathematics for Computer Science: An Example-Based Introduction is intended for a first- or second-year discrete mathematics course for computer science majors. It covers many important mathematical topics essential for future computer science majors, such as algorithms, number representations, logic, set theory, Boolean algebra, functions, combinatorics, algorithmic complexity, graphs, and trees. Features:
* Designed to be especially useful for courses at the community-college level
* Ideal as a first- or second-year textbook for computer science majors, or as a general introduction to discrete mathematics
* Written to be accessible to those with a limited mathematics background, and to aid with the transition to abstract thinking
* Filled with over 200 worked examples, boxed for easy reference, and over 200 practice problems with answers
* Contains approximately 40 simple algorithms to aid students in becoming proficient with algorithm control structures and pseudocode
* Includes an appendix on basic circuit design which provides a real-world motivational example for computer science majors by drawing on multiple topics covered in the book to design a circuit that adds two eight-digit binary numbers
Jon Pierre Fortney graduated from the University of Pennsylvania in 1996 with a BA in Mathematics and Actuarial Science and a BSE in Chemical Engineering. Prior to returning to graduate school, he worked as both an environmental engineer and as an actuarial analyst. He graduated from Arizona State University in 2008 with a PhD in Mathematics, specializing in Geometric Mechanics. Since 2012, he has worked at Zayed University in Dubai. This is his second mathematics textbook.
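The appendix's circuit idea of adding two eight-digit binary numbers can be mimicked in software, as in the hedged sketch below; the helper names are illustrative and not taken from the book:

```python
def full_adder(a, b, carry_in):
    """One full adder: returns (sum_bit, carry_out) using only logic operations."""
    s = a ^ b ^ carry_in
    carry_out = (a & b) | (carry_in & (a ^ b))
    return s, carry_out

def add_8bit(x: str, y: str) -> str:
    """Ripple-carry addition of two 8-digit binary strings (most significant bit first)."""
    bits_x = [int(c) for c in reversed(x)]
    bits_y = [int(c) for c in reversed(y)]
    carry, out = 0, []
    for a, b in zip(bits_x, bits_y):
        s, carry = full_adder(a, b, carry)
        out.append(str(s))
    out.append(str(carry))               # ninth bit: the final carry out
    return ''.join(reversed(out))

print(add_8bit("01011011", "00101110"))  # 010001001, i.e. 91 + 46 = 137
```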