Welcome to Loot.co.za!
This book reports on the latest scientific achievements in robot kinematics by prominent researchers participating in the 18th International Symposium on Advances in Robot Kinematics (ARK 2022), organized at the University of the Basque Country, Bilbao, Spain. It is of interest to researchers wanting to know more about the latest topics and methods in the kinematics, control and design of robotic systems. The book brings together 53 peer-reviewed papers. These cover the full range of robotic systems, including serial, parallel and flexible mechanisms and cable-driven manipulators, and tackle problems such as: kinematic analysis of robots, robot modelling and simulation, theories and methods in kinematics, singularity analysis, kinematic problems in parallel robots, redundant robots, cable robots, kinematics in biological systems, flexible parallel manipulators, humanoid robots and humanoid subsystems.
In the past few years, the differential quadrature method has been applied extensively in engineering. This book, aimed primarily at practising engineers, scientists and graduate students, gives a systematic description of the mathematical fundamentals of differential quadrature and its detailed implementation in solving Helmholtz problems and problems of flow, structure and vibration. Differential quadrature provides a global approach to numerical discretization, which approximates the derivatives by a linear weighted sum of all the functional values in the whole domain. Following the analysis of function approximation and the analysis of a linear vector space, it is shown in the book that the weighting coefficients of the polynomial-based, Fourier expansion-based, and exponential-based differential quadrature methods can be computed explicitly. It is also demonstrated that the polynomial-based differential quadrature method is equivalent to the highest-order finite difference scheme. Furthermore, the relationship between differential quadrature and conventional spectral collocation is analysed. The book contains material on: linear vector space analysis and the approximation of a function; polynomial-, Fourier expansion- and exponential-based differential quadrature; differential quadrature weighting coefficient matrices; solution of differential quadrature-resultant equations; the solution of incompressible Navier-Stokes and Helmholtz equations; structural and vibrational analysis applications; and generalized integral quadrature and its application in the solution of boundary layer equations. Three FORTRAN programs, for simulation of driven cavity flow, vibration analysis of plates and Helmholtz eigenvalue problems respectively, are appended. These sample programs should give the reader a better understanding of differential quadrature and can easily be modified to solve the reader's own engineering problems.
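The weighted-sum idea is concrete enough to sketch. Below is a hypothetical Python illustration (the book's own sample programs are in FORTRAN) of first-derivative weights for polynomial-based differential quadrature, computed from the explicit Lagrange-polynomial formula a_ij = M'(x_i) / ((x_i - x_j) M'(x_j)); the grid and test function are invented for the example:

```python
import numpy as np

def dq_weights(x):
    """First-derivative weighting coefficients of polynomial-based
    differential quadrature on the grid points x."""
    n = len(x)
    # M'(x_i) = product over k != i of (x_i - x_k)
    M = np.array([np.prod([x[i] - x[k] for k in range(n) if k != i])
                  for i in range(n)])
    a = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i != j:
                a[i, j] = M[i] / ((x[i] - x[j]) * M[j])
        a[i, i] = -a[i].sum()  # rows sum to zero: derivative of a constant is 0
    return a

# Approximate d/dx of f(x) = x**3 on 5 grid points; with enough points the
# polynomial-based scheme differentiates polynomials exactly.
x = np.linspace(0.0, 1.0, 5)
a = dq_weights(x)
df = a @ x**3            # linear weighted sum of all functional values
print(np.allclose(df, 3 * x**2))   # True
```

Each row of `a` is the set of weights that turns the vector of all function values into the derivative at one grid point, which is exactly the "global" character of the method the blurb describes.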
This book describes the main concepts of and recent advances in the base forces element method (BFEM). It combines theories, methods, models, numerical results, and an analysis of the BFEM. Each chapter starts with an introduction and derivation of a new mathematical model for the proposed method. Subsequently, the methods are described and numerical examples demonstrating the significance of the proposed method are presented. The closing chapter summarizes the performance and features of the BFEM and describes the prospects for its application. The book is intended for engineers, scientists and graduate students in applied mechanics and applied mathematics, and for all readers interested in numerical computations and simulations.
This book provides detailed descriptions of big data solutions for activity detection and forecasting of very large numbers of moving entities spread across large geographical areas. It presents state-of-the-art methods for processing, managing, detecting and predicting trajectories and important events related to moving entities, together with advanced visual analytics methods, over multiple heterogeneous, voluminous, fluctuating and noisy data streams from moving entities, correlating them with data from archived data sources expressing e.g. entities' characteristics, geographical information, mobility patterns, mobility regulations and intentional data. The book is divided into six parts: Part I discusses the motivation and background of mobility forecasting supported by trajectory-oriented analytics, and includes specific problems and challenges in the aviation (air-traffic management) and the maritime domains. Part II focuses on big data quality assessment and processing, and presents novel technologies suitable for mobility analytics components. Next, Part III describes solutions toward processing and managing big spatio-temporal data, particularly enriching data streams and integrating streamed and archival data to provide coherent views of mobility, and storing of integrated mobility data in large distributed knowledge graphs for efficient query-answering. Part IV focuses on mobility analytics methods exploiting (online) processed, synopsized and enriched data streams as well as (offline) integrated, archived mobility data, and highlights future location and trajectory prediction methods, distinguishing between short-term and more challenging long-term predictions. Part V examines how methods addressing data management, data processing and mobility analytics are integrated in big data architectures with distinctive characteristics compared to other known big data paradigmatic architectures. 
Lastly, Part VI covers important ethical issues that research on mobility analytics should address. Providing novel approaches and methodologies related to mobility detection and forecasting needs based on big data exploration, processing, storage, and analysis, this book will appeal to computer scientists and stakeholders in various application domains.
This volume assesses approaches to the construction of computer vision systems. It shows that there is a spectrum of approaches with different degrees of maturity and robustness. The useful exploitation of computer vision in industry and elsewhere and the development of the discipline itself depend on understanding the way these approaches influence one another. The chief topic discussed is autonomy. True autonomy may not be achievable in machines in the near future, and the workshop concluded that it may be more desirable - and is certainly more pragmatic - to leave a person in the processing loop. The second conclusion of the workshop concerns the manner in which a system is designed for an application. It was agreed that designers should first specify the required functionality, then identify the knowledge appropriate to that task, and finally choose the appropriate techniques and algorithms. The third conclusion concerns the methodologies employed in developing vision systems: craft, engineering, and science are mutually relevant and contribute to one another. The contributors place heavy emphasis on providing the reader with concrete examples of operational systems. The book is based on a workshop held as part of the activities of an ESPRIT Basic Research Action.
The book is centered around the research areas of combinatorics, special functions, and computer algebra. What these research fields share is that many of their outstanding results not only have applications in mathematics but also in other disciplines, such as computer science, physics, and chemistry. A particular charm of these areas is how they interact and influence one another. For instance, combinatorial and special-function techniques have motivated the development of new symbolic algorithms. In particular, first proofs of challenging problems in combinatorics and special functions were derived by making essential use of computer algebra. This book addresses these interdisciplinary aspects. Algorithmic aspects are emphasized and the corresponding software packages for concrete problem solving are introduced. Readers will range from graduate students and researchers to practitioners who are interested in solving concrete problems within mathematics and other research disciplines.
Statistics for Biomedical Engineers and Scientists: How to Analyze and Visualize Data provides an intuitive understanding of the concepts of basic statistics, with a focus on solving biomedical problems. Readers will learn how to understand the fundamental concepts of descriptive and inferential statistics, analyze data and choose an appropriate hypothesis test to answer a given question, compute numerical statistical measures and perform hypothesis tests 'by hand', and visualize data and perform statistical analysis using MATLAB. Practical activities and exercises are provided, making this an ideal resource for students in biomedical engineering and the biomedical sciences who are in a course on basic statistics.
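The "by hand" computations the blurb mentions are short enough to sketch. The following is a hypothetical example, written in Python rather than the book's MATLAB, of a one-sample t statistic computed directly from its definition; the data and hypothesized mean are invented for the illustration:

```python
import numpy as np

# Invented sample and null-hypothesis mean (not from the book)
data = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
mu0 = 2.5

n = len(data)
xbar = data.mean()
s = data.std(ddof=1)                 # sample standard deviation
t = (xbar - mu0) / (s / np.sqrt(n))  # one-sample t statistic "by hand"
print(round(t, 4))                   # 0.7071
```

The resulting t value would then be compared against a t distribution with n - 1 degrees of freedom to decide the hypothesis test.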
This textbook introduces the concepts and tools that biomedical and chemical engineering students need to know in order to translate engineering problems into a numerical representation using scientific fundamentals. Modeling concepts focus on problems that are directly related to biomedical and chemical engineering. A variety of computational tools are presented, including MATLAB, Excel, Mathcad, and COMSOL, and a brief introduction to each tool is accompanied by multiple computer lab experiences. The numerical methods covered include basic linear algebra, basic statistics, and traditional methods such as Newton's method, Euler integration, and trapezoidal integration. The book presents the reader with numerous examples and worked problems, and practice problems are included at the end of each chapter.
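As a minimal sketch of one of the traditional methods listed, here is a composite trapezoidal rule in Python (the example integrand and tolerance are assumptions for the illustration, not material from the book):

```python
def trapezoid(f, a, b, n):
    """Composite trapezoidal rule: approximate the integral of f on [a, b]
    using n equal subintervals. Error decreases as O(h**2) with h = (b-a)/n."""
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n))
    return h * s

# Integral of x**2 on [0, 1] is exactly 1/3
approx = trapezoid(lambda x: x * x, 0.0, 1.0, 1000)
print(abs(approx - 1/3) < 1e-6)  # True
```

Doubling n roughly quarters the error, which is the standard way such methods are verified in a computer lab exercise.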
As the first book of a three-part series, this book is offered as a tribute to pioneers in vision, such as Bela Julesz, David Marr, King-Sun Fu, Ulf Grenander, and David Mumford. The authors hope to provide a foundation and, perhaps more importantly, further inspiration for continued research in vision. This book covers David Marr's paradigm and various underlying statistical models for vision. The mathematical framework herein integrates three regimes of models (low-, mid-, and high-entropy regimes) and provides a foundation for research in visual coding, recognition, and cognition. Concepts are first explained for understanding and then supported by findings in psychology and neuroscience, after which they are established by statistical models and associated learning and inference algorithms. A reader will gain a unified, cross-disciplinary view of research in vision and will accrue knowledge spanning from psychology to neuroscience to statistics.
This book discusses the formalization of mathematical theories centering on complex analysis and matrix theory, covering topics such as algebraic systems, complex numbers, gauge integration, the Fourier transformation and its discrete counterpart, matrices and their transformation, inner product spaces, and function matrices. The formalization is performed using the interactive theorem prover HOL4, chiefly developed at the University of Cambridge. Many of the developments presented are now integral parts of the library of this prover. As mathematical developments continue to gain in complexity, sometimes demanding proofs of enormous sizes, formalization has proven to be invaluable in terms of obtaining real confidence in their correctness. This book provides a basis for the computer-aided verification of engineering systems constructed using the principles of complex analysis and matrix theory, as well as building blocks for the formalization of more involved mathematical theories.
During 14-18 October 1991, we had the pleasure of attending a most interesting Conference on New Developments in Partial Differential Equations and Applications to Mathematical Physics in Ferrara. The Conference was organized within the Scientific Program celebrating the six hundredth anniversary of the University of Ferrara and, after the many stimulating lectures and fruitful discussions, we may certainly conclude, together with the numerous participants, that it represented a big success. The Conference would not have been possible without the financial support of several sources. In this respect, we are particularly grateful to the Comitato Organizzatore del VI Centenario, the University of Ferrara in the Office of the Rector, Professor Antonio Rossi, the Consiglio Nazionale delle Ricerche, and the Department of Mathematics of the University of Ferrara. We should like to thank all of the participants and the speakers, and we are especially grateful to those who have contributed to the present volume. G. Buttazzo, University of Pisa; G.P. Galdi, University of Ferrara; L. Zanghirati, University of Ferrara. Ferrara, May 11th, 1992.
The developments in mesh generation are usually driven by the needs of new applications and/or novel algorithms. The last decade has seen a renewed interest in mesh generation and adaptation by the computational engineering community, due to the challenges introduced by complex industrial problems. Another common challenge is the need to handle complex geometries. Nowadays, it is becoming obvious that geometry should be persistent throughout the whole simulation process. Several methodologies that can carry the geometric information throughout the simulation stage are available, but due to the novelty of these methods, the generation of suitable meshes for these techniques is still the main obstacle for the industrial uptake of this technology. This book will cover different aspects of mesh generation and adaptation, with particular emphasis on cutting-edge mesh generation techniques for advanced discretisation methods and complex geometries.
Artificial intelligence (AI) has become pervasive in most areas of research and applications. While computation can significantly reduce mental effort for complex problem solving, effective computer algorithms allow continuous improvement of AI tools to handle complexity, in both time and memory requirements, for machine learning on large datasets. Meanwhile, data science is an evolving scientific discipline that strives to overcome the hindrance of traditional skills that are too limited to enable scientific discovery when leveraging research outcomes. Solutions to many problems in medicine and life science, which cannot be answered by these conventional approaches, are urgently needed for society. This edited book reports recent advances in the complementary domains of AI, computation, and data science with applications in medicine and life science. The benefits to the reader are manifold, as researchers from similar or different fields can become aware of advanced developments and novel applications that can be useful for either immediate implementation or future scientific pursuit. Features:
- Considers recent advances in AI, computation, and data science for solving complex problems in medicine, physiology, biology, chemistry, and biochemistry
- Provides recent developments in three evolving key areas and their complementary combinations: AI, computation, and data science
- Reports on applications in medicine and physiology, including cancer, neuroscience, and digital pathology
- Examines applications in life science, including systems biology, biochemistry, and even food technology
This unique book, representing research from a team of international contributors, has real utility not only in academia for those in the medical and life sciences communities, but also for a much wider readership from industry, science, and other areas of technology and education.
This book introduces iterative learning control (ILC) and its applications to new classes of equations, such as fractional order equations, impulsive equations, delay equations, and multi-agent systems, which have not been presented in other books on the conventional field. ILC is an important branch of intelligent control, which is applicable to robotics, process control, and biological systems. Fractional versions of ILC updating laws and formation control are presented in this book. ILC design for impulsive equations and inclusions is also established. The broad variety of achieved results, with rigorous proofs and many numerical examples, makes this book unique. It is useful for graduate students studying ILC involving fractional derivatives and impulsive conditions as well as for researchers working in pure and applied mathematics, physics, mechanics, engineering, biology, and related disciplines.
This textbook grew out of notes for the ECE143 Programming for Data Analysis class that the author has been teaching at the University of California, San Diego, which is a requirement for both graduate and undergraduate degrees in Machine Learning and Data Science. This book is ideal for readers with some Python programming experience. The book covers key language concepts that must be understood to program effectively, especially for data analysis applications. Certain low-level language features are discussed in detail, especially Python memory management and data structures. Using Python effectively means taking advantage of its vast ecosystem. The book discusses Python package management and how to use third-party modules as well as how to structure your own Python modules. The section on object-oriented programming explains features of the language that facilitate common programming patterns. After developing the key Python language features, the book moves on to third-party modules that are foundational for effective data analysis, starting with Numpy. The book develops key Numpy concepts and discusses internal Numpy array data structures and memory usage. Then, the author moves on to Pandas and details its many features for data processing and alignment. Because strong visualizations are important for communicating data analysis, key modules such as Matplotlib are developed in detail, along with web-based options such as Bokeh, Holoviews, Altair, and Plotly. The text is sprinkled with many tricks of the trade that help avoid common pitfalls. The author explains the internal logic embodied in the Python language so that readers can get into the Python mindset and make better design choices in their code, which is especially helpful for newcomers to both Python and data analysis. To get the most out of this book, open a Python interpreter and type along with the many code samples.
Fibonacci Cubes have been an extremely popular area of research since the 1990s. This unique compendium features the state of research into Fibonacci Cubes. It expands the knowledge in graph theoretic and combinatorial properties of Fibonacci Cubes and their variants. By highlighting various approaches with numerous examples, it provides a fundamental source for further research in the field. This useful reference text surely benefits advanced students in computer science and mathematics and serves as an archival record of the current state of the field.
Reflecting more than 30 years of teaching experience in the field, this guide provides engineers with an introduction to statistics and its applicability to engineering. Examples cover a wide range of engineering applications, including both chemical engineering and semiconductors. Among the topics featured are: quality assurance and statistics, continuous variables, hypothesis testing, comparative experiments, acceptance sampling, the analysis of variance, and Taguchi methods and orthogonal arrays. Tables, references and an index round out this work.
This book mainly focuses on the widely distributed nature of computational tools, models, and methods, ultimately related to the current importance of computational machines as mediators of cognition. An entirely new eco-cognitive approach to computation is offered, to underline the question of the overwhelming cognitive domestication of ignorant entities, which is persistently at work in our current societies. Eco-cognitive computationalism does not aim at furnishing an ultimate and static definition of the concepts of information, cognition, and computation; instead, by respecting their historical and dynamical character, it intends to propose an intellectual framework that depicts how we can understand their forms of "emergence" and the modification of their meanings, also dealing with impressive unconventional non-digital cases. The new proposed perspective also leads to a clear description of the divergence between weak and strong levels of creative "abductive" hypothetical cognition: weak accomplishments are related to "locked abductive strategies", typical of computational machines, and deep creativity is instead related to "unlocked abductive strategies", which characterize human cognizers, who benefit from the so-called "eco-cognitive openness".
This volume explores the universal mathematical properties underlying big language data and possible reasons why such properties exist, revealing how we may be unconsciously mathematical in our language use. These properties are statistical and thus different from the linguistic universals that describe the variation of human languages, and they can only be identified over a large accumulation of usages. The book provides an overview of state-of-the-art findings on these statistical universals and reconsiders the nature of language accordingly, with Zipf's law as a well-known example. A further focus of the book lies in explaining the property of long memory, which was discovered and studied more recently by borrowing concepts from complex systems theory. The statistical universals may not only lie at the precursor of language system formation, but they also highlight the qualities of language that remain weak points in today's machine learning. In summary, this book provides an overview of language's global properties. It will be of interest to anyone engaged in fields related to language and computing or statistical analysis methods, with an emphasis on researchers and students in computational linguistics and natural language processing. While the book does apply mathematical concepts, all possible effort has been made to speak to a non-mathematical audience as well by communicating mathematical content intuitively, with concise examples taken from real texts.
In many practical situations, we are interested in statistics characterizing a population of objects: e.g., the mean height of people from a certain area. Most algorithms for estimating such statistics assume that the sample values are exact. In practice, sample values come from measurements, and measurements are never absolutely accurate. Sometimes, we know the exact probability distribution of the measurement inaccuracy, but often, we only know the upper bound on this inaccuracy. In this case, we have interval uncertainty: e.g. if the measured value is 1.0, and the inaccuracy is bounded by 0.1, then the actual (unknown) value of the quantity can be anywhere between 1.0 - 0.1 = 0.9 and 1.0 + 0.1 = 1.1. In other cases, the values are expert estimates, and we only have fuzzy information about the estimation inaccuracy. This book shows how to compute statistics under such interval and fuzzy uncertainty. The resulting methods are applied to computer science (optimal scheduling of different processors), to information technology (maintaining privacy), to computer engineering (design of computer chips), and to data processing in geosciences, radar imaging, and structural mechanics.
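For the simplest statistic, the sample mean, the interval computation can be sketched directly: the mean is monotone in each sample value, so its range is attained at the interval endpoints. This is a minimal illustration with invented numbers; harder statistics, such as the variance under interval uncertainty, need the specialized algorithms the book develops:

```python
def interval_mean(lowers, uppers):
    """Range of the sample mean when each value x_i is only known to lie
    in [lowers[i], uppers[i]]. The mean is monotone in each argument, so
    its smallest/largest values come from taking all lower/upper bounds."""
    n = len(lowers)
    return sum(lowers) / n, sum(uppers) / n

# Measured values 1.0, 2.0, 3.0, each accurate to within 0.1:
lo, hi = interval_mean([0.9, 1.9, 2.9], [1.1, 2.1, 3.1])
print(round(lo, 6), round(hi, 6))  # 1.9 2.1
```

So the (unknown) true mean is guaranteed to lie in [1.9, 2.1], which is the kind of guaranteed enclosure interval methods provide.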
This book focuses on Krylov subspace methods for solving linear systems, which were named among the top 10 algorithms of the twentieth century, alongside the Fast Fourier Transform and Quicksort (SIAM News, 2000). Theoretical aspects of Krylov subspace methods developed in the twentieth century are explained and derived in a concise and unified way. Furthermore, some Krylov subspace methods of the twenty-first century are described in detail, such as the COCR method for complex symmetric linear systems, and the BiCR method and the IDR(s) method for non-Hermitian linear systems. The strength of the book lies not only in describing the principles of Krylov subspace methods but also in providing a variety of applications: shifted linear systems and matrix functions from the theoretical point of view, as well as partial differential equations, computational physics, computational particle physics, optimization, and machine learning from a practical point of view. The book is self-contained in that the necessary basic concepts of numerical linear algebra are explained, making it suitable for senior undergraduates, postgraduates, and researchers in mathematics, engineering, and computational science. Readers will find it a useful resource for understanding the principles and properties of Krylov subspace methods and correctly using those methods for solving problems in the future.
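A minimal conjugate gradient routine, the archetypal Krylov subspace method for symmetric positive-definite systems, gives a flavour of the family (a generic sketch, not code from the book; the COCR, BiCR, and IDR(s) methods it covers target harder matrix classes):

```python
import numpy as np

def cg(A, b, tol=1e-10, maxiter=100):
    """Conjugate gradient for a symmetric positive-definite system Ax = b.
    Iterates build the solution inside a growing Krylov subspace
    span{b, Ab, A^2 b, ...} using only matrix-vector products."""
    x = np.zeros_like(b)
    r = b - A @ x            # residual
    p = r.copy()             # search direction
    rs = r @ r
    for _ in range(maxiter):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p   # A-conjugate update of the direction
        rs = rs_new
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])   # symmetric positive definite
b = np.array([1.0, 2.0])
x = cg(A, b)
print(np.allclose(A @ x, b))   # True
```

In exact arithmetic CG converges in at most n steps for an n-by-n system, but in practice it is used as an iterative method stopped at a residual tolerance, which is what makes Krylov methods attractive for very large sparse systems.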
This is the third book in a series on Computational Methods in Earthquake Engineering. The purpose of this volume is to bring together the scientific communities of Computational Mechanics and Structural Dynamics, offering a wide coverage of timely issues on contemporary Earthquake Engineering. This volume will facilitate the exchange of ideas in topics of mutual interest and can serve as a platform for establishing links between research groups with complementary activities. The computational aspects are emphasized in order to address difficult engineering problems of great social and economic importance.
From the reviews: "A unique feature of this book is the nice blend of engineering vividness and mathematical rigour. [...] The authors are to be congratulated for their valuable contribution to the literature in the area of theoretical thermoelasticity and vibration of plates." Journal of Sound and Vibration
This book covers methods of Mathematical Morphology to model and simulate random sets and functions (scalar and multivariate). The introduced models concern many physical situations in heterogeneous media, where a probabilistic approach is required, like fracture statistics of materials, scaling up of permeability in porous media, electron microscopy images (including multispectral images), rough surfaces, multi-component composites, biological tissues, textures for image coding and synthesis. The common feature of these random structures is their domain of definition in n dimensions, requiring more general models than standard Stochastic Processes. The main topics of the book cover an introduction to the theory of random sets, random space tessellations, Boolean random sets and functions, space-time random sets and functions (Dead Leaves, Sequential Alternate models, Reaction-Diffusion), prediction of effective properties of random media, and probabilistic fracture theories.
The focus of this book is on providing students with insights into geometry that can help them understand deep learning from a unified perspective. Rather than describing deep learning as an implementation technique, as is usually the case in many existing deep learning books, here, deep learning is explained as the ultimate form of signal processing techniques that can be imagined. To support this claim, an overview of classical kernel machine learning approaches is presented, and their advantages and limitations are explained. Following a detailed explanation of the basic building blocks of deep neural networks from a biological and algorithmic point of view, the latest tools such as attention, normalization, Transformer, BERT, GPT-3, and others are described. Here, too, the focus is on the fact that in these heuristic approaches, there is an important, beautiful geometric structure behind the intuition that enables a systematic understanding. A unified geometric analysis to understand the working mechanism of deep learning from high-dimensional geometry is offered. Then, different forms of generative models like GAN, VAE, normalizing flows, optimal transport, and so on are described from a unified geometric perspective, showing that they actually come from statistical distance-minimization problems. Because this book contains up-to-date information from both a practical and theoretical point of view, it can be used as an advanced deep learning textbook in universities or as a reference source for researchers interested in acquiring the latest deep learning algorithms and their underlying principles. In addition, the book has been prepared for a codeshare course for both engineering and mathematics students, thus much of the content is interdisciplinary and will appeal to students from both disciplines.