In the last quarter century, delamination has come to mean more than just a failure in adhesion between layers of bonded composite plies that might affect their load-bearing capacity. Ever-increasing computer power means that we can now detect and analyze delamination between, for example, cell walls in solid wood. This fast-moving and critically important field of study is covered in a book that provides everyone from manufacturers to research scientists with the state of the art in wood delamination studies. Divided into three sections, the book first details the general aspects of the subject, from basic information, including terminology, to the theoretical basis for the evaluation of delamination. A settled terminology is a first key goal of the book, as the terms that describe delamination in wood and wood-based composites are numerous and often confusing. The second section examines different and highly specialized methods for delamination detection, such as confocal laser scanning microscopy, light microscopy, scanning electron microscopy and ultrasonics. Ways in which non-destructive evaluation (NDE) can be employed to detect and locate defects are also covered. The book's final section focuses on the practical aspects of this defect in a wide range of wood products, covering the spectrum from trees, logs, laminated panels and glued laminated timbers to parquet floors. Intended as a primary reference, the book covers everything from the microscopic, anatomical level of delamination within solid wood sections to the interface of wood and its surface coatings. It offers the perspective of industry as well as of the laboratory, making it a highly practical sourcebook for wood engineers working in manufacturing and a comprehensively referenced text for materials scientists wrestling with the underlying theory.
Infrared thermography is a measurement technique that enables non-intrusive measurement of surface temperatures. One of its most interesting features is the ability to measure a full two-dimensional map of the surface temperature, and for this reason it has been widely used as a flow visualization technique. Since the temperature measurements can be extremely accurate, it is also possible, by using a heat flux sensor, to measure distributions of the convective heat transfer coefficient on a surface, making the technique de facto quantitative. Starting from the basic theory of infrared thermography and heat flux sensors, this book guides both the experienced researcher and the young student in the correct application of this powerful technique to various practical problems. A significant number of examples and applications are examined in detail.
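To make the quantitative step concrete, here is a minimal sketch, in Python, of how a heat-transfer-coefficient map might be recovered from an IR temperature frame via Newton's law of cooling. The heated-thin-foil setup, heat-flux values and temperatures are hypothetical illustrations, not taken from the book:

```python
import numpy as np

# Hypothetical heated-thin-foil experiment: a known Joule heat flux is
# imposed on the surface, an IR camera measures the wall temperature map,
# and Newton's law of cooling gives h pixel by pixel.
q_joule = 1200.0     # imposed heat flux, W/m^2 (assumed)
q_losses = 50.0      # estimated radiative/conductive losses, W/m^2 (assumed)
T_ref = 293.15       # reference flow temperature, K (assumed)

# Stand-in for a 240 x 320 IR frame: wall temperatures 5-15 K above T_ref.
T_wall = T_ref + 5.0 + 10.0 * np.random.rand(240, 320)

h_map = (q_joule - q_losses) / (T_wall - T_ref)   # W/(m^2 K), per pixel
print(f"mean h = {h_map.mean():.1f} W/(m^2 K)")
```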
This book presents recent advances and developments in control, automation, robotics, and measuring techniques, with contributions from top experts in these fields focused on both theory and industrial practice. The individual chapters offer a deep analysis of a specific technical problem, generally followed by numerical analysis, simulation, and the results of an implementation solving a real-world problem. The theoretical results, practical solutions and guidelines presented will be useful both for researchers working in the engineering sciences and for practitioners solving industrial problems.
The book reviews methods for the analysis of astronomical datasets, particularly emphasizing very large databases arising from both existing and forthcoming projects, as well as current large-scale computer simulation studies. Leading experts give overviews of cutting-edge methods applicable in the area of astronomical data mining.
The joint NASA-ESA Cassini-Huygens mission promises to return four (and possibly more) years of unparalleled scientific data from the solar system's most exotic planet, the ringed gas giant Saturn. Larger than Galileo and with a much greater communication bandwidth, Cassini can accomplish in a single flyby what Galileo returned in a series of passes. Cassini explores the Saturn environment in three dimensions, using gravity assists to climb out of the equatorial plane to look down on the rings from above, to image the aurora and to study polar magnetospheric processes such as field-aligned currents. Since the radiation belt particle fluxes are much more benign than those at Jupiter, Cassini can more safely explore the inner regions of the magnetosphere. The spacecraft approaches the planet more closely than Galileo could, and explores the inner moons and the rings much more thoroughly than was possible at Jupiter. This book is the second volume in a three-volume set describing the Cassini/Huygens mission. It covers the in situ investigations on the Cassini orbiter: the plasma spectrometer, ion and neutral mass spectrometer, energetic charged and neutral particle spectrometer, magnetometer, radio and plasma wave spectrometer, and the cosmic dust analyzer. The book is of interest to all potential users of the Cassini-Huygens data, to those who wish to learn about the planned scientific return from the mission, and to those curious about the processes occurring on this most fascinating planet. A third volume describes the remote sensing investigations on the orbiter.
The field of large-scale dimensional metrology (LSM) deals with objects that have linear dimensions ranging from tens to hundreds of meters. It has recently attracted a great deal of interest in many areas of production, including the automotive, railway, and shipbuilding sectors. Distributed Large-Scale Dimensional Metrology introduces a new paradigm in this field that reverses the classical metrological approach: measuring systems that are portable and can easily be moved around the measured object, which is preferable to moving the object itself. The book combines the concepts of distributed systems and large-scale metrology at the application level, focusing on the latest insights and challenges of this new generation of systems from the perspective of their designers and developers. The main topics are: coverage of the measuring area, sensor calibration, on-line diagnostics, probe management, and analysis of metrological performance. The general description of each topic is enriched by specific examples concerning the use of commercially available systems or the development of new prototypes. This will be particularly useful for professional practitioners such as quality engineers, manufacturing and development engineers, and procurement specialists, but the book also holds a wealth of information for interested academics.
This book gives a detailed review of ground-based aerosol optical depth measurement, with emphasis on the calibration issue. The review is written in chronological sequence to give a better understanding of how classical Langley calibration has evolved from the past to the present. It not only compiles the existing calibration methods but also presents a novel calibration algorithm for Langley sun-photometry over low-altitude sites, a practice conventionally carried out at high observatory stations. The proposed algorithm avoids travelling to high altitudes for frequent calibration, which is difficult both logistically and financially. The authors address the problem by combining a clear-sky detection model with a statistical filter to strictly imitate the ideal clear-sky conditions of high altitudes for measurements taken over low altitudes. In this way, possible temporal atmospheric drifts, abundant aerosol loadings and short-interval cloud transits are properly constrained. This finding brings practicality and versatility to ground-based measurement of aerosol optical depth, nowadays an important climate variable in many atmospheric studies. Finally, the book introduces a new calibration technique for the study and measurement of aerosol monitoring, with emphasis on aerosol optical depth, that should be of great benefit to researchers and scientists working in this area.
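For orientation, the classical Langley calibration that the book builds on fits the Beer-Lambert relation ln V = ln V0 - m*tau against airmass m; the intercept at m = 0 gives the extraterrestrial calibration constant V0. A minimal sketch with synthetic, made-up values follows; this illustrates the classical method only, not the authors' low-altitude algorithm:

```python
import numpy as np

# Classical Langley plot: ln V = ln V0 - m * tau, so a linear fit of the
# log signal against airmass m recovers tau (slope) and V0 (intercept).
tau_true, V0_true = 0.12, 1.50                 # hypothetical "true" values
m = np.linspace(2.0, 6.0, 40)                  # airmass range of a clear morning
V = V0_true * np.exp(-m * tau_true) * (1 + 0.005 * np.random.randn(m.size))

slope, intercept = np.polyfit(m, np.log(V), 1)
print(f"tau ~ {-slope:.3f}, calibration constant V0 ~ {np.exp(intercept):.3f}")
```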
Since publication of the previous, third edition of this book, sensor technologies have made a remarkable leap ahead. The sensitivity of sensors has become higher, their dimensions smaller, their selectivity better, and their prices lower. What has not changed are the fundamental principles of sensor design: they are still governed by the laws of Nature. Arguably one of the greatest geniuses who ever lived, Leonardo da Vinci had his own peculiar way of praying. It went like this: "Oh Lord, thanks for Thou don't violate Thy own laws." It is comforting indeed that the laws of Nature do not change with time; it is just that our appreciation of them becomes refined. Thus, this new edition examines the same good old laws of Nature that form the foundation for the design of various sensors. This has not changed much since the previous editions. Yet the sections that describe practical designs have been revised substantially: recent ideas and developments have been added, while obsolete and less important designs were dropped. This book is about devices commonly called sensors. The invention of the microprocessor has brought highly sophisticated instruments into our everyday life. Numerous computerized appliances, of which microprocessors are integral parts, wash clothes and prepare coffee, play music, guard homes, and control room temperature. Sensors are essential components in any device that uses a digital signal processor.
Nuclear reactions at energies near and below the Coulomb barrier have attracted much interest since unexpectedly large heavy-ion fusion cross sections were discovered around 1980. This book covers the more important experimental and theoretical aspects, such as sub-barrier fusion, sub- and near-barrier transfer, couplings of various reaction channels, neck formation, the threshold anomaly, spin distributions and fusion of polarized ions. The symposium also included a session devoted to mass spectrometry for fast reaction products.
This volume comprises a collection of invited papers presented at the international symposium "The Future of Muon Physics", May 7-9, 1991, at the Ruprecht-Karls-Universität in Heidelberg. In the inspiring atmosphere of the Internationales Wissenschaftsforum, researchers working at universities and international accelerator centers worldwide came together to review the present status of the field and to discuss future directions in muon physics. The muon, the charged lepton of the second generation, was first observed some sixty years ago. Despite many efforts since, the reason for its existence still remains a secret to the scientific community, challenging both theorists and experimentalists. In modern physics the muon plays a key role in many topics of research. Atomic physics with negative muons provides excellent tests of quantum electrodynamics and of the electroweak interaction, and probes nuclear properties. The purely leptonic, hydrogen-like muonium atom allows tests of fundamental laws of physics and the determination of precise values for fundamental constants. New measurements of the anomalous magnetic moment of the muon will probe the renormalizability of the weak interaction and will be sensitive to physics beyond the standard model. Muon decay is the most carefully studied weak process. Searches for rare decay modes of muons and for the conversion of muonium to antimuonium examine the lepton number conservation laws and new speculative theories. Nuclear muon capture addresses fundamental questions such as tests of the CPT theorem.
Sloshing causes liquid to fluctuate, making accurate level readings difficult to obtain in dynamic environments. The measurement system described here uses a single-tube capacitive sensor to obtain an instantaneous level reading of the fluid surface, thereby accurately determining the fluid quantity in the presence of slosh. A neural-network-based classification technique is applied to predict the actual quantity of fluid contained in a tank under sloshing conditions. In "A Neural Network Approach to Fluid Quantity Measurement in Dynamic Environments", the effects of temperature variations and contamination on the capacitive sensor are discussed, and the authors propose that these effects can also be eliminated with the proposed classification system. To examine the performance of the classification system, many field trials were carried out on a running vehicle at tank volumes ranging from 5 L to 50 L. The effectiveness of signal enhancement on the neural-network-based signal classification system is also investigated. Results obtained from the investigation are compared with traditionally used statistical averaging methods, and show that the neural-network-based measurement system can produce highly accurate fluid quantity measurements in a dynamic environment. Although a capacitive sensor was used here to demonstrate the measurement system, the methodology is valid for all types of electronic sensors. The approach can be applied to a wide range of fluid quantity measurement applications in the automotive, naval and aviation industries to produce accurate fluid level readings. Students, lecturers, and experts will find the description of this research on accurate fluid level measurement in dynamic environments useful.
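The neural-network classifier itself is not reproduced here; as a point of reference, the following sketch shows only the traditional statistical-averaging baseline that such a system is compared against, using a synthetic sloshing signal around an assumed true volume (all values are hypothetical):

```python
import numpy as np

# Synthetic capacitive level signal: the true volume plus a decaying
# slosh oscillation and sensor noise (all values are assumed).
rng = np.random.default_rng(0)
fs = 10.0                                   # sample rate, Hz
t = np.arange(0.0, 60.0, 1.0 / fs)          # 60 s of driving
true_level = 30.0                           # litres
slosh = 4.0 * np.sin(2 * np.pi * 0.8 * t) * np.exp(-t / 40.0)
signal = true_level + slosh + 0.3 * rng.standard_normal(t.size)

# Traditional baseline: a 5 s moving average of the raw readings.
window = int(5 * fs)
smoothed = np.convolve(signal, np.ones(window) / window, mode="valid")
print(f"raw last reading: {signal[-1]:.2f} L, averaged: {smoothed[-1]:.2f} L")
```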
This comprehensive volume considers the interactions of atoms, ions and molecules with charged particles, photons and laser fields, and reflects the present understanding of atomic processes such as electron capture, target and projectile ionisation, photoabsorption and others occurring in most laboratory and astrophysical plasma sources, including many-photon and many-electron processes. The material consists of selected papers written by leading scientists in various fields.
Central to this thesis is the characterisation and exploitation of the electromagnetic properties of light in imaging and measurement systems. To this end, an information-theoretic approach is used to formulate a hitherto lacking, quantitative definition of polarisation resolution, and to establish fundamental precision limits in electromagnetic systems. Furthermore, rigorous modelling tools are developed for the propagation of arbitrary electromagnetic fields, including, for example, stochastic fields exhibiting properties such as partial polarisation, through high-numerical-aperture optics. Finally, these ideas are applied to the development, characterisation and optimisation of a number of topical optical systems: polarisation imaging, multiplexed optical data storage, and single-molecule measurements. The work has implications for all optical imaging systems where the polarisation of light is of concern.
Measuring Technology and Mechatronics Automation in Electrical Engineering includes selected presentations on measuring technology and mechatronics automation related to electrical engineering, originally given at the Fourth International Conference on Measuring Technology and Mechatronics Automation (ICMTMA2012). Held at Sanya, China, the conference offered a prestigious international forum for scientists, engineers, and educators to present the state of the art in measuring technology and mechatronics automation research.
The book describes the fundamentals, latest developments and use of key experimental techniques for semiconductor research. It explains the application potential of various analytical methods and discusses the opportunities to apply particular analytical techniques to study novel semiconductor compounds, such as dilute nitride alloys. The emphasis is on the technique rather than on the particular system studied.
The characteristics of electrical contacts have long attracted the attention of researchers, since such contacts are used in every electrical and electronic device. Earlier studies generally considered electrical contacts of large dimensions, having regions of current concentration with diameters substantially larger than the characteristic dimensions of the material: the interatomic distance, the mean free path of electrons, the coherence length in the superconducting state, etc. [110]. The development of microelectronics presented scientists and engineers with the task of studying the characteristics of electrical contacts of ultra-small dimensions. Characteristics of point contacts, such as mechanical stability under continuous current loads, the magnitude of electrical fluctuations, the inherent sensitivity in radio devices and the nonlinear behavior in connection with electromagnetic radiation, cannot be understood and altered in the required way without knowledge of the physical processes occurring in the contacts. Until recently it was thought that the electrical conductivity of contacts with direct conductance (without tunneling or semiconducting barriers) obeyed Ohm's law, and nonlinearities of the current-voltage characteristics were explained by Joule heating of the metal in the contact region. However, studies of the current-voltage characteristics of metallic point contacts at low (liquid helium) temperatures [142] showed that heating effects are negligible in many cases, and the nonlinearities observed under these conditions reflect the energy-dependent probability of inelastic electron scattering induced by various mechanisms.
The "Rudolf Moessbauer Story" recounts the history of the discovery of the "Moessbauer Effect" in 1958 by Rudolf Moessbauer as a graduate student of Heinz Maier-Leibnitz for which he received the Nobel Prize in 1961 when he was 32 years old. The development of numerous applications of the Moessbauer Effect in many fields of sciences , such as physics, chemistry, biology and medicine is reviewed by experts who contributed to this wide spread research. In 1978 Moessbauer focused his research interest on a new field "Neutrino Oscillations" and later on the study of the properties of the neutrinos emitted by the sun.
Recent state-of-the-art technologies in fabricating low-loss optical and mechanical components have significantly motivated the study of quantum-limited measurements with optomechanical devices. Such research is the main subject of this thesis. In the first part, the author considers various approaches for surpassing the standard quantum limit for force measurements. In the second part, the author proposes different experimental protocols for using optomechanical interactions to explore the quantum behavior of macroscopic mechanical objects. Even though the thesis focuses mostly on large-scale laser interferometer gravitational-wave detectors and related experiments, the general approaches apply equally well to the study of small-scale optomechanical devices. The author is the winner of the 2010 thesis prize awarded by the Gravitational Wave International Committee.
This book fulfills the global need to evaluate measurement results along with their associated uncertainty. Together with the details of uncertainty calculations for many physical parameters, probability distributions and their properties are discussed. Definitions of various terms are given and will help practicing metrologists to grasp the subject. The book helps to establish international standards for evaluating the quality of raw data obtained from various laboratories and for interpreting the results of national metrology institutes in international inter-comparisons. For the routine calibration of instruments, a new idea for the use of pooled variance is introduced. The uncertainty calculations are explained for (i) independent linear inputs, (ii) non-linear inputs and (iii) correlated inputs. The merits and limitations of the Guide to the Expression of Uncertainty in Measurement (GUM) are discussed, Monte Carlo methods for deriving the output distribution from the input distributions are introduced, and the Bayesian alternative for calculating expanded uncertainty is included. A large number of numerical examples are included.
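As a rough illustration of the Monte Carlo approach mentioned above, one samples the input distributions, pushes the samples through the measurement model, and reads the output distribution directly. The model P = V^2/R and all values below are hypothetical, not taken from the book:

```python
import numpy as np

# Monte Carlo propagation: sample inputs, evaluate the model, and take
# summary statistics of the resulting output distribution.
rng = np.random.default_rng(1)
N = 200_000
V = rng.normal(10.0, 0.05, N)      # voltage: Gaussian, u(V) = 0.05 V
R = rng.uniform(99.5, 100.5, N)    # resistance: rectangular, +/- 0.5 ohm

P = V**2 / R
print(f"P = {P.mean():.3f} W, standard uncertainty u(P) = {P.std(ddof=1):.3f} W")

# A 95 % coverage interval read straight from the samples:
lo, hi = np.percentile(P, [2.5, 97.5])
print(f"95 % interval: [{lo:.3f}, {hi:.3f}] W")
```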
Interferometry, the most precise measurement technique known today, exploits the wave-like nature of the atoms or photons in the interferometer. As expected from the laws of quantum mechanics, the granular, particle-like features of the individually independent atoms or photons are responsible for the precision limit, the shot-noise limit. However, this "classical" bound is not fundamental, and it is the aim of quantum metrology to overcome it by employing entanglement among the particles. This work reports on the realization of spin-squeezed states suitable for atom interferometry. Spin squeezing was generated on the basis of motional and spin degrees of freedom, the latter allowing the implementation of a full interferometer with quantum-enhanced precision.
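For context, the two precision limits at stake can be stated compactly. These are standard results of quantum metrology, not specific to this thesis: for N uncorrelated particles the phase uncertainty is bounded by the shot-noise (standard quantum) limit, while entanglement allows scaling down toward the Heisenberg limit:

```latex
\Delta\phi_{\mathrm{SQL}} = \frac{1}{\sqrt{N}}
\qquad\text{versus}\qquad
\Delta\phi_{\mathrm{HL}} = \frac{1}{N}
```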
Dealing with Uncertainties is an innovative monograph that lays special emphasis on the deductive approach to uncertainties and on the shape of uncertainty distributions. This perspective has the potential for dealing with the uncertainty of a single data point and with sets of data that have different weights. It is shown that the inductive approach commonly used to estimate uncertainties is in fact not suitable for these two cases. The approach used to understand the nature of uncertainties is novel in that it is completely decoupled from measurements. Uncertainties, a consequence of modern science, provide a measure of confidence both in scientific data and in everyday information. Uncorrelated and correlated uncertainties are fully covered, and the weakness of using statistical weights in regression analysis is discussed. The text is abundantly illustrated with examples and includes more than 150 problems to help the reader master the subject.
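To make the notion of data points with different weights concrete, here is a minimal sketch of the standard inverse-variance weighted mean; the values are made up, and the book's deductive treatment goes well beyond this:

```python
import numpy as np

# Combining measurements with different standard uncertainties: each
# point is weighted by 1/sigma_i^2, and the uncertainty of the combined
# result follows directly from the sum of the weights.
x = np.array([9.8, 10.2, 10.0])      # hypothetical measurements
sigma = np.array([0.4, 0.1, 0.2])    # their standard uncertainties

w = 1.0 / sigma**2
mean = np.sum(w * x) / np.sum(w)
u_mean = 1.0 / np.sqrt(np.sum(w))
print(f"weighted mean = {mean:.3f} +/- {u_mean:.3f}")
```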
This book gathers the proceedings of The Hadron Collider Physics Symposia (HCP) 2005, and reviews the state-of-the-art in the key physics directions of experimental hadron collider research. Topics include QCD physics, precision electroweak physics, c-, b-, and t-quark physics, physics beyond the Standard Model, and heavy ion physics. The present volume serves as a reference for everyone working in the field of accelerator-based high-energy physics.
Understanding the dynamics of multi-phase flows has been a challenge in the fields of nonlinear dynamics and fluid mechanics. This chapter reviews our work on two-phase flow dynamics in combination with complex network theory. We systematically carried out gas-water and oil-water two-phase flow experiments to measure time series of flow signals, which are studied in terms of mappings from time series to complex networks. Three network mapping methods were proposed for the analysis and identification of flow patterns: the Flow Pattern Complex Network (FPCN), the Fluid Dynamic Complex Network (FDCN) and the Fluid Structure Complex Network (FSCN). By detecting the community structure of the FPCN using K-means clustering, distinct flow patterns can be successfully distinguished and identified. A number of FDCNs under different flow conditions were constructed to reveal the dynamical characteristics of two-phase flows. The FDCNs exhibit universal power-law degree distributions, and the power-law exponent and the network information entropy are sensitive to the transitions among flow patterns, so they can be used to characterize the nonlinear dynamics of the two-phase flow. FSCNs were constructed in phase space through a general approach that we introduced; their statistical properties provide quantitative insight into the fluid structure of two-phase flow. These findings suggest that complex networks can be a powerful tool for uncovering the nonlinear dynamics of two-phase flows.
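The chapter's specific FPCN/FDCN/FSCN constructions are not reproduced here. As a generic illustration of mapping a time series to a complex network, the sketch below builds a natural visibility graph, one common mapping whose degree distribution can then be examined for power-law behavior; the flow signal is a random stand-in:

```python
import numpy as np

# Natural visibility graph: every sample is a node, and two samples
# "see" each other (get an edge) if no intermediate sample rises above
# the straight line connecting them.
def visibility_edges(y):
    n, edges = len(y), []
    for a in range(n):
        for b in range(a + 1, n):
            # all points strictly between a and b must lie below the chord
            if all(y[c] < y[b] + (y[a] - y[b]) * (b - c) / (b - a)
                   for c in range(a + 1, b)):
                edges.append((a, b))
    return edges

y = np.random.default_rng(2).random(200)   # stand-in for a flow signal
edges = visibility_edges(y)
degree = np.bincount(np.ravel(edges), minlength=len(y))
print(f"{len(edges)} edges; max degree {degree.max()}")
```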
Researchers and professionals will find here a hands-on guide to successful experiments and applications of modern electroanalytical techniques. The new edition has been completely revised and extended with a chapter on quartz-crystal microbalances. The book is written for chemists, biochemists, environmental and materials scientists, and physicists; a basic knowledge of chemistry and physics is sufficient for understanding the described methods. Electroanalytical techniques are particularly useful for qualitative and quantitative analysis of chemical, biochemical, and physical systems. Experienced experts provide the necessary theoretical background of electrochemistry and thoroughly describe frequently used measuring techniques. Special attention is given to experimental details and data evaluation.