Accurate fluid level measurement in dynamic environments can be achieved using a Support Vector Machine (SVM) approach. SVM is a supervised learning model that analyzes and recognizes patterns. As a signal classification technique it offers far greater accuracy than conventional signal averaging methods. Ultrasonic Fluid Quantity Measurement in Dynamic Vehicular Applications: A Support Vector Machine Approach describes the research and development of a fluid level measurement system for dynamic environments. The measurement system is based on a single ultrasonic sensor. An SVM-based signal characterization and processing system has been developed to compensate for the effects of slosh and temperature variation in fluid level measurement systems used in dynamic environments, including automotive applications. It has been demonstrated that a simple SVM model with a Radial Basis Function (RBF) kernel, combined with a moving median filter, can achieve the high levels of accuracy required for fluid level measurement in dynamic environments. Aimed at graduate and postgraduate students, researchers, and engineers studying applications of artificial intelligence, the book shows how such a single-sensor system can deliver this accuracy in practice.
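To make the pipeline concrete, here is a minimal sketch of the same idea: median-filter a sloshing level signal, then regress the filtered readings onto the true level with an RBF-kernel SVM. It is an illustration only, not code from the book; the scikit-learn SVR model, the synthetic data, and all parameter values are assumptions.

```python
# Illustrative sketch (not from the book): suppress slosh with a moving
# median filter, then map filtered readings to fluid level with an
# RBF-kernel support vector regressor.
import numpy as np
from scipy.ndimage import median_filter
from sklearn.svm import SVR

rng = np.random.default_rng(0)

# Synthetic data: true level plus a slosh oscillation and sensor noise.
true_level = np.linspace(10.0, 50.0, 500)                 # litres
slosh = 3.0 * np.sin(np.linspace(0.0, 40.0 * np.pi, 500))
raw = true_level + slosh + rng.normal(0.0, 0.5, 500)

# Moving median filter removes slosh spikes before the SVM sees the data.
filtered = median_filter(raw, size=25)

# RBF-kernel SVM regression from filtered reading to level estimate.
svm = SVR(kernel="rbf", C=10.0, gamma=0.1)
svm.fit(filtered.reshape(-1, 1), true_level)

estimate = svm.predict(filtered.reshape(-1, 1))
print(f"mean absolute error: {np.mean(np.abs(estimate - true_level)):.2f} L")
```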
This volume will define the direction of eddy-current technology in nondestructive evaluation (NDE) in the twenty-first century. It describes the natural marriage of the computer to eddy-current NDE, and its publication was encouraged by favorable responses from workers in the nuclear-power and aerospace industries. It will be used by advanced students and practitioners in the fields of computational electromagnetics, electromagnetic inverse-scattering theory, nondestructive evaluation, materials evaluation and biomedical imaging, among others, and is based on our experience in applying computational electromagnetics to these areas, as manifested by our recent research and publications. Finally, it will be a reference for future monographs on advanced NDE that are being contemplated by our colleagues and others. Its importance lies in the fact that it is the first book to show that advanced computational methods can be used to solve practical, but difficult, problems in eddy-current NDE; indeed, in many cases these methods are the only ones available for solving the problems. The book covers computational electromagnetics in eddy-current NDE by emphasizing three distinct topics: (a) fundamental mathematical principles of volume-integral equations as a subset of computational electromagnetics, (b) mathematical algorithms applied to signal-processing and inverse-scattering problems, and (c) applications of these two topics to problems in which real and model data are used. This makes the book more than an academic exercise; we expect it to be valuable to users of eddy-current NDE technology in industries as varied as nuclear power, aerospace, materials characterization and biomedical imaging. We know of no other book on the market that covers this material in the manner in which we present it, nor are there any books, to our knowledge, that apply this material to actual test situations that are of importance to the industries cited. It is the first book to define the modern technology of eddy-current NDE, by showing how mathematics and the computer can solve problems more effectively than current analog practice.
This thesis deals with two main procedures performed with the ATLAS detector at the Large Hadron Collider (LHC). The noise description in the hadronic calorimeter TileCal represents very valuable technical work. The second part presents a fruitful physics analysis: the cross-section measurement of the process pp → Z⁰ → τ⁺τ⁻. The Monte Carlo simulations of the TileCal are described in the first part of the thesis, including a detailed treatment of the electronic noise and multiple interactions (so-called pile-up). An accurate description of both is crucial for the reconstruction of e.g. jets or hadronic tau-jets. The second part reports a Standard Model measurement of the Z⁰ → τ⁺τ⁻ process with the emphasis on the final state with an electron and a hadronically decaying tau-lepton. The Z⁰ → τ⁺τ⁻ channel forms the dominant background in the search for Higgs bosons decaying into tau lepton pairs, and thus the good understanding achieved here can facilitate more sensitive Higgs detection.
This book presents in a concise way the Mie theory and its current applications. It begins with an overview of current theories, computational methods, experimental techniques, and applications of the optics of small particles. There is also some biographical information on Gustav Mie, who published his famous paper on the colour of gold colloids in 1908. The Mie solution for the light scattering of small spherical particles set the basis for more advanced scattering theories, and today there are many methods to calculate light scattering and absorption for practically any shape and composition of particles. The optics of small particles is of interest in industrial, atmospheric, astronomical and other research. The book covers the latest developments in diverse fields of scattering theory, such as plasmon resonance, multiple scattering and optical forces.
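For orientation, the central quantities of the Mie solution are the scattering and extinction efficiencies of a sphere. These are the standard textbook expressions, not an excerpt from this book, written with size parameter x = 2πr/λ and Mie coefficients a_n, b_n built from Riccati-Bessel functions:

```latex
% Standard Mie efficiencies for a sphere of radius r at wavelength \lambda,
% with size parameter x = 2\pi r/\lambda and Mie coefficients a_n, b_n.
Q_{\mathrm{sca}} = \frac{2}{x^{2}} \sum_{n=1}^{\infty} (2n+1)\left(\lvert a_n\rvert^{2} + \lvert b_n\rvert^{2}\right),
\qquad
Q_{\mathrm{ext}} = \frac{2}{x^{2}} \sum_{n=1}^{\infty} (2n+1)\,\operatorname{Re}(a_n + b_n)
```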
These notes describe how to average and fit numerical data that have been obtained either by simulation or measurement. Following an introduction on how to estimate various average values, they discuss how to determine error bars on those estimates, and how to proceed for combinations of measured values. Techniques for fitting data to a given set of models are described in the second part of the notes. This primer equips readers to properly derive the results covered, presenting the content in a style suitable for a physics audience. It also includes scripts in Python, Perl and gnuplot for performing a number of tasks in data analysis and fitting, thereby providing readers with a useful reference guide.
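A minimal sketch of the two core tasks, averaging with an error bar and fitting with per-point errors, might look as follows in Python. This is an illustration under assumed data, not one of the notes' own scripts:

```python
# Illustrative sketch (not one of the notes' scripts): mean with standard
# error, then a weighted least-squares fit to data with error bars.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(1)
samples = rng.normal(5.0, 2.0, size=400)      # e.g. simulation output

mean = samples.mean()
# Standard error of the mean: unbiased sample std dev divided by sqrt(N).
err = samples.std(ddof=1) / np.sqrt(len(samples))
print(f"mean = {mean:.3f} +/- {err:.3f}")

# Fit a line to data with known per-point uncertainties sigma.
def line(x, a, b):
    return a * x + b

x = np.linspace(0.0, 10.0, 20)
y = line(x, 1.5, 0.3) + rng.normal(0.0, 0.2, size=x.size)
sigma = np.full(x.size, 0.2)

popt, pcov = curve_fit(line, x, y, sigma=sigma, absolute_sigma=True)
perr = np.sqrt(np.diag(pcov))                 # 1-sigma parameter errors
print(f"slope = {popt[0]:.3f} +/- {perr[0]:.3f}")
```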
The book provides broad knowledge of electromigration techniques, including: the theory of capillary electrophoresis (CE), description of instrumentation, theory and practice in micellar electrokinetic chromatography, isotachophoresis, capillary isoelectric focusing, capillary and planar electrochromatography (including description of instrumentation and packed and monolithic column preparation), 2D gel electrophoresis (including sample preparation) and lab-on-a-chip systems. The book also provides the most recent examples of applications, including food, environmental and pharmaceutical analysis, as well as proteomics.
This thesis presents a study of the origin of an apparently extended X-ray emission associated with the Galactic ridge. The study was carried out with broadband spectra obtained from mapping observations in the Galactic bulge region conducted in 2005-2010 by the Suzaku space X-ray observatory. The spectra were analyzed with a newly constructed X-ray spectral model of an accreting white dwarf binary that is one of the proposed candidate stars for the origin of the Galactic ridge emission in the higher energy band. Fitting of the observed Galactic ridge spectra with the model showed that there is another spectral component that fills the gap between the observed X-ray flux and the component expected from the accreting white dwarf spectral model in the lower energy band. This additional soft spectral component was nicely explained by an X-ray spectral model of normal stars. The result, together with previously reported high-resolution imaging results, strongly supports the idea that the Galactic ridge X-ray emission is an assembly of dim, discrete X-ray point sources.
This book by Helmut Wiedemann is a well-established, classic text, providing an in-depth and comprehensive introduction to the field of high-energy particle acceleration and beam dynamics. The present 4th edition has been significantly revised, updated and expanded. The newly conceived Part I is an elementary introduction to the subject matter for undergraduate students. Part II gathers the basic tools in preparation of a more advanced treatment, summarizing the essentials of electrostatics and electrodynamics as well as of particle dynamics in electromagnetic fields. Part III is an extensive primer in beam dynamics, followed, in Part IV, by an introduction and description of the main beam parameters, including a new chapter on beam emittance and lattice design. Part V is devoted to the treatment of perturbations in beam dynamics. Part VI then discusses the details of charged particle acceleration. Parts VII and VIII introduce the more advanced topics of coupled beam dynamics and describe very intense beams; a number of additional beam instabilities are introduced and reviewed in this new edition. Part IX is an exhaustive treatment of radiation from accelerated charges and introduces important sources of coherent radiation such as synchrotrons and free-electron lasers. The appendices at the end of the book gather useful mathematical and physical formulae, parameters and units. Solutions to many end-of-chapter problems are given. This textbook is suitable for an intensive two-semester course starting at the senior undergraduate level.
Michael Schenk evaluates new technologies and methods, such as cryogenic read-out electronics and a UV laser system, developed to optimise the performance of large liquid argon time projection chambers (LArTPC). Among other things, the author studies the uniformity of the electric field produced by a Greinacher high-voltage generator operating at cryogenic temperatures, and measures the linear energy transfer (LET) of muons and the longitudinal diffusion coefficient of electrons in liquid argon. The results are obtained by analysing events induced by cosmic-ray muons and UV laser beams. The studies are carried out with ARGONTUBE, a prototype LArTPC in operation at the University of Bern, Switzerland, designed to investigate the feasibility of drift distances of up to five metres for electrons in liquid argon.
This volume surveys recent research on autonomous sensor networks from the perspective of enabling technologies that support medical, environmental and military applications. State-of-the-art and emerging concepts in wireless sensor networks, body area networks and ambient assisted living introduce the reader to the field, while subsequent chapters deal in depth with established and related technologies which render their implementation possible. These range from smart textiles and printed electronic devices to implanted devices and specialized packaging, including the most relevant technological features. The last four chapters are devoted to customization, implementation difficulties and outlook for these technologies in specific applications.
This book presents a comprehensive and up-to-date account of the theory (physical principles), design, and practical implementations of various sensors for scientific, industrial, and consumer applications. This latest edition focuses on the sensing technologies driven by the expanding use of sensors in mobile devices. These new miniature sensors will be described, with an emphasis on smart sensors which have embedded processing systems. The chapter on chemical sensors has also been expanded to present the latest developments. Digital systems, however complex and intelligent they may be, must receive information from the outside world that is generally analog and not electrical. Sensors are interface devices between various physical values and the electronic circuits that "understand" only a language of moving electrical charges. In other words, sensors are the eyes, ears, and noses of silicon chips. Unlike other books on sensors, the Handbook of Modern Sensors is organized according to the measured variables (temperature, pressure, position, etc.). This book is a reference text for students, researchers interested in modern instrumentation (applied physicists and engineers), sensor designers, application engineers and technicians whose job it is to understand, select and/or design sensors for practical systems.
The theoretical foundations of the Standard Model of elementary particles rely on the existence of the Higgs boson, a particle revealed for the first time by the experiments run at the Large Hadron Collider (LHC) in 2012. As the Higgs boson is an unstable particle, search strategies were based on its decay products. In this thesis, Francesco Pandolfi conducts a search for the Higgs boson in the H → ZZ → ℓ⁺ℓ⁻qq decay channel with 4.6 fb⁻¹ of 7 TeV proton-proton collision data collected by the Compact Muon Solenoid (CMS) experiment. The presence of jets in the final state poses a series of challenges to the experimenter: from a technical point of view, as jets are complex objects that require ad hoc reconstruction techniques, and from an analytical one, as backgrounds with jets are copious at hadron colliders, so analyses must achieve high degrees of background rejection in order to reach competitive sensitivity. This is accomplished by following two directives: the use of an angular likelihood discriminant, capable of separating events likely to originate from the decay of a scalar boson from non-resonant backgrounds, and the use of jet parton flavor tagging, selecting jets compatible with quark hadronization and discarding jets more likely to be initiated by gluons. The events passing the selection requirements in 4.6 fb⁻¹ of data collected by the CMS detector are examined in search of a possible signal compatible with the decay of a heavy Higgs boson. The thesis describes the statistical tools and the results of this analysis. This work is a paradigm for studies of the Higgs boson in final states with jets. Non-expert physicists will enjoy a complete and eminently readable description of a proton-proton collider analysis, while the expert reader will learn the details of the searches done with jets at CMS.
The book presents recent advancements in the area of sensors and sensing technology, specifically in environmental monitoring, structural health monitoring, dielectric, magnetic, electrochemical, ultrasonic, microfluidic, flow, surface acoustic wave, gas, cloud computing and biomedical applications. It will be useful to a variety of readers, namely Master's and PhD students, researchers and practitioners working on sensors and sensing technology, offering them a dedicated, in-depth treatment with which to improve their knowledge in this specific field.
Dimensional metrology is an essential part of modern manufacturing technologies, but the basic theories and measurement methods are no longer sufficient for today's digitized systems. The information exchange between the software components of a dimensional metrology system not only costs a great deal of money, but also causes the entire system to lose data integrity. Information Modeling for Interoperable Dimensional Metrology analyzes interoperability issues in dimensional metrology systems and describes information modeling techniques. It discusses new approaches and data models for solving interoperability problems, as well as introducing process activities, existing and emerging data models, and the key technologies of dimensional metrology systems. Written for researchers in industry and academia, as well as advanced undergraduate and postgraduate students, this book gives both an overview and an in-depth understanding of complete dimensional metrology systems. By covering in detail the theory and main content, techniques, and methods used in dimensional metrology systems, Information Modeling for Interoperable Dimensional Metrology enables readers to solve real-world dimensional measurement problems in modern dimensional metrology practices.
This series of reference books describes sciences of different fields in and around geodesy with independent chapters. Each chapter covers an individual field and describes the history, theory, objective, technology, development, highlights of research and applications. In addition, problems as well as future directions are discussed. The subjects of this reference book include Absolute and Relative Gravimetry, Adaptively Robust Kalman Filters with Applications in Navigation, Airborne Gravity Field Determination, Analytic Orbit Theory, Deformation and Tectonics, Earth Rotation, Equivalence of GPS Algorithms and its Inference, Marine Geodesy, Satellite Laser Ranging, Superconducting Gravimetry and Synthetic Aperture Radar Interferometry. These are individual subjects in and around geodesy, combined for the first time in a unique book which may be used for teaching or for learning basic principles of many subjects related to geodesy. The material is suitable to provide a general overview of geodetic sciences for high-level geodetic researchers, educators as well as engineers and students. Some of the chapters are written to fill gaps in the literature of the related areas. Most chapters are written by well-known scientists throughout the world in the related areas. The chapters are ordered by their titles. Summaries of the individual chapters and introductions of their authors and co-authors are as follows. Chapter 1 "Absolute and Relative Gravimetry" provides an overview of the gravimetric methods to determine most accurately the gravity acceleration at given locations.
The measurement and characterisation of surface topography is crucial to modern manufacturing industry. The control of areal surface structure allows a manufacturer to radically alter the functionality of a part. Examples include structuring to effect fluidics, optics, tribology, aerodynamics and biology. To control such manufacturing methods requires measurement strategies. There is now a large range of new optical techniques on the market, or being developed in academia, that can measure areal surface topography. Each method has its strong points and limitations. The book starts with introductory chapters on optical instruments, their common language, generic features and limitations, and their calibration. Each type of modern optical instrument is described (in a common format) by an expert in the field. The book is intended for both industrial and academic scientists and engineers, and will be useful for undergraduate and postgraduate studies.
Precision Nanometrology describes the new field of precision nanometrology, which plays an important part in nanoscale manufacturing of semiconductors, optical elements, precision parts and similar items. It pays particular attention to the measurement of surface forms of precision workpieces and to stage motions of precision machines. The first half of the book is dedicated to the description of optical sensors for the measurement of angle and displacement, which are fundamental quantities for precision nanometrology. The second half presents a number of scanning-type measuring systems for surface forms and stage motions. The systems discussed include:
* error separation algorithms and systems for measurement of straightness and roundness,
* the measurement of micro-aspherics,
* systems based on scanning probe microscopy, and
* scanning image-sensor systems.
Precision Nanometrology presents the fundamental and practical technologies of precision nanometrology with a helpful selection of algorithms, instruments and experimental data. It will be beneficial for researchers, engineers and postgraduate students involved in precision engineering, nanotechnology and manufacturing.
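To illustrate what error separation means in this context, the classical reversal method for straightness measurement separates the artifact's own profile from the machine slide's error by measuring twice with the artifact reversed between runs. This is a standard textbook technique, sketched below in Python; it is not necessarily the specific algorithm used in the book:

```python
# Illustrative sketch of the classical reversal method for straightness
# error separation (standard technique, not code from the book).
# Forward run:  m1(x) = P(x) + S(x)   (artifact profile + slide error)
# Reversed run: m2(x) = P(x) - S(x)   (artifact flipped; slide error unchanged)
import numpy as np

x = np.linspace(0.0, 100.0, 200)                   # position along slide, mm
profile = 0.05 * np.sin(2.0 * np.pi * x / 40.0)    # artifact form error, um
slide_err = 0.02 * (x / 100.0) ** 2                # slide straightness error, um

m1 = profile + slide_err                           # normal orientation
m2 = profile - slide_err                           # artifact reversed

P = 0.5 * (m1 + m2)    # recovered artifact profile
S = 0.5 * (m1 - m2)    # recovered slide error

print(np.allclose(P, profile), np.allclose(S, slide_err))  # True True
```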
This volume is the outcome of a community-wide review of the field of dynamics and thermodynamics with nuclear degrees of freedom. It presents the achievements and the outstanding open questions in 26 articles collected in six topical sections and written by more than 60 authors. All authors are internationally recognized experts in their fields.
The book describes the experimental techniques employed to study surfaces and interfaces. The emphasis is on the experimental method. Therefore all chapters start with an introduction of the scientific problem, the theory necessary to understand how the technique works, and how to interpret the results. Descriptions of real experimental setups and experimental results on different systems are given to show both the strengths and the limits of each technique. In a final part, new developments and possible extensions of the techniques are presented. The included techniques provide microscopic as well as macroscopic information. They cover most of the techniques used in surface science.
This Dictionary of Weighing Terms is a comprehensive practical guide to the terminology of weighing for all users of weighing instruments in industry and science. It explains more than 1000 terms of weighing technology and related areas; numerous illustrations assist understanding. The Dictionary of Weighing Terms is a joint work of the German Federal Institute of Physics and Metrology (PTB) and METTLER TOLEDO, the weighing instruments manufacturer. Special thanks go to Peter Brandes, Michael Denzel, and Dr. Oliver Mack of PTB, and to Richard Davis of BIPM, who with their technical knowledge have contributed to the success of this work. The Dictionary contains terms from the following fields: fundamentals of weighing, application and use of weighing instruments, international standards, legal requirements for weighing instruments, and weighing accuracy. An index facilitates rapid location of the required term. The authors welcome suggestions and corrections at www.mt.com/weighing-terms. Braunschweig (DE) and Greifensee (CH), Summer 2009, The Authors. From the Foreword: Since its founding in 1875, the International Bureau of Weights and Measures (BIPM) has had a unique role in mass metrology. The definition of the kilogram depends on an artefact conserved and used within our laboratories. The mass embodied in this artefact defines the kilogram, and this information is disseminated throughout the world to promote uniformity of measurements. Although the definition of the kilogram may change in the relatively near future, reflecting the success of new technologies and new requirements, the task of ensuring world-wide uniformity of mass measurements will remain.
Since the discovery of the giant magnetoresistance (GMR) effect in 1988, spintronics has been presented as a new technology paradigm, recognized with the Nobel Prize in Physics in 2007. Initially used in read heads of hard disk drives, and while competing with flash memories for a share of that market, GMR devices have broadened their range of use towards magnetic field sensing applications in a huge range of scenarios. Potential applications at the time of the discovery have become real in the last two decades; GMR is definitively here to stay. Selected successful approaches to GMR-based sensors in different applications (space, automotive, microelectronics, biotechnology ...) are collected in the present book. While keeping a practical orientation, the fundamentals as well as the current trends and challenges of this technology are also analyzed, with state-of-the-art contributions from academia and industry throughout the contents. This book can be used by beginning researchers, postgraduate students and multidisciplinary scientists as a reference text in this topical, fascinating field.
"Natural Gas Hydrates: Experimental Techniques and Their Applications" attempts to broadly integrate the most recent knowledge in the fields of hydrate experimental techniques in the laboratory. The book examines various experimental techniques in order to provide useful parameters for gas hydrate exploration and exploitation. It provides experimental techniques for gas hydrates, including the detection techniques, the thermo-physical properties, permeability and mechanical properties, geochemical abnormalities, stability and dissociation kinetics, exploitation conditions, as well as modern measurement technologies etc. This book will be of interest to experimental scientists who engage in gas hydrate experiments in the laboratory, and is also intended as a reference work for students concerned with gas hydrate research. Yuguang Ye is a distinguished professor of Experimental Geology at Qingdao Institute of Marine Geology, China Geological Survey, China. Professor Changling Liu works at the Qingdao Institute of Marine Geology, China Geological Survey, China.
This book brings together reviews from leading international authorities on the developments in the study of dark matter and dark energy, as seen from both their cosmological and particle physics side. Studying the physical and astrophysical properties of the dark components of our Universe is a crucial step towards the ultimate goal of unveiling their nature. The work developed from a doctoral school sponsored by the Italian Society of General Relativity and Gravitation. The book starts with a concise introduction to the standard cosmological model, as well as with a presentation of the theory of linear perturbations around a homogeneous and isotropic background. It covers the particle physics and cosmological aspects of dark matter and (dynamical) dark energy, including a discussion of how modified theories of gravity could provide a possible candidate for dark energy. A detailed presentation is also given of the possible ways of testing the theory in terms of cosmic microwave background, galaxy redshift surveys and weak gravitational lensing observations. Included is a chapter reviewing extensively the direct and indirect methods of detection of the hypothetical dark matter particles. Also included is a self-contained introduction to the techniques and most important results of numerical (e.g. N-body) simulations in cosmology. This volume will be useful to researchers, PhD and graduate students in Astrophysics, Cosmology, Physics and Mathematics, who are interested in cosmology, dark matter and dark energy.
Dealing with Uncertainties is an innovative monograph that lays special emphasis on the deductive approach to uncertainties and on the shape of uncertainty distributions. This perspective has the potential for dealing with the uncertainty of a single data point and with sets of data that have different weights. It is shown that the inductive approach that is commonly used to estimate uncertainties is in fact not suitable for these two cases. The approach that is used to understand the nature of uncertainties is novel in that it is completely decoupled from measurements. Uncertainties which are the consequence of modern science provide a measure of confidence both in scientific data and in information in everyday life. Uncorrelated uncertainties and correlated uncertainties are fully covered and the weakness of using statistical weights in regression analysis is discussed. The text is abundantly illustrated with examples and includes more than 150 problems to help the reader master the subject.
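As a concrete instance of handling data points with different weights (an illustration, not an excerpt from the book): for uncorrelated measurements x_i with uncertainties sigma_i, the standard weighted mean uses weights w_i = 1/sigma_i^2.

```python
# Illustrative example (not from the book): weighted mean of uncorrelated
# measurements x_i with uncertainties sigma_i, weights w_i = 1/sigma_i**2.
import numpy as np

x = np.array([10.1, 9.8, 10.4])       # measurements
sigma = np.array([0.2, 0.1, 0.3])     # their uncorrelated uncertainties

w = 1.0 / sigma**2
mean = np.sum(w * x) / np.sum(w)
mean_err = 1.0 / np.sqrt(np.sum(w))   # uncertainty of the weighted mean

print(f"weighted mean = {mean:.3f} +/- {mean_err:.3f}")
```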
This book describes the state-of-the-art instruments for measuring the solar irradiance from soft X-ray to the near infrared, as well as the total solar irradiance. Furthermore, the SORCE mission and early results on solar variability are presented, along with papers that provide an overview of solar influences on Earth. This collection of papers provides the only detailed description of the SORCE mission and its instruments.
You may like...
Key to the Hebrew-Egyptian Mystery in…
James Ralston
Hardcover
R904
Discovery Miles 9 040
Models and Measures in Measurements and…
Vitaliy P. Babak, Serhii V. Babak, …
Hardcover
R4,813
Discovery Miles 48 130