It is the intent of this book to combine high-voltage (HV) engineering with HV testing and HV measuring techniques. Based on the long-term experience gained by the authors as lecturers and researchers, as well as members of international organizations such as IEC and CIGRE, the book reflects the state of the art and future trends in the testing and diagnostics of HV equipment, which ensure the reliable generation, transmission and distribution of electrical energy. The book is intended not only for experts but also for students of electrical engineering and high-voltage engineering.
The search for gravitational radiation with optical interferometers is gaining momentum worldwide. Besides the VIRGO and GEO gravitational wave observatories in Europe and the two LIGOs in the United States, which have operated successfully during the past decade, further observatories are being completed (KAGRA in Japan) or planned (LIGO-India). The sensitivity of the current observatories, although spectacular, has not allowed direct discovery of gravitational waves. The advanced detectors (Advanced LIGO and Advanced Virgo), at present in the development phase, will improve sensitivity by a factor of 10, probing the universe up to 200 Mpc for signals from inspiraling binary compact stars. This book covers all experimental aspects of the search for gravitational radiation with optical interferometers. Every facet of the technological development underlying the evolution of advanced interferometers is thoroughly described, from configuration to optics and coatings, and from thermal compensation to suspensions and controls. All key ingredients of an advanced detector are covered, including the solutions implemented in first-generation detectors, their limitations, and how to overcome them. Each issue is addressed with special reference to the solution adopted for Advanced Virgo, but constant attention is also paid to other strategies, in particular those chosen for Advanced LIGO.
This book reflects the results of the 2nd and 3rd International Workshops on Turbulent Spray Combustion. The focus is on progress in experiments and numerical simulations for two-phase flows, with emphasis on spray combustion. Knowledge of the dominant phenomena and their interactions allows the development of predictive models and their use in combustor and gas turbine design. Experts and young researchers present state-of-the-art results, report on the latest developments and exchange ideas in the areas of experiments, modelling and simulation of reactive multiphase flows. The first chapter reflects on flame structure, auto-ignition and atomization with reference to well-characterized burners, which modellers can implement with relative ease. The second chapter presents an overview of first simulation results on target test cases, developed on the occasion of the 1st International Workshop on Turbulent Spray Combustion. In the third chapter, evaporation rate modelling aspects are covered, while the fourth chapter deals with evaporation effects in the context of flamelet models. In chapter five, LES results are discussed for variable fuel and mass loading. The final chapter discusses PDF modelling of turbulent spray combustion. In short, the contributions in this book are highly valuable for the research community in this field, providing in-depth insight into some of the many aspects of dilute turbulent spray combustion.
This book gives the background to differential-pressure flow measurement and goes through the requirements, explaining the reasons for them. For those who want to use an orifice plate or a Venturi tube, the standard ISO 5167 and its associated Technical Reports give the required instructions. However, they rarely tell users why they should follow certain instructions. This book helps users of the ISO standards for orifice plates and Venturi tubes to understand the reasons why the standards are as they are, to apply them effectively, and to understand the consequences of deviations from them.
This comprehensive volume summarizes and structures the multitude of results obtained at the LHC in its first running period and draws the grand picture of today’s physics at a hadron collider. Topics covered are Standard Model measurements, Higgs and top-quark physics, flavour physics, heavy-ion physics, and searches for supersymmetry and other extensions of the Standard Model. Emphasis is placed on overview and presentation of the lessons learned. Chapters on detectors and the LHC machine and a thorough outlook into the future complement the book. The individual chapters are written by teams of expert authors working at the forefront of LHC research.
This edited book contains invited papers from renowned experts working in the field of wearable electronic sensors. It includes 14 chapters describing recent advancements in the areas of wearable sensors, wireless sensors and sensor networks, protocols, topologies, instrumentation architectures, measurement techniques, energy harvesting and scavenging, signal processing, and design and prototyping. The book will be useful for engineers, scientists and postgraduate students as a reference for their research on wearable sensors, devices and technologies, a field experiencing a period of rapid growth driven by new applications such as heart rate monitors, smart watches, tracking devices and smart glasses.
Neutrinos continue to be the most mysterious and, arguably, the most fascinating particles of the Standard Model, as their intrinsic properties such as absolute mass scale and CP properties are unknown. The open question of the absolute neutrino mass scale will be addressed with unprecedented accuracy by the Karlsruhe Tritium Neutrino (KATRIN) experiment, currently under construction. This thesis focusses on the spectrometer part of KATRIN and background processes therein. Various background sources, such as small Penning traps, as well as nuclear decays from single radon atoms, are fully characterized here for the first time. Most importantly, however, it was possible to reduce the background in the spectrometer by more than five orders of magnitude by eliminating Penning traps and by developing a completely new background reduction method that stochastically heats trapped electrons using electron cyclotron resonance (ECR). The work beautifully demonstrates that the obstacles and challenges in measuring the absolute mass scale of neutrinos can be met successfully if novel experimental tools (ECR) and novel computing methods (KASSIOPEIA) are combined to allow almost background-free tritium β-spectroscopy.
This thesis describes a high-quality, high-precision method for the data analysis of an interesting elementary particle reaction. The data was collected at the Japanese B-meson factory KEKB with the Belle detector, one of the most successful large-scale experiments worldwide. CP violation is a subtle quantum effect that makes the world look different when left and right, and matter and antimatter, are simultaneously exchanged. Although this is a prerequisite for our own world to have developed from the Big Bang, there are only a few experimental indications of such effects, and their detection requires very intricate techniques. The discovery of CP violation in B meson decays earned Kobayashi and Maskawa, who had predicted these findings as early as 1973, the 2008 Nobel Prize in Physics. This thesis describes in great detail what are by far the best measurements of branching ratios and CP violation parameters in two special reactions with two charm mesons in the final state. It presents an in-depth but accessible overview of the theory, phenomenology, experimental setup, data collection, Monte Carlo simulations, (blind) statistical data analysis, and systematic uncertainty studies.
The work presented in this thesis spans a wide range of experimental particle physics subjects, from level-1 trigger electronics to the final results of the search for Higgs boson decays to tau lepton pairs. The thesis describes an innovative reconstruction algorithm for tau decays and details how it was instrumental in providing a measurement of Z decays to tau lepton pairs. The reliability of the analysis is fully established by this measurement before the Higgs boson decay to tau lepton pairs is considered. The work described here continues to serve as a model for analysing CMS Higgs to tau lepton measurements.
In this thesis, the author explains the background of problems in quantum estimation and the necessary conditions for estimation precision benchmarks that are applicable and meaningful for evaluating data in quantum information experiments, and provides examples of such benchmarks. The author develops mathematical methods in quantum estimation theory and analyzes the benchmarks in tests of Bell-type correlation and quantum tomography with those methods. Above all, a set of explicit formulae for evaluating the estimation precision in quantum tomography with finite data sets is derived, in contrast to standard quantum estimation theory, which can deal only with infinite samples. This is the first result directly applicable to the evaluation of estimation errors in quantum tomography experiments, allowing experimentalists to guarantee estimation precision and verify quantitatively that their preparation is reliable.
This book provides a concise survey of modern theoretical concepts of X-ray materials analysis. The principal features of the book are: basics of X-ray scattering, interaction between X-rays and matter, and new theoretical concepts of X-ray scattering. The various X-ray techniques are considered in detail: high-resolution X-ray diffraction, X-ray reflectivity, grazing-incidence small-angle X-ray scattering and X-ray residual stress analysis. All the theoretical methods presented use a unified physical approach. This makes the book especially useful for readers learning and performing data analysis with different techniques. The theory is applicable to studies of bulk materials of all kinds, including single crystals and polycrystals, as well as to surface studies under grazing incidence. The book appeals to researchers and graduate students alike.
This thesis is devoted to ANTARES, the first underwater neutrino telescope in the Mediterranean Sea. As the main scientific analysis, a search for high-energy neutrino emission from the region of the Fermi bubbles has been performed using data from the ANTARES detector. A method for background estimation using off-zones has been developed specifically for this measurement. A new likelihood for the limit calculation, which treats observations in the on-zone and the off-zones in a similar way and also includes different systematic uncertainties, has been constructed. The analysis of 2008–2011 ANTARES data yielded a 1.2 σ excess of events in the Fermi bubble regions, compatible with the no-signal hypothesis. For the optimistic case of no energy cutoff in the flux, the upper limit is within a factor of three of the prediction of the purely hadronic model based on the measured gamma-ray flux. The sensitivity improves as more data are accumulated (more than a 65% gain in sensitivity is expected once 2012–2016 data are added to the analysis).
In this book the applicability and utility of two statistical approaches for understanding dark energy and dark matter with gravitational lensing measurements are introduced. For cosmological constraints on the nature of dark energy, morphological statistics called Minkowski functionals (MFs), used to extract the non-Gaussian information of gravitational lensing, are studied. Measuring lensing MFs from the Canada-France-Hawaii Telescope Lensing Survey (CFHTLenS), the author clearly shows that MFs can be powerful statistics beyond the conventional approach with the two-point correlation function. Combined with the two-point correlation function, MFs can constrain the equation of state of dark energy with a precision of approximately 3–4% in upcoming surveys with sky coverage of 20,000 square degrees. On the topic of dark matter, the author studied the cross-correlation of gravitational lensing and the extragalactic gamma-ray background (EGB). Dark matter annihilation is among the potential contributors to the EGB. The cross-correlation is a powerful probe of signatures of dark matter annihilation, because both cosmic shear and gamma-ray emission originate directly from the same dark matter distribution in the universe. The first measurement of the cross-correlation using a real data set, obtained from CFHTLenS and the Fermi Large Area Telescope, was performed. Comparing the result with theoretical predictions, an independent constraint was placed on dark matter annihilation. Future lensing surveys will be useful to constrain the canonical value of the annihilation cross section for a wide range of dark matter masses.
The main goal of the book is to provide a systematic and didactic approach to the physics and technology of free-electron lasers. Numerous figures are used for illustrating the underlying ideas and concepts, and links to other fields of physics are provided. After an introduction to undulator radiation and the low-gain FEL, the one-dimensional theory of the high-gain FEL is developed in a systematic way. Particular emphasis is put on explaining and justifying the various assumptions and approximations that are needed to obtain the differential and integral equations governing the FEL dynamics. Analytical and numerical solutions are presented and important FEL parameters are defined, such as gain length, FEL bandwidth and saturation power. One of the most important features of a high-gain FEL, the formation of microbunches, is studied at length. The increase of gain length due to beam energy spread, space charge forces, and three-dimensional effects such as betatron oscillations and optical diffraction is analyzed. The mechanism of Self-Amplified Spontaneous Emission is described theoretically and illustrated with numerous experimental results. Various methods of FEL seeding by coherent external radiation are introduced, together with experimental results. The world's first soft X-ray FEL, the user facility FLASH at DESY, is described in some detail to give an impression of the complexity of such an accelerator-based light source. The last chapter is devoted to the new hard X-ray FELs which generate extremely intense radiation in the Ångström regime. The appendices contain supplementary material and more involved calculations.
The research presented here includes important contributions on the commissioning of the ATLAS experiment and the discovery of the Higgs boson. The thesis describes essential work on the alignment of the inner tracker during the commissioning of the experiment and development of the electron identification algorithm. The subsequent analysis focuses on the search for the Higgs boson in the WW channel, including the development of a method to model the critical W+jet background. In addition, the thesis provides excellent introductions, suitable for non-specialists, to Higgs physics, to the LHC, and to the ATLAS experiment.
Micro-X-ray fluorescence offers the possibility of position-sensitive and non-destructive analysis that can be used for non-homogeneous materials and layer systems. This analytical technique has shown dynamic development in the last 15 years and is used for the analysis of small particles, inclusions and elemental distributions in a wide range of applications, both in research and quality control. The first experiments were performed at synchrotrons, but there is a need for laboratory instruments, which offer fast and immediate access to analytical results. The book discusses the main components of a µ-XRF instrument and the different measurement modes, gives an overview of the various instrument types, considers the special requirements for the quantification of non-homogeneous materials, and presents a wide range of applications for single-point and multi-point analysis as well as for distribution analysis in one, two and three dimensions.
This book presents and introduces ellipsometry in nanoscience and nanotechnology, making a bridge between the classical and nanoscale optical behaviour of materials. It delineates the role of the non-destructive and non-invasive optical diagnostics of ellipsometry in improving the science and technology of nanomaterials and related processes by illustrating its exploitation, ranging from fundamental studies of the physics and chemistry of nanostructures to the ultimate goal of turnkey manufacturing control. This book is written for a broad readership: materials scientists, researchers, engineers, as well as students and nanotechnology operators who want to deepen their knowledge about both the basics and applications of ellipsometry to nanoscale phenomena. It starts as a general introduction for people curious to enter the fields of ellipsometry and polarimetry applied to nanomaterials, and progresses to articles by experts on specific fields that span from plasmonics and optics to semiconductors and flexible electronics. The core belief reflected in this book is that ellipsometry applied at the nanoscale offers new ways of addressing many current needs. The book also explores forward-looking potential applications.
Housed in a 4 m diameter tunnel of 27 km circumference, with huge underground labs and numerous surface facilities, and set up with a precision of 0.1 mm per kilometre, the Large Electron-Positron Collider (LEP) was not only the largest but also one of the most sophisticated scientific research instruments ever created. Located at CERN, near Geneva, LEP was built during the years 1983–1989, was operational until 2000, and corroborated the Standard Model of particle physics through continuous high-precision measurements. The author, director-general of CERN during the crucial period of the construction of LEP, vividly recounts the convoluted decision-making and technical implementation processes, the tunnel alone being a highly challenging geo- and civil-engineering project, and the subsequent extremely fruitful period of scientific research. Finally, he describes the difficult decision to close down LEP at a time when the discovery of the Higgs boson seemed within reach. LEP was eventually dismantled in 2000, enabling the tunnel to be reused for building the next-generation machine, the much more powerful Large Hadron Collider (LHC), an upgrade foreseen from the beginning. It became operational just as this account was being completed. Written by the main protagonist responsible for making LEP a reality, this is the definitive inside story of a remarkable machine and of the many thousands of scientists and engineers from around the world whose efforts contributed to the new knowledge it produced.
Large-mass bolometers are used in particle physics experiments to search for rare processes, like neutrinoless double beta decay and dark matter interactions. In the coming years the CUORE experiment (a 1-ton detector composed of 1000 TeO2 crystals operated as bolometers in a large cryostat at 10 mK) will be the particle physics experiment with the highest chance of discovering the Majorana neutrino, a long-standing and yet fundamental question of particle physics. The study presented in this book was performed on the bolometers of the CUORE experiment. The response function of these detectors is not linear in the energy range of interest, and it changes with the operating temperature, worsening the performance. The nonlinearity appeared to be dominated by the thermistor and the biasing circuit used to read out the bolometer, and was modeled using a few measurable parameters. A method to obtain a linear response is the result of this work; it allows a great improvement in detector operation and data analysis. With a foreword by Fernando Ferroni.
In this book, the anomaly-mediated supersymmetry breaking (AMSB) model is explored by searching for charged winos via their subsequent decays in data collected with the ATLAS detector at the Large Hadron Collider (LHC). The author develops a new method, called "re-tracking," to detect charged winos that decay before reaching the Semiconductor Tracker (SCT). Because the nominal tracking algorithm of the ATLAS experiment requires at least seven successive hits in the inner tracking system, the sensitivity to charged winos with lifetimes of a fraction of a nanosecond was limited in past analyses. However, re-tracking requires a minimum of three pixel hits and provides fully efficient tracking for charged winos traversing the pixel detector, resulting in about 100 times greater efficiency for charged winos with a lifetime of ~0.2 ns than in past searches. The signal topology is characterized by a jet with large transverse momentum (pT), large missing transverse energy, and a high-pT disappearing track. There are three types of background tracks: interacting hadron tracks, charged leptons, and tracks with mismeasured pT. A background estimation based on Monte Carlo (MC) simulation suffers from large uncertainties due to poor statistics and has difficulty simulating the properties of background tracks. Therefore, a data-driven approach has been developed by the author to estimate the background track-pT spectrum. No significant excess above the background expectation is observed for candidate tracks with large transverse momentum, and constraints on the AMSB model are obtained. The author shows that in the AMSB model, a charged wino mass below 270 GeV is excluded at 95% confidence level, which also directly constrains the mass of wino dark matter.
Advances in the synthesis of new materials with often complex, nano-scaled structures require increasingly sophisticated experimental techniques that can probe the electronic states, the atomic magnetic moments and the magnetic microstructures responsible for the properties of these materials. At the same time, progress in synchrotron radiation techniques has ensured that these light sources remain a key tool of investigation; for example, third-generation synchrotron radiation sources are able to support magnetic imaging on a sub-micrometre scale. With the Sixth Mittelwihr School on Magnetism and Synchrotron Radiation, the tradition of teaching the state of the art in modern research developments continues, expressed through the set of extensive lectures provided in this volume. While primarily aimed at postgraduate students and newcomers to the field, this volume will also benefit researchers and lecturers actively working in the field.
This thesis reports on the first studies of Standard Model photon production at the Large Hadron Collider (LHC) using the ATLAS detector. Standard Model photon production is a large background in the search for Higgs bosons decaying into photon pairs, and is thus critical to understand. The thesis explains the techniques used to reconstruct and identify photon candidates using the ATLAS detector, and describes a measurement of the production cross section for isolated prompt photons. The thesis also describes a search for the Higgs boson in which the analysis techniques used in the measurement are exploited to reduce and estimate non-prompt backgrounds in diphoton events.
The book offers a thorough introduction to machine vision. It is organized in two parts. The first part covers image acquisition, which is the crucial component of most automated visual inspection systems. All important methods are described in great detail and presented within a reasoned structure. The second part deals with the modeling and processing of image signals and pays particular attention to methods relevant for automated visual inspection.
The main goal of this book is to elucidate what kind of experiment must be performed in order to determine the full set of independent parameters that can be extracted and calculated from theory, where electrons, photons, atoms, ions, molecules, or molecular ions may serve as the interacting constituents of matter. The feasibility of such `perfect' and/or `complete' experiments, providing complete quantum mechanical knowledge of the process, is associated with the enormous potential of modern research techniques, both in experiment and in theory. It is difficult to overestimate the role of theory in the setting of the complete experiment, starting with the fact that an experiment can be complete only within a certain theoretical framework, and ending with the direct prescription of what, and under what conditions, should be measured to make the experiment `complete'. The language of the related theory is the language of quantum mechanical amplitudes and their relative phases. This book captures the spirit of research in the direction of the complete experiment in atomic and molecular physics, considering some of the basic quantum processes: scattering, Auger decay and photoionization. It includes a description of the experimental methods used to realize, step by step, the complete experiment up to the level of the amplitudes and phases. The corresponding arsenal includes, beyond determining the total cross section, the observation of angle- and spin-resolved quantities, photon polarization and correlation parameters, measurements applying coincidence techniques, the preparation of initially polarized targets, and even more sophisticated methods. The `complete' experiment is, even today, hardly possible to perform. Therefore, much attention is paid to the results of state-of-the-art experiments providing detailed information on the process, and to their comparison with the related theoretical approaches, to mention only relativistic multi-configurational Dirac-Fock, convergent close-coupling, Breit-Pauli R-matrix, and relativistic distorted wave approaches, as well as Green's operator methods. This book has been written in honor of Herbert Walther and his major contributions to the field, but also to stimulate advanced Bachelor's and Master's students by demonstrating that atomic and molecular scattering physics today yields much exciting insight and motivation for further advancing the field.
This book reflects the outcome of the 1st International Workshop on Turbulent Spray Combustion, held in 2009 in Corsica (France). The focus is on reporting the progress of experimental and numerical techniques in two-phase flows, with emphasis on spray combustion. The motivation for studies in this area is that knowledge of the dominant phenomena and their interactions in such flow systems is essential for the development of predictive models and their use in combustor and gas turbine design. This necessitates the development of accurate experimental methods and numerical modelling techniques. The workshop aimed to provide an opportunity for experts and young researchers to present the state of the art, discuss new developments and techniques, and exchange ideas in the areas of experiments, modelling and simulation of reactive multiphase flows. The first two papers reflect the contents of the invited lectures, given by experts in the field of turbulent spray combustion. The first concerns computational issues, while the second deals with experiments. These lectures initiated very interesting and interactive discussions among the researchers, further pursued in contributed poster presentations. Contributions 3 and 4 focus on aspects of the impact of the interaction between fuel evaporation and combustion on spray combustion in the context of gas turbines, while the final article deals with the interaction between evaporation and turbulence.