This book is written for scientists involved in the calibration of viscometers. A detailed description of the step-up procedure for establishing the viscosity scale and obtaining sets of master viscometers is given in the book. Uncertainty considerations for standard oils of known viscosity are presented. Modern viscometers based on the principles of the tuning fork, ultrasonics, PZT, plate waves, Love waves, micro-cantilevers and the vibration of optical fibers are discussed to inspire the reader to further research and to generate improved versions. The primary standard for viscosity is pure water; measurements of its viscosity, and the accuracy/uncertainty achieved, are described. The principles of rotational and oscillation viscometers are explained to enhance knowledge useful in calibration work. Devices used for specific materials, and viscosity in non-SI units, are discussed with respect to the need to correlate viscosity values obtained by various devices. The description of commercial viscometers meets the needs of the user.
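Purely as an illustration (not taken from the book), the calibration chain described rests on the capillary-viscometer relation ν = C·t, where C is the instrument constant and t the efflux time; the Python sketch below shows how a constant might be stepped up from a master viscometer, with invented numbers throughout.

```python
# Capillary viscometer basics: kinematic viscosity nu = C * t, with C the
# viscometer constant (mm^2/s^2) and t the efflux time (s). A new
# viscometer's constant is "stepped up" from a master instrument by running
# the same standard oil through both. All numbers are invented.
def kinematic_viscosity(c_mm2_per_s2, efflux_time_s):
    """Return kinematic viscosity in mm^2/s (centistokes)."""
    return c_mm2_per_s2 * efflux_time_s

# Master viscometer with a known constant measures a standard oil:
nu_oil = kinematic_viscosity(0.01, 420.0)   # 4.2 mm^2/s

# The same oil in an uncalibrated viscometer flows out in 350 s,
# which fixes that instrument's constant:
c_new = nu_oil / 350.0
print(f"oil: {nu_oil:.2f} mm^2/s, new viscometer constant: {c_new:.5f} mm^2/s^2")
```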
This thesis reports on the first studies of Standard Model photon production at the Large Hadron Collider (LHC) using the ATLAS detector. Standard Model photon production is a large background in the search for Higgs bosons decaying into photon pairs, and is thus critical to understand. The thesis explains the techniques used to reconstruct and identify photon candidates using the ATLAS detector, and describes a measurement of the production cross section for isolated prompt photons. The thesis also describes a search for the Higgs boson in which the analysis techniques used in the measurement are exploited to reduce and estimate non-prompt backgrounds in diphoton events.
This book reviews the HL-LHC experiments and the fourth-generation photon science experiments, discussing the latest radiation hardening techniques, the optimization of device and process parameters using TCAD simulation tools, and the experimental characterization required to develop rad-hard Si detectors for X-ray-induced surface damage and bulk damage caused by hadronic irradiation. Consisting of eleven chapters, it introduces various types of strip and pixel detector designs for the current upgrade, radiation, and dynamic-range requirements of the experiments, and presents an overview of radiation detectors, especially Si detectors. It also describes the design of pixel detectors, experiments, and the characterization of Si detectors. The book is intended for researchers and master's level students with an understanding of radiation detector physics. It presents a concept that uses TCAD simulation to optimize the electrical performance of the devices used in the harsh radiation environment of the colliders and at XFEL.
This book deals with the properties and behavior of carbon at high temperatures. It presents new methods and new ways to obtain the liquid phase of carbon. The melting of graphite and the properties of liquid carbon are presented under stationary heat and pulse methods. Metal-like properties of molten graphite at high initial density are indicated. A possible new transition of liquid carbon from metallic to nonmetallic behavior well above the melting point is mentioned. Methodical questions of pulse heating, in particular the role of pinch pressure in obtaining a liquid state of carbon, are discussed. The reader finds evidence for the necessity of applying high pressure (higher than 100 bar) to melt graphite (melting temperature 4800 ± 100 K), and can verify the advantage of volume pulse electrical heating over surface laser heating for studying the physical properties of carbon, including enthalpy, heat capacity, electrical resistivity and temperature. The advantages of fast heating of graphite by pulsed electric current over a few microseconds are shown. The data obtained for the heat capacity of liquid carbon under constant pressure and constant volume were used to estimate its behavior at temperatures much higher than 5000 K.
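As a hedged aside, the heat-capacity estimates mentioned above rest on the thermodynamic identity c_p = dH/dT; the sketch below applies it to synthetic enthalpy-temperature data (all values are illustrative, not from the experiments described).

```python
# Pulse heating records specific enthalpy H against temperature T as the
# sample heats; the specific heat follows from c_p = dH/dT. The data below
# are synthetic and purely illustrative.
import numpy as np

T = np.linspace(5000.0, 7000.0, 11)     # K, above the graphite melting point
H = 2.0e4 + 4.0 * (T - 5000.0)          # kJ/kg, invented linear trend

cp = np.gradient(H, T)                  # numerical dH/dT, kJ/(kg K)
print(f"estimated c_p: {cp.mean():.2f} kJ/(kg K)")
```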
Electrostatic accelerators are an important and widespread subgroup within the broad spectrum of modern, large particle acceleration devices. They are specifically designed for applications that require high-quality ion beams in terms of energy stability and emittance at comparatively low energies (a few MeV). Their ability to accelerate virtually any kind of ion over a continuously tunable range of energies makes them a highly versatile tool for investigations in many research fields including, but not limited to, atomic and nuclear spectroscopy, heavy ion reactions, accelerator mass spectrometry, and ion-beam analysis and modification. The book is divided into three parts. The first part concisely introduces the field of accelerator technology and techniques, emphasizing their major modern applications. The second part treats the electrostatic accelerator per se: its construction and operational principles as well as its maintenance. The third part covers all relevant applications in which electrostatic accelerators are the preferred tool for accelerator-based investigations. Since some topics are common to all types of accelerators, Electrostatic Accelerators will also be of value for those more familiar with other types of accelerators.
Analytical chemists and materials scientists will find this a useful addition to their armory. The contributors have sought to highlight the present state of affairs in the validation and quality assurance of fluorescence measurements, as well as the need for future standards. Methods included range from steady-state fluorometry and microfluorometry, microscopy, and micro-array technology, to time-resolved fluorescence and fluorescence depolarization imaging techniques.
This book introduces ellipsometry in nanoscience and nanotechnology, building a bridge between the classical and nanoscale optical behaviour of materials. It delineates the role of the non-destructive and non-invasive optical diagnostics of ellipsometry in improving the science and technology of nanomaterials and related processes by illustrating its exploitation, ranging from fundamental studies of the physics and chemistry of nanostructures to the ultimate goal of turnkey manufacturing control. This book is written for a broad readership: materials scientists, researchers, engineers, as well as students and nanotechnology operators who want to deepen their knowledge about both the basics and the applications of ellipsometry to nanoscale phenomena. It starts as a general introduction for those curious to enter the fields of ellipsometry and polarimetry applied to nanomaterials, and progresses to articles by experts on specific fields spanning from plasmonics and optics to semiconductors and flexible electronics. The core belief reflected in this book is that ellipsometry applied at the nanoscale offers new ways of addressing many current needs. The book also explores forward-looking potential applications.
Success in scientific and engineering research depends on effective writing and presentation. The purpose of this guide is to help the reader achieve that goal. It enables students and researchers to write and present material to a professional modern standard, efficiently and painlessly, and with maximum impact. The approach is not prescriptive. Rather, the emphasis is on a logical approach to communication, informed by what needs to be achieved, what works in practice, and what interferes with success. Over 400 examples of good and bad writing and graphing are presented. Each is from a published research article and is accompanied by analysis, comment, and correction where needed. Journal reviewers' critiques of submitted manuscripts are included to illustrate common pitfalls. Above all, this is a "how-to" book, comprehensive but concise, suitable for continuous study or quick reference. Checklists at the end of each chapter enable the reader to test the readiness of a dissertation, journal submission, or conference presentation for assessment or review. Although oriented towards engineering and the physical and life sciences, it is also relevant to other areas, including behavioural and clinical sciences and medicine.
Sloshing causes liquid to fluctuate, making accurate level readings difficult to obtain in dynamic environments. The measurement system described uses a single-tube capacitive sensor to obtain an instantaneous level reading of the fluid surface, thereby accurately determining the fluid quantity in the presence of slosh. A neural network based classification technique has been applied to predict the actual quantity of the fluid contained in a tank under sloshing conditions. In "A neural network approach to fluid quantity measurement in dynamic environments," the effects of temperature variations and contamination on the capacitive sensor are discussed, and the authors propose that these effects can also be eliminated with the proposed neural network based classification system. To examine the performance of the classification system, many field trials were carried out on a running vehicle at various tank volume levels ranging from 5 L to 50 L. The effectiveness of signal enhancement on the neural network based signal classification system is also investigated. Results obtained from the investigation are compared with traditionally used statistical averaging methods, and prove that the neural network based measurement system can produce highly accurate fluid quantity measurements in a dynamic environment. Although a capacitive sensor was used here to demonstrate the measurement system, the methodology is valid for all types of electronic sensors. The approach demonstrated in "A neural network approach to fluid quantity measurement in dynamic environments" can be applied to a wide range of fluid quantity measurement applications in the automotive, naval and aviation industries to produce accurate fluid level readings. Students, lecturers, and experts will find the description of current research on accurate fluid level measurement in dynamic environments using a neural network approach useful.
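For readers curious how such a system might look in outline, here is a minimal, hypothetical sketch (not the authors' implementation): synthetic capacitance windows with a superimposed slosh oscillation are fed both to a plain moving average and to a small scikit-learn neural network, mirroring the comparison with statistical averaging described above.

```python
# Sketch: estimating fluid volume from a sloshing capacitive-sensor signal.
# A plain moving average is compared with a small neural network trained on
# windowed sensor readings. Synthetic data stands in for real field trials.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

def sensor_window(volume_l, n=50):
    """Simulate one window of level readings (in litre-equivalent units):
    the true level plus a slosh oscillation and measurement noise."""
    t = np.linspace(0, 1, n)
    slosh = 4.0 * np.sin(2 * np.pi * rng.uniform(1, 3) * t + rng.uniform(0, 2 * np.pi))
    return volume_l + slosh + rng.normal(0, 0.5, n)

# Training data: windows sampled across the 5-50 L range used in the trials.
volumes = rng.uniform(5, 50, 2000)
X = np.array([sensor_window(v) for v in volumes])
y = volumes

model = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0)
model.fit(X, y)

test_v = 30.0
w = sensor_window(test_v)
print(f"true volume: {test_v:.1f} L")
print(f"moving-average estimate: {w.mean():.2f} L")
print(f"network estimate: {model.predict(w.reshape(1, -1))[0]:.2f} L")
```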
This book, "Integrated Chemical Microsensor Systems in CMOS Technology," provides a comprehensive treatment of the highly interdisciplinary field of CMOS chemical microsensor systems. It is targeted at students, scientists and engineers who are interested in gaining an introduction to the field of chemical sensing since all the necessary fundamental knowledge is included. However, as it provides detailed information on all important issues related to the realization of chemical microsensors in CMOS technology, it also addresses experts well familiar with the field. After a brief introduction, the fundamentals of chemical sensing are presented. Fabrication and processing steps that are commonly used in the semiconductor industry are then detailed followed by a short description of the microfabrication techniques, and of the CMOS substrate and materials. Thereafter, a comprehensive overview of semiconductor-based and CMOS-based transducer structures for chemical sensors is given. CMOS-technology is then introduced as platform technology, which enables the integration of these microtransducers with the necessary driving and signal conditioning circuitry on the same chip. In a next section, the development of monolithic multisensor arrays and fully developed microsystems with on-chip sensor control and standard interfaces is described. A short section on packaging shows that techniques from the semiconductor industry can be applied to chemical microsensor packaging. The book concludes with a brief outlook on future developments, such as the realization of more complex integrated microsensor systems and methods to interface biological materials, such as cells, with CMOS microelectronics.
In a first approximation, certainly rough, one can define as non-crystalline materials those which are neither single crystals nor polycrystals. Within this category we can include disordered solids, soft condensed matter, and living systems, among others. Contrary to crystals, non-crystalline materials have in common that their intrinsic structures cannot be exclusively described by a discrete and periodic function but by a continuous function with short range of order. Structurally, these systems have in common the relevance of length scales between those defined by the atomic and the macroscopic scale. In a simple fluid, for example, mobile molecules may freely exchange their positions, so that their new positions are permutations of their old ones. By contrast, in a complex fluid large groups of molecules may be interconnected so that the permutation freedom within the group is lost, while the permutation between the groups is possible. In this case, the dominant characteristic length, which may define the properties of the system, is not the molecular size but that of the groups. A central aspect of some non-crystalline materials is that they may self-organize. This is of particular importance for soft-matter materials. Self-organization is characterized by the spontaneous creation of regular structures at different length scales which may exhibit a certain hierarchy that controls the properties of the system. X-ray scattering and diffraction have been, for more than a hundred years, an essential technique for characterizing the structure of materials. Quite often, scattering and diffraction phenomena exhibited by non-crystalline materials have been referred to as non-crystalline diffraction.
This book provides an in-depth overview of on-chip instrumentation technologies and the various approaches taken in adding instrumentation to System on Chip (ASIC, ASSP, FPGA, etc.) designs, which are collectively becoming known as Design for Debug (DfD). On-chip instruments are hardware-based blocks added to a design for the specific purpose of improving the visibility of internal or embedded portions of the design (a specific instruction flow in a processor, or a bus transaction on an on-chip bus, for example) to improve the analysis or optimization capabilities for an SoC. DfD is the methodology and infrastructure that surrounds the instrumentation. Coverage includes specific design examples and discussion of implementations, together with the DfD tradeoffs involved in deciding whether to design or select instrumentation, or to select an SoC that includes instrumentation. Although the focus is on hardware implementations, software and tools are discussed in some detail.
This book presents lecture materials from the Third LOFAR Data School, transformed into a coherent and complete reference book describing the LOFAR design, along with descriptions of primary science cases, data processing techniques, and recipes for data handling. Together with hands-on exercises, the chapters, based on the lecture notes, teach fundamentals and practical knowledge. LOFAR is a new and innovative radio telescope operating at low radio frequencies (10-250 MHz) and is the first of a new generation of radio interferometers that are leading the way to the ambitious Square Kilometre Array (SKA) to be built in the next decade. This unique reference guide serves as a primary information source for research groups around the world that seek to make the most of LOFAR data, as well as for those who will push these topics forward to the next level with the design, construction, and realization of the SKA. This book will also be useful as supplementary reading material for any astrophysics overview or astrophysical techniques course, particularly those geared towards radio astronomy (and radio astronomy techniques).
In fields as diverse as research and development, governance, and international trade, success depends on effective communication and processes. However, limited research exists on how professionals can utilize procedures and express themselves consistently across disciplines. Corporate and Global Standardization Initiatives in Contemporary Society is a critical scholarly resource that examines standardization in organizations. Featuring coverage on a broad range of topics, such as business standards, information technology standards, and mobile communications, this book is geared towards professionals, students, and researchers seeking current research on standardization for diverse settings and applications.
This is the first book to show how to apply the principles of quality assurance to the identification of analytes (qualitative chemical analysis). After presenting the principles of identification and metrological basics, the author focuses on the reliability and the errors of chemical identification. This is then applied to practical examples such as EPA methods, EU, FDA, or WADA regulations. Two whole chapters are devoted to the analysis of unknowns and identification of samples such as foodstuffs or oil pollutions. Essential reading for researchers and professionals dealing with the identification of chemical compounds and the reliability of chemical analysis.
This thesis demonstrates and investigates novel dual-polarization interferometric fiber-optic gyroscope (IFOG) configurations, which utilize optical compensation between two orthogonal polarizations to suppress errors caused by polarization nonreciprocity. Further, it provides a scheme for dual-polarization two-port IFOGs and details their unique benefits. Dual-polarization IFOGs break through the restriction of the "minimal scheme," which conventional IFOGs are based on. These innovative new IFOGs have unique properties: They require no polarizer and have two ports available for signal detection. As such, they open new avenues for IFOGs to achieve lower costs and higher sensitivity.
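For orientation, the quantity any IFOG measures is the Sagnac phase shift; the sketch below evaluates the textbook formula Δφ = (2πLD/λc)·Ω for illustrative coil parameters (not taken from the thesis).

```python
# Sagnac phase shift, the quantity an IFOG measures:
#   delta_phi = (2 * pi * L * D / (lambda * c)) * Omega
# L: fibre length, D: coil diameter, lambda: wavelength, Omega: rotation rate.
import math

C = 2.99792458e8  # speed of light, m/s

def sagnac_phase(fiber_length_m, coil_diameter_m, wavelength_m, rotation_rad_s):
    return (2 * math.pi * fiber_length_m * coil_diameter_m
            / (wavelength_m * C)) * rotation_rad_s

# Example: 1 km coil, 10 cm diameter, 1550 nm source, Earth's rotation rate.
earth_rate = 7.292e-5  # rad/s
phi = sagnac_phase(1000.0, 0.10, 1550e-9, earth_rate)
print(f"Sagnac phase from Earth's rotation: {phi * 1e6:.1f} microradians")
```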
This thesis unites the fields of optical atomic clocks and ultracold molecular science, laying the foundation for optical molecular measurements of unprecedented precision. Building upon optical manipulation techniques developed by the atomic clock community, this work delves into attaining surgical control of molecular quantum states. The thesis develops two experimental observables that one can measure with optical-lattice-trapped ultracold molecules: extremely narrow optical spectra, and angular distributions of photofragments that are ejected when the diatomic molecules are dissociated by laser light pulses. The former allows molecular spectroscopy approaching the level of atomic clocks, leading into molecular metrology and tests of fundamental physics. The latter opens the field of ultracold chemistry through observation of quantum effects such as matter-wave interference of photofragments and tunneling through reaction barriers. The thesis also describes the discovery of a new method of thermometry that can be used near absolute zero for particles lacking cycling transitions, solving a long-standing experimental problem in atomic and molecular physics.
This book covers the topic of eddy current nondestructive evaluation, the most commonly practiced method of electromagnetic nondestructive evaluation (NDE). It emphasizes a clear presentation of the concepts, laws and relationships of electricity and magnetism upon which eddy current inspection methods are founded. The chapters include material on signals obtained using many common eddy current probe types in various testing environments. Introductory mathematical and physical concepts in electromagnetism are introduced in sufficient detail and summarized in the Appendices for easy reference. Worked examples and simple calculations that can be done by hand are distributed throughout the text. These and more complex end-of-chapter examples and assignments are designed to impart a working knowledge of the connection between electromagnetic theory and the practical measurements described. The book is intended to equip readers with sufficient knowledge to optimize routine eddy current NDE inspections, or design new ones. It is useful for graduate engineers and scientists seeking a deeper understanding of electromagnetic methods of NDE than can be found in a guide for practitioners.
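As an example of the kind of hand calculation the description mentions, the standard depth of penetration (skin depth) δ = 1/√(πfμσ) is central to choosing an eddy current test frequency; the sketch below uses textbook material constants, not values from the book.

```python
# Skin depth (standard depth of penetration) for eddy current testing:
# delta = 1 / sqrt(pi * f * mu * sigma), the depth at which eddy current
# density has fallen to 1/e of its surface value.
import math

MU0 = 4 * math.pi * 1e-7  # vacuum permeability, H/m

def skin_depth(freq_hz, conductivity_s_per_m, relative_permeability=1.0):
    """Standard depth of penetration in metres."""
    mu = MU0 * relative_permeability
    return 1.0 / math.sqrt(math.pi * freq_hz * mu * conductivity_s_per_m)

# Example: aluminium (sigma ~ 3.5e7 S/m, mu_r ~ 1) probed at 100 kHz.
d = skin_depth(100e3, 3.5e7)
print(f"skin depth in aluminium at 100 kHz: {d * 1e3:.3f} mm")
```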
The joint NASA-ESA Cassini-Huygens mission promises to return four (and possibly more) years of unparalleled scientific data from the solar system's most exotic planet, the ringed gas giant Saturn. Larger than Galileo and with a much greater communication bandwidth, Cassini can accomplish in a single flyby what Galileo returned in a series of passes. Cassini explores the Saturn environment in three dimensions, using gravity assists to climb out of the equatorial plane to look down on the rings from above, to image the aurora and to study polar magnetospheric processes such as field-aligned currents. Since the radiation belt particle fluxes are much more benign than those at Jupiter, Cassini can more safely explore the inner regions of the magnetosphere. The spacecraft approaches the planet closer than Galileo could, and explores the inner moons and the rings much more thoroughly than was possible at Jupiter. This book is the second volume in a three-volume set describing the Cassini/Huygens mission. This volume describes the in situ investigations on the Cassini orbiter: plasma spectrometer, ion and neutral mass spectrometer, energetic charged and neutral particle spectrometer, magnetometer, radio and plasma wave spectrometer and the cosmic dust analyzer. This book is of interest to all potential users of the Cassini-Huygens data, to those who wish to learn about the planned scientific return from the Cassini-Huygens mission and to those curious about the processes occurring on this most fascinating planet. A third volume describes the remote sensing investigations on the orbiter.
This book provides insights into sensor development for structural health monitoring. Current technological advances mean that the field is changing rapidly, making standardization an ongoing challenge. As such, the book gathers several essential contributions in the area of sensor development, including macro-fiber composite sensors for crack detection and optical fiber Bragg gratings for flaw detection. It also discusses the use of welds in the structure as sensors, and the probability of detection for various sensor configurations. In addition, it presents methods based on vibration signal variations to detect small defects in composite components or to monitor large structures. Last but not least, the book includes special structural health monitoring applications in industrial components such as nuclear boiler support spines and industrial presses, as well as in corrosion monitoring of pipes.
The research presented here includes important contributions on the commissioning of the ATLAS experiment and the discovery of the Higgs boson. The thesis describes essential work on the alignment of the inner tracker during the commissioning of the experiment and development of the electron identification algorithm. The subsequent analysis focuses on the search for the Higgs boson in the WW channel, including the development of a method to model the critical W+jet background. In addition, the thesis provides excellent introductions, suitable for non-specialists, to Higgs physics, to the LHC, and to the ATLAS experiment.
Precision Nanometrology describes the new field of precision nanometrology, which plays an important part in nanoscale manufacturing of semiconductors, optical elements, precision parts and similar items. It pays particular attention to the measurement of surface forms of precision workpieces and to stage motions of precision machines. The first half of the book is dedicated to the description of optical sensors for the measurement of angle and displacement, which are fundamental quantities for precision nanometrology. The second half presents a number of scanning-type measuring systems for surface forms and stage motions, including error separation algorithms and systems for the measurement of straightness and roundness, the measurement of micro-aspherics, systems based on scanning probe microscopy, and scanning image-sensor systems. Precision Nanometrology presents the fundamental and practical technologies of precision nanometrology with a helpful selection of algorithms, instruments and experimental data. It will be beneficial for researchers, engineers and postgraduate students involved in precision engineering, nanotechnology and manufacturing.
A discussion of recently developed experimental methods for noise research in nanoscale electronic devices, conducted by specialists in transport and stochastic phenomena in nanoscale physics. The approach described is to create methods for the experimental observation of noise sources: their localization, their frequency spectra, and their voltage-current and thermal dependences. Current knowledge of measurement methods for mesoscopic devices is summarized to identify directions for future research related to downscaling effects, and directions for future research into fluctuation phenomena in quantum dot and quantum wire devices are specified. Nanoscale electronic devices will be the basic components of electronics in the 21st century, and from this point of view the signal-to-noise ratio is a very important parameter for device applications. Since noise is also a quality and reliability indicator, these experimental methods will find wide application in the future.
This book brings together experts in the field of astronomical photometry to discuss how their subfields provide the precision and accuracy in astronomical energy flux measurements that are needed to permit tests of astrophysical theories. Differential photometers and photometry, improvements in infrared precision, improvements in the precision and accuracy of CCD photometry, the absolute calibration of flux, the development of the Johnson UBVRI photometric system and other passband systems to measure and precisely classify specific types of stars and astrophysical quantities, and the current capabilities of spectrophotometry and polarimetry to provide precise and accurate data are all discussed in this volume. The discussion of differential or two-star photometers includes those developed for planetary as well as stellar photometry, ranging from the Princeton polarizing photometer through the pioneering work of Walraven to the differential photometers designed to measure the ashen light of Venus and to counter the effects of aurorae at high latitude sites; the last to be discussed is the Rapid Alternate Detection System (RADS) developed at the University of Calgary in the 1980s.
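For context, differential photometry at its simplest reduces to the magnitude difference Δm = -2.5 log10(F_target/F_comparison) between a target and a comparison star observed together, so that atmospheric extinction largely cancels; the minimal sketch below uses invented fluxes.

```python
# Differential photometry in its simplest form: the magnitude difference
# between a target and a comparison star observed simultaneously.
import math

def differential_magnitude(target_flux, comparison_flux):
    """Delta m = -2.5 * log10(F_target / F_comparison)."""
    return -2.5 * math.log10(target_flux / comparison_flux)

# Example: the target delivers 80% of the comparison star's flux.
print(f"delta m = {differential_magnitude(0.8, 1.0):+.3f} mag")
```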
In this thesis the author contributes to the analysis of neutrino beam data collected between 2010 and 2013 to identify νe events at the Super-Kamiokande detector. In particular, the author improves the pion-nucleus interaction uncertainty, which is one of the dominant systematic error sources in T2K neutrino oscillation measurements. In the thesis, the measurement of νμ → νe oscillation in the T2K (Tokai to Kamioka) experiment is presented and a new constraint on δCP is obtained. This measurement and the analysis establish, at greater than 5σ significance, the observation of νμ → νe oscillation for the first time in the world. Combining the T2K νμ → νe oscillation measurement with the latest findings on oscillation parameters, including the world average value of θ13 from reactor experiments, a constraint on the value of δCP at the 90% confidence level is obtained. This constraint on δCP is an important step towards the discovery of CP violation in the lepton sector.
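For orientation, the leading-order νμ → νe appearance probability (neglecting matter effects and the δCP-dependent terms that the thesis actually constrains) is P ≈ sin²θ23 · sin²(2θ13) · sin²(1.267·Δm²·L/E); the sketch below evaluates it with roughly T2K-like, purely illustrative parameters.

```python
# Leading-order nu_mu -> nu_e appearance probability (no matter effects,
# no CP-violating terms):
#   P = sin^2(theta23) * sin^2(2*theta13) * sin^2(1.267 * dm2 * L / E)
# with dm2 in eV^2, L in km, E in GeV. Illustrative parameter values only.
import math

def p_nue_appearance(sin2_theta23, sin2_2theta13, dm2_ev2, baseline_km, energy_gev):
    phase = 1.267 * dm2_ev2 * baseline_km / energy_gev
    return sin2_theta23 * sin2_2theta13 * math.sin(phase) ** 2

# Roughly T2K-like numbers: L = 295 km, beam peak near E ~ 0.6 GeV.
p = p_nue_appearance(0.5, 0.09, 2.5e-3, 295.0, 0.6)
print(f"P(nu_mu -> nu_e) ~ {p:.3f}")
```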
You may like...
Advances in Mathematical Modeling and… by Rivka Gilat, Leslie Banks-Sills (Hardcover, R2,704)
Solar, Stellar and Galactic Connections… by Alberto Carraminana, Francisco Siddharta Guzman Murillo, … (Hardcover, R4,043)
New Perspectives on Applied Industrial… by Jorge Luis Garcia-Alcaraz, Giner Alor-Hernandez, … (Hardcover)