This book focuses on how to keep blast furnaces running stably and smoothly with low consumption and long operating life spans. Assessing and adjusting blast furnace performance are key to operation. The book describes in detail cases of both successful and failed blast furnace operation. It also demonstrates various phenomena and "symptoms" in the smelting process that have rarely been studied before, e.g. abnormal gas distribution, bending loss of tuyere, slag crust fall-off, blast furnace thickening, and hearth accumulation. As such, it will help readers understand internal phenomena in blast furnaces, providing a basis for developing intelligent control and management systems.
The rapid development of China's transportation system brings huge challenges for fire safety. Fire Protection Engineering Applications for Large Transportation Systems in China analyzes key fire issues for large transportation systems such as railways, airports and tunnels, and offers solutions and best practices for similar projects throughout the world. The first monograph to address transportation hub fire issues in China, it examines architectural features, occupancy and area classification, fire hazards, and design difficulties under local code-based design. The book then provides case studies to identify common problems and introduces possible solutions in order to develop best practices for future design and improvement. The authors worked directly on the case studies provided, which include the Hongqiao airport transportation hub, performance-based design studies for the Beijing and Pudong airports, subways in different cities, and the high-speed train system across China. They use their research and investigation to form the theoretical basis for the fire design of large urban transportation hubs and the establishment of corresponding fire codes. The cutting-edge technologies discussed include: smoke control strategies for complex multi-function spaces; performance-based studies of assisted evacuation; new technologies for fire separation; new fire products for smoke detection; intelligent guidance systems for evacuation; and the use of BIM and the Internet of Things to improve fire management.
[FIRST EDITION] This accessible textbook presents an introduction to computer vision algorithms for industrially-relevant applications of X-ray testing. Features: introduces the mathematical background for monocular and multiple view geometry; describes the main techniques for image processing used in X-ray testing; presents a range of different representations for X-ray images, explaining how these enable new features to be extracted from the original image; examines a range of known X-ray image classifiers and classification strategies; discusses some basic concepts for the simulation of X-ray images and presents simple geometric and imaging models that can be used in the simulation; reviews a variety of applications for X-ray testing, from industrial inspection and baggage screening to the quality control of natural products; provides supporting material at an associated website, including a database of X-ray images and a Matlab toolbox for use with the book's many examples.
This book discusses various challenges and solutions in the fields of operation, control, design, monitoring and protection of microgrids, and facilitates the integration of renewable energy and distribution systems through localization of generation, storage and consumption. It covers five major topics relating to microgrids, i.e., operation, control, design, monitoring and protection. The book is primarily intended for electric power and control engineering researchers who are seeking factual information, but also appeals to professionals from other engineering disciplines wanting an overview of the entire field or specific information on one aspect of it. Featuring practical case studies and demonstrating different root causes of large power failures, it helps readers develop new concepts for mitigating blackout issues. This book is a comprehensive reference resource for graduate and postgraduate students, academic researchers, and practicing engineers working in the fields of power systems and microgrids.
This book addresses the experimental calibration of best-estimate numerical simulation models. The results of measurements and computations are never exact. Therefore, knowing only the nominal values of experimentally measured or computed quantities is insufficient for applications, particularly since the respective experimental and computed nominal values seldom coincide. In the author's view, the objective of predictive modeling is to extract "best estimate" values for model parameters and predicted results, together with "best estimate" uncertainties for these parameters and results. To achieve this goal, predictive modeling combines imprecisely known experimental and computational data, which calls for reasoning on the basis of incomplete, error-rich, and occasionally discrepant information. The customary methods used for data assimilation combine experimental and computational information by minimizing an a priori, user-chosen, "cost functional" (usually a quadratic functional that represents the weighted errors between measured and computed responses). In contrast to these user-influenced methods, the BERRU (Best Estimate Results with Reduced Uncertainties) Predictive Modeling methodology developed by the author relies on the thermodynamics-based maximum entropy principle to eliminate the need for relying on minimizing user-chosen functionals, thus generalizing the "data adjustment" and/or the "4D-VAR" data assimilation procedures used in the geophysical sciences. The BERRU predictive modeling methodology also provides a "model validation metric" which quantifies the consistency (agreement/disagreement) between measurements and computations. This "model validation metric" (or "consistency indicator") is constructed from parameter covariance matrices, response covariance matrices (measured and computed), and response sensitivities to model parameters. 
Traditional methods for computing response sensitivities are hampered by the "curse of dimensionality," which makes them impractical for applications to large-scale systems that involve many imprecisely known parameters. Reducing the computational effort required for precisely calculating the response sensitivities is paramount, and the comprehensive adjoint sensitivity analysis methodology developed by the author shows great promise in this regard, as shown in this book. After discarding inconsistent data (if any) using the consistency indicator, the BERRU predictive modeling methodology provides best-estimate values for predicted parameters and responses along with best-estimate reduced uncertainties (i.e., smaller predicted standard deviations) for the predicted quantities. Applying the BERRU methodology yields optimal, experimentally validated, "best estimate" predictive modeling tools for designing new technologies and facilities, while also improving on existing ones.
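The "customary" data-assimilation approach the blurb contrasts BERRU against, minimizing a quadratic cost functional of weighted misfits between background parameters and measured responses, can be illustrated with a minimal linear example. All numbers and the Python formulation below are hypothetical, chosen only to show the mechanics; BERRU itself replaces the user-chosen functional with a maximum-entropy argument:

```python
import numpy as np

# Illustrative 3D-Var-style assimilation: minimize the quadratic cost
#   J(x) = (x - xb)^T B^-1 (x - xb) + (y - H x)^T R^-1 (y - H x)
# for a linear observation operator H (hypothetical numbers throughout).
xb = np.array([1.0, 2.0])        # a priori ("background") parameter values
B  = np.diag([0.5, 0.8])         # prior parameter covariance
H  = np.array([[1.0, 1.0]])      # linear model: response = sum of parameters
y  = np.array([3.5])             # measured response
R  = np.array([[0.1]])           # measurement covariance

# Closed-form minimizer (Kalman gain form) and posterior covariance.
K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)
x_best = xb + K @ (y - H @ xb)
P = (np.eye(2) - K @ H) @ B      # posterior ("best estimate") covariance

print(x_best)       # best-estimate parameters
print(np.diag(P))   # posterior variances, smaller than diag(B)
```

The posterior covariance `P` has smaller diagonal entries than the prior `B`, which is the generic sense in which combining measurements with computations yields "reduced uncertainties".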
Computational intelligence is rapidly becoming an essential part of reliability engineering. This book offers a wide spectrum of viewpoints on the merger of these technologies. Leading scientists share their insights and progress on reliability engineering techniques, suitable mathematical methods, and practical applications. Thought-provoking ideas embedded in a solid scientific basis contribute to the development of this emerging field. This book is for anyone working on the most fundamental paradigm shift in resilience engineering in decades. Scientists benefit from this book by gaining insight into the latest developments in the merger of reliability engineering and computational intelligence. Businesses and (IT) suppliers can find inspiration for the future, and reliability engineers can use the book to move closer to the cutting edge of technology.
This book provides methods and concepts that enable engineers to design mass- and cost-efficient products. To this end, it introduces the background and motivation for sustainability and lightweight design from a durability and quality point of view. The book thus takes a "top-down" approach: What does an engineer have to do to provide a mass- and cost-efficient solution? A central part of that approach is the "stress-strength interference model" and how to deal with "stresses" (caused by operational loads) as well as with the "strength" of components (provided by material, design and manufacturing process). The basic concepts of material fatigue are introduced, but the focus of the volume is on developing an understanding of the content and sequence of the engineering tasks that help to build reliable products. This book is therefore aimed specifically at students of mechanical engineering and mechatronics and at engineers in professional practice.
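The stress-strength interference model mentioned above has a classic closed form when stress and strength are assumed to be independent and normally distributed (an assumption made here purely for illustration, not stated in the blurb). A minimal Python sketch:

```python
import math

def reliability(mu_strength, sd_strength, mu_stress, sd_stress):
    """P(strength > stress) for independent, normally distributed
    stress and strength (the stress-strength interference model)."""
    z = (mu_strength - mu_stress) / math.hypot(sd_strength, sd_stress)
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Hypothetical component: strength 500 +/- 40 MPa, operating stress 350 +/- 30 MPa.
print(f"{reliability(500, 40, 350, 30):.6f}")  # -> 0.998650
```

Reliability grows with the separation of the two distributions and shrinks with their scatter, which is why reducing load or strength variability can matter as much as adding material.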
This book highlights operation principles for Air Traffic Control Automated Systems (ATCAS), new scientific directions in design and application of dispatching training simulators and parameters of ATCAS radio equipment items for aircraft positioning. This book is designed for specialists in air traffic control and navigation at a professional and scientific level. The following topics are also included in this book: personnel actions in emergency, including such unforeseen circumstances as communication failure, airplane wandering off course, unrecognized aircraft appearance in the air traffic service zone, aerial target interception, fuel draining, airborne collision avoidance system (ACAS) alarm, emergency stacking, and a volcanic ash cloud directly ahead.
This book presents selected papers from the 3rd Global Summit of Research Institutes for Disaster Risk Reduction - Expanding the Platform for Bridging Science and Policy Making, which was held at the Disaster Prevention Research Institute (DPRI), Kyoto University, Uji Campus from 19 to 21 March 2017. It was organised by the Global Alliance of Disaster Research Institutes (GADRI), which was established soon after the second Global Summit and the UN World Conference on Disaster Risk Reduction in March 2015, and is intended to support the implementation of the Sendai Framework for Disaster Risk Reduction 2015-2030. The conference not only provided a platform for discussion and exchange of information on key current and future research projects on disaster risk reduction and management, but also promoted active dialogues through group discussion sessions that addressed various disaster research disciplines. In this book, authors from various disciplines working at governmental and international organisations provide guidance to the science and technical community, discuss the current challenges, and evaluate the research needs and gaps in the context of climate change, sustainable development goals and other interlinked global disaster situations. Expert opinions from practitioners and researchers provide valuable insights into how to connect and engage in collaborative research with the international science and technical communities and other stakeholders to achieve the goals set out in the agenda of the Sendai Framework for Disaster Risk Reduction 2015-2030. In addition, case studies and other evidence-based research papers highlight ongoing research projects and reflect the challenges encountered in information sharing by various stakeholders in the context of disaster risk reduction and management. 
Chapter "Science and technology commitment to the implementation of the Sendai Framework for Disaster Risk Reduction 2015-2030" is available open access under a Creative Commons Attribution 4.0 International License via link.springer.com.
This book presents a compilation of selected papers from the Fourth International Symposium on Software Reliability, Industrial Safety, Cyber Security and Physical Protection of Nuclear Power Plant, held in August 2019 in Guiyang, China. The purpose of the symposium was to discuss inspection, testing, certification and research concerning the software and hardware of instrument and control (I&C) systems used at nuclear power plants (NPP), such as sensors, actuators and control systems. The event provided a venue for exchange among experts, scholars and nuclear power practitioners, as well as a platform for combining teaching and research at universities and enterprises to promote the safe development of nuclear power plants. Readers will find a wealth of valuable insights into achieving safer and more efficient instrumentation and control systems.
This book is a practical guide to the uncertainty analysis of computer model applications. Used in many areas, such as engineering, ecology and economics, computer models are subject to various uncertainties at the level of model formulations, parameter values and input data. Naturally, it would be advantageous to know the combined effect of these uncertainties on the model results as well as whether the state of knowledge should be improved in order to reduce the uncertainty of the results most effectively. The book supports decision-makers, model developers and users in their argumentation for an uncertainty analysis and assists them in the interpretation of the analysis results.
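The "combined effect" of input uncertainties on model results is most commonly estimated by Monte Carlo propagation: sample the uncertain inputs from assumed distributions, run the model on each sample, and summarize the spread of the outputs. A minimal sketch with a hypothetical two-parameter model (the model and the distributions are illustrative only, not taken from the book):

```python
import random
import statistics

random.seed(42)

# Hypothetical model: a result depending on two uncertain parameters.
def model(k, c):
    return k / (1.0 + c)

# Sample the parameter uncertainties (assumed distributions) and propagate.
results = []
for _ in range(10_000):
    k = random.gauss(2.0, 0.2)      # parameter k ~ Normal(2.0, 0.2)
    c = random.uniform(0.5, 1.5)    # parameter c ~ Uniform(0.5, 1.5)
    results.append(model(k, c))

mean = statistics.fmean(results)
sd = statistics.stdev(results)
s = sorted(results)
lo, hi = s[250], s[9749]            # approximate central 95% interval
print(f"mean={mean:.3f} sd={sd:.3f} 95% interval=({lo:.3f}, {hi:.3f})")
```

The resulting standard deviation and interval answer the first question posed in the blurb; repeating the exercise with one input fixed shows how much that input contributes, which addresses the second (where to improve the state of knowledge).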
This book presents the latest research in the fields of reliability theory and its applications, providing a comprehensive overview of reliability engineering and discussing various tools, techniques, strategies and methods within these areas. Reliability analysis is one of the most multidimensional topics in the field of systems reliability engineering, and while its rapid development creates opportunities for industrialists and academics, it also means that it is hard to keep up to date with the research taking place. By gathering findings from institutions around the globe, the book offers insights into the international developments in the field. As well as discussing the current areas of research, it also identifies knowledge gaps in reliability theory and its applications and highlights fruitful avenues for future research. Covering topics from life cycle sustainability to performance analysis of cloud computing, this book is ideal for upper undergraduate and postgraduate researchers studying reliability engineering.
This book talks about the dynamics of the surface water-groundwater contaminant interactions under different environmental conditions across the world. The contents of the book highlight trends of monitoring, prediction, awareness, learning, policy, and mitigation success. The book provides a description of the background processes and factors controlling resilience, risk, and response of water systems, contributing to the development of more efficient, sustainable technologies and management options. It integrates methodologies and techniques such as data science and engineering, remote sensing, modelling, analytics, synthesis and indices, disruptive innovations and their utilization in water management, policy making, and mitigation strategies. The book is intended to be a comprehensive reference for students, professionals, and researchers working on various aspects of science and technology development. It will also prove a useful resource for policy makers and implementation specialists.
This book provides insights into important new developments in the area of statistical quality control and critically discusses methods used in on-line and off-line statistical quality control. The book is divided into three parts: Part I covers statistical process control, Part II deals with design of experiments, while Part III focuses on fields such as reliability theory and data quality. The 12th International Workshop on Intelligent Statistical Quality Control (Hamburg, Germany, August 16 - 19, 2016) was jointly organized by Professors Sven Knoth and Wolfgang Schmid. The contributions presented in this volume were carefully selected and reviewed by the conference's scientific program committee. Taken together, they bridge the gap between theory and practice, making the book of interest to both practitioners and researchers in the field of quality control.
This contributed book focuses on major aspects of statistical quality control, shares insights into important new developments in the field, and adapts established statistical quality control methods for use in e.g. big data, network analysis and medical applications. The content is divided into two parts, the first of which mainly addresses statistical process control, also known as statistical process monitoring. In turn, the second part explores selected topics in statistical quality control, including measurement uncertainty analysis and data quality. The peer-reviewed contributions gathered here were originally presented at the 13th International Workshop on Intelligent Statistical Quality Control, ISQC 2019, held in Hong Kong on August 12-14, 2019. Taken together, they bridge the gap between theory and practice, making the book of interest to both practitioners and researchers in the field of statistical quality control.
This book provides a detailed introduction to maintenance policies and the current and future research in these fields, highlighting mathematical formulation and optimization techniques. It comprehensively describes the state of the art in maintenance modelling and optimization for single- and multi-unit technical systems, and also investigates the problem of the estimation process of delay-time parameters and how this affects system performance. The book discusses delay-time modelling for multi-unit technical systems in various reliability structures, examining the optimum maintenance policies both analytically and practically, focusing on a delay-time modelling technique that has been employed by researchers in the field of maintenance engineering to model inspection intervals. It organizes the existing work into several fields, based mainly on the classification of single- and multi-unit models, and assesses the applicability of the reviewed works and maintenance models. Lastly, it identifies potential future research directions and suggests research agendas. This book is a valuable resource for maintenance engineers, reliability specialists, and researchers, as it demonstrates the latest developments in maintenance, inspection and delay-time-based maintenance modelling issues. It is also of interest to graduate and senior undergraduate students, as it introduces current theory and practice in maintenance modelling issues, especially in the field of delay-time modelling.
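The delay-time idea can be sketched in its simplest textbook form (a deliberate simplification for illustration, not necessarily the book's exact formulation, and with all cost and rate values hypothetical): defects arrive at random, each one survives a random "delay time" before turning into a failure, and an inspection every T time units catches any defect whose delay time has not yet elapsed. The cost-optimal T balances inspection cost against failure cost:

```python
import math

# Simplified delay-time model: defects arise as a Poisson process with rate
# lam, and each defect fails after an exponential delay time with mean mdt
# unless an inspection (held every T time units) finds it first.
lam = 0.5        # defect arrival rate per unit time (assumed)
mdt = 10.0       # mean delay time (assumed)
c_insp = 100.0   # cost of one inspection (assumed)
c_fail = 1500.0  # cost of one failure (assumed)

def failure_fraction(T):
    """Fraction b(T) of defects arising in a cycle that fail before the
    next inspection, for exponential delay times:
    b(T) = 1 - (mdt/T) * (1 - exp(-T/mdt))."""
    return 1.0 - (mdt / T) * (1.0 - math.exp(-T / mdt))

def cost_rate(T):
    """Expected cost per unit time for inspection interval T."""
    return (c_insp + c_fail * lam * T * failure_fraction(T)) / T

# Crude grid search for the cost-minimizing inspection interval.
best_T = min((t / 10 for t in range(1, 500)), key=cost_rate)
print(f"optimal interval ~ {best_T:.1f}, cost rate ~ {cost_rate(best_T):.1f}")
```

Very frequent inspection wastes inspection cost, very rare inspection lets most defects mature into failures; the optimum sits between the two, which is the basic trade-off delay-time models formalize.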
This book discusses the new roles that the VLSI (very-large-scale integration of semiconductor circuits) is taking for the safe, secure, and dependable design and operation of electronic systems. The book consists of three parts. Part I, as a general introduction to this vital topic, describes how electronic systems are designed and tested with particular emphasis on dependability engineering, where the simultaneous assessment of the detrimental outcome of failures and cost of their containment is made. This section also describes the related research project "Dependable VLSI Systems," in which the editor and authors of the book were involved for 8 years. Part II addresses various threats to the dependability of VLSIs as key systems components, including time-dependent degradations, variations in device characteristics, ionizing radiation, electromagnetic interference, design errors, and tampering, with discussion of technologies to counter those threats. Part III elaborates on the design and test technologies for dependability in such applications as control of robots and vehicles, data processing, and storage in a cloud environment and heterogeneous wireless telecommunications. This book is intended to be used as a reference for engineers who work on the design and testing of VLSI systems with particular attention to dependability. It can be used as a textbook in graduate courses as well. Readers interested in dependable systems from social and industrial-economic perspectives will also benefit from the discussions in this book.
Expert practical and theoretical coverage of runs and scans. This volume presents both theoretical and applied aspects of runs and scans, and illustrates their important role in reliability analysis through various applications from science and engineering. Runs and Scans with Applications presents new and exciting content in a systematic and cohesive way in a single comprehensive volume, complete with relevant approximations and explanations of some limit theorems. The authors provide detailed discussions of both classical and current problems.
Runs and Scans with Applications offers broad coverage of the subject in the context of reliability and life-testing settings and serves as an authoritative reference for students and professionals alike.
This book provides engineers and scientists with practical fundamentals for turbomachinery design. It presents a detailed analysis of existing procedures for the analysis of rotor and structure dynamics, while keeping mathematical equations to a minimum. Specific terminologies are used for rotors and structures, respectively, allowing the readers to clearly distinguish between the two. Further, the book describes the essential concepts needed to understand rotor failure modes due to lateral and torsional oscillations. It guides the reader from simple single-degree-of-freedom models to the most complex multi-degree-of-freedom systems, and provides useful information concerning steel pedestal stiffness degradation and other structural issues. Fluid-film bearing types and their dynamical behavior are extensively covered and discussed in the context of various turbomachinery applications. The book also discusses shaft alignment and rotor balancing from a practical point of view, providing readers with essential information to help them solve practical problems. As the main body of the book focuses on the diagnostics and description of case studies addressing the most pressing practical issues, together with their successful solutions, it offers a valuable reference guide, helping field engineers manage day-to-day issues with turbomachinery.
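The progression from single-degree-of-freedom models, together with the pedestal stiffness degradation mentioned above, can be illustrated with a toy calculation (all mass and stiffness values below are hypothetical): treating the bearing and its pedestal as springs in series, a softer pedestal lowers the effective support stiffness and therefore the critical speed.

```python
import math

# Single-degree-of-freedom rotor sketch: the shaft sees the bearing and
# its pedestal as springs in series, so a degraded pedestal lowers the
# effective stiffness and hence the critical speed.
def critical_speed_rpm(m, k_bearing, k_pedestal):
    k_eff = 1.0 / (1.0 / k_bearing + 1.0 / k_pedestal)  # springs in series
    omega = math.sqrt(k_eff / m)                        # natural frequency, rad/s
    return omega * 60.0 / (2.0 * math.pi)               # convert to rpm

m = 500.0    # effective rotor mass per bearing, kg (assumed)
k_b = 2.0e8  # bearing oil-film stiffness, N/m (assumed)
print(f"healthy pedestal:  {critical_speed_rpm(m, k_b, 5.0e8):.0f} rpm")
print(f"degraded pedestal: {critical_speed_rpm(m, k_b, 1.0e8):.0f} rpm")
```

A drop in critical speed of this kind is one reason pedestal degradation can move a resonance into the operating speed range even though the rotor itself is unchanged.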
This book expands on the subject matter of 'Computational Electromagnetics and Model-Based Inversion: A Modern Paradigm for Eddy-Current Nondestructive Evaluation.' It includes (a) voxel-based inversion methods, which are generalizations of model-based algorithms; (b) a complete electromagnetic model of advanced composites (and other novel exotic materials), stressing the highly anisotropic nature of these materials, as well as giving a number of applications to nondestructive evaluation; and (c) an up-to-date discussion of stochastic integral equations and propagation-of-uncertainty models in nondestructive evaluation. As such, the book combines research started twenty-five years ago in advanced composites and voxel-based algorithms, but published in scattered journal articles, as well as recent research in stochastic integral equations. All of these areas are of considerable interest to the aerospace, nuclear power, civil infrastructure, materials characterization and biomedical industries. The book covers the topic of computational electromagnetics in eddy-current nondestructive evaluation (NDE) by emphasizing three distinct topics: (a) fundamental mathematical principles of volume-integral equations as a subset of computational electromagnetics, (b) mathematical algorithms applied to signal-processing and inverse scattering problems, and (c) applications of these two topics to problems in which real and model data are used. It is therefore more than an academic exercise and is valuable to users of eddy-current NDE technology in industries as varied as nuclear power, aerospace, materials characterization and biomedical imaging.
Cyber-physical systems (CPSs) combine cyber capabilities, such as computation or communication, with physical capabilities, such as motion or other physical processes. Cars, aircraft, and robots are prime examples, because they move physically in space in a way that is determined by discrete computerized control algorithms. Designing these algorithms is challenging due to their tight coupling with physical behavior, while it is vital that these algorithms be correct because we rely on them for safety-critical tasks. This textbook teaches undergraduate students the core principles behind CPSs. It shows them how to develop models and controls; identify safety specifications and critical properties; reason rigorously about CPS models; leverage multi-dynamical systems compositionality to tame CPS complexity; identify required control constraints; verify CPS models of appropriate scale in logic; and develop an intuition for operational effects. The book is supported with homework exercises, lecture videos, and slides.
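The interplay of a discrete control algorithm with continuous physical motion, and the role of a safety invariant, can be illustrated with a toy simulation (illustrative only, not from the textbook, which reasons about such systems in logic rather than by simulation): a car accelerates only when a subsequent full-braking maneuver would still stop it before an obstacle.

```python
# Toy cyber-physical control loop: each controller cycle, the car checks
# whether accelerating for one cycle keeps braking-to-a-stop safe before
# a fixed obstacle; then the physics evolves for one time step.
A, B, DT = 2.0, 4.0, 0.1   # max acceleration, max braking, cycle length
OBSTACLE = 100.0

def safe_to_accelerate(x, v):
    """After accelerating for one cycle, can full braking still stop the
    car before the obstacle? (the controller's safety envelope)"""
    v1 = v + A * DT
    x1 = x + v * DT + 0.5 * A * DT * DT
    return x1 + v1 * v1 / (2 * B) < OBSTACLE

def step(x, v):
    a = A if safe_to_accelerate(x, v) else -B
    v_new = max(0.0, v + a * DT)
    x_new = x + v * DT + 0.5 * a * DT * DT
    return x_new, v_new

x, v = 0.0, 0.0
for _ in range(2000):
    x, v = step(x, v)
    assert x < OBSTACLE, "safety invariant violated"
print(f"final state: x={x:.2f} (obstacle at {OBSTACLE}), v={v:.2f}")
```

The controller maintains the invariant x + v²/(2B) < OBSTACLE: acceleration is only allowed when the check shows the invariant survives, and braking preserves it by construction. Proving that such an invariant holds for all time, rather than observing it over finitely many simulated steps, is exactly the kind of rigorous reasoning the textbook teaches.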
The book comprehensively covers the various aspects of risk modeling and analysis in technological contexts. It pursues a systems approach to modeling risk and reliability concerns in engineering, and covers the key concepts of risk analysis and mathematical tools used to assess and account for risk in engineering problems. The relevance of incorporating risk-based structures in design and operations is also stressed, with special emphasis on the human factor and behavioral risks. The book uses the nuclear plant, an extremely complex and high-precision engineering environment, as an example to develop the concepts discussed. The core mechanical, electronic and physical aspects of such a complex system offer an excellent platform for analyzing and creating risk-based models. The book also provides real-life case studies in a separate section to demonstrate the use of this approach. There are many limitations when it comes to applications of risk-based approaches to engineering problems. The book is structured and written in a way that addresses these key gap areas to help optimize the overall methodology. This book serves as a textbook for graduate and advanced undergraduate courses on risk and reliability in engineering. It can also be used outside the classroom for professional development courses aimed at practicing engineers or as an introduction to risk-based engineering for professionals, researchers, and students interested in the field.
The purpose of this book is to present a comprehensive review of the latest international research and development trends in modeling and optimizing the supplier selection process for different industrial sectors. It targets two audiences: MBA and PhD students interested in procurement, and practitioners who wish to gain a deeper understanding of procurement analysis with multi-criteria decision tools in order to avoid upstream risks and gain better supply chain visibility. The book is expected to serve as a ready reference on supplier selection criteria and on various multi-criteria methods for evaluating suppliers in forward, reverse and mass-customized supply chains. It systematically covers numerous criteria and methods for supplier selection, based on an extensive literature review spanning 1998 to 2012. It provides several case studies and useful links that can serve as a starting point for interested researchers, and the appendix includes computer codes written in MATLAB and VB.NET for the interested reader. A lucid exposition of the various techniques used to select and evaluate suppliers is one of the unique characteristics of this book. Moreover, it gives an in-depth analysis of supplier selection and evaluation for traditional, closed-loop, customized-product, green and sustainable supply chains, and also presents methods for supply base reduction and for selecting from a large number of suppliers.