This book discusses the emerging field of industrial neuroscience, and reports on the authors' cutting-edge findings in the evaluation of mental states, including mental workload, cognitive control and training of personnel involved either in the piloting of aircraft and helicopters, or in managing air traffic. It encompasses neuroimaging and cognitive psychology techniques and shows how they have been successfully applied in the evaluation of human performance and human-machine interactions, and to guarantee a proper level of safety in such operational contexts. With an introduction to the most relevant concepts of neuroscience, neurophysiological techniques, simulators and case studies in aviation environments, it is a must-have for both students and scientists in the field of aeronautic and biomedical engineering, as well as for various professionals in the aviation world. This is the first book to intensively apply neurosciences to the evaluation of human factors and mental states in aviation.
The book focuses on system dependability modeling and calculation, considering the impact of s-dependency and uncertainty. The approaches best suited to practical system dependability modeling and calculation, (1) the minimal cut approach, (2) the Markov process approach, and (3) the Markov minimal cut approach as a combination of (1) and (2), are described in detail and applied to several examples. Boolean logic, used stringently throughout the development of the approaches, is the key to combining them on a common basis. For large and complex systems, efficient approximation approaches, e.g. the probable Markov path approach, have been developed that can take into account s-dependencies between components of complex system structures. A comprehensive analysis of aleatory uncertainty (due to randomness) and epistemic uncertainty (due to lack of knowledge), and of their combination, developed on the basis of basic reliability indices and evaluated with the Monte Carlo simulation method, has been carried out. The uncertainty impact on system dependability is investigated and discussed using several examples of varying difficulty. The applications cover a wide variety of large and complex (real-world) systems. Current definitions of terms from the IEC 60050-192:2015 standard, as well as the dependability indices, are used uniformly in all six chapters of the book.
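As a minimal illustration of the minimal cut approach evaluated by Monte Carlo simulation (the sketch below is not from the book; the component unavailabilities and cut sets are invented for illustration):

```python
import random

# Hypothetical component unavailabilities (invented for illustration)
q = {1: 0.05, 2: 0.10, 3: 0.02}

# Minimal cut sets: the system fails iff every component
# in at least one cut set is failed simultaneously
cut_sets = [{1, 2}, {3}]

def system_failed(state: dict) -> bool:
    return any(all(state[c] for c in cut) for cut in cut_sets)

random.seed(1)
n, failures = 200_000, 0
for _ in range(n):
    state = {c: random.random() < qc for c, qc in q.items()}
    failures += system_failed(state)

print(f"Monte Carlo unavailability: {failures / n:.4f}")
# Cross-check via inclusion-exclusion:
# Q = q1*q2 + q3 - q1*q2*q3 = 0.005 + 0.02 - 0.0001 = 0.0249
```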
The growing dependence of working environments on complex technology has created many challenges and led to a large number of accidents. Although the quality of organization and management within the work environment plays an important role in these accidents, the significance of individual human action (as a direct cause and as a mitigating factor) is undeniable. This has created a need for new, integrated approaches to accident analysis and risk assessment. This book detailing the use of CREAM is, therefore, both timely and useful. CREAM can be used as a second-generation human reliability analysis (HRA) approach in probabilistic safety assessment (PSA), as a stand-alone method for accident analysis, and as part of a larger design method for interactive systems. In particular, the use of CREAM will enable system designers and risk analysts to:
This book reviews the active faults around nuclear power plants in Japan and recommends an optimal method of nuclear power regulation controlled by the Nuclear Regulation Authority of Japan. The active faults around nuclear power plants have been underestimated in Japan since the latter half of the 20th century. However, based on the lessons learned from the Fukushima nuclear power plant accident, the book sheds light on why the risks of active faults were underestimated, and discusses the optimal scientific method of assessing those risks. Further, the author shares his experiences on the team that created the new standard for nuclear regulation and in the active fault survey at the Nuclear Regulation Authority of Japan. This book is a valuable resource for students, researchers, academics and policy-makers, as well as non-experts interested in nuclear safety.
This book covers ideas, methods, algorithms, and tools for the in-depth study of the performance and reliability of dependable fault-tolerant systems. The chapters identify the current challenges that designers and practitioners must confront to ensure the reliability, availability, and performance of systems, with special focus on their dynamic behaviors and dependencies. Topics include network calculus, workload and scheduling; simulation, sensitivity analysis and applications; queuing networks analysis; clouds, federations and big data; and tools. This collection of recent research exposes system researchers, performance analysts, and practitioners to a spectrum of issues so that they can address these challenges in their work.
This book addresses Integrated Design Engineering (IDE), which represents a further development of Integrated Product Development (IPD) into an interdisciplinary model for both a human-centred and holistic product development. The book covers the systematic use of integrated, interdisciplinary, holistic and computer-aided strategies, methods and tools for the development of products and services, taking into account the entire product lifecycle. Being applicable to various kinds of products (manufactured, software, services, etc.), it helps readers to approach product development in a synthesised and integrated way. The book explains the basic principles of IDE and its practical application. IDE's usefulness has been demonstrated in case studies on actual industrial projects carried out by all book authors. A neutral methodology is supplied that allows the reader to choose the appropriate working practices and performance assessment techniques to develop their product quickly and efficiently. Given its manifold topics, the book offers a valuable reference guide for students in engineering, industrial design, economics and computer science, product developers and managers in industry, as well as industrial engineers and technicians.
This book provides readers with a timely snapshot of the potential offered by and challenges posed by signal processing methods in the field of machine diagnostics and condition monitoring. It gathers contributions to the first Workshop on Signal Processing Applied to Rotating Machinery Diagnostics, held in Setif, Algeria, on April 9-10, 2017, and organized by the Applied Precision Mechanics Laboratory (LMPA) at the Institute of Precision Mechanics, University of Setif, Algeria and the Laboratory of Mechanics, Modeling and Manufacturing (LA2MP) at the National School of Engineers of Sfax. The respective chapters highlight research conducted by the two laboratories on the following main topics: noise and vibration in machines; condition monitoring in non-stationary operations; vibro-acoustic diagnosis of machinery; signal processing and pattern recognition methods; monitoring and diagnostic systems; and dynamic modeling and fault detection.
This multi-contributed volume provides a practical, applications-focused introduction to nonlinear acoustical techniques for nondestructive evaluation. Compared to linear techniques, nonlinear acoustical/ultrasonic techniques are much more sensitive to micro-cracks and other types of small distributed damage. Most materials and structures exhibit nonlinear behavior due to the formation of dislocations and micro-cracks from fatigue or other types of repetitive loading well before detectable macro-cracks are formed. Nondestructive evaluation (NDE) tools that have been developed based on nonlinear acoustical techniques are capable of providing early warnings about the possibility of structural failure before detectable macro-cracks are formed. This book presents the full range of nonlinear acoustical techniques used today for NDE. The expert chapters cover both theoretical and experimental aspects, but always with an eye towards applications. Unlike other titles currently available, which treat nonlinearity as a physics problem and focus on different analytical derivations, the present volume emphasizes NDE applications over detailed analytical derivations. The introductory chapter presents the fundamentals in a manner accessible to anyone with an undergraduate degree in Engineering or Physics and equips the reader with all of the necessary background to understand the remaining chapters. This self-contained volume will be a valuable reference for graduate students through practising researchers in Engineering, Materials Science, and Physics. It represents the first book on nonlinear acoustical techniques for NDE applications; emphasizes applications of nonlinear acoustical techniques; presents the fundamental physics and mathematics behind nonlinear acoustical phenomena in a simple, easily understood manner; and covers a variety of popular NDE techniques based on nonlinear acoustics in a single volume.
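As a rough illustration of the sensitivity mechanism described above (a sketch of one common technique, estimating a relative nonlinearity parameter from harmonic amplitudes; it is not an example taken from the book, and all signal parameters are invented):

```python
import numpy as np

fs, f0 = 1.0e7, 1.0e5          # sample rate and drive frequency, Hz (illustrative)
t = np.arange(0, 2e-3, 1 / fs)

# Synthetic received signal: a weakly distorted tone standing in for
# propagation through a material whose micro-cracks generate harmonics
a1, a2 = 1.0, 0.01
sig = a1 * np.sin(2 * np.pi * f0 * t) + a2 * np.sin(2 * np.pi * 2 * f0 * t)

spec = np.abs(np.fft.rfft(sig)) / len(sig) * 2
freqs = np.fft.rfftfreq(len(sig), 1 / fs)

A1 = spec[np.argmin(np.abs(freqs - f0))]        # fundamental amplitude
A2 = spec[np.argmin(np.abs(freqs - 2 * f0))]    # second-harmonic amplitude

# Relative nonlinearity parameter beta' ~ A2 / A1**2 (up to constants
# involving frequency and propagation distance, omitted here)
print(f"A1={A1:.3f}, A2={A2:.4f}, beta' proxy={A2 / A1**2:.4f}")
```

Growth of this harmonic ratio over service life, rather than its absolute value, is what typically signals accumulating micro-damage.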
This volume contains selected papers from the Second Quadrennial International Conference on Structural Integrity (ICONS-2018). The papers cover important topics related to the structural integrity of critical installations, such as power plants, aircraft, spacecraft, and defense and civilian components. The focus is on assuring safety of operations with high levels of reliability and structural integrity. This volume will be of interest to plant operators working with safety critical equipment, engineering solution providers, software professionals working on engineering analysis, as well as academics working in the area.
This book demonstrates the use of a wide range of strategic engineering concepts, theories and applied case studies to improve the safety, security and sustainability of complex and large-scale engineering and computer systems. It first details the concepts of system design, life cycle, impact assessment and security to show how these ideas can be brought to bear on the modeling, analysis and design of information systems with a focused view on cloud-computing systems and big data analytics. This informative book is a valuable resource for graduate students, researchers and industry-based practitioners working in engineering, information and business systems as well as strategy.
This book presents the latest key research into the performance and reliability aspects of dependable fault-tolerant systems and features commentary on the fields studied by Prof. Kishor S. Trivedi during his distinguished career. Analyzing system evaluation as a fundamental tenet in the design of modern systems, this book uses performance and dependability as common measures and covers novel ideas, methods, algorithms, techniques, and tools for the in-depth study of the performance and reliability aspects of dependable fault-tolerant systems. It identifies the current challenges that designers and practitioners must face in order to ensure the reliability, availability, and performance of systems, with special focus on their dynamic behaviors and dependencies, and provides system researchers, performance analysts, and practitioners with the tools to address these challenges in their work. With contributions from Prof. Trivedi's former PhD students and collaborators, many of whom are internationally recognized experts, to honor him on the occasion of his 70th birthday, this book serves as a valuable resource for all engineering disciplines, including electrical, computer, civil, mechanical, and industrial engineering as well as production and manufacturing.
This book introduces a number of recent developments on connectivity of communication networks, ranging from connectivity of large static networks and connectivity of highly dynamic networks to connectivity of small to medium sized networks. This book also introduces some applications of connectivity studies in network optimization, in network localization, and in estimating distances between nodes. The book starts with an overview of the fundamental concepts, models, tools, and methodologies used for connectivity studies. The rest of the chapters are divided into four parts: connectivity of large static networks, connectivity of highly dynamic networks, connectivity of small to medium sized networks, and applications of connectivity studies.
This book discusses the application of quality and reliability engineering in Asian industries, and offers information for multinational companies (MNC) looking to transfer some of their operation and manufacturing capabilities to Asia while maintaining high levels of reliability and quality. It also provides small and medium enterprises (SME) in Asia with insights into producing high-quality and reliable products. It mainly comprises peer-reviewed papers presented at the Asian Network for Quality (ANQ) Congress 2014, held in Singapore in August 2014, which provided a platform for companies, especially those within Asia where rapid changes and growth in manufacturing are taking place, to present their quality and reliability practices. The book presents practical demonstrations of how quality and reliability methodologies can be modified for the unique Asian market, and as such is a valuable resource for students, academics, professionals and practitioners in the field of quality and reliability.
Reliability of Microtechnology discusses the reliability of microtechnology products from the bottom up, beginning with devices and extending to systems. The book's focus includes but is not limited to reliability issues of interconnects, the methodology of reliability concepts and general failure mechanisms. Specific failure modes in solder and conductive adhesives are discussed at great length. Coverage of accelerated testing, component and system level reliability, and reliability design for manufacturability are also described in detail. The book also includes exercises and detailed solutions at the end of each chapter.
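As a hedged illustration of the accelerated-testing material such a book covers (the Arrhenius temperature model is one standard choice among several acceleration models; the activation energy and temperatures below are invented):

```python
import math

K_B = 8.617e-5          # Boltzmann constant, eV/K

def arrhenius_af(ea_ev: float, t_use_c: float, t_stress_c: float) -> float:
    """Arrhenius acceleration factor between use and stress temperatures."""
    t_use = t_use_c + 273.15
    t_stress = t_stress_c + 273.15
    return math.exp(ea_ev / K_B * (1.0 / t_use - 1.0 / t_stress))

# e.g. Ea = 0.7 eV, 55 degC use vs. 125 degC stress (illustrative values)
print(round(arrhenius_af(0.7, 55.0, 125.0), 1))   # roughly 78x acceleration
```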
This thesis develops a systematic, data-based dynamic modeling framework for industrial processes in keeping with the slowness principle. Using said framework as a point of departure, it then proposes novel strategies for dealing with control monitoring and quality prediction problems in industrial production contexts. The thesis reveals the slowly varying nature of industrial production processes under feedback control, and integrates it with process data analytics to offer powerful prior knowledge that gives rise to statistical methods tailored to industrial data. It addresses several issues of immediate interest in industrial practice, including process monitoring, control performance assessment and diagnosis, monitoring system design, and product quality prediction. In particular, it proposes a holistic and pragmatic design framework for industrial monitoring systems, which delivers effective elimination of false alarms, as well as intelligent self-running by fully utilizing the information underlying the data. One of the strengths of this thesis is its integration of insights from statistics, machine learning, control theory and engineering to provide a new scheme for industrial process modeling in the era of big data.
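The thesis's own algorithms are not reproduced here, but the slowness principle it builds on can be illustrated with a minimal linear slow-feature extraction sketch (a standard formalization, not necessarily the thesis's method; the signals and mixing are invented):

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 8 * np.pi, 2000)
slow = np.sin(0.25 * t)                       # slowly varying process driver
fast = np.sin(7.0 * t)                        # fast disturbance
X = np.column_stack([slow + 0.1 * fast, fast + 0.1 * slow])
X += 0.02 * rng.standard_normal(X.shape)      # measurement noise

# Center and whiten the measurements
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
W = Vt.T / s * np.sqrt(len(Xc))               # whitening matrix
Z = Xc @ W                                    # unit-covariance signals

# Slow features: directions minimizing the variance of the time derivative
dZ = np.diff(Z, axis=0)
eigvals, eigvecs = np.linalg.eigh(dZ.T @ dZ / len(dZ))
slow_feature = Z @ eigvecs[:, 0]              # smallest eigenvalue = slowest

# slow_feature should track the slow driver (up to sign and scale)
print(np.corrcoef(slow_feature, slow)[0, 1])
```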
Containing selected papers from the ICRESH-ARMS 2015 conference in Luleå, Sweden, collected by editors with years of experience in reliability and maintenance modeling, risk assessment, and asset management, this work maximizes reader insights into current trends in Reliability, Availability, Maintainability and Safety (RAMS) and risk management. Featuring a comprehensive analysis of the role of RAMS and risk management in decision making during the various phases of design, operation, maintenance, asset management and productivity in industrial domains, these proceedings discuss key issues and challenges in the operation, maintenance and risk management of complex engineering systems, and will serve as a valuable resource for those in the field.
This book presents state-of-the-art probabilistic methods for the reliability analysis and design of engineering products and processes. It seeks to facilitate practical application of probabilistic analysis and design by providing an authoritative, in-depth, and practical description of what probabilistic analysis and design is and how it can be implemented. The text is packed with many practical engineering examples (e.g., electric power transmission systems, aircraft power generating systems, and mechanical transmission systems) and exercise problems. It is an up-to-date, fully illustrated reference suitable for both undergraduate and graduate engineering students, researchers, and professional engineers who are interested in exploring the fundamentals, implementation, and applications of probabilistic analysis and design methods.
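To make the flavor of probabilistic analysis and design concrete (a minimal stress-strength sketch with invented numbers, not an example from the book): for the classical linear limit state g = R - S with independent normal strength R and load S, the reliability index and failure probability follow in closed form and can be checked by simulation.

```python
import numpy as np
from scipy.stats import norm

mu_r, sd_r = 600.0, 50.0     # strength, e.g. MPa (invented values)
mu_s, sd_s = 400.0, 60.0     # load effect

# Closed form for the linear limit state g = R - S with normal R, S
beta = (mu_r - mu_s) / np.hypot(sd_r, sd_s)   # reliability index
pf_exact = norm.cdf(-beta)

# Monte Carlo cross-check
rng = np.random.default_rng(42)
n = 1_000_000
g = rng.normal(mu_r, sd_r, n) - rng.normal(mu_s, sd_s, n)
print(f"beta={beta:.2f}, Pf exact={pf_exact:.4f}, Pf MC={(g < 0).mean():.4f}")
```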
Equipment to be installed in electric power-transmission and distribution systems must pass acceptance tests with standardized high-voltage or high-current test impulses which simulate the stress on the insulation caused by external lightning discharges and switching operations in the grid. High impulse voltages and currents are also used in many other fields of science and engineering for various applications. Therefore, precise impulse-measurement techniques are necessary, either to prevent an over- or understressing of the insulation or to guarantee the effectiveness and quality of the application. The target audience primarily comprises engineers and technicians but the book may also be beneficial for graduate students of high-voltage engineering and electrical power supply systems.
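As an illustration of such a standardized test impulse (a sketch assuming the commonly used double-exponential model of the 1.2/50 us lightning impulse; the time constants are approximate textbook values, not taken from this book), the waveform can be generated and its front and tail parameters measured numerically:

```python
import numpy as np

# Double-exponential model of the standard 1.2/50 us lightning impulse;
# tau1/tau2 are commonly quoted approximations, not exact IEC values
tau1, tau2, v_peak = 68.2e-6, 0.405e-6, 100e3    # s, s, V
t = np.linspace(0, 200e-6, 400_001)
v = np.exp(-t / tau1) - np.exp(-t / tau2)
v *= v_peak / v.max()                             # normalize to peak value

def crossing(level, rising=True):
    """First time v crosses `level` on the front (rising) or tail (falling)."""
    i_peak = v.argmax()
    idx = np.where(v[:i_peak] >= level if rising else v[i_peak:] <= level)[0]
    return t[idx[0]] if rising else t[i_peak + idx[0]]

# IEC-style definitions: front time T1 = 1.67*(t90 - t30),
# tail time T2 = time to half value on the tail
t30, t90 = crossing(0.3 * v_peak), crossing(0.9 * v_peak)
t1 = 1.67 * (t90 - t30)
t2 = crossing(0.5 * v_peak, rising=False)
print(f"T1 = {t1*1e6:.2f} us, T2 = {t2*1e6:.1f} us")   # ~1.2 us, ~50 us
```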
This book examines how nanotechnology risks are being addressed by scientists, particularly in the areas of human health and the environment, and how these risks can be measured in financial terms for insurers and regulators. It provides a comprehensive overview of nanotechnology risk measurement and risk transfer methods, including a chapter outlining how Bayesian methods can be used. It also examines nanotechnology from a legal perspective, covering both current and potential future outcomes. The global market for nanotechnology products was valued at $22.9 billion in 2013 and increased to about $26 billion in 2014. This market is expected to reach about $64.2 billion by 2019, a compound annual growth rate (CAGR) of 19.8% from 2014 to 2019. Despite the increasing value of nanotechnologies and their widespread use, there is a significant gap between the enthusiasm of scientists and nanotechnology entrepreneurs working in the nanotechnology space and the insurance/regulatory sector. Scientists are scarcely aware that insurers/regulators have concerns about the potential for human and environmental risk, and insurers/regulators are not in a position to assess the potential risk. This book aims to bridge this gap by defining the current challenges in nanotechnology across disciplines and providing a number of risk management and assessment methodologies. Featuring contributions from authors in areas such as regulation, law, ethics, management, insurance and manufacturing, this volume provides an interdisciplinary perspective that is of value to students, academics, researchers, policy makers, practitioners and society in general.
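The projected market figure quoted above is arithmetically consistent with the stated growth rate, as a quick check confirms (the figures are the blurb's; only the check is added here):

```python
# Consistency check of the blurb's market projection:
# $26bn in 2014 growing at a 19.8% CAGR for five years to 2019
base, cagr, years = 26.0, 0.198, 5
print(f"${base * (1 + cagr) ** years:.1f} billion")   # -> $64.2 billion
```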
This book offers a comprehensive overview of quality and quality management. It explores total quality management, covering its human, technological and analytical imperatives, and examines quality systems and system standards, highlighting essential features while avoiding a reproduction of the ISO 9000 standard, as well as people-related issues in implementing a quality system. A holistic understanding of quality considerations, which now permeate every aspect of human life, should guide related policies, plans and practices. The book describes the all-pervasive characteristics of quality, putting together diverse definitions of "quality," outlining its different dimensions, and linking it with reliability and innovation. It goes on to assess the quality of measurements in terms of precision, accuracy and uncertainty and discusses managing quality with a focus on business performance. This is followed by a chapter on improving process quality, which is the summum bonum of quality management, and a chapter addressing the crucial problem of measuring customer satisfaction through appropriate models and tools. Further, it covers non-traditional subjects such as quality of life, quality of working life, quality assurance and improvement in education, with special reference to higher education, quality in research and development, and characterizes the quality-related policies and practices in Indian industry. The last chapter provides a broad sketch of some recent advances in statistical methods for quality management. Along with the research community, the book's content is also useful for practitioners and industry watchers.
In this probing critique of aviation security since 9/11, Andrew R. Thomas, a globally recognized aviation security expert, examines the recent overhaul of the national aviation security system.
This book presents a number of approaches to Fine-Kinney-based multi-criteria occupational risk assessment. For each proposed approach, it provides case studies demonstrating their applicability, as well as Python code, which will enable readers to implement them in their own risk assessment process. The book begins by giving a review of Fine-Kinney occupational risk assessment methods and their extension by fuzzy sets. It then progresses in a logical fashion, dedicating a chapter to each approach, including the fuzzy best-worst method, interval-valued Pythagorean fuzzy VIKOR and interval type-2 fuzzy QUALIFLEX. This book will be of interest to professionals and researchers working in the field of occupational risk management, as well as postgraduate and undergraduate students studying applications of fuzzy systems.
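To give a flavor of the kind of Python material the book provides (the sketch below is illustrative only: it implements the classical crisp Fine-Kinney score with commonly used rating scales and action thresholds, not necessarily the book's values, and the fuzzy extensions the book develops go well beyond this):

```python
# Classical Fine-Kinney risk score: R = probability * exposure * consequence.
# The action thresholds below are commonly used values, given here as an
# assumption rather than as the book's own scale.

def fine_kinney(probability: float, exposure: float, consequence: float) -> tuple:
    r = probability * exposure * consequence
    if r > 400:
        action = "very high risk: stop the activity"
    elif r > 200:
        action = "high risk: immediate improvement required"
    elif r > 70:
        action = "substantial risk: correction needed"
    elif r > 20:
        action = "possible risk: attention indicated"
    else:
        action = "acceptable risk"
    return r, action

# e.g. probability 6 (quite possible), exposure 6 (daily), consequence 15 (serious)
print(fine_kinney(6, 6, 15))    # (540, 'very high risk: stop the activity')
```

The multi-criteria methods the book covers replace these crisp ratings with fuzzy judgments aggregated across experts and criteria.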
The ever increasing public demand and the setting-up of national and international legislation on safety assessment of potentially dangerous plant require that a correspondingly increased effort be devoted by regulatory bodies and industrial organizations to reliability data in order to produce safety analyses. Reliability data are also needed to assess availability of plant and services and to improve quality of production processes, in particular, to meet the needs of plant operators and/or designers regarding maintenance planning and production availability. The need for an educational effort in the field of data acquisition and processing has been stressed within the framework of EuReDatA, an association of organizations operating reliability data banks. This association aims to promote data exchange and pooling of data between organizations and to encourage the adoption of compatible standards and basic definitions for a consistent exchange of reliability data. Such basic definitions are considered to be essential in order to improve quality. To cover issues directly linked to the above areas, space is devoted to the definition of failure events, common cause and human error data, feedback of operational and disturbance data, event data analysis, life-time distributions, cumulative distribution functions, density functions, Bayesian inference methods, multivariate analysis, fuzzy sets and possibility theory.
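One of the listed topics, Bayesian inference on reliability data, can be illustrated with a standard conjugate update (a minimal sketch with invented numbers, not an example from the text): for failures arriving as a Poisson process, a Gamma prior on the failure rate yields a Gamma posterior.

```python
from scipy.stats import gamma

# Gamma(a, scale=1/b) prior on the failure rate lambda [failures/hour];
# prior parameters and observed data are invented for illustration
a0, b0 = 2.0, 1.0e4          # prior mean a0/b0 = 2e-4 per hour
k, T = 3, 2.5e4              # observed: 3 failures in 25,000 component-hours

# Conjugate update for Poisson counts: posterior is Gamma(a0 + k, b0 + T)
a1, b1 = a0 + k, b0 + T
post = gamma(a1, scale=1 / b1)
print(f"posterior mean = {a1 / b1:.2e} /h")
print(f"90% credible interval = ({post.ppf(0.05):.2e}, {post.ppf(0.95):.2e})")
```

Pooled data from multiple organizations, as EuReDatA promotes, would enter this scheme simply as larger observed counts k and exposure times T.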
This edited volume explores the fundamental aspects of the dark web, ranging from the technologies that power it, the cryptocurrencies that drive its markets and the criminalities it facilitates to the methods that investigators can employ to master it as a strand of open source intelligence. The book provides readers with detailed theoretical, technical and practical knowledge, including the application of legal frameworks. In doing so, it offers practitioners and academics crucial insights into the multidisciplinary nature of dark web investigations for the identification and interception of illegal content and activities, addressing both theoretical and practical issues.
Most of the world's redundant ships are scrapped on the beaches of the Indian sub-continent, largely by hand. As well as cargo residues and wastes, ships contain high levels of hazardous materials that are released into the surrounding ecology when scrapped. The scrapping process is labour-intensive and largely manual; injuries and death are commonplace. Ship breaking was a relatively obscure industry until the late 1990s. In just 12 years, action by environmental NGOs has led to the ratification of an international treaty targeting the extensive harm to human and environmental health arising from this heavy, polluting industry; it has also produced important case law. Attempts to regulate the industry via the "Basel Convention" have resulted in a strong polarization of opinion as to its applicability, and various international guidelines have also failed because of their voluntary nature. The adoption of the "Hong Kong Convention" in 2009 was a serious attempt to introduce international controls to this industry.
You may like...
Advanced H Control - Towards Nonsmooth… by Yury V. Orlov, Luis T. Aguilar (Hardcover)
Symmetry in Complex Network Systems… by Visarath In, Antonio Palacios (Hardcover)
System Dynamics with Interaction… by Albert C.J. Luo, Dennis M O'Connor (Hardcover)
Progress in Turbulence V - Proceedings… by Alessandro Talamelli, Martin Oberlack, … (Hardcover, R4,931)
Geometric Integrators for Differential… by Xinyuan Wu, Bin Wang (Hardcover, R3,460)
Reference for Modern Instrumentation… by R.N. Thurston, Allan D. Pierce (Hardcover, R3,675)
Progress in Turbulence VII - Proceedings… by Ramis Örlü, Alessandro Talamelli, … (Hardcover)
Active Control of Vibration by Christopher C. Fuller, S.J. Elliott, … (Paperback)