A survey of recent developments in the field of plutonium disposal by the application of advanced nuclear systems, both critical and subcritical. Current national R&D plans are summarized. The actinide-fuelled critical reactors are associated with control problems, since they tend to have a small delayed neutron fraction coupled with a small Doppler effect and a positive void coefficient. Current thinking is turning to accelerator-driven subcritical systems for the transmutation of actinides. The book concludes that the various systems proposed are technically feasible, even though not yet mature. The book presents a unique summary and evaluation of all relevant possibilities for burning surplus plutonium, presented by experts from a variety of disciplines and interests, including the defence establishment. The obvious issue - the non-proliferation of nuclear weapons - is vital, but the matter also represents a complex technological challenge that requires an assessment in economic terms.
Consideration was given to more advanced theoretical approaches and novel applications of reliability to ensure that topics with a futuristic impact were specifically included. The entries are categorized into seven parts, each emphasizing a theme that seems poised to shape the future development of reliability as a relevant academic discipline. The topics, when linked with utility theory, constitute the science base of risk analysis.
Multiple Criteria Decision Support in Engineering Design examines some of the underlying issues and related modelling strategies, with a view to exploring the rich potential of a generalised multiple-criteria approach to design decision-making. The arguments are supported by numerical examples. Within the classic monocriterion paradigm, the optimal solution is unambiguously identified once the feasible alternatives are established and an objective function is agreed on. It is only when conflict resolution is involved that decision-making truly becomes important, and many design situations exist where stated functional requirements may be in actual or potential conflict. The most preferred solution under such circumstances depends on the designer's or decision-maker's priorities, so that the chosen solution is based on a combination of technical possibilities and designer preferences. This book addresses the key concepts in multiple criteria decision-making and provides valuable insight into how such problems arise and can be solved, in the area of decision-making in general and in the domain of engineering design in particular.
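As a minimal illustration of the point that the preferred design depends on the decision-maker's priorities, here is a weighted-sum scalarization sketch; the design alternatives, criterion scores, and weights are hypothetical and not taken from the book, which covers far richer multiple-criteria methods:

```python
def weighted_score(scores, weights):
    """Weighted-sum scalarization: combine several criterion scores
    (already normalized to [0, 1], higher is better) into a single
    figure of merit using the decision-maker's priority weights."""
    assert abs(sum(weights) - 1.0) < 1e-9, "weights should sum to 1"
    return sum(s * w for s, w in zip(scores, weights))

# Hypothetical alternatives scored on (cost, reliability, weight):
designs = {
    "A": [0.9, 0.6, 0.7],
    "B": [0.5, 0.9, 0.8],
}
# Two decision-makers with different priorities choose differently:
cost_focused = [0.6, 0.2, 0.2]
reliability_focused = [0.2, 0.6, 0.2]

def pick(weights):
    return max(designs, key=lambda d: weighted_score(designs[d], weights))

print(pick(cost_focused), pick(reliability_focused))  # A B
```

The same feasible set yields different "best" designs under different weights, which is exactly the conflict-resolution situation the blurb describes.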
Reliability data collection and its use in risk and availability assessment is a subject of increasing importance. The founders of EuReDatA, and in particular Arne Ullman, the originator and first Chairman of the Association, recognised the need for a body capable of acting as a catalyst and providing a unified approach to this subject. It is therefore a prevailing objective of the European Reliability Databank Association to initiate and support contact between experts, companies and institutions active in reliability engineering and research. Although the first and principal interest of EuReDatA is reliability data and data banks, the Association is aware that these are tools used alongside others to establish and maintain reliability and safety. It is with this objective that EuReDatA regularly holds conferences and seminars covering a range of reliability topics. C.A. Campbell, EuReDatA Chairman; H.J. Wingender, Organiser and Editor. Contents, Chapter 1 (Overviews): Data Situation and the Quality of Risk Assessment (FRG), A. Birkhofer, K. Koberlein (GRS); Reliability Engineering in Europe (CEC), G. Volta (JRC-Ispra); 1984: A Year of Industrial Catastrophes.
In 1980, I received a grant from Aoyama Gakuin University to come to the United States to help American industry improve the quality of its products. In a small way this was to repay the help the US had given Japan after the war. In the summer of 1980, I visited the AT&T Bell Laboratories Quality Assurance Center, the organization that founded modern quality control. The result of my first summer at AT&T was an experiment with an orthogonal array design of size 18 (OA18) for optimization of an LSI fabrication process. As a measure of quality, the "signal-to-noise" ratio was to be optimized. Since then, this experimental approach has been named "robust design" and has attracted the attention of both engineers and statisticians. My colleagues at Bell Laboratories have written several expository articles and a few theoretical papers on robust design from the viewpoint of statistics. Because so many people have asked for copies of these papers, it has been decided to publish them in book form. This anthology is the result of these efforts. Although quality engineering borrows some technical terms from the traditional design of experiments, its goals differ from those of statistics. For example, suppose there are two vendors. One vendor supplies products whose quality characteristic has a normal distribution with the mean on target (the desired value) and a certain standard deviation.
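The signal-to-noise ratio mentioned above can be sketched in its nominal-the-best form, SN = 10·log10(mean²/variance), measured in decibels; the measurements below are hypothetical and not from the AT&T experiment:

```python
import math

def sn_ratio_nominal_the_best(observations):
    """Taguchi signal-to-noise ratio, nominal-the-best form:
    SN = 10 * log10(mean^2 / variance), in decibels.
    A larger SN means less relative variation around the mean,
    i.e. a more robust process setting."""
    n = len(observations)
    mean = sum(observations) / n
    var = sum((x - mean) ** 2 for x in observations) / (n - 1)
    return 10 * math.log10(mean ** 2 / var)

# Hypothetical measurements from two process settings, same mean:
setting_a = [9.8, 10.1, 10.0, 9.9, 10.2]
setting_b = [9.0, 11.0, 10.5, 9.5, 10.0]

# The setting with the higher SN ratio is the more robust one.
print(sn_ratio_nominal_the_best(setting_a) >
      sn_ratio_nominal_the_best(setting_b))  # True
```

In a robust-design study, one such SN value is computed for each row of the orthogonal array, and the factor levels that maximize it are selected.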
The analysis of statistical data is a critical element in road safety studies. For example, specific projects or programs may be implemented with the analyst asked to answer questions such as: "What has been the effect of this project (program) on accident frequency and/or severity? Are there any interdependencies or contributing effects due to the age, sex or driving experience of involved motorists? What is the contribution, if any, of roadway design, time of day, traffic density, etc.?" To answer, or to provide insight into, these types of questions, contingency tables are often used to display frequency or count data. The subsequent analysis of such contingency tables is the principal focus of this book. Because of recent advances in the underlying statistical methodology and procedures, and because of the increasing interest in the application of contingency table analysis to road safety studies, an Advanced Study Institute (ASI) devoted to this topic was held at the Sogesta Conference Center, Urbino, Italy, during the period 18-29 June 1979. The ASI was funded by the North Atlantic Treaty Organization (NATO) as part of its Advanced Study Institutes Programme. The contents of this book, with two exceptions described below, represent the Proceedings of the ASI.
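A contingency table analysis of the kind described can be sketched with Pearson's chi-square statistic, which measures how far the observed counts depart from independence of the row and column factors; the counts and categories below are hypothetical:

```python
def chi_square_statistic(table):
    """Pearson chi-square statistic for a two-way contingency table
    of observed counts (given as a list of rows). Expected counts
    assume independence: E[i][j] = row_total[i] * col_total[j] / N."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    total = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, obs in enumerate(row):
            expected = row_totals[i] * col_totals[j] / total
            stat += (obs - expected) ** 2 / expected
    return stat

# Hypothetical counts: rows = road type (urban, rural),
# columns = accident severity (minor, serious)
observed = [[60, 40],
            [30, 70]]
print(round(chi_square_statistic(observed), 3))  # 18.182
```

The statistic is then compared against a chi-square distribution with (rows − 1)(columns − 1) degrees of freedom to judge whether severity depends on road type.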
This book addresses the various risks associated with the transport of dangerous goods within a territory. The emphasis of the contributions is on methods and tools to reduce the vulnerability of both the environment and human society to accidents or malicious acts involving such transport. With topics ranging from game theory to governance principles, the authors together cover technical, legal, financial, and logistic aspects of this problem. The intended audience includes responsible persons in territorial organizations, managers of transport infrastructures, as well as students, teachers and researchers wishing to deepen their knowledge in this area.
The proceedings contain lectures and short papers presented at the NATO Advanced Study Institute on "Reliability Theory and Its Application in Structural and Soil Mechanics", Bornholm, Denmark, August 31 - September 9, 1982. The proceedings are organized in two parts. The first part contains 12 papers by the invited lecturers and the second part contains 23 papers by participants plus one paper from an invited lecturer (late arrival). The Institute dealt with specific topics on the application of modern reliability theories in structural engineering and soil mechanics. Both fundamental and more advanced theory were covered. Lecture courses were followed by tutorial and summary discussions with active participation of those attending the Institute. Special lectures on topical subjects were given by a number of invited speakers, leading to plenary discussions and summary statements on important aspects of the application of modern reliability theory in structural engineering and soil mechanics. A great number of the participants presented brief reports on their own research activities.
Multivariate Statistical Analysis
MOX fuel, a mixture of weapon-grade plutonium and natural or depleted uranium, may be used to deplete a portion of the world's surplus of weapon-grade plutonium. A number of reactors currently operate in Europe with one-third MOX cores, and others are scheduled to begin using MOX fuels in both Europe and Japan in the near future. While Russia has laboratory-scale MOX fabrication facilities, the technology remains under study. No fuels containing plutonium are used in the U.S. The 25 presentations in this book give an impressive overview of MOX technology. The following issues are covered: an up-to-date report on the disposition of ex-weapons Pu in Russia; an analysis of safety features of MOX fuel configurations of different reactor concepts and their operating and control measures; an exchange of information on the status of MOX utilisation in existing power plants, the fabrication technology of various MOX fuels and their behaviour in practice; a discussion of the typical national approaches by Russia and the western countries to the utilisation of Pu as MOX fuel; an introduction to new ideas, enhancing the disposition option of MOX fuel exploitation and destruction in existing and future advanced reactor systems; and the identification of common research areas where defined tasks can be initiated in cooperative partnership.
Behavioral Intervals in Embedded Software introduces a comprehensive approach to timing, power, and communication analysis of embedded software processes. Embedded software timing, power and communication are typically not unique values but occur in intervals, which result from data-dependent behavior, environment timing and target system properties.
This book presents models and methods for systems reliability assessment, human reliability analysis and uncertainty management. It includes fourteen contributions, grouped into three sections. Section 1 deals with basic reliability methods and applications. The papers by Saiz de Bustamante and Perlado introduce stochastic processes and the Monte Carlo method, respectively. Sanz Fernandez de Cordoba and Gonzales discuss important practical implications of the use of reliability methods; the former refers to the aerospace industry, the latter to nuclear power plants. Section 2 presents some advances in systems reliability techniques. The paper by Contini and Poucet illustrates the mathematical analysis of fault trees and event trees, including a discussion of the logical analysis of non-coherent fault trees and considerations on the major measures of criticality and importance of a component. The paper by Bobbio is devoted to Petri nets: first, the formalism of this relatively new technique is given; then, stochastic Petri nets are introduced as a tool to describe the behaviour of systems in time; finally, fully developed examples show how this approach can be used to represent and evaluate complex stochastic systems. Limnios introduces the notion of failure delay systems and gives the lifetime structure for the evaluation of reliability measures. A reservoir is studied as an example of a failure delay system.
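The Monte Carlo method mentioned for reliability assessment can be sketched as follows; the system layout and component failure probabilities are hypothetical, not taken from the book:

```python
import random

def simulate_system(p_fail, trials=100_000, seed=42):
    """Monte Carlo estimate of system reliability for a simple layout:
    component 1 in series with a parallel pair (components 2 and 3).
    p_fail maps component id -> failure probability (hypothetical)."""
    rng = random.Random(seed)  # fixed seed for reproducibility
    successes = 0
    for _ in range(trials):
        # Sample each component's state independently.
        up = {c: rng.random() >= p for c, p in p_fail.items()}
        # System works iff component 1 works and at least one of 2, 3 works.
        if up[1] and (up[2] or up[3]):
            successes += 1
    return successes / trials

p_fail = {1: 0.05, 2: 0.10, 3: 0.10}
estimate = simulate_system(p_fail)

# For this small system the analytic answer is available for comparison:
exact = (1 - 0.05) * (1 - 0.10 * 0.10)  # 0.9405
print(abs(estimate - exact) < 0.01)  # True
```

For small systems the exact reliability is easy to compute, as above; the value of Monte Carlo simulation lies in systems too large or too complex for closed-form analysis.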
Current issues and approaches in the reliability and safety analysis of dynamic process systems are the subject of this book. The authors of the chapters are experts from nuclear, chemical, mechanical, aerospace and defense system industries, and from institutions including universities, national laboratories, private consulting companies, and regulatory bodies. Both the conventional approaches and dynamic methodologies which explicitly account for the time element in system evolution in failure modeling are represented. The papers on conventional approaches concentrate on the modeling of dynamic effects and the need for improved methods. The dynamic methodologies covered include the DYLAM methodology, the theory of continuous event trees, several Markov model construction procedures, Monte Carlo simulation, and utilization of logic flowgraphs in conjunction with Petri nets. Special emphasis is placed on human factors such as procedures and training.
This book covers several bases at once. It is useful as a textbook for a second course in experimental optimization techniques for industrial production processes. In addition, it is a superb reference volume for use by professors and graduate students in Industrial Engineering and Statistics departments. It will also be of huge interest to applied statisticians, process engineers, and quality engineers working in the electronics and biotech manufacturing industries. In all, it provides an in-depth presentation of the statistical issues that arise in optimization problems, including confidence regions on the optimal settings of a process, stopping rules in experimental optimization, and more.
This volume is intended to stimulate a change in the practice of decision support, advocating an interdisciplinary approach centred on both social and natural sciences, both theory and practice. It addresses the issue of analysis and management of uncertainty and risk in decision support corresponding to the aims of Integrated Assessment. A pluralistic method is necessary to account for legitimate plural interpretations of uncertainty and multiple risk perceptions. A wide range of methods and tools is presented to contribute to adequate and effective pluralistic uncertainty management and risk analysis in decision support endeavours. Special attention is given to the development of one such approach, the Pluralistic fRamework for Integrated uncertainty Management and risk Analysis (PRIMA), of which the practical value is explored in the context of the Environmental Outlooks produced by the Dutch Institute for Public Health and Environment (RIVM). Audience: This book will be of interest to researchers and practitioners whose work involves decision support, uncertainty management, risk analysis, environmental planning, and Integrated Assessment.
This publication is a compilation of papers presented at the Semiconductor Device Reliability Workshop sponsored by the NATO International Scientific Exchange Program. The Workshop was held in Crete, Greece from June 4 to June 9, 1989. The objective of the Workshop was to review and further explore advances in the field of semiconductor reliability through invited paper presentations and discussions. The technical emphasis was on quality assurance and reliability of optoelectronic and high-speed semiconductor devices. The primary support for the meeting was provided by the Scientific Affairs Division of NATO. We are indebted to NATO for their support and to Dr. Craig Sinclair, who administers this program. The chapters of this book follow the format and order of the sessions of the meeting. Thirty-six papers were presented and discussed during the five-day Workshop. In addition, two panel sessions were held, with audience participation, where the particularly controversial topics of burn-in and of reliability modeling and prediction methods were discussed. A brief review of these sessions is presented in this book.
After leading the world during most of the 20th century in economic, political, technological, military, and even social terms, America's role is now being challenged. With its values questioned and its methods often disparaged, America, once the clear example to be followed or even copied, has seen its more recent strategic and political decisions gain little international support and much outright opposition. The quality of its national planning and decision making has been severely compromised, and risk management appears to be largely absent. India and China are now emerging as new economic powers with advancing technological prowess. Their focus is on socioeconomic development, but their capabilities and potential are much broader and may challenge America's leadership before long, unless America recognizes the changing demands of the new, wide-open globalized world.
In today's global economy, operations strategy in supply chains must assume an ever-expanding strategic role in managing risk. These operational and strategic facets entail a brand-new set of problems and risks that have not always been well understood or well managed. This book provides the means to understand, model and analyze these outstanding issues and problems, which are the essential elements in managing supply chains today.
This book is intended for students and practitioners who have had a calculus-based statistics course and who have an interest in safety considerations such as reliability, strength, and duration-of-load or service life. Many persons studying statistical science will be employed professionally where the problems encountered are obscure, what should be analyzed is not clear, the appropriate assumptions are equivocal, and data are scant. In keeping with this, many of the data sets in this book come without disclosure of what type of investigation should be made or what assumptions are to be used.
Emerging Nanotechnologies: Test, Defect Tolerance and Reliability covers various technologies that have been developing over the last decades, such as chemically assembled electronic nanotechnology, Quantum-dot Cellular Automata (QCA), and nanowires and carbon nanotubes. Each of these technologies offers various advantages and disadvantages: some suffer from high power consumption, some work only at very low temperatures, and others require indeterministic bottom-up assembly. These emerging technologies are not considered a direct replacement for CMOS technology and may require a completely new architecture to achieve their functionality. Emerging Nanotechnologies: Test, Defect Tolerance and Reliability brings all of these issues together in one place for readers and researchers who are interested in this rapidly changing field.
The papers in this volume integrate results from current research efforts in earthquake engineering with research from the larger risk assessment community. The authors include risk and hazard researchers from the major U.S. hazard and earthquake centers. The volume lays out a road map for future developments in risk modeling and decision support, and positions earthquake engineering research within the family of risk analysis tools and techniques.
This volume includes chapters presenting applications of different metaheuristics in reliability engineering, including ant colony optimization, great deluge algorithm, cross-entropy method and particle swarm optimization. It also presents chapters devoted to cellular automata and support vector machines, and applications of artificial neural networks, a powerful adaptive technique that can be used for learning, prediction and optimization. Several chapters describe aspects of imprecise reliability and applications of fuzzy and vague set theory.
Solder Joint Reliability Prediction for Multiple Environments will provide industry engineers, graduate students, academic researchers, and reliability experts with insights and useful tools for evaluating solder joint reliability of ceramic area array electronic packages under multiple environments. The material presented here is not limited to ceramic area array packages; it can also be used as a methodology for distilling numerical simulations and experimental data into an easy-to-use equation that captures the essential information needed to predict solder joint reliability. Such a methodology is often needed to relate complex information in a simple manner to managers and to non-experts in solder joint reliability, both for computer server applications and for harsh environments such as those found in the defense, space, and automotive industries.
Verification is too often approached in an ad hoc fashion. Visually inspecting simulation results is no longer feasible, and the directed test-case methodology is reaching its limit. Moore's Law demands a productivity revolution in functional verification methodology. Writing Testbenches Using SystemVerilog offers a clear blueprint of a verification process that aims for first-time success using the SystemVerilog language. From simulators to source management tools, from specification to functional coverage, from 1's and 0's to high-level abstractions, from interfaces to bus-functional models, from transactions to self-checking testbenches, from directed testcases to constrained random generators, from behavioral models to regression suites, this book covers it all. Writing Testbenches Using SystemVerilog presents many of the functional verification features that were added to the Verilog language as part of SystemVerilog. Interfaces, virtual modports, classes, program blocks, clocking blocks and other SystemVerilog features are introduced within a coherent verification methodology and usage model. The book introduces the reader to all elements of a modern, scalable verification methodology. It is an introduction and prelude to the verification methodology detailed in the Verification Methodology Manual for SystemVerilog, and a SystemVerilog version of the author's bestselling book Writing Testbenches: Functional Verification of HDL Models.