This book compiles and critically discusses modern engineering system degradation models and their impact on engineering decisions. In particular, the authors focus on modeling the uncertain nature of degradation, considering both conceptual discussions and formal mathematical formulations. It also describes the basic concepts and the various modeling aspects of life-cycle analysis (LCA). It highlights the role of degradation in LCA and defines optimum design and operation parameters. Given the relationship between operational decisions and the performance of the system's condition over time, maintenance models are also discussed. The concepts and models presented have applications in a large variety of engineering fields such as Civil, Environmental, Industrial, Electrical and Mechanical engineering. However, special emphasis is given to problems related to large infrastructure systems. The book is intended to be used both as a reference resource for researchers and practitioners and as an academic text for courses related to risk and reliability, infrastructure performance modeling and life-cycle assessment.
CMOS Test and Evaluation: A Physical Perspective is a single source for an integrated view of test and data analysis methodology for CMOS products, covering circuit sensitivities to MOSFET characteristics, impact of silicon technology process variability, applications of embedded test structures and sensors, product yield, and reliability over the lifetime of the product. This book also covers statistical data analysis and visualization techniques, test equipment and CMOS product specifications, and examines product behavior over its full voltage, temperature and frequency range.
The book summarizes the main results of the project ENABLE-S3, covering the following aspects: validation and verification technology bricks (collection and selection of test scenarios, test execution environments including the respective models, and assessment of test results); evaluation of the technology bricks in selected use cases; and standardization and related initiatives. ENABLE-S3 is an industry-driven EU project that aspires to replace today's cost-intensive verification and validation efforts with more advanced and efficient methods. In addition, the book includes articles about complementary international activities in order to highlight the global importance of the topic and to cover the wide range of aspects that need to be covered at a global scale.
Industrial production is one of the most basic human activities, indispensable to economic activity. Due to its complexity, production is not as well understood and modeled as traditional fields of inquiry such as physics. This book aims at enhancing rigorous understanding of a particular area of production: the analysis and optimization of production lines and networks using discrete event models and simulation. To our knowledge, this is the first book treating the subject from this point of view. We have arrived at the realization that discrete event models and simulation provide perhaps the best tools to model production lines and networks, for a number of reasons. Exact analysis is precise but demands enormous computational resources, usually unavailable in practical situations. Brute-force simulation is also precise but slow when quick decisions are to be made. Approximate analytical models are fast but often unreliable as far as accuracy is concerned. The approach of the book, on the other hand, combines speed and accuracy to an exceptional degree in most practical applications.
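The tension the authors describe between exact analysis and brute-force simulation can be illustrated on the simplest production station: a single machine with random arrivals and service times (an M/M/1 queue), for which a closed-form answer exists to check a simulation against. This is an illustrative sketch, not code from the book; the rates and sample size are made up.

```python
import random

def simulate_mean_wait(lam, mu, n, seed=1):
    """Estimate the mean waiting time in an M/M/1 queue via the Lindley
    recursion W[k+1] = max(0, W[k] + S[k] - A[k+1])."""
    rng = random.Random(seed)
    w, total = 0.0, 0.0
    for _ in range(n):
        total += w
        s = rng.expovariate(mu)    # service time of the current part
        a = rng.expovariate(lam)   # inter-arrival time of the next part
        w = max(0.0, w + s - a)
    return total / n

lam, mu = 0.5, 1.0                  # arrival and service rates (made up)
sim = simulate_mean_wait(lam, mu, 200_000)
exact = lam / (mu * (mu - lam))     # closed-form M/M/1 mean wait in queue
print(f"simulation: {sim:.3f}  exact analysis: {exact:.3f}")
```

The simulation needs hundreds of thousands of samples to approach the analytical value, which is precisely the speed-versus-accuracy tension that the book's approximate-but-fast methods aim to resolve.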
The underlying principles invented and developed by Dr. Genichi Taguchi (1924 - 2012) for the design of experiments and simulation calculations in multi-parameter systems are today known as the Taguchi Method. Due to its great success, it has been extended to many other areas. The book explains the basics of this method in as much detail as necessary and as simply and graphically as possible. The author shows how broad the current application spectrum is and for which different tasks it can be used. The application examples range from optimizing a fermentation process in biotechnology to minimizing costs in mechanical production and maintaining and improving competitiveness in industrial production. The processes described are ideally suited to finding reliable and precise solutions for a wide variety of problems relatively quickly - a real competitive advantage, not only in research but also for companies that want to remain competitive in international business.

Contents: Part 1: Analysis of Variables. Part 2: Pattern Recognition and Diagnosis. Part 3: Prognosis.

Target groups: Students, scientists, engineers and those responsible for development and products learn to use the Taguchi Method with this book - even without any previous mathematical-statistical knowledge.

The author: Herbert Ruefer studied physics and obtained his doctorate at the Technical University of Karlsruhe, Germany. After a research stay at IBM, San Jose, California, he taught at the San Marcos National University in Lima, Peru. He then took on research, development and training tasks in the chemical industry in Germany. During this time, his first personal contacts with Dr. Genichi Taguchi and Dr. Yuin Wu took place. After his active professional life, he dedicated himself to special optical methods for astronomical observations. He also lectures at the Universidad Nacional Mayor de San Marcos, which awarded him an honorary doctorate in 2017.
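As a flavor of the method described above, the following sketch evaluates a standard L4(2^3) orthogonal array with Taguchi's smaller-the-better signal-to-noise ratio. The three factors and the response data are hypothetical, invented purely for illustration.

```python
import math

# L4(2^3) orthogonal array: 4 runs covering 3 two-level factors
L4 = [(1, 1, 1), (1, 2, 2), (2, 1, 2), (2, 2, 1)]

# hypothetical repeated measurements per run (smaller response is better)
results = [(4.1, 4.3), (3.2, 3.0), (5.0, 5.4), (2.1, 2.2)]

def sn_smaller_the_better(ys):
    """Taguchi S/N ratio -10*log10(mean(y^2)); a larger value is better."""
    return -10.0 * math.log10(sum(y * y for y in ys) / len(ys))

sn = [sn_smaller_the_better(ys) for ys in results]

# average the S/N ratio over the runs at each level of each factor;
# the level with the higher mean S/N is the more robust setting
for f in range(3):
    means = {lvl: sum(sn[r] for r in range(4) if L4[r][f] == lvl) / 2
             for lvl in (1, 2)}
    best = max(means, key=means.get)
    print(f"factor {f + 1}: prefer level {best} (mean S/N {means[best]:.2f} dB)")
```

With only 4 runs instead of the 8 a full factorial would need, the array still lets each factor's effect be averaged over both levels of the others, which is the economy the method is known for.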
This book provides an introduction to the most important optical measurement techniques that are applied to engineering problems. It will also serve as a guideline to selecting and applying the appropriate technique to a particular problem. The text of the first edition has been completely revised and new chapters added to describe the latest developments in Phase-Doppler Velocimetry and Particle Image Velocimetry. The editors and authors have made a special effort not only to describe and to explain the fundamentals of measuring techniques, but also to provide guidelines for their application and to demonstrate the capabilities of the various methods. The book comes with a CD-ROM containing high-speed movies visualizing the methods described in the book.
Based on the results of the study carried out in 1996 to investigate the state of the art of workflow and process technology, MCC initiated the Collaboration Management Infrastructure (CMI) research project to develop innovative agent-based process technology that can support the process requirements of dynamically changing organizations and the requirements of nomadic computing. With a research focus on the flow of interaction among people and software agents representing people, the project deliverables will include a scalable, heterogeneous, ubiquitous and nomadic infrastructure for business processes. The resulting technology is being tested in applications that stress intensive mobile collaboration among people as part of large, evolving business processes. Workflow and Process Automation: Concepts and Technology provides an overview of the problems and issues related to process and workflow technology, in particular to the definition and analysis of processes and workflows and the execution of their instances. The need for a transactional workflow model is discussed and a spectrum of related transaction models is covered in detail. A plethora of influential projects in workflow and process automation, drawn from both academia and industry, is summarized. The monograph also provides a short overview of the most popular workflow management products and the state of the workflow industry in general. Workflow and Process Automation: Concepts and Technology offers a road map through the shortcomings of existing process-improvement solutions, written by people with daily first-hand experience, and is suitable as a secondary text for graduate-level courses on workflow and process automation, and as a reference for practitioners in industry.
Fault Diagnosis of Dynamic Systems provides readers with a glimpse into the fundamental issues and techniques of fault diagnosis used by the Automatic Control (FDI) and Artificial Intelligence (DX) research communities. The book reviews the standard techniques and approaches widely used in both communities. It also contains benchmark examples and case studies that demonstrate how the same problem can be solved using the presented approaches. The book also introduces advanced fault diagnosis approaches that are still being researched, including methods for non-linear, hybrid, discrete-event and software/business systems, as well as an introduction to prognosis. Fault Diagnosis of Dynamic Systems is a valuable source of information for researchers and engineers starting to work on fault diagnosis and wanting a reference guide on the main concepts and standard approaches in fault diagnosis. Readers with experience in one of the two main communities will also find it useful to learn the fundamental concepts of the other community and the synergies between them. The book is also open to researchers or academics who are already familiar with the standard approaches, since they will find a collection of advanced approaches with more specific and advanced topics or with application to different domains. Finally, engineers and researchers looking for transferable fault diagnosis methods will also find useful insights in the book.
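A minimal example of the model-based (FDI-style) residual generation that such books build on: compare each measurement against a one-step model prediction and raise an alarm when the residual exceeds a threshold. The first-order model, its parameters and the injected sensor bias below are all invented for illustration.

```python
def detect_faults(measurements, inputs, a=0.9, b=1.0, threshold=0.5):
    """One-step residual r[k] = y_meas[k] - y_pred[k] for the model
    x[k+1] = a*x[k] + b*u[k]; alarm when |r| exceeds the threshold."""
    x = measurements[0]                    # initialise model from data
    alarms = []
    for k in range(1, len(measurements)):
        x = a * x + b * inputs[k - 1]      # model's one-step prediction
        r = measurements[k] - x            # residual (zero if fault-free)
        alarms.append(abs(r) > threshold)
        x = measurements[k]                # re-synchronise with measurement
    return alarms

# simulate a healthy plant, then inject a +2.0 sensor bias from sample 6
a, b = 0.9, 1.0
u = [1.0] * 10
y = [0.0]
for k in range(1, 10):
    y.append(a * y[-1] + b * u[k - 1])
y_meas = [yk + (2.0 if k >= 6 else 0.0) for k, yk in enumerate(y)]

alarms = detect_faults(y_meas, u)
print(alarms.index(True) + 1)              # → 6, the sample where the fault hits
```

Because the one-step predictor re-synchronises after each sample, a constant bias shows up only at onset; persistent-fault detection and isolation require the richer residual structures the book covers.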
This book presents some definitions and concepts applied in Latin America on lean manufacturing (LM), the LM tools most widely used, and the human and cultural aspects that most matter in this field. The book contains a total of 14 tools used and reported by authors from different countries in Latin America, with definition, a timeline with related research, benefits that have been reported in the literature, and case studies implemented in Latin American companies. Finally, the book presents a list of software tools available to facilitate the tools' implementation, monitoring and improvement.
This edited book offers further advances, new perspectives, and developments from world leaders in the field of through-life engineering services (TES). It builds on the earlier book by the same authors, entitled "Through-life Engineering Services: Motivation, Theory and Practice." This compendium further introduces and discusses developments in workshop-based and 'in situ' maintenance and support of high-value engineering products, as well as the application of drone technology for autonomous and self-healing product support. The links between 'integrated planning' and planned obsolescence, risk and cost modelling are also examined. The role of data, information, and knowledge management relative to component and system degradation and failure is also presented. This is supported by consideration of the effects upon the maintenance and support decision of the presence of 'No Fault Found' error signals within system data. Further to this, the role of diagnostics and prognostics is also discussed. In addition, this text presents the fundamental information required to deliver an effective TES solution/strategy and identifies core technologies. The book contains reference and discussion relative to automotive, rail, and several other industrial case studies to highlight the potential of TES to redefine the product creation and development process. Additionally, the role of warranty and service data in the product creation and delivery system is also introduced. This book offers a valuable reference resource for academics, practitioners and students of TES and the associated supporting technologies and business models that underpin whole-life product creation and delivery systems through the harvesting and application of condition- and use-based data.
The book covers four research domains representing a trend for modern manufacturing control: Holonic and Multi-agent technologies for industrial systems; Intelligent Product and Product-driven Automation; Service Orientation of the Enterprise's strategic and technical processes; and Distributed Intelligent Automation Systems. These evolution lines have in common concepts related to "service orientation" derived from the Service Oriented Architecture (SOA) paradigm. The service-oriented multi-agent systems approach discussed in the book is characterized by the use of a set of distributed autonomous and cooperative agents, embedded in smart components that use the SOA principles, being oriented by the offer and request of services in order to fulfil production system and value chain goals. A new integrated vision combining emergent technologies is offered, to create control structures with distributed intelligence supporting the vertical and horizontal enterprise integration and running in truly distributed and global working environments. The service value creation model at enterprise level consists in using Service Component Architectures for business process applications, based on entities which handle services. In this componentization view, a service is a piece of software encapsulating the business/control logic or resource functionality of an entity that exhibits an individual competence and responds to a specific request to fulfil a local (product) or global (batch) objective.
Fault-tolerant control aims at a gradual shutdown response in automated systems when faults occur. It satisfies the industrial demand for enhanced availability and safety, in contrast to traditional reactions to faults, which bring about sudden shutdowns and loss of availability. The book presents effective model-based analysis and design methods for fault diagnosis and fault-tolerant control. Architectural and structural models are used to analyse the propagation of a fault through the process, to test fault detectability and to find the redundancies in the process that can be used to ensure fault tolerance. It also introduces design methods suitable for diagnostic systems and fault-tolerant controllers, both for continuous processes described by analytical models and for discrete-event systems represented by automata. The book is suitable for engineering students, engineers in industry and researchers who wish to get an overview of the variety of approaches to process diagnosis and fault-tolerant control. The authors have extensive teaching experience with graduate and PhD students, as well as with industrial experts. Parts of this book have been used in courses for this audience. The authors give a comprehensive introduction to the main ideas of diagnosis and fault-tolerant control and present some of their most recent research achievements, obtained together with their research groups in close cooperation with European research projects. The third edition resulted from a major re-structuring and re-writing of the former edition, which has been used for a decade by numerous research groups. New material includes distributed diagnosis of continuous and discrete-event systems, methods for reconfigurability analysis, and extensions of the structural methods towards fault-tolerant control. The bibliographical notes at the end of all chapters have been updated. The chapters end with exercises to be used in lectures.
The aim of this book is to summarize probabilistic safety assessment (PSA) of nuclear power plants (NPPs), and to demonstrate that NPPs can be considered a safe method of producing energy, even in light of the Fukushima accident. The book examines level 1 and 2 full power, low power and shutdown probabilistic safety assessment of WWER-440 reactors, and summarizes the experience the author gained during the last 35 years. It provides useful examples taken from PSA training courses delivered by the author and organized by the International Atomic Energy Agency. Such training courses were organised at Argonne National Laboratory (Chicago, IL, USA) and the Abdus Salam International Centre for Theoretical Physics (Trieste, Italy), and in Malaysia, Vietnam and Jordan to support experts from developing countries. The role of probabilistic safety assessment (PSA) for NPPs is the estimation of risks in absolute terms and in comparison with other risks of the technical and the natural world. Plant-specific PSAs are being prepared for NPPs and applied for detection of weaknesses, design improvement and backfitting, incident analysis, accident management, emergency preparedness, prioritization of research & development, and support of regulatory activities. There are three levels of PSA, performed for full power, low power and shutdown operating modes of the plant: Level 1 PSA, Level 2 PSA and Level 3 PSA. The nuclear regulatory authorities do not require level 3 PSA for NPPs in the member countries of the European Union, so only a limited number of NPPs in Europe have a level 3 PSA available. However, in the light of the Fukushima accident, the performance of such analyses is strongly recommended in the future. This book is intended for professionals working in the nuclear industry, and researchers and students interested in nuclear research.
This book provides a comprehensive and practically minded introduction into serious games for law enforcement agencies. Serious games offer wide-ranging benefits for law enforcement, with applications from professional training to command-level decision making to the preparation for crisis events. This book explains the conceptual foundations of virtual and augmented reality, gamification and simulation. It further offers practical guidance on the process of serious games development, from user requirements elicitation to evaluation. The chapters are intended to provide principles, as well as hands-on knowledge, to plan, design, test and apply serious games successfully in a law enforcement environment. A diverse set of case studies showcases the enormous variety that is possible in serious game designs and application areas and offers insights into concrete design decisions, design processes, benefits and challenges. The book is meant for law enforcement professionals interested in commissioning their own serious games as well as game designers interested in collaborative pedagogy and serious games for the law enforcement and security sector.
Applications of Soft Computing have recently increased and methodological development has been strong. The book is a collection of new interesting industrial applications introduced by several research groups and industrial partners. It describes the principles and results of industrial applications of Soft Computing methods and introduces new possibilities to gain technical and economic benefits by using this methodology. The book shows how fuzzy logic and neural networks have been used in the Finnish paper and metallurgical industries putting emphasis on processes, applications and technical and economic results.
Though the game-theoretic approach has been vastly studied and utilized in relation to the economics of industrial organizations, it has hardly been used to tackle safety management in multi-plant chemical industrial settings. Using Game Theory for Improving Safety within Chemical Industrial Parks presents an in-depth discussion of game-theoretic modeling which may be applied to improve cross-company prevention and safety management in a chemical industrial park. By systematically analyzing game-theoretic models and approaches in relation to managing safety in chemical industrial parks, the book explores the ways game theory can predict the outcome of complex strategic investment decision-making processes involving several adjacent chemical plants. A number of game-theoretic decision models are discussed to provide strategic tools for decision-making situations. Offering clear and straightforward explanations of methodologies, Using Game Theory for Improving Safety within Chemical Industrial Parks provides managers and management teams with approaches to assess situations and to improve strategic safety and prevention arrangements.
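The kind of strategic deadlock such models capture can be sketched as a two-plant "invest in prevention vs. defer" game; the payoff numbers below are hypothetical, and finding the pure-strategy Nash equilibria is a simple best-response check.

```python
from itertools import product

# hypothetical per-plant payoffs; payoffs[(a1, a2)] = (plant 1, plant 2)
payoffs = {
    ("invest", "invest"): (3, 3),   # joint prevention: shared risk reduction
    ("invest", "defer"):  (0, 4),   # investor pays, neighbour free-rides
    ("defer",  "invest"): (4, 0),
    ("defer",  "defer"):  (1, 1),   # both stay exposed to domino-effect risk
}
actions = ("invest", "defer")

def pure_nash(payoffs):
    """Profiles where neither plant gains by unilaterally switching action."""
    eq = []
    for a1, a2 in product(actions, repeat=2):
        p1, p2 = payoffs[(a1, a2)]
        if (all(payoffs[(d, a2)][0] <= p1 for d in actions) and
                all(payoffs[(a1, d)][1] <= p2 for d in actions)):
            eq.append((a1, a2))
    return eq

print(pure_nash(payoffs))
```

With these numbers, both plants deferring is the only equilibrium even though joint investment pays more for both, which is exactly the coordination problem that cross-company safety management has to overcome.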
The author is one of the prominent researchers in the field of Data Envelopment Analysis (DEA), a powerful data analysis tool that can be used in performance evaluation and benchmarking. This book is based upon the author's years of research and teaching experience. It is difficult to evaluate an organization's performance when multiple performance metrics are present. The difficulties are further enhanced when the relationships among the performance metrics are complex and involve unknown tradeoffs. This book introduces Data Envelopment Analysis (DEA) as a multiple-measure performance evaluation and benchmarking tool. The focus of performance evaluation and benchmarking is shifted from characterizing performance in terms of single measures to evaluating performance from a multidimensional systems perspective. Conventional and new DEA approaches are presented and discussed using Excel spreadsheets - one of the most effective ways to analyze and evaluate decision alternatives. The user can easily develop and customize new DEA models based upon these spreadsheets. DEA models and approaches are presented to deal with performance evaluation problems in a variety of contexts. For example, a context-dependent DEA measures the relative attractiveness of similar operations/processes/products. Sensitivity analysis techniques can be easily applied and used to identify critical performance measures. Two-stage network efficiency models can be utilized to study the performance of supply chains. DEA benchmarking models extend DEA's ability in performance evaluation. Various cross-efficiency approaches are presented to provide peer evaluation scores. This book also provides an easy-to-use DEA software package - DEAFrontier. DEAFrontier is an Add-In for Microsoft (R) Excel and provides a custom menu of DEA approaches. This version of DEAFrontier is for use with Excel 97-2013 under Windows and can solve up to 50 DMUs, subject to the capacity of Excel Solver.
It is an extremely powerful tool that can assist decision-makers in benchmarking and analyzing complex operational performance issues in manufacturing organizations as well as evaluating processes in banking, retail, franchising, health care, public services and many other industries.
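In the special case of a single input and a single output, a DEA efficiency score reduces to each unit's output/input ratio normalised by the best ratio in the comparison set, which can be sketched in a few lines; the general multi-measure case requires solving a linear program per DMU, as tools like the DEAFrontier Add-In described above do. The bank-branch data below are invented.

```python
# hypothetical bank branches: DMU name -> (input: staff, output: transactions)
dmus = {
    "A": (5, 100),
    "B": (8, 120),
    "C": (4, 100),
    "D": (10, 150),
}

def dea_ratio_efficiency(dmus):
    """Single-input, single-output DEA: efficiency of each decision-making
    unit is its output/input ratio divided by the best ratio observed."""
    ratios = {name: out / inp for name, (inp, out) in dmus.items()}
    best = max(ratios.values())
    return {name: r / best for name, r in ratios.items()}

eff = dea_ratio_efficiency(dmus)
for name, e in sorted(eff.items()):
    print(f"{name}: {e:.2f}")        # branch C defines the efficient frontier
```

Here branch C (25 transactions per staff member) scores 1.0 and the others are measured against it, which is the benchmarking-against-best-practice idea behind the full multi-measure DEA models.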
Micromanufacturing and Nanotechnology is an emerging technological infrastructure and process that involves manufacturing of products and systems at the micro and nano scale levels. The development of micro- and nano-scale products and systems is underway because they are faster, more accurate and less expensive. Moreover, the basic functional units of such systems possess remarkable mechanical, electronic and chemical properties compared to their macro-scale counterparts. Since this infrastructure has already become the preferred choice for the design and development of next-generation products and systems, it is now necessary to disseminate the conceptual and practical phenomenological know-how in a broader context. This book incorporates a selection of research and development papers. Its scope covers the history and background, underlying design methodology, application domains and recent developments.
This book offers a comprehensive guide to implementing a company-wide management system (CWMS), utilising up-to-date methodologies of lean-six sigma in order to achieve high levels of business excellence. It builds the foundation for quality and continuous improvement, which can be implemented in any organization. The book begins with an introduction to and an overview of CWMSs, and reviews the existing literature on various management systems. It then discusses the integration and implementation of lean-six sigma in supply chain management. The integration approach presented highlights the link between the existing management systems and shows how continuous improvement methodologies are incorporated. The book then examines the components of CWMS, comparing them to other systems. It also explores Kano-based six sigma and concludes with further recommendations for reading. This book covers five management systems integrated into one novel approach that can be followed by organizations wishing to achieve quality and business excellence. Covering lean-six sigma - an essential element of management systems - it is a valuable resource for practitioners and academics alike.
Change, in products and systems, has become a constant in manufacturing. Changeable and Reconfigurable Manufacturing Systems discusses many key strategies for success in this environment. Changes can most often be anticipated, but some go beyond the design range. This requires providing innovative change enablers and adaptation mechanisms. Changeable and Reconfigurable Manufacturing Systems presents the new concept of Changeability as an umbrella framework that encompasses many paradigms such as agility, adaptability, flexibility and reconfigurability. The book provides the definitions and classification of key terms in this new field, and emphasizes the required physical/hard and logical/soft change enablers. Over 22 chapters, the book presents cutting-edge technologies, the latest thinking and research results, as well as future directions to help manufacturers stay competitive. It contains original contributions and results from senior international experts, experienced practitioners and accomplished researchers in the field of manufacturing, together with industrial applications. Changeable and Reconfigurable Manufacturing Systems serves as a comprehensive reference and textbook for industrial professionals, managers, engineers, specialists, researchers and academics in manufacturing, industrial and mechanical engineering, and for general readers who are interested in learning about the new and emerging manufacturing paradigms and their potential impact on the workplace and future jobs.
This book includes details on the environmental implications of recycling, modeling of recycling, processing of recycled materials, recycling potential of materials, characterisation of recycled materials, reverse logistics, and case studies of recycling various materials, among other topics.
Machining dynamics play an essential role in the performance of machine tools and machining processes, directly affecting the material removal rate, workpiece surface quality, and dimensional and form accuracy. Machining Dynamics: Fundamentals and Applications is intended for advanced undergraduate and postgraduate students studying manufacturing engineering and machining technology, as well as manufacturing engineers, production supervisors, planning and application engineers, and designers.
This book provides readers with extensive information on path planning optimization for both single and multiple Autonomous Guided Vehicles (AGVs), and discusses practical issues involved in advanced industrial applications of AGVs. After discussing previously published research in the field and highlighting the current gaps, it introduces new models developed by the authors with the goal of reducing costs and increasing productivity and effectiveness in the manufacturing industry. The new models address the increasing complexity of manufacturing networks, due for example to the adoption of flexible manufacturing systems that involve automated material handling systems, robots, numerically controlled machine tools, and automated inspection stations, while also considering the uncertainty and stochastic nature of automated equipment such as AGVs. The book discusses and provides solutions to important issues concerning the use of AGVs in the manufacturing industry, including material flow optimization with AGVs, programming manufacturing systems equipped with AGVs, reliability models, the reliability of AGVs, routing under uncertainty, and risks involved in AGV-based transportation. The clear style and straightforward descriptions of problems and their solutions make the book an excellent resource for graduate students. Moreover, thanks to its practice-oriented approach, the novelty of the findings and the contemporary topic it reports on, the book offers new stimulus for researchers and practitioners in the broad field of production engineering.
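Path planning for a single AGV on a known factory floor is commonly reduced to a shortest-path search. The sketch below uses Dijkstra's algorithm on a small 4-connected grid; the layout is made up, and this is a baseline technique rather than one of the book's models, which additionally handle uncertainty and multiple vehicles.

```python
import heapq

def agv_shortest_path(grid, start, goal):
    """Dijkstra on a 4-connected grid; 1 = blocked cell, 0 = free floor."""
    rows, cols = len(grid), len(grid[0])
    dist, prev = {start: 0}, {}
    pq = [(0, start)]
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if (r, c) == goal:
            break
        if d > dist.get((r, c), float("inf")):
            continue                            # stale queue entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                nd = d + 1
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = (r, c)
                    heapq.heappush(pq, (nd, (nr, nc)))
    path, node = [], goal                       # walk back from goal to start
    while node != start:
        path.append(node)
        node = prev[node]
    path.append(start)
    return path[::-1]

grid = [[0, 0, 0, 0],
        [1, 1, 0, 1],   # a row of shelving with one gap at column 2
        [0, 0, 0, 0]]
print(agv_shortest_path(grid, (0, 0), (2, 0)))
```

The route detours through the single gap in the middle row; replacing the unit edge costs with travel times, or re-planning as edge costs change, extends the same search toward the stochastic settings the book addresses.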
The problem of controlling the output of a system so as to achieve asymptotic tracking of prescribed trajectories and/or asymptotic rejection of undesired disturbances is a central problem in control theory. A classical setup in which the problem was posed and successfully addressed - in the context of linear, time-invariant and finite-dimensional systems - is the one in which the exogenous inputs, namely commands and disturbances, may range over the set of all possible trajectories of a given autonomous linear system, commonly known as the exogenous system or, more briefly, the exosystem. The case when the exogenous system is a harmonic oscillator is, of course, classical. Even in this special case, the difference between state and error measurement feedback in the problem of output regulation is profound. To know the initial condition of the exosystem is to know the amplitude and phase of the corresponding sinusoid. On the other hand, to solve the output regulation problem in this case with only error measurement feedback is to track, or attenuate, a sinusoid of known frequency but with unknown amplitude and phase. This is in sharp contrast with alternative approaches, such as exact output tracking, where in lieu of the assumption that a signal is within a class of signals generated by an exogenous system, one instead assumes complete knowledge of the past, present and future time history of the trajectory to be tracked.
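In the notation commonly used for this problem, the harmonic-oscillator exosystem generating the sinusoidal references can be written as

```latex
\dot{w} = S\,w, \qquad
S = \begin{pmatrix} 0 & \omega \\ -\omega & 0 \end{pmatrix}, \qquad
r(t) = \begin{pmatrix} 1 & 0 \end{pmatrix} w(t) = A \sin(\omega t + \varphi),
```

where the amplitude A and phase φ are fixed by the initial condition w(0). This is exactly why knowing w(0) amounts to knowing the sinusoid, while error-feedback regulation must cope with A and φ unknown.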
This textbook on Reliability Theory focuses on applications in Preventive Maintenance (PM). All models are presented in connection with the relevant statistical material. Short and simply written, the book is almost self-contained. The reader needs no more than a basic knowledge of calculus, probability and statistics. Each chapter concludes with a series of exercises with detailed solutions. Numerical solutions are elaborated with Mathematica software. Novel topics are discussed, such as PM with learning, choice of the best time-scale for PM, handling multidimensional state descriptions, and dealing with uncertainty in data. The book is meant for graduate students, researchers and engineers specializing in Quality Control, Logistics, Reliability and Maintenance Engineering.
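One of the classic PM models in this area, the age-replacement policy, makes the subject concrete: replace preventively at age T at cost c_p, or at failure at cost c_f > c_p, and choose T to minimise the long-run cost rate g(T) = (c_p R(T) + c_f F(T)) / integral of R(t) from 0 to T. The sketch below grid-searches T for a Weibull lifetime; the parameters are invented and the notation is not necessarily the book's.

```python
import math

def cost_rate(T, beta, eta, cp, cf, steps=500):
    """Age-replacement cost rate g(T) for a Weibull(beta, eta) lifetime,
    using midpoint integration for the expected cycle length."""
    R = lambda t: math.exp(-((t / eta) ** beta))     # survival function
    dt = T / steps
    mean_cycle = sum(R((i + 0.5) * dt) for i in range(steps)) * dt
    return (cp * R(T) + cf * (1.0 - R(T))) / mean_cycle

# hypothetical data: wear-out failures (beta > 1), failure 4x dearer than PM
beta, eta, cp, cf = 2.0, 100.0, 1.0, 4.0
grid = [t * 0.5 for t in range(10, 400)]             # candidate PM ages 5..199.5
T_opt = min(grid, key=lambda T: cost_rate(T, beta, eta, cp, cf))
print(f"optimal PM age ~ {T_opt:.1f} (cost rate {cost_rate(T_opt, beta, eta, cp, cf):.4f})")
```

With beta <= 1 (no wear-out) the cost rate keeps falling as T grows and preventive replacement is never worthwhile - a standard result the model makes easy to verify numerically.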