This volume contains selected presentations of the "EUROMECH Colloquium 412 on LES of complex transitional and turbulent flows" held at the Munich University of Technology from 4 to 6 October 2000. The articles focus on new developments in the field of large-eddy simulation of complex flows and are related to the topics: modelling and analysis of subgrid scales, numerical issues in LES, Cartesian grids for complex geometries, curvilinear and non-structured grids for complex geometries, DES and RANS-LES coupling, aircraft wake vortices, combustion, and magnetohydrodynamics. Progress has been made not only in understanding and modelling the dynamics of unresolved scales, but also in designing means that prevent the contamination of LES predictions by discretization errors. Progress is reported as well on the use of Cartesian and curvilinear coordinates to compute flow in and around complex geometries, and in the field of LES with unstructured grids. A chapter is dedicated to the detached-eddy simulation technique and its recent achievements, and to the promising technique of coupling RANS and LES solutions in order to push the resolution-based Reynolds number limit of wall-resolving LES to higher values. Complexity due to physical mechanisms links the last two chapters. It is shown that LES constitutes the tool to analyse the physics of aircraft wake vortices during landing and takeoff; a thorough understanding of this physics is a prerequisite for reliable predictions of the distance between consecutive landing airplanes. Subgrid combustion modelling for LES of single- and two-phase reacting flows is demonstrated to have the potential to deal with finite-rate kinetics in high Reynolds number flows of full-scale gas turbine engines. Fluctuating magnetic fields are more reliably predicted by LES when tensor-diffusivity rather than gradient-diffusion models are used, an encouraging result in the context of turbulence control by magnetic fields.
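For orientation, the most widely quoted subgrid-scale closure in this area is the Smagorinsky eddy-viscosity model, reproduced here as a standard reference point rather than as a result of the colloquium:

    \tau_{ij} - \tfrac{1}{3}\tau_{kk}\,\delta_{ij} = -2\,\nu_t\,\bar{S}_{ij}, \qquad \nu_t = (C_s \Delta)^2 |\bar{S}|, \qquad |\bar{S}| = \sqrt{2\,\bar{S}_{ij}\bar{S}_{ij}},

where \bar{S}_{ij} is the resolved strain-rate tensor, \Delta the filter width and C_s the Smagorinsky constant; the models discussed in the volume refine or replace this basic closure.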
In the last decade, significant changes have occurred in the field of vehicle motion planning, and for UAVs in particular. UAV motion planning is especially difficult due to several complexities not considered by earlier planning strategies: the increased importance of differential constraints, atmospheric turbulence which makes it impossible to follow a pre-computed plan precisely, uncertainty in the vehicle state, and limited knowledge about the environment due to limited sensor capabilities. These differences have motivated the increased use of feedback and other control engineering techniques for motion planning. The lack of exact algorithms for these problems and the difficulty inherent in characterizing approximation algorithms make it impractical to determine algorithm time complexity, completeness, and even soundness. This gap has not yet been addressed by statistical characterization of the experimental performance of algorithms and benchmarking. Because of this overall lack of knowledge, it is difficult to design a guidance system, let alone choose the algorithm. Throughout this paper we keep in mind some of the general characteristics and requirements pertaining to UAVs. A UAV is typically modeled as having velocity and acceleration constraints (and potentially the higher-order differential constraints associated with the equations of motion), and the objective is to guide the vehicle towards a goal through an obstacle field. A UAV guidance problem is typically characterized by a three-dimensional problem space, limited information about the environment, on-board sensors with limited range, speed and acceleration constraints, and uncertainty in vehicle state and sensor data.
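To make the velocity and acceleration constraints concrete, the sketch below propagates a simple double-integrator vehicle model one step towards a goal while clipping the commanded acceleration and speed. It is an illustrative toy model with arbitrary limits and gains, not an algorithm from the paper.

    import numpy as np

    V_MAX, A_MAX, DT = 20.0, 4.0, 0.1     # illustrative speed/acceleration limits and time step

    def guidance_step(pos, vel, goal, k_p=0.5, k_d=1.0):
        """One step of a saturated PD-style guidance law for a double-integrator vehicle."""
        acc = k_p * (goal - pos) - k_d * vel          # desired acceleration towards the goal
        if np.linalg.norm(acc) > A_MAX:               # acceleration constraint
            acc *= A_MAX / np.linalg.norm(acc)
        vel = vel + acc * DT
        if np.linalg.norm(vel) > V_MAX:               # velocity constraint
            vel *= V_MAX / np.linalg.norm(vel)
        return pos + vel * DT, vel

    pos, vel, goal = np.zeros(3), np.zeros(3), np.array([100.0, 50.0, -10.0])
    for _ in range(500):
        pos, vel = guidance_step(pos, vel, goal)
    print(np.round(pos, 1))   # position after 50 s of simulated flight; it should be close to the goal

A real planner would add obstacle avoidance and handle the state uncertainty the paragraph mentions; this only illustrates the differential constraints.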
Fibre-to-the-Home networks constitute a fundamental telecom segment with the potential to match the huge capacity of transport networks with new user communication demands. Huge investments in access network infrastructure are expected for the next decade, with many initiatives already launched around the globe, driven by new broadband service demands and the necessity for operators to deploy a future-proof infrastructure in the field. Dense FTTH Passive Optical Networks (PONs) are a cost-efficient way to build fibre access, and international standards (G/E-PON) have already been launched, leading to a new set of telecom products for mass deployment. However, these systems make use of less than 1% of the optical bandwidth; thus, relevant research is taking place to maximize the capacity of these systems with the latest opto-electronic technologies, demonstrating that the huge bandwidth available through the fibre access can be exploited in a cost-efficient and reliable manner. Next-Generation FTTH Passive Optical Networks gathers and analyzes the most relevant techniques developed recently for next-generation FTTH networks, trying to answer the question: what comes after G/E-PONs?
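The "less than 1% of the optical bandwidth" remark can be checked with a rough back-of-the-envelope calculation; the figures below are order-of-magnitude assumptions, not numbers from the book.

    # Rough utilisation estimate with assumed figures, order of magnitude only.
    gpon_downstream_bps = 2.5e9        # approximate G-PON downstream line rate
    fibre_window_hz = 10e12            # assumed usable low-loss fibre bandwidth (roughly C+L bands and beyond)
    spectral_efficiency = 1.0          # assume about 1 bit/s per Hz for a simple comparison
    utilisation = gpon_downstream_bps / (fibre_window_hz * spectral_efficiency)
    print(f"{utilisation * 100:.3f} % of the assumed fibre bandwidth")   # prints 0.025 %, well under 1%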
Multicore Processors and Systems provides a comprehensive overview of emerging multicore processors and systems. It covers technology trends affecting multicores, multicore architecture innovations, multicore software innovations, and case studies of state-of-the-art commercial multicore systems. A cross-cutting theme of the book is the challenge of scaling multicore systems up to hundreds of cores. The book provides an overview of significant developments in the architectures for multicore processors and systems. It includes chapters on fundamental requirements for multicore systems, including processing, memory systems, and interconnect. It also includes several case studies of commercial multicore systems that have recently been developed and deployed across multiple application domains. The architecture chapters focus on innovative multicore execution models as well as infrastructure for multicores, including memory systems and on-chip interconnections. The case studies examine multicore implementations across different application domains, including general purpose, server, media/broadband, network processing, and signal processing. Multicore Processors and Systems is the first book that focuses solely on multicore processors and systems, and in particular on their unique technology implications, architectures, and implementations. The contributing authors come from both the academic and industrial communities.
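The scaling challenge mentioned above is often framed with Amdahl's law: if a fraction p of a workload parallelises perfectly, the speedup on n cores is bounded by 1 / ((1 - p) + p/n). A quick sketch with an illustrative parallel fraction (not data from the book):

    def amdahl_speedup(p, n):
        """Upper bound on speedup for parallel fraction p running on n cores (Amdahl's law)."""
        return 1.0 / ((1.0 - p) + p / n)

    for cores in (4, 64, 256):
        print(cores, round(amdahl_speedup(0.95, cores), 1))
    # even with 95% parallel code the speedup saturates near 20x as core counts grow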
Power engineering has become a multidisciplinary field, ranging from linear algebra, electronics, and signal processing to artificial intelligence, including recent trends like bio-inspired computation, lateral computing and so on. In this book, Ukil builds the bridge between these interdisciplinary power engineering practices. The book looks into two major fields used in modern power systems: intelligent systems and signal processing. The intelligent systems section comprises fuzzy logic, neural networks and support vector machines. The author presents the relevant theories without assuming much particular background. Following the theoretical basics, he studies their applications to various problems in power engineering, such as load forecasting, phase balancing, and disturbance analysis (see the sketch below). These application studies are of two types: full applications explained as in-depth case studies, and semi-developed application ideas with scope for further extension. This is followed by pointers to further research information. In the second part, the book leads into signal processing from the basics of system theory, followed by the fundamentals of different signal processing transforms with examples. A section follows on sampling techniques and digital filters, the ultimate processing tools. The theoretical basics are again substantiated by applications in power engineering, both in-depth and semi-developed as before, and this part also ends with pointers to further research information. Intelligent Systems and Signal Processing in Power Engineering is helpful for students, researchers and engineers trying to solve power engineering problems using intelligent systems and signal processing, or seeking applications of intelligent systems and signal processing in power engineering.
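As a taste of the application side, a minimal load-forecasting sketch with a support vector machine might look like the following. The data are synthetic and scikit-learn is used purely for illustration; this is not code or data from the book.

    import numpy as np
    from sklearn.svm import SVR

    # Synthetic hourly load: a daily cycle plus noise (purely illustrative).
    hours = np.arange(24 * 30)
    load = 100 + 30 * np.sin(2 * np.pi * hours / 24) + np.random.normal(0, 3, hours.size)

    # Features: hour of day and previous hour's load; target: current load.
    X = np.column_stack([hours[1:] % 24, load[:-1]])
    y = load[1:]
    model = SVR(kernel="rbf", C=10.0).fit(X[:-24], y[:-24])   # train on all but the last day
    print("forecast:", np.round(model.predict(X[-24:])[:3], 1))
    print("actual:  ", np.round(y[-24:][:3], 1))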
Rapidly growing demand for telecommunication services and information interchange has made communication one of the most dynamic branches of the infrastructure of modern society. The book introduces the basics of classical MDP theory; problems of finding optimal CAC (call admission control) policies in such models are investigated, and various problems of improving the characteristics of traditional and multimedia wireless communication networks are considered, together with both classical and new methods of MDP theory that allow optimal access strategies in teletraffic systems to be defined. The book will be useful to specialists in the field of telecommunication systems and also to students and post-graduate students of the corresponding specialties.
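For readers unfamiliar with MDPs, the sketch below solves a tiny, hypothetical admission-control model by value iteration, the workhorse of classical MDP theory. The states, transition probabilities and rewards are invented for illustration and are not taken from the book.

    # States: number of occupied channels (0..2). Action 0 = reject an arrival, 1 = admit it.
    # P[s][a] is a list of (probability, next_state, reward) triples (hypothetical numbers).
    P = {
        0: {0: [(1.0, 0, 0.0)], 1: [(1.0, 1, 1.0)]},
        1: {0: [(0.5, 0, 0.0), (0.5, 1, 0.0)], 1: [(0.5, 1, 1.0), (0.5, 2, 1.0)]},
        2: {0: [(1.0, 1, 0.0)], 1: [(1.0, 2, -5.0)]},   # admitting when full is penalised
    }
    gamma, V = 0.9, {s: 0.0 for s in P}
    for _ in range(200):                                  # value-iteration sweeps
        V = {s: max(sum(p * (r + gamma * V[s2]) for p, s2, r in P[s][a]) for a in P[s]) for s in P}
    policy = {s: max(P[s], key=lambda a: sum(p * (r + gamma * V[s2]) for p, s2, r in P[s][a])) for s in P}
    print(V, policy)   # optimal values and the resulting admit/reject decision per state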
In this book, recent progress in batteries is first reviewed by researchers from three leading Japanese battery companies, SONY, Matsushita and Sanyo, and the future problems in battery development are stated. Recent developments in solid state ionics for batteries, including lithium ion batteries, metal-hydride batteries, and fuel cells, are then reviewed. A battery comprises essentially three components: positive electrode, negative electrode, and electrolyte. Each component is discussed with a view to the construction of all-solid-state batteries. Theoretical understanding of the properties of battery materials using molecular orbital calculations is also introduced.
The continuous development of computer technology, supported by the VLSI revolution, stimulated research in the field of multiprocessor systems. The main motivation for the migration of design efforts from conventional architectures towards multiprocessor ones is the possibility of obtaining significant processing power together with improvements in price/performance, reliability and flexibility. Currently, such systems are moving from research laboratories to real field applications. Future technological advances and new generations of components are likely to further enhance this trend. This book is intended to provide basic concepts and design methodologies for engineers and researchers involved in the development of multiprocessor systems and/or of applications based on multiprocessor architectures. In addition, the book can be a source of material for computer architecture courses at graduate level. A preliminary knowledge of computer architecture and logical design has been assumed in writing this book. Not all the problems related to the development of multiprocessor systems are addressed in this book. The covered range spans from electrical and logical design problems, to architectural issues, to design methodologies for system software. Subjects such as software development in a multiprocessor environment or loosely coupled multiprocessor systems are outside the scope of the book. Since the basic elements, processors and memories, are now available as standard integrated circuits, the key design problem is how to put them together in an efficient and reliable way.
This book constitutes the Final Report of COST Action 279, Analysis and Design of Advanced Multiservice Networks supporting Multimedia, Mobility, and Interworking, a guided tour of the state-of-the-art work on diverse aspects of modern telecommunications network design developed within this Action during the four years of its operation, which started on July 1, 2001, and ended on June 30, 2005. As stated in its founding charter, its Memorandum of Understanding, the work area of COST 279 is the analysis, design, and control aspects of present-day networks, quite a wide scope. Behind the unifying facade put on by the Internet Protocol (IP) network layer, today's networks hide a mess of heterogeneity: heterogeneity at the level of applications, both concerning the traffic they produce and the network Quality of Service (QoS) they require, and heterogeneity at the level of network component subsystems, in particular an increasingly important mobile/wireless access segment. A common ground for the treatment of this disparate set of topics was given by the strong methodological component contained in the approach followed in COST 279, with importance placed on the development and application, whenever possible, of analytical techniques and models for the mathematical understanding of the systems under study. The results expected from the Action thus ranged from mathematical models and algorithms as entities of interest in their own right to the understanding of system behavior via their application.
Information technology is the enabling foundation for all human activity at the beginning of the 21st century, and advances in this area are crucial to all of us. These advances are taking place all over the world and can only be followed and perceived when researchers from all over the world assemble and exchange their ideas at conferences such as the 26th International Symposium on Computer and Information Sciences, held at the Royal Society in London from 26th to 28th September 2011 and presented in this proceedings volume. Computer and Information Sciences II contains novel advances in the state of the art covering applied research in electrical and computer engineering and computer science, across the broad area of information technology. It provides access to the main innovative research activities across the world, and points to the results obtained recently by some of the most active teams in both Europe and Asia.
Power Quality Enhancement Using Custom Power Devices considers the structure, control and performance of the series-compensating DVR, the shunt DSTATCOM and the combined shunt-series UPQC for power quality improvement in electricity distribution.
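For context, the basic principle of series compensation with a DVR can be written as a phasor sum; this is a standard textbook relation quoted for orientation, not a result specific to this book. During a voltage sag the DVR injects the series voltage needed to keep the load voltage at its reference value:

    \bar{V}_{\mathrm{load}} = \bar{V}_{\mathrm{supply}} + \bar{V}_{\mathrm{DVR}}, \qquad \bar{V}_{\mathrm{DVR}} = \bar{V}_{\mathrm{load,ref}} - \bar{V}_{\mathrm{supply}}.

The shunt DSTATCOM plays the dual role for currents, injecting a compensating shunt current, and the UPQC combines both functions in one device.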
This volume includes a comprehensive theoretical treatment and current state-of-the-art applications of the quartz crystal microbalance (QCM). It discusses interface circuits and the study of viscoelasticity and micromechanics, as well as surface roughness, with the QCM. Coverage also details the broad field of analytical applications of piezoelectric sensors.
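As background for gravimetric QCM measurements, the commonly used Sauerbrey relation links the resonance-frequency shift to the deposited mass; it is quoted here as the standard starting point rather than taken from the volume:

    \Delta f = -\frac{2 f_0^{2}}{A \sqrt{\rho_q \mu_q}}\,\Delta m,

where f_0 is the fundamental resonance frequency, A the active electrode area, \rho_q and \mu_q the density and shear modulus of quartz, and \Delta m the added mass. Viscoelastic films and rough surfaces, treated in the volume, require corrections to this rigid-mass picture.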
The International Conference on Automation and Robotics, ICAR 2011, was held during December 12-13, 2011 in Dubai, UAE. The proceedings of ICAR 2011 are published by Springer in the Lecture Notes in Electrical Engineering series and include 163 excellent papers selected from more than 400 submissions. The conference was intended to bring together researchers and engineers/technologists working on different aspects of intelligent control systems and optimization, robotics and automation, signal processing, sensors, systems modelling and control, industrial engineering, and production and management. This part of the proceedings includes 81 papers contributed by researchers from various countries such as France, Japan, the USA, Korea and China. Many papers present recent advanced research work; some give new solutions to problems in the field, supported by strong evidence and detailed demonstration, while others describe the application of the systems they have designed and realized. The session topics of this part are intelligent control, robotics and automation, including papers on Distributed Control Systems, Intelligent Fault Detection and Identification, Machine Learning in Control, Neural Network based Control Systems, Fuzzy Control, Genetic Algorithms, Robot Design, Human-Robot Interfaces, Network Robotics, Autonomous Systems, Industrial Networks and Automation, Modeling, Simulation and Architectures, Vision, Recognition and Reconstruction, Virtual Reality, Image Processing, and so on. All the papers here reflect the authors' considerable time and effort and will prove valuable in their research fields. Sincere thanks go to the committee, to all the authors, and to the anonymous reviewers from many fields and organizations, whose support encourages all of us to continue this research.
Computational Electromagnetics is a young and growing discipline, expanding as a result of the steadily increasing demand for software for the design and analysis of electrical devices. This book introduces three of the most popular numerical methods for simulating electromagnetic fields: the finite difference method, the finite element method and the method of moments. In particular it focuses on how these methods are used to obtain valid approximations to the solutions of Maxwell's equations, using, for example, "staggered grids" and "edge elements." The main goal of the book is to make the reader aware of different sources of error in numerical computations, and also to provide the tools for assessing the accuracy of numerical methods and their solutions. To reach this goal, convergence analysis, extrapolation, von Neumann stability analysis, and dispersion analysis are introduced and used frequently throughout the book. Another major goal of the book is to provide students with enough practical understanding of the methods that they are able to write simple programs on their own. To achieve this, the book contains several MATLAB programs and detailed descriptions of practical issues such as assembly of finite element matrices and handling of unstructured meshes. Finally, the book aims at making the students well aware of the strengths and weaknesses of the different methods, so they can decide which method is best for each problem. In this second edition, extensive computer projects are added as well as new material throughout. Reviews of the previous edition: "The well-written monograph is devoted to students at the undergraduate level, but is also useful for practising engineers." (Zentralblatt MATH, 2007)
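To give a flavour of the finite difference method on a staggered grid, here is a minimal 1D FDTD sketch in Python; the book's own examples are MATLAB programs, so this is an independent illustration with grid size, step count and source chosen arbitrarily.

    import numpy as np

    # Minimal 1D FDTD on a staggered (Yee) grid in normalised units.
    nz, nt = 200, 400
    S = 0.5                       # Courant number, S <= 1 for stability in 1D
    Ez = np.zeros(nz)             # E field at integer grid points
    Hy = np.zeros(nz - 1)         # H field at half-integer grid points
    for n in range(nt):
        Hy += S * (Ez[1:] - Ez[:-1])                       # update H from the spatial difference of E
        Ez[1:-1] += S * (Hy[1:] - Hy[:-1])                 # update E from the spatial difference of H
        Ez[nz // 2] += np.exp(-((n - 30) / 10.0) ** 2)     # soft Gaussian source in the middle
    print("peak |Ez| after", nt, "steps:", np.abs(Ez).max())

The staggering of Ez and Hy in space and time is exactly the "staggered grid" idea referred to above; the Courant number S is the quantity constrained by von Neumann stability analysis.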
This book explains the fundamental concepts of information theory, so as to help students better understand modern communication technologies. It was written especially for electrical and communication engineers working on communication subjects. The book focuses on the understandability of the topics, and accordingly uses simple and detailed mathematics, together with a wealth of solved examples. The book consists of four chapters, the first of which explains entropy and mutual information for discrete random variables. Chapter 2 introduces the concepts of entropy and mutual information for continuous random variables, along with the channel capacity. In turn, Chapter 3 is devoted to typical sequences and data compression. One of Shannon's most important discoveries is the channel coding theorem, and it is critical for electrical and communication engineers to fully comprehend the theorem. As such, Chapter 4 focuses solely on it. To gain the most from the book, readers should have a fundamental grasp of probability and random variables; otherwise, they will find it nearly impossible to understand the topics discussed.
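As a small illustration of the quantities treated in the first chapters, the sketch below computes the entropy of a discrete distribution and the capacity of a binary symmetric channel, C = 1 - H(p), a standard result; the numbers are arbitrary and not taken from the book.

    import numpy as np

    def entropy(p):
        """Shannon entropy in bits of a discrete distribution p (zero probabilities allowed)."""
        p = np.asarray(p, dtype=float)
        p = p[p > 0]
        return float(-(p * np.log2(p)).sum())

    def bsc_capacity(eps):
        """Capacity of a binary symmetric channel with crossover probability eps."""
        return 1.0 - entropy([eps, 1.0 - eps])

    print(entropy([0.5, 0.25, 0.25]))   # 1.5 bits
    print(bsc_capacity(0.11))           # about 0.5 bits per channel use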
This work is on biometric data indexing for large-scale identification systems, with a focus on different biometric data indexing methods. It provides state-of-the-art coverage of different biometric traits, together with the pros and cons of each. A discussion of different multimodal fusion strategies is also included.
Recent years have brought substantial developments in electrical drive technology, with the appearance of highly rated, very-high-speed power-electronic switches, combined with microcomputer control systems.
Fault-tolerance in integrated circuits is no longer an exclusive concern of space designers or engineers of highly reliable applications. Rather, designers of next-generation products must cope with reduced noise margins due to technological advances. The continuous evolution of the fabrication technology process of semiconductor components, in terms of transistor geometry shrinking, power supply, speed, and logic density, has significantly reduced the reliability of very deep submicron integrated circuits in the face of the various internal and external sources of noise. The very popular Field Programmable Gate Arrays, customizable by SRAM cells, are a consequence of this integrated circuit evolution, with millions of memory cells to implement the logic, embedded memories, routing, and more recently embedded microprocessor cores. These re-programmable system-on-chip platforms must be fault-tolerant to cope with present-day requirements. This book discusses fault-tolerance techniques for SRAM-based Field Programmable Gate Arrays (FPGAs). It starts by presenting the fault model and the effects of upsets on the programmable architecture, and then discusses how to protect integrated circuits against errors. A large set of methods for designing fault-tolerant systems in SRAM-based FPGAs is described. Some of the presented techniques are based on developing a new fault-tolerant architecture with new, more robust FPGA elements. Other techniques are based on protecting the high-level hardware description before synthesis in the FPGA. The reader has the flexibility of choosing the most suitable fault-tolerance technique for their project and of comparing a set of fault-tolerant techniques for programmable logic applications.
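One concrete example of the kind of mitigation applied at the hardware-description level is triple modular redundancy (TMR): a logic function is replicated three times and a voter masks any single upset. The sketch below is a generic software illustration of the voting idea only, with hypothetical helper names; it is not code from the book and not an FPGA netlist.

    import random

    def majority(a, b, c):
        """Bitwise 2-of-3 majority vote, the core of a TMR voter."""
        return (a & b) | (a & c) | (b & c)

    def logic(x):
        """Some combinational function, replicated three times (purely illustrative)."""
        return (x ^ (x >> 1)) & 0xFF

    x = 0b10110110
    copies = [logic(x), logic(x), logic(x)]
    copies[random.randrange(3)] ^= 1 << random.randrange(8)   # inject a single-bit upset into one copy
    print(bin(majority(*copies)), bin(logic(x)))               # the voted output matches the fault-free value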
You may like...
Control Systems in Engineering and… by P. Balasubramaniam, Sathiyaraj Thambiayya, … (Hardcover) R3,081 / Discovery Miles 30 810
The Electrostatic Accelerator - A… by Ragnar Hellborg, Harry J. Whitlow (Paperback) R754 / Discovery Miles 7 540
Power-to-Gas: Bridging the Electricity… by Mohammad Amin Mirzaei, Mahdi Habibi, … (Paperback) R3,213 / Discovery Miles 32 130
Smart Sensors and MEMS - Intelligent… by S. Nihtianov, A. Luque (Paperback)
Building Services Engineering for… by Peter Tanner, Stephen Jones, … (Paperback) R1,352 / Discovery Miles 13 520
Practical Grounding, Bonding, Shielding… by G. Vijayaraghavan, Mark Brown, … (Paperback) R1,427 / Discovery Miles 14 270