Basic Concepts in Information Theory and Coding is an outgrowth of a one semester introductory course that has been taught at the University of Southern California since the mid-1960s. Lecture notes from that course have evolved in response to student reaction, new technological and theoretical developments, and the insights of faculty members who have taught the course (including the three of us). In presenting this material, we have made it accessible to a broad audience by limiting prerequisites to basic calculus and the elementary concepts of discrete probability theory. To keep the material suitable for a one-semester course, we have limited its scope to discrete information theory and a general discussion of coding theory without detailed treatment of algorithms for encoding and decoding for various specific code classes. Readers will find that this book offers an unusually thorough treatment of noiseless self-synchronizing codes, as well as the advantage of problem sections that have been honed by reactions and interactions of several generations of bright students, while Agent 00111 provides a context for the discussion of abstract concepts.
Information systems for very large applications present problems of scale which call for particular software design techniques. The system used by BT for its customer services can serve as a paradigm for any organization operating with a large and complex client base. This book covers some of the more important systems currently deployed by BT to manage its multi-million customer network, the architecture that guides these systems, the evolving technology from which they are built, and the future directions in their evolution. Computing Systems for Global Telecommunications is essential reading for software engineers working on all types of large Operational Support Systems; systems designers working for telecommunications providers; and advanced undergraduate and postgraduate students and researchers studying software engineering.
Since the publication of the first edition of Fundamentals of Digital Switching in 1983, there has been substantial improvement in digital switching technology and in digital networks. Packet switching has advanced from a low-speed data-oriented switching approach into a robust broadband technology which supports services ranging from low-speed data to video. This technology has eclipsed the flexibility of circuit switching. Fiber optic cable has advanced since the first edition and has substantially changed the technology of transmission. This success has led to research in optical devices to find a still better means of switching. Digital switching systems continue to benefit from the 100-fold improvement in the capabilities of semiconductor devices which has occurred during the past decade. The chip industry forecasts a similar escalation in complexity during the next 10 years. Networks of switching systems have changed due to regulatory policy reform in many nations, including the breakup of the Bell System in the United States, the introduction of new types of carriers in Japan, competition in the United Kingdom, and a reexamination of public policy in virtually all nations. Standards bodies have been productive in specifying new capabilities for future networks involving interactive and distributive services through STM and ATM technologies.
Source coding theory has as its goal the characterization of the optimal performance achievable in idealized communication systems which must code an information source for transmission over a digital communication or storage channel for transmission to a user. The user must decode the information into a form that is a good approximation to the original. A code is optimal within some class if it achieves the best possible fidelity given whatever constraints are imposed on the code by the available channel. In theory, the primary constraint imposed on a code by the channel is its rate or resolution, the number of bits per second or per input symbol that it can transmit from sender to receiver. In the real world, complexity may be as important as rate. The origins and the basic form of much of the theory date from Shannon's classical development of noiseless source coding and source coding subject to a fidelity criterion (also called rate-distortion theory) [73] [74]. Shannon combined a probabilistic notion of information with limit theorems from ergodic theory and a random coding technique to describe the optimal performance of systems with a constrained rate but with unconstrained complexity and delay. An alternative approach called asymptotic or high rate quantization theory based on different techniques and approximations was introduced by Bennett at approximately the same time [4]. This approach constrained the delay but allowed the rate to grow large.
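The rate-versus-fidelity trade-off described above can be made concrete with a small experiment: quantize a Gaussian source at several rates (bits per symbol) and measure the resulting mean squared error. This is an illustrative sketch, not code from the book; the uniform quantizer, its range, and the Gaussian source are our own choices.

```python
import numpy as np

rng = np.random.default_rng(0)
source = rng.normal(size=100_000)  # memoryless Gaussian source

def uniform_quantize(x, rate_bits, lo=-4.0, hi=4.0):
    """Quantize x to 2**rate_bits levels over [lo, hi], reconstructing at bin centers."""
    levels = 2 ** rate_bits
    step = (hi - lo) / levels
    idx = np.clip(np.floor((x - lo) / step), 0, levels - 1)
    return lo + (idx + 0.5) * step

# Distortion (MSE) falls as the rate constraint is relaxed
for rate in (1, 2, 4, 8):
    mse = np.mean((source - uniform_quantize(source, rate)) ** 2)
    print(f"rate = {rate} bits/symbol, distortion (MSE) = {mse:.6f}")
```

The printed distortions shrink roughly by a factor of four per added bit at high rates, the behavior that Bennett-style high-rate quantization theory predicts.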
Extremum-seeking control tracks a varying maximum or minimum in a performance function such as output or cost. It attempts to determine the optimal performance of a control system as it operates, thereby reducing downtime and the need for system analysis. Extremum-seeking Control and Applications is divided into two parts. In the first, the authors review existing analog-optimization-based extremum-seeking control including gradient-, perturbation- and sliding-mode-based control designs. They then propose a novel numerical-optimization-based extremum-seeking control based on optimization algorithms and state regulation. This control design is developed for simple linear time-invariant systems and then extended for a class of feedback linearizable nonlinear systems. The two main optimization algorithms - line search and trust region methods - are analyzed for robustness. Finite-time and asymptotic state regulators are put forward for linear and nonlinear systems respectively. Further design flexibility is achieved using the robustness results of the optimization algorithms and the asymptotic state regulator by which existing nonlinear adaptive control techniques can be introduced for robust design. The approach used is easier to implement and tends to be more robust than those that use perturbation-based extremum-seeking control. The second part of the book deals with a variety of applications of extremum-seeking control: a comparative study of extremum-seeking control schemes in antilock braking system design; source seeking, formation control, collision and obstacle avoidance for groups of autonomous agents; mobile radar networks; and impedance matching. MATLAB (R)/Simulink (R) code which can be downloaded from www.springer.com/ISBN helps readers to reproduce the results presented in the text and gives them a head start for implementing the algorithms in their own applications. 
Extremum-seeking Control and Applications will interest academics and graduate students working in control, and industrial practitioners from a variety of backgrounds: systems, automotive, aerospace, communications, semiconductor and chemical engineering.
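As a rough illustration of the classical perturbation-based scheme that the first part of the book reviews, the sketch below seeks the maximum of an unknown performance map by injecting a sinusoidal dither, removing the DC component of the measurement with a washout filter, and demodulating to estimate the local gradient. The map `J`, the gains, and the dither parameters are hypothetical choices for illustration, not taken from the book.

```python
import math

def J(theta):
    # unknown performance map (illustrative); maximum at theta = 2
    return 5.0 - (theta - 2.0) ** 2

theta_hat = 0.0                        # current parameter estimate
a, omega, k, h = 0.2, 1.0, 0.5, 0.5    # dither amplitude/frequency, adaptation and washout gains
dt, eta = 0.01, 0.0                    # time step; low-pass (DC) estimate of y

for n in range(200_000):               # 2000 s of simulated time
    t = n * dt
    y = J(theta_hat + a * math.sin(omega * t))   # measured performance with dither
    eta += h * (y - eta) * dt                    # washout filter: track and remove DC of y
    # the demodulated AC component is proportional to the local gradient of J
    theta_hat += k * (y - eta) * math.sin(omega * t) * dt

print(theta_hat)   # settles near the optimum theta = 2
```

Averaging the update over one dither period shows it follows the gradient of `J` scaled by `k*a/2`, which is why the estimate climbs to the maximizer without any explicit model of the map.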
IT is changing everyday life, especially in education and medicine. The goal of ITME 2014 is to further explore the theoretical and practical issues of Ubiquitous Computing Applications and Wireless Sensor Networks. It also aims to foster new ideas and collaboration between researchers and practitioners. The organizing committee is soliciting unpublished papers for the main conference and its special tracks.
It is important to understand what came before and how to meld new products with legacy systems. Network managers need to understand the context and origins of the systems they are using. Programmers need an understanding of the reasons behind the interfaces they must satisfy and the relationship of the software they build to the whole network. And finally, sales representatives need to see the context into which their products must fit.
Online fault diagnosis is crucial to ensure safe operation of complex dynamic systems in spite of faults affecting the system behaviors. Consequences of the occurrence of faults can be severe and result in human casualties, environmentally harmful emissions, high repair costs, and economical losses caused by unexpected stops in production lines. The majority of real systems are hybrid dynamic systems (HDS). In HDS, the dynamical behaviors evolve continuously with time according to the discrete mode (configuration) in which the system is. Consequently, fault diagnosis approaches must take into account both discrete and continuous dynamics as well as the interactions between them in order to perform correct fault diagnosis. This book presents recent and advanced approaches and techniques that address the complex problem of fault diagnosis of hybrid dynamic and complex systems using different model-based and data-driven approaches in different application domains (inductor motors, chemical process formed by tanks, reactors and valves, ignition engine, sewer networks, mobile robots, planetary rover prototype etc.). These approaches cover the different aspects of performing single/multiple online/offline parametric/discrete abrupt/tear and wear fault diagnosis in incremental/non-incremental manner, using different modeling tools (hybrid automata, hybrid Petri nets, hybrid bond graphs, extended Kalman filter etc.) for different classes of hybrid dynamic and complex systems.
The book presents theory and algorithms for secure networked inference in the presence of Byzantines. It derives fundamental limits of networked inference in the presence of Byzantine data and designs robust strategies to ensure reliable performance for several practical network architectures. In particular, it addresses inference (or learning) processes such as detection, estimation or classification, and parallel, hierarchical, and fully decentralized (peer-to-peer) system architectures. Furthermore, it discusses a number of new directions and heuristics to tackle the problem of design complexity in these practical network architectures for inference.
This book presents the proceedings of the 3rd Brazilian Technology Symposium (BTSym), which is a multi/trans/interdisciplinary event offering an excellent forum for presentations and discussions of the latest scientific and technological developments in various areas of research, with an emphasis on smart design and future technologies. It brings together researchers, students and professionals from the industrial and academic sectors to discuss current technological issues. Among the main topics covered in this book, we can highlight Artificial Neural Networks, Computational Vision, Security Applications, Web Tool, Cloud Environment, Network Functions Virtualization, Software-Defined Networks, IoT, Residential Automation, Data Acquisition, Industry 4.0, Cyber-Physical Systems, Digital Image Processing, Infrared Images, Pattern Recognition, Digital Video Processing, Precoding, Embedded Systems, Machine Learning, Remote Sensing, Wireless Sensor Network, Heterogeneous Networks, Unmanned Ground Vehicle, Unmanned Aerial System, Security, Surveillance, Traffic Analysis, Digital Television, 5G, Image Filter, Partial Differential Equation, Smoothing Filters, Voltage Controlled Ring Oscillator, Difference Amplifier, Photocatalysis, Photodegradation, Cosmic Radiation Effects, Radiation Hardening Techniques, Surface Electromyography, Sickle cell disease methodology, MicroRNAs, Image Processing Venipuncture, Cognitive Ergonomics, Ecosystem services, Environmental, Power Generation, Ecosystem services valuation, Solid Waste and University Extension.
This book focuses on the theory and application of interdependent networks. The contributors consider the influential networks including power and energy networks, transportation networks, and social networks. The first part of the book provides the next generation sustainability framework as well as a comprehensive introduction of smart cities with special emphasis on energy, communication, data analytics and transportation. The second part offers solutions to performance and security challenges of developing interdependent networks in terms of networked control systems, scalable computation platforms, and dynamic social networks. The third part examines the role of electric vehicles in the future of sustainable interdependent networks. The fourth and last part of this volume addresses the promises of control and management techniques for the future power grids.
Although adaptive filtering and adaptive array processing began with research and development efforts in the late 1950's and early 1960's, it was not until the publication of the pioneering books by Honig and Messerschmitt in 1984 and Widrow and Stearns in 1985 that the field of adaptive signal processing began to emerge as a distinct discipline in its own right. Since 1984 many new books have been published on adaptive signal processing, which serve to define what we will refer to throughout this book as conventional adaptive signal processing. These books deal primarily with basic architectures and algorithms for adaptive filtering and adaptive array processing, with many of them emphasizing practical applications. Most of the existing textbooks on adaptive signal processing focus on finite impulse response (FIR) filter structures that are trained with strategies based on steepest descent optimization, or more precisely, the least mean square (LMS) approximation to steepest descent. While literally hundreds of archival research papers have been published that deal with more advanced adaptive filtering concepts, none of the current books attempt to treat these advanced concepts in a unified framework. The goal of this new book is to present a number of important, but not so well known, topics that currently exist scattered in the research literature. The book also documents some new results that have been conceived and developed through research conducted at the University of Illinois during the past five years.
This book presents exciting recent research on the compression of images and text. Part 1 presents the (lossy) image compression techniques of vector quantization, iterated transforms (fractal compression), and techniques that employ optical hardware. Part 2 presents the (lossless) text compression techniques of arithmetic coding, context modeling, and dictionary methods (LZ methods); this part of the book also addresses practical massively parallel architectures for text compression. Part 3 presents theoretical work in coding theory that has applications to both text and image compression. The book ends with an extensive bibliography of data compression papers and books which can serve as a valuable aid to researchers in the field. Points of Interest: Data compression is becoming a key factor in the digital storage of text, speech, graphics, images, and video; digital communications; databases; and supercomputing. The book addresses 'hot' data compression topics such as vector quantization, fractal compression, optical data compression hardware, massively parallel hardware, LZ methods, and arithmetic coding. Contributors are all accomplished researchers. Extensive bibliography to aid researchers in the field.
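To give a flavor of the dictionary (LZ) methods covered in Part 2, here is a minimal LZ78-style encoder and decoder. It is a teaching sketch, not code from the book: the encoder emits (prefix-index, next-character) pairs while growing a phrase dictionary, and the decoder rebuilds the phrases in the same order.

```python
def lz78_encode(text):
    """Encode text as a list of (prefix_index, next_char) pairs (LZ78 style)."""
    dictionary = {"": 0}          # phrase -> index; index 0 is the empty phrase
    out, phrase = [], ""
    for ch in text:
        if phrase + ch in dictionary:
            phrase += ch          # keep extending the longest known phrase
        else:
            out.append((dictionary[phrase], ch))
            dictionary[phrase + ch] = len(dictionary)
            phrase = ""
    if phrase:                    # flush a final, already-known phrase
        out.append((dictionary[phrase[:-1]], phrase[-1]))
    return out

def lz78_decode(pairs):
    """Rebuild the text by re-deriving the phrase dictionary."""
    phrases = [""]
    for idx, ch in pairs:
        phrases.append(phrases[idx] + ch)
    return "".join(phrases[1:])

msg = "abababababa"
pairs = lz78_encode(msg)
print(pairs)                      # repeated "ab" patterns share dictionary entries
assert lz78_decode(pairs) == msg
```

On repetitive input the pair list grows much more slowly than the text, which is the source of the compression; production LZ variants add entropy coding of the pairs on top.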
This book emphasizes the increasingly important role that Computational Intelligence (CI) methods are playing in solving a myriad of entangled Wireless Sensor Networks (WSN) related problems. The book serves as a guide for surveying several state-of-the-art WSN scenarios in which CI approaches have been employed. The reader finds in this book how CI has contributed to solve a wide range of challenging problems, ranging from balancing the cost and accuracy of heterogeneous sensor deployments to recovering from real-time sensor failures to detecting attacks launched by malicious sensor nodes and enacting CI-based security schemes. Network managers, industry experts, academicians and practitioners alike (mostly in computer engineering, computer science or applied mathematics) benefit from the spectrum of successful applications reported in this book. Senior undergraduate or graduate students may discover in this book some problems well suited for their own research endeavors.
Optical networks epitomize complex communication systems, and they comprise the Internet's infrastructural backbone. The first of its kind, this book develops the mathematical framework needed from a control perspective to tackle various game-theoretical problems in optical networks. In doing so, it aims to help design control algorithms that optimally allocate the resources of these networks. With its fresh problem-solving approach, Game Theory in Optical Networks is a unique resource for researchers, practitioners, and graduate students in applied mathematics and systems/control engineering, as well as those in electrical and computer engineering.
This book provides a novel method for topic detection and classification in social networks. The book addresses several research and technical challenges that are currently being investigated by the research community, from the analysis of relations and communications between members of a community, to quality, authority, relevance and timeliness of the content, traffic prediction based on media consumption, spam detection, to security, privacy and protection of personal information. Furthermore, the book discusses innovative techniques to address those challenges and provides novel solutions based on information theory, sequence analysis and combinatorics, which are applied on real data obtained from Twitter.
Due to the progress in VLSI technology, integrated circuit chips are now available that allow video/image signal processing to be performed with a single VLSI chip or small sets of VLSI chips. Recent standardization on bandwidth compression schemes for still images (JPEG) and motion pictures (H.261, R723, MPEG) also encourages the development of VLSI video/image processors for cost-effective solutions. Furthermore, recent trends suggest that the standardization on HDTV bandwidth compression for broadcasting and storage purposes is just around the corner. In terms of device technology, however, the progress achieved in increasing speed is not as high as that achieved by integration. The development of high speed systems is due to architectural effort, rather than device technology. This is why high speed architectures, such as those for special wired logic realization and for multi-processors, are of great interest to VLSI system designers. VLSI Video/Image Signal Processing is an edited volume of original research comprising invited contributions by leading researchers.
The book presents a collection of peer-reviewed articles from the 11th KES International Conference on Intelligent Decision Technologies (KES-IDT-19), held in Malta on 17-19 June 2019. The conference provided opportunities for the presentation and discussion of new research results, and for the generation of new ideas in the field of intelligent decision making. The range of topics explored is wide, and covers methods of classification, prediction, data analysis, decision support, modelling and many more in such areas as finance, cybersecurity, economy, health, management and transportation. The topics also cover problems of data science, signal processing and knowledge engineering.
Current research fields in science and technology were presented and discussed at EKC2008, providing insight into the interests and directions of scientists and engineers in EU countries and Korea. The conference emerged from the idea of bringing together the EU and Korea to get to know each other better, especially in the fields of science and technology. The focus of the conference is on the following topics: Computational Fluid Dynamics, Mechatronics and Mechanical Engineering, Information and Communications Technology, Life and Natural Sciences, and Energy and Environmental Technology.
The existence of electrical noise is basically due to the fact that electrical charge is not continuous but is carried in discrete amounts equal to the electron charge. Electrical noise represents a fundamental limit on the performance of electronic circuits and systems. With the explosive growth in the personal mobile communications market, the need for noise analysis/simulation techniques for nonlinear electronic circuits and systems has been re-emphasized. Even though most of the signal processing is done in the digital domain, every wireless communication device has an analog front-end which is usually the bottleneck in the design of the whole system. The requirements for low-power operation and higher levels of integration create new challenges in the design of the analog signal processing subsystems of these mobile communication devices. The effect of noise on the performance of these inherently nonlinear analog circuits is becoming more and more significant. Analysis and Simulation of Noise in Nonlinear Electronic Circuits and Systems presents analysis, simulation and characterization techniques and behavioral models for noise in nonlinear electronic circuits and systems, along with practical examples. This book treats the problem within the framework of, and using techniques from, the probabilistic theory of stochastic processes and stochastic differential systems. Analysis and Simulation of Noise in Nonlinear Electronic Circuits and Systems will be of interest to RF/analog designers as well as engineers interested in stochastic modeling and simulation.
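The opening observation above, that noise exists because charge arrives in discrete electron-sized units, can be checked numerically: modeling the number of electrons crossing in each observation window as a Poisson random variable reproduces Schottky's shot-noise variance 2qI·B. This is an illustrative sketch of that single idea, not of the book's stochastic-differential-equation machinery; the current level and window length are arbitrary choices.

```python
import numpy as np

q = 1.602e-19            # electron charge (C)
I_dc = 1e-6              # mean DC current, 1 uA
dt = 1e-9                # observation window per sample (1 ns)
n_mean = I_dc * dt / q   # mean electrons per window (~6240)

rng = np.random.default_rng(1)
counts = rng.poisson(n_mean, size=1_000_000)   # discrete electron arrivals
current = counts * q / dt                      # measured current per window

# Schottky's formula: var = 2*q*I_dc*B with bandwidth B = 1/(2*dt)
predicted_var = 2 * q * I_dc / (2 * dt)
print(current.var(), predicted_var)            # the two agree closely
```

The agreement follows because a Poisson count has variance equal to its mean, so the current variance is (q/dt)·I_dc, exactly the Schottky prediction at bandwidth 1/(2·dt).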
Asynchronous Transfer Mode (ATM) networks are widely considered to be the new generation of high speed communication systems both for broadband public information highways and for local and wide area private networks. ATM is designed to integrate existing and future voice, audio, image and data services. Moreover, ATM aims to simplify the complexity of switching and buffer management, to optimise intermediate node processing and buffering and to limit transmission delays. However, to support such diverse services on one integrated communication network, it is most essential, through careful engineering, to achieve a fruitful balance amongst the conflicting requirements of different quality of service constraints ensuring that one service does not have adverse implications on another. Over recent years there has been a great deal of progress in research and development of ATM technology, but there are still many interesting and important problems to be resolved such as traffic characterisation and control, routing and optimisation, ATM switching techniques and the provision of quality of service. This book presents thirty-two research papers, both from industry and academia, reflecting latest original achievements in the theory and practice of performance modelling of ATM networks worldwide. These papers were selected, subject to peer review, from those submitted as extended and revised versions out of fifty-nine shorter papers presented at the Second IFIP Workshop on "Performance Modelling and Evaluation of ATM Networks" July 4-7, 1994, Bradford University. At least three referees from the scientific committee and externally were involved in the selection of each paper.
This book focuses on the design and testing of large-scale, distributed signal processing systems, with a special emphasis on systems architecture, tooling and best practices. Architecture modeling, model checking, model-based evaluation and model-based design optimization occupy central roles. Target systems with resource constraints on processing, communication or energy supply require non-trivial methodologies to model their non-functional requirements, such as timeliness, robustness, lifetime and "evolution" capacity. Besides the theoretical foundations of the methodology, an engineering process and toolchain are described. Real-world cases illustrate the theory and practice tested by the authors in the course of the European project ARTEMIS DEMANES. The book can be used as a "cookbook" for designers and practitioners working with complex embedded systems like sensor networks for the structural integrity monitoring of steel bridges, and distributed micro-climate control systems for greenhouses and smart homes.
This book offers a comprehensive report on the technological aspects of Mobile Health (mHealth) and discusses the main challenges and future directions in the field. It is divided into eight parts: (1) preventive and curative medicine; (2) remote health monitoring; (3) interoperability; (4) framework, architecture, and software/hardware systems; (5) cloud applications; (6) radio technologies and applications; (7) communication networks and systems; and (8) security and privacy mechanisms. The first two parts cover sensor-based and bedside systems for remotely monitoring patients' health condition, which aim at preventing the development of health problems and managing the prognosis of acute and chronic diseases. The related chapters discuss how new sensing and wireless technologies can offer accurate and cost-effective means for monitoring and evaluating behavior of individuals with dementia and psychiatric disorders, such as wandering behavior and sleep impairments. The following two parts focus on architectures and higher level systems, and on the challenges associated with their interoperability and scalability, two important aspects that stand in the way of the widespread deployment of mHealth systems. The remaining parts focus on telecommunication support systems for mHealth, including radio technologies, communication and cloud networks, and secure health-related applications and systems. All in all, the book offers a snapshot of the state-of-art in mHealth systems, and addresses the needs of a multidisciplinary audience, including engineers, computer scientists, healthcare providers, and medical professionals, working in both academia and the industry, as well as stakeholders at government agencies and non-profit organizations.
Synthesis and Optimization of DSP Algorithms describes approaches taken to synthesising structural hardware descriptions of digital circuits from high-level descriptions of Digital Signal Processing (DSP) algorithms. The book contains a tutorial on the subjects of digital design and architectural synthesis, intended for DSP engineers.
This book is an introduction to the mathematical description of information in science and engineering. The necessary mathematical theory will be treated in a more vivid way than in the usual theoretical proof structure. This enables the reader to develop an idea of the connections between different information measures and to understand the trains of thought in their derivation. As there exist a great number of different possible ways to describe information, these measures are presented in a coherent manner. Some examples of the information measures examined are: Shannon information, applied in coding theory; Akaike information criterion, used in system identification to determine auto-regressive models and in neural networks to identify the number of neurons; and Cramer-Rao bound or Fisher information, describing the minimal variances achieved by unbiased estimators.
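Of the information measures listed, Shannon information (entropy) is the easiest to make concrete. A minimal sketch, using the standard definition H = -Σ p·log2(p) rather than anything specific to this book:

```python
import math

def shannon_entropy(probs):
    """Shannon entropy in bits of a discrete probability distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

print(shannon_entropy([0.5, 0.5]))    # fair coin: 1 bit of uncertainty
print(shannon_entropy([1.0]))         # certain outcome: 0 bits
print(shannon_entropy([0.25] * 4))    # uniform over 4 symbols: 2 bits
```

The values match the coding-theory reading of entropy: the average number of bits an optimal code needs per symbol drawn from the distribution.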