In a bold attempt to redirect the ways theories of communication are conceived and research on communication processes is conducted, this volume questions prevailing communication scholarship that emphasizes the cultural, psychological, and sociological variables that impact on, and/or are impacted by, communication. Instead of focusing on the "consequences" of communication, this book urges readers to examine the "consequentiality" of communication -- what it is about the communication process that enables it to play a defining role in our lives. Communication is not a neutral conveyor of meanings derived from culture, cognition, or social structure, and is not explained by correlations with external variables. Meaning emerges from the communication process itself; it is dependent upon what transpires during the real-time moments of communicators behaving with each other. To properly study this new paradigm, a new vocabulary for thinking about the consequentiality of communication is needed and proposed.
Whereas many organizational communication texts address internal
communication processes, few consider the efforts that companies
expend to communicate with external stakeholders. Likewise, many
texts that concentrate on public relations or advertising consider
external communication, but fail to give attention to internal
communication. Combining both points of view, this text explains
how an entire organization operates through enactments of personnel
and external stakeholders.
Social scientists often dismiss the media as untrustworthy and irresponsible and the media frequently regard social scientists as incapable of giving a straight answer. The contributors to this volume complain of having been misrepresented, misquoted and edited out of all recognition. That this clash of cultures should occur is not surprising given the different priorities and perspectives of the social sciences and the media. This work examines these issues from the viewpoint of the media and social scientists who have had extensive media contact. The academics contributing to this book have conducted research on a diverse range of topics including: education, stress, football hooliganism, intelligence, risk factors for illness, drug use, performance appraisal in universities, politics, sex, religion, pornography, female sexuality, terrorism, youth culture and media studies. There are also chapters from well-known media practitioners in radio, television and newspapers. Based on the contributions, the editors offer practical suggestions for social scientists to help them work more effectively with the media and thereby reach a wider audience.
"Information Theory and Statistical Learning" presents theoretical and practical results about information theoretic methods used in the context of statistical learning. The book will present a comprehensive overview of the large range of different methods that have been developed in a multitude of contexts. Each chapter is written by an expert in the field. The book is intended for an interdisciplinary readership working in machine learning, applied statistics, artificial intelligence, biostatistics, computational biology, bioinformatics, web mining or related disciplines. Advance Praise for "Information Theory and Statistical Learning" "A new epoch has arrived for information sciences to integrate various disciplines such as information theory, machine learning, statistical inference, data mining, model selection etc. I am enthusiastic about recommending the present book to researchers and students, because it summarizes most of these new emerging subjects and methods, which are otherwise scattered in many places." Shun-ichi Amari, RIKEN Brain Science Institute, Professor-Emeritus at the University of Tokyo
The geometry of curves has fascinated mathematicians for 2500 years, and the theory has become highly abstract. Recently, links have been made with the subject of error correction, leading to the creation of geometric Goppa codes, a new and important area of coding theory. This book is an updated and extended version of the last part of the successful book Error-Correcting Codes and Finite Fields. It provides an elementary introduction to Goppa codes, and includes many examples, calculations, and applications. The book is in two parts with an emphasis on motivation, and applications of the theory take precedence over proofs of theorems. The formal theory is, however, provided in the second part of the book, and several of the concepts and proofs have been simplified without sacrificing rigour.
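For a first concrete example before the formal theory, the toy sketch below (our illustration; the parameters are arbitrary) shows the simplest evaluation code - a Reed-Solomon code on the line, which is the genus-zero special case of a geometric Goppa code:

```python
# A toy evaluation code over GF(7): encode a message polynomial by
# evaluating it at n distinct field points (a Reed-Solomon code).
p, n, k = 7, 6, 2                       # field size, code length, dimension
points = list(range(1, n + 1))          # distinct evaluation points in GF(7)

def encode(msg):
    # Codeword = evaluations of the degree-<k message polynomial.
    return [sum(c * x**i for i, c in enumerate(msg)) % p for x in points]

# Two distinct degree-<k polynomials agree on at most k-1 points, so any
# two codewords differ in at least n-k+1 = 5 positions (Singleton bound).
c1, c2 = encode([3, 1]), encode([2, 5])
print(c1, c2, "distance:", sum(a != b for a, b in zip(c1, c2)))
```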
This book provides a much-needed short, reliable and stimulating guide to the mass media in present-day society. Incisive and surprising, it will become an essential text in thinking and writing about the mass media.
Recently, much attention has been paid to image processing with multiresolution and hierarchical structures such as pyramids and trees. This volume deals with recursive pyramids, which combine the advantages of available multiresolution structures and which are convenient both for global and local image processing. Recursive pyramids are based on regular hierarchical (recursive) structures containing data on image fragments of different sizes. Such an image representation technique enables the effective manipulation of pictorial information as well as the development of special hardware or data structures. The major aspects of this book are two original mathematical models of greyscale and binary images represented by recursive structures. Image compression, transmission and processing are discussed using these models. A number of applications are presented, including optical character recognition, expert systems and special computer architecture for pictorial data processing. The majority of results are presented as algorithms applicable to discrete information fields of arbitrary dimensions (e.g. 2-D or 3-D images). The book is divided into six chapters: Chapter 1 provides a brief introduction. Chapter 2 then deals with recursive structures and their properties. Chapter 3 introduces pyramidal image models. Image coding and the progressive transmission of images with gradual refinement are discussed in Chapter 4. Chapters 5 and 6 are devoted to image processing with pyramidal-recursive structures and applications. The volume concludes with a comprehensive bibliography. This work should interest applied mathematicians and computer scientists whose work involves computer vision, information theory and other aspects of image representation techniques.
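To make the multiresolution idea tangible, here is a minimal NumPy sketch (ours; the book's recursive structures are considerably more elaborate) that builds a pyramid by repeated 2x2 averaging:

```python
# Build an image pyramid: each level stores the image at half resolution,
# obtained by averaging disjoint 2x2 blocks of the level below.
import numpy as np

def pyramid(img):
    levels = [img]
    while levels[-1].shape[0] > 1 and levels[-1].shape[1] > 1:
        a = levels[-1]
        a = a[: a.shape[0] // 2 * 2, : a.shape[1] // 2 * 2]  # crop to even size
        coarser = a.reshape(a.shape[0] // 2, 2,
                            a.shape[1] // 2, 2).mean(axis=(1, 3))
        levels.append(coarser)
    return levels

img = np.arange(64, dtype=float).reshape(8, 8)   # toy 8x8 "image"
for lvl in pyramid(img):
    print(lvl.shape)                              # (8,8), (4,4), (2,2), (1,1)
```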
Administrators of academic professional and technical communication (PTSC) programs have long relied upon lore--stories of what works--to understand and communicate about the work of program administration. Stories are interesting, telling, engaging, and necessary. But a discipline focused primarily on stories, especially the ephemeral stories narrated at conferences and deliberated at department meetings, usually suffices only to solve immediate problems and address day-to-day concerns and activities. This edited collection captures some of those stories and layers them with theoretical perspectives and reflection, to enhance their usefulness to the PTSC program administration community at large. Like the ephemeral stories PTSC program administrators are accustomed to, the stories told in this volume are set within specific institutional contexts that reflect specific institutional challenges. They emphasize the intellectual traces--the debts the authors owe to those who have informed and transformed their administrative work. In so doing, this collection creates another conversation--albeit a robust, diverse, and theoretically informed one--around which program leaders might define or redefine their roles and re-envision their administrative work as the rich, complex, intellectual engagement that we find it to be. This volume asks authors to move beyond a notion of administration as an activity based solely in institutional details and processes. In so doing, they emphasize theory as they share their reflections on core administrative processes and significant moments in the histories of their associated programs, thereby affording opportunities for critical examination in conjunction with practical advice.
This book introduces the reader to Serres' unique manner of 'doing philosophy' that can be traced throughout his entire oeuvre: namely as a novel manner of bearing witness. It explores how Serres takes note of a range of epistemologically unsettling situations, which he understands as arising from the short-circuit of a proprietary notion of capital with a praxis of science that commits itself to a form of reasoning which privileges the most direct path (simple method) in order to expend minimal efforts while pursuing maximal efficiency. In Serres' universal economy, value is considered as a function of rarity, not as a stock of resources. This book demonstrates how Michel Serres has developed an architectonics that is coefficient with nature. Mathematics and Information in the Philosophy of Michel Serres acquaints the reader with Serres' monist manner of addressing the universality and the power of knowledge - that is at once also the anonymous and empty faculty of incandescent, inventive thought. The chapters of the book demarcate, problematize and contextualize some of the epistemologically unsettling situations Serres addresses, whilst also examining the particular manner in which he responds to and converses with these situations.
This largely self-contained book on the theory of quantum information focuses on precise mathematical formulations and proofs of fundamental facts that form the foundation of the subject. It is intended for graduate students and researchers in mathematics, computer science, and theoretical physics seeking to develop a thorough understanding of key results, proof techniques, and methodologies that are relevant to a wide range of research topics within the theory of quantum information and computation. The book is accessible to readers with an understanding of basic mathematics, including linear algebra, mathematical analysis, and probability theory. An introductory chapter summarizes these necessary mathematical prerequisites, and starting from this foundation, the book includes clear and complete proofs of all results it presents. Each subsequent chapter includes challenging exercises intended to help readers to develop their own skills for discovering proofs concerning the theory of quantum information.
This book gives the definitive mathematical answer to what thermodynamics really is: a variational calculus applied to probability distributions. Extending Gibbs's notion of ensemble, the author imagines the ensemble of all possible probability distributions and assigns probabilities to them by selection rules that are fairly general. The calculus of the most probable distribution in the ensemble produces the entire network of mathematical relationships we recognize as thermodynamics. The first part of the book develops the theory for discrete and continuous distributions while the second part applies this thermodynamic calculus to problems in population balance theory and shows how the emergence of a giant component in aggregation, and the shattering transition in fragmentation, may be treated as formal phase transitions. While the book is intended as a research monograph, the material is self-contained and the style sufficiently tutorial to be accessible for self-paced study by an advanced graduate student in such fields as physics, chemistry, and engineering.
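The flavour of this variational calculus shows in its most familiar special case: maximizing entropy over distributions subject to a mean-energy constraint, which (a standard calculation, sketched here in our notation rather than the book's) already yields the Gibbs distribution:

```latex
\max_{p}\; -\sum_i p_i \ln p_i
\quad \text{subject to} \quad \sum_i p_i = 1, \;\; \sum_i p_i E_i = \bar{E}
\;\;\Longrightarrow\;\;
p_i = \frac{e^{-\beta E_i}}{Z}, \qquad Z = \sum_i e^{-\beta E_i},
```

with $\beta$ the Lagrange multiplier enforcing the energy constraint.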
This book focuses on utilizing statistical modelling of software source code to resolve issues associated with software development processes. Writing and maintaining software source code is a costly business; software developers need to constantly rely on large existing code bases. Statistical modelling identifies the patterns in software artifacts and utilizes them to predict possible issues.
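As a minimal sketch of the idea (ours, not the book's method in detail): an n-gram model trained on token sequences can flag code whose token patterns are statistically unusual:

```python
# Bigram model over code tokens: score how "surprising" a token sequence is.
# High average surprisal suggests unusual (possibly problematic) code.
from collections import Counter
import math

corpus = "if x is None : return 0 if y is None : return 1".split()
bigrams = Counter(zip(corpus, corpus[1:]))
unigrams = Counter(corpus)

def surprisal(tokens, alpha=1.0):
    # Average negative log-probability with add-alpha smoothing.
    v = len(unigrams)
    s = 0.0
    for a, b in zip(tokens, tokens[1:]):
        s -= math.log((bigrams[(a, b)] + alpha) / (unigrams[a] + alpha * v))
    return s / max(len(tokens) - 1, 1)

print(surprisal("if x is None : return 0".split()))  # familiar: low score
print(surprisal("return None if x 0 :".split()))     # unusual: higher score
```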
As a collection of ideas and methodologies, systems thinking has made an impact in organizations and in particular in the information systems field. However, this main emphasis on organizations limits the scope of systems thinking and practice. There is a need first to use systems thinking in addressing societal problems, and second to enable people involved in developing the information society to reflect on the impacts of systems and technologies in society as a whole. Thus, there are opportunities to review the scope and potential of systems thinking and practice to deal with information society-related issues. Systems Practice in the Information Society provides students of information systems as well as practicing Information Systems managers with concepts and strategies to enable them to understand and use systems thinking methodologies and address challenges posed by the development of information-based societies. This book brings together experiences, ideas, and applications of systemic thinking in designing and evaluating socio-technological initiatives. Using a number of cultural contexts, this book explores how organizations, including governments, can enable better access to information and communication technologies and improve the quality of life of individuals.
Heavy tails - extreme events or values more common than expected - emerge everywhere: the economy, natural events, and social and information networks are just a few examples. Yet after decades of progress, they are still treated as mysterious, surprising, and even controversial, primarily because the necessary mathematical models and statistical methods are not widely known. This book, for the first time, provides a rigorous introduction to heavy-tailed distributions accessible to anyone who knows elementary probability. It tackles and tames the zoo of terminology for models and properties, demystifying topics such as the generalized central limit theorem and regular variation. It tracks the natural emergence of heavy-tailed distributions from a wide variety of general processes, building intuition. And it reveals the controversy surrounding heavy tails to be the result of flawed statistics, then equips readers to identify and estimate with confidence. Over 100 exercises complete this engaging package.
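One of the standard estimation tools in this area is the Hill estimator for the tail index; here is a minimal sketch (ours, with toy Pareto data) of how it recovers the index from the largest order statistics:

```python
# Hill estimator for the tail index alpha, where P(X > x) ~ x^(-alpha).
import numpy as np

rng = np.random.default_rng(1)
alpha = 1.5
x = rng.pareto(alpha, 100_000) + 1.0     # classical Pareto(1.5) samples

def hill(data, k):
    # Reciprocal of the average log-excess over the k largest observations.
    s = np.sort(data)[::-1]              # descending order statistics
    return 1.0 / np.mean(np.log(s[:k] / s[k]))

print(hill(x, 1000))                     # should be close to 1.5
```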
In concurrent and distributed systems, processes can complete tasks together by playing their parts in a joint plan. The plan, or protocol, can be written as a choreography: a formal description of overall behaviour that processes should collaborate to implement, like authenticating a user or purchasing an item online. Formality brings clarity, but not only that: choreographies can contribute to important safety and liveness properties. This book is an ideal introduction to the theory of choreographies for students, researchers, and professionals in computer science and applied mathematics. It covers languages for writing choreographies, their semantics, and principles for implementing choreographies correctly. The text treats the study of choreographies as a discipline in its own right, following a systematic approach that starts from simple foundations and proceeds to more advanced features in incremental steps. Each chapter includes examples and exercises aimed at helping readers understand the theory and its relation to practice.
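The central implementation principle, endpoint projection, can be caricatured in a few lines; the sketch below (our own toy format, not a language from the book) projects a global protocol onto per-process send/receive programs:

```python
# A global protocol as a list of (sender, receiver, message-label) steps.
choreography = [
    ("client", "server", "login"),
    ("server", "client", "challenge"),
    ("client", "server", "response"),
    ("server", "client", "ok"),
]

def project(chor, role):
    # Endpoint projection: keep only the actions visible to `role`,
    # turned into local send/receive instructions.
    prog = []
    for src, dst, msg in chor:
        if src == role:
            prog.append(("send", dst, msg))
        elif dst == role:
            prog.append(("recv", src, msg))
    return prog

for r in ("client", "server"):
    print(r, project(choreography, r))
```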
Networks surround us, from social networks to protein-protein interaction networks within the cells of our bodies. The theory of random graphs provides a necessary framework for understanding their structure and development. This text provides an accessible introduction to this rapidly expanding subject. It covers all the basic features of random graphs - component structure, matchings and Hamilton cycles, connectivity and chromatic number - before discussing models of real-world networks, including intersection graphs, preferential attachment graphs and small-world models. Based on the authors' own teaching experience, it can be used as a textbook for a one-semester course on random graphs and networks at advanced undergraduate or graduate level. The text includes numerous exercises, with a particular focus on developing students' skills in asymptotic analysis. More challenging problems are accompanied by hints or suggestions for further reading.
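A one-file experiment makes the component-structure story concrete; the sketch below (ours) estimates the largest component of G(n, p) and shows the giant component emerging as c = np passes 1:

```python
# Largest connected component of an Erdos-Renyi graph G(n, p), via DFS.
import random

def largest_component(n, p, seed=0):
    rng = random.Random(seed)
    adj = [[] for _ in range(n)]
    for i in range(n):                       # sample each edge independently
        for j in range(i + 1, n):
            if rng.random() < p:
                adj[i].append(j)
                adj[j].append(i)
    seen, best = [False] * n, 0
    for s in range(n):
        if seen[s]:
            continue
        stack, size = [s], 0                 # iterative DFS over one component
        seen[s] = True
        while stack:
            v = stack.pop()
            size += 1
            for w in adj[v]:
                if not seen[w]:
                    seen[w] = True
                    stack.append(w)
        best = max(best, size)
    return best

n = 2000
for c in (0.5, 1.5, 3.0):                    # below, just above, well above 1
    print(c, largest_component(n, c / n))    # tiny vs. giant component
```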
This book presents a comprehensive mathematical theory that explains precisely what information flow is, how it can be assessed quantitatively - so bringing precise meaning to the intuition that certain information leaks are small enough to be tolerated - and how systems can be constructed that achieve rigorous, quantitative information-flow guarantees in those terms. It addresses the fundamental challenge that functional and practical requirements frequently conflict with the goal of preserving confidentiality, making perfect security unattainable. Topics include: a systematic presentation of how unwanted information flow, i.e., "leaks", can be quantified in operationally significant ways and then bounded, both with respect to estimated benefit for an attacking adversary and by comparisons between alternative implementations; a detailed study of capacity, refinement, and Dalenius leakage, supporting robust leakage assessments; a unification of information-theoretic channels and information-leaking sequential programs within the same framework; and a collection of case studies, showing how the theory can be applied to interesting realistic scenarios. The text is unified, self-contained and comprehensive, accessible to students and researchers with some knowledge of discrete probability and undergraduate mathematics, and contains exercises to facilitate its use as a course textbook.
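As a small taste of leakage quantification, the sketch below (our toy channel and prior, not an example taken from the book) computes multiplicative Bayes leakage, the ratio of posterior to prior vulnerability to an adversary who guesses the secret:

```python
# Multiplicative Bayes leakage of a channel on a 3-value secret (toy numbers).
import numpy as np

prior = np.array([0.5, 0.25, 0.25])      # adversary's prior over secrets
C = np.array([[1.0, 0.0],                # C[x][y] = P(observe y | secret x)
              [0.5, 0.5],
              [0.0, 1.0]])

v_prior = prior.max()                    # prior Bayes vulnerability
joint = prior[:, None] * C               # joint P(secret, observation)
v_post = joint.max(axis=0).sum()         # posterior Bayes vulnerability
print(v_post / v_prior)                  # multiplicative Bayes leakage (1.5)
```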
This book deals with the autoregressive method for digital processing of random oscillations. The method is based on a one-to-one transformation of the numeric factors of the Yule series model to linear elastic system characteristics. This parametric approach made it possible to develop a formal processing procedure that obtains estimates of the logarithmic decrement and natural frequency of random oscillations from experimental data. A straightforward mathematical description of the procedure makes it possible to optimize the discretization of oscillation realizations, providing efficient estimates. The derived analytical expressions for confidence intervals of estimates enable a priori evaluation of their accuracy. Experimental validation of the method is also provided. Statistical applications for the analysis of mechanical systems arise from the fact that the loads experienced by machinery and various structures often cannot be described by deterministic vibration theory. Therefore, a sufficient description of real oscillatory processes (vibrations) calls for the use of random functions. In engineering practice, linear vibration theory (modeling phenomena by common linear differential equations) is generally used. This theory's fundamental concepts such as natural frequency, oscillation decrement, resonance, etc. are credited for its wide use in different technical tasks. In technical applications two types of research tasks exist: direct and inverse. The former determines the stochastic characteristics of the system output X(t) resulting from a random process E(t) when the object model is considered known. The direct task makes it possible to evaluate the effect of an operational environment on the designed object and to predict its operation under various loads. The inverse task is aimed at evaluating the object model from known processes E(t) and X(t), i.e. finding the model (equation) factors. This task is usually met in the testing of prototypes, to identify (or verify) their models experimentally. To characterize random processes, the notion of a "shaping dynamic system" is commonly used. This concept allows one to consider the observed process as the output of a hypothetical system whose input is stationary Gauss-distributed ("white") noise. Therefore, the process may be exhaustively described in terms of the parameters of that system. In the case of random oscillations, the "shaping system" is an elastic system described by the common second-order differential equation $\ddot{X}(t) + 2h\dot{X}(t) + \omega_0^2 X(t) = E(t)$, where $\omega_0 = 2\pi/T_0$ is the natural frequency, $T_0$ is the oscillation period, and $h$ is a damping factor. As a result, the process X(t) can be characterized in terms of the system parameters - the natural frequency and the logarithmic oscillation decrement $\delta = hT_0$ - as well as the process variance. Evaluation of these parameters is carried out by experimental data processing based on frequency- or time-domain representations of oscillations. It must be noted that the concept behind evaluating these parameters did not change much during the last century. For instance, when the spectral density is used, evaluation of the decrement values is linked with bandwidth measurements at the half-power points of the observed oscillations. For a time-domain representation, evaluation of the decrement requires measuring covariance values delayed by a time interval divisible by $T_0$.
Both estimation procedures are derived from a continuous description of the research phenomena, so the accuracy of the estimates is linked directly to the adequacy of the discrete representation of random oscillations. This approach is similar to the concept of transforming differential equations into difference equations, with derivatives approximated by the corresponding finite differences. The resulting discrete model, being an approximation, features a methodological error which can be decreased but never eliminated. To render such a representation more accurate, it is imperative to decrease the discretization interval and to increase the realization size, raising the requirements for computing power. The spectral density and covariance function estimates comprise a non-parametric (non-formal) approach. In principle, any non-formal approach is a kind of art, i.e. the results depend on the performer's skills. Due to the interference of subjective factors in spectral or covariance estimates of random signals, the accuracy of results cannot be properly determined or justified. To avoid the abovementioned difficulties, the application of linear time-series models with well-developed procedures for parameter estimation is more advantageous. A method for the analysis of random oscillations using a parametric model that corresponds discretely (with no approximation error) to a linear elastic system is developed and presented in this book. As a result, a one-to-one transformation of the model's numerical factors to the logarithmic decrement and natural frequency of random oscillations is established. This made it possible to develop a formal processing procedure that obtains estimates of $\delta$ and $\omega_0$ from experimental data. The proposed approach allows researchers to replace traditional subjective techniques with a formal processing procedure providing efficient estimates with analytically defined statistical uncertainties.
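A minimal numerical sketch of this route (ours; the book develops the full procedure, including confidence intervals) fits an AR(2)/Yule model to a simulated random oscillation and maps the fitted coefficients back to the damping factor, natural frequency, and decrement:

```python
# Fit an AR(2) (Yule) model to a randomly excited oscillator and recover
# h, omega_0, and the logarithmic decrement delta = h*T0 from the fit.
import numpy as np

rng = np.random.default_rng(0)

# Simulate via the exact AR(2) form of the sampled damped oscillator.
dt, h, w0 = 0.01, 0.5, 2 * np.pi              # step, damping, natural freq
wd = np.sqrt(w0**2 - h**2)                     # damped frequency
a1 = 2 * np.exp(-h * dt) * np.cos(wd * dt)     # AR coefficients of the system
a2 = -np.exp(-2 * h * dt)
x = np.zeros(20_000)
for t in range(2, x.size):
    x[t] = a1 * x[t - 1] + a2 * x[t - 2] + rng.standard_normal()

# Least-squares estimate of (a1, a2); Yule-Walker would work equally well.
X = np.column_stack([x[1:-1], x[:-2]])
a1_hat, a2_hat = np.linalg.lstsq(X, x[2:], rcond=None)[0]

# Map the AR poles r*exp(+-i*theta) back to continuous-time parameters.
r = np.sqrt(-a2_hat)                           # modulus: r = exp(-h*dt)
theta = np.arccos(a1_hat / (2 * r))            # argument: theta = wd*dt
h_hat = -np.log(r) / dt
w0_hat = np.sqrt((theta / dt) ** 2 + h_hat**2)
delta_hat = h_hat * 2 * np.pi / w0_hat         # decrement, delta = h*T0
print(f"h={h_hat:.3f}, w0={w0_hat:.3f}, delta={delta_hat:.3f}")
```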
Winner of the Neumann Prize for the History of Mathematics. In their second collaboration, biographers Jimmy Soni and Rob Goodman present the story of Claude Shannon—one of the foremost intellects of the twentieth century and the architect of the Information Age, whose insights stand behind every computer built, email sent, video streamed, and webpage loaded. Claude Shannon was a groundbreaking polymath, a brilliant tinkerer, and a digital pioneer. He constructed the first wearable computer, outfoxed Vegas casinos, and built juggling robots. He also wrote the seminal text of the digital revolution, which has been called “the Magna Carta of the Information Age.” In this elegantly written, exhaustively researched biography, Soni and Goodman reveal Claude Shannon’s full story for the first time. With unique access to Shannon’s family and friends, A Mind at Play brings this singular innovator and always playful genius to life.
Discover the foundations of quantum mechanics, and explore how these principles are powering a new generation of advances in quantum engineering, in this ground-breaking undergraduate textbook. It explains physical and mathematical principles using cutting-edge electronic, optoelectronic and photonic devices, linking underlying theory with real-world applications; focuses on current technologies and avoids historic approaches, getting students quickly up-to-speed to tackle contemporary engineering challenges; provides an introduction to the foundations of quantum information, and a wealth of real-world quantum examples, including quantum well infrared photodetectors, solar cells, quantum teleportation, quantum computing, band gap engineering, quantum cascade lasers, low-dimensional materials, and van der Waals heterostructures; and includes pedagogical features such as objectives and end-of-chapter homework problems to consolidate student understanding, and solutions for instructors. Designed to inspire the development of future quantum devices and systems, this is the perfect introduction to quantum mechanics for undergraduate electrical engineers and materials scientists.
This compact course is written for the mathematically literate reader who wants to learn to analyze data in a principled fashion. The language of mathematics enables clear exposition that can go quite deep, quite quickly, and naturally supports an axiomatic and inductive approach to data analysis. Starting with a good grounding in probability, the reader moves to statistical inference via topics of great practical importance - simulation and sampling, as well as experimental design and data collection - that are typically displaced from introductory accounts. The core of the book then covers both standard methods and such advanced topics as multiple testing, meta-analysis, and causal inference.
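In the spirit of the book's simulation-first approach, here is a small sketch (ours, with toy data, not an exercise from the book) of a bootstrap confidence interval for a mean:

```python
# Percentile bootstrap confidence interval for the mean of a skewed sample.
import numpy as np

rng = np.random.default_rng(2)
data = rng.exponential(scale=2.0, size=200)    # toy skewed sample

boots = np.array([rng.choice(data, size=data.size, replace=True).mean()
                  for _ in range(5000)])       # resampled means
lo, hi = np.percentile(boots, [2.5, 97.5])
print(f"mean={data.mean():.2f}, 95% bootstrap CI=({lo:.2f}, {hi:.2f})")
```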