This book discusses machine learning and artificial intelligence (AI) for agricultural economics. It is written with a view towards bringing the benefits of advanced analytics and prognostics capabilities to small scale farmers worldwide. This volume provides data science and software engineering teams with the skills and tools to fully utilize economic models to develop the software capabilities necessary for creating lifesaving applications. The book introduces essential agricultural economic concepts from the perspective of full-scale software development with the emphasis on creating niche blue ocean products. Chapters detail several agricultural economic and AI reference architectures with a focus on data integration, algorithm development, regression, prognostics model development and mathematical optimization. Upgrading traditional AI software development paradigms to function in dynamic agricultural and economic markets, this volume will be of great use to researchers and students in agricultural economics, data science, engineering, and machine learning as well as engineers and industry professionals in the public and private sectors.
How to draw plausible conclusions from uncertain and conflicting sources of evidence is one of the major intellectual challenges of Artificial Intelligence. It is a prerequisite of the smart technology needed to help humans cope with the information explosion of the modern world. In addition, computational modelling of uncertain reasoning is a key to understanding human rationality. Previous computational accounts of uncertain reasoning have fallen into two camps: purely symbolic and numeric. This book represents a major advance by presenting a unifying framework which unites these opposing camps. The Incidence Calculus can be viewed as both a symbolic and a numeric mechanism. Numeric values are assigned indirectly to evidence via the possible worlds in which that evidence is true. This facilitates purely symbolic reasoning using the possible worlds and numeric reasoning via the probabilities of those possible worlds. Moreover, the indirect assignment solves some difficult technical problems, like the combination of dependent sources of evidence, which had defeated earlier mechanisms. Weiru Liu generalises the Incidence Calculus and then compares it to a succession of earlier computational mechanisms for uncertain reasoning: Dempster-Shafer Theory, Assumption-Based Truth Maintenance, Probabilistic Logic, Rough Sets, etc. She shows how each of them is represented and interpreted in Incidence Calculus. The consequence is a unified mechanism which includes both symbolic and numeric mechanisms as special cases. It provides a bridge between symbolic and numeric approaches, retaining the advantages of both and overcoming some of their disadvantages.
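As a rough illustration of the indirect assignment described above (a toy Python sketch with invented worlds, weights, and propositions; it is not Liu's generalised calculus), evidence can be mapped to sets of possible worlds, conjunction becomes set intersection, and numeric belief is recovered from the weights of the supporting worlds:

```python
# Toy sketch of the incidence-calculus idea: evidence is assigned a set of
# possible worlds (its "incidence"), and numeric belief is recovered from
# the probabilities of those worlds. Worlds, weights, and propositions are
# made up for illustration.

worlds = {"w1": 0.4, "w2": 0.3, "w3": 0.2, "w4": 0.1}  # world -> probability

# Incidence function: proposition -> set of worlds in which it holds.
incidence = {
    "rain": {"w1", "w2"},
    "wet_grass": {"w1", "w2", "w3"},
}

def prob(world_set):
    """Numeric degree of belief = total weight of the supporting worlds."""
    return sum(worlds[w] for w in world_set)

# Symbolic reasoning: conjunction of evidence is intersection of incidences.
both = incidence["rain"] & incidence["wet_grass"]

print(round(prob(incidence["rain"]), 2))       # 0.7
print(round(prob(incidence["wet_grass"]), 2))  # 0.9
print(round(prob(both), 2))                    # 0.7 -- dependence handled via shared worlds
```

Because the two pieces of evidence share worlds w1 and w2, their conjunction is not the product of their probabilities, which is exactly the kind of dependence the indirect assignment handles naturally.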
RDF-based knowledge graphs require additional formalisms to be fully context-aware, which is presented in this book. This book also provides a collection of provenance techniques and state-of-the-art metadata-enhanced, provenance-aware, knowledge graph-based representations across multiple application domains, in order to demonstrate how to combine graph-based data models and provenance representations. This is important to make statements authoritative, verifiable, and reproducible, such as in biomedical, pharmaceutical, and cybersecurity applications, where the data source and generator can be just as important as the data itself. Capturing provenance is critical to ensure sound experimental results and rigorously designed research studies for patient and drug safety, pathology reports, and medical evidence generation. Similarly, provenance is needed for cyberthreat intelligence dashboards and attack maps that aggregate and/or fuse heterogeneous data from disparate data sources to differentiate between unimportant online events and dangerous cyberattacks, which is demonstrated in this book. Without provenance, data reliability and trustworthiness might be limited, causing data reuse, trust, reproducibility and accountability issues. This book primarily targets researchers who utilize knowledge graphs in their methods and approaches (this includes researchers from a variety of domains, such as cybersecurity, eHealth, data science, Semantic Web, etc.). This book collects core facts for the state of the art in provenance approaches and techniques, complemented by a critical review of existing approaches. New research directions are also provided that combine data science and knowledge graphs, for an increasingly important research topic.
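As a rough illustration of combining a graph-based data model with a provenance representation (a minimal sketch assuming rdflib is available and using the W3C PROV-O vocabulary; the resources, agent, and timestamp are hypothetical, not taken from the book):

```python
# Minimal sketch: a domain statement plus PROV-O provenance metadata in one
# RDF graph, using rdflib. The drug/gene resources and the agent are
# hypothetical illustration values.
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF, XSD

EX = Namespace("http://example.org/")
PROV = Namespace("http://www.w3.org/ns/prov#")

g = Graph()
g.bind("ex", EX)
g.bind("prov", PROV)

statement = EX.assertion1            # an identifiable unit of knowledge
g.add((statement, RDF.type, PROV.Entity))
g.add((statement, EX.drug, EX.aspirin))
g.add((statement, EX.interactsWith, EX.geneX))

# Provenance: where the statement came from, who generated it, and when.
g.add((statement, PROV.wasDerivedFrom, URIRef("http://example.org/trial42")))
g.add((statement, PROV.wasAttributedTo, EX.labPipelineV2))
g.add((statement, PROV.generatedAtTime,
       Literal("2023-05-01T12:00:00", datatype=XSD.dateTime)))

print(g.serialize(format="turtle"))
```

Attaching the provenance triples to an identifiable statement resource is what makes the assertion verifiable and reproducible downstream, whether in a biomedical pipeline or a cyberthreat dashboard.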
This volume constitutes the refereed and revised post-conference proceedings of the 5th IFIP WG 5.15 International Conference on Information Technology in Disaster Risk Reduction, ITDRR 2020, in Sofia, Bulgaria, in December 2020.* The 18 full papers and 6 short papers presented were carefully reviewed and selected from 52 submissions. The papers focus on various aspects and challenges of coping with disaster risk reduction. The main topics include areas such as natural disasters, remote sensing, big data, cloud computing, Internet of Things, mobile computing, emergency management, disaster information processing, disaster risk assessment and management. *The conference was held virtually.
The Second Edition of Quantum Information Processing, Quantum Computing, and Quantum Error Correction: An Engineering Approach presents a self-contained introduction to all aspects of the area, teaching the essentials such as state vectors, operators, density operators, measurements, and dynamics of a quantum system. In addition to the fundamental principles of quantum computation, basic quantum gates, basic quantum algorithms, and quantum information processing, this edition has been brought fully up to date, outlining the latest research trends. Key topics include: quantum error correction codes (QECCs), including stabilizer codes, Calderbank-Shor-Steane (CSS) codes, quantum low-density parity-check (LDPC) codes, entanglement-assisted QECCs, topological codes, and surface codes; quantum information theory and quantum key distribution (QKD); fault-tolerant information processing and fault-tolerant quantum error correction, together with a chapter on quantum machine learning; and both quantum circuit- and measurement-based quantum computational models. The next part of the book investigates physical realizations of quantum computers, encoders, and decoders, including photonic quantum realization, cavity quantum electrodynamics, and ion traps, followed by an in-depth analysis of the design and realization of quantum information processing and quantum error correction circuits. This fully up-to-date new edition will be of use to engineers, computer scientists, optical engineers, physicists, and mathematicians.
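As a toy illustration of the error-correction idea behind the codes listed above (a classical sketch, not from the book): the three-qubit bit-flip code can be tracked classically when only X (bit-flip) errors occur, with the parities corresponding to the stabilizers Z1Z2 and Z2Z3 forming a syndrome that points to the flipped qubit.

```python
# Toy classical tracking of the 3-qubit bit-flip repetition code, assuming
# only X (bit-flip) errors occur. It illustrates syndrome-based correction
# in the simplest setting; it is not a full stabilizer simulator.
import random

def encode(bit):
    return [bit, bit, bit]            # logical 0 -> 000, logical 1 -> 111

def apply_random_bit_flips(codeword, p=0.2):
    """Flip each physical bit independently with probability p."""
    return [b ^ (random.random() < p) for b in codeword]

def syndrome(codeword):
    """Parity checks corresponding to the stabilizers Z1Z2 and Z2Z3."""
    return (codeword[0] ^ codeword[1], codeword[1] ^ codeword[2])

def correct(codeword):
    s = syndrome(codeword)
    flip = {(1, 0): 0, (1, 1): 1, (0, 1): 2}.get(s)  # which qubit to flip
    if flip is not None:
        codeword[flip] ^= 1
    return codeword

random.seed(0)
received = apply_random_bit_flips(encode(1))
decoded = correct(received)
print(received, "->", decoded)  # any single bit-flip error is corrected
```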
Heavy tails - extreme events or values more common than expected - emerge everywhere: the economy, natural events, and social and information networks are just a few examples. Yet after decades of progress, they are still treated as mysterious, surprising, and even controversial, primarily because the necessary mathematical models and statistical methods are not widely known. This book, for the first time, provides a rigorous introduction to heavy-tailed distributions accessible to anyone who knows elementary probability. It tackles and tames the zoo of terminology for models and properties, demystifying topics such as the generalized central limit theorem and regular variation. It tracks the natural emergence of heavy-tailed distributions from a wide variety of general processes, building intuition. And it reveals the controversy surrounding heavy tails to be the result of flawed statistics, then equips readers to identify and estimate with confidence. Over 100 exercises complete this engaging package.
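A quick numerical sketch of the kind of behaviour the book formalises (distributions, parameters, and sample sizes chosen arbitrarily here, not drawn from the text): running sample means of a Pareto law with infinite variance stabilise far more slowly than those of a light-tailed exponential with the same mean.

```python
# Rough illustration of heavy-tailed vs light-tailed behaviour: running
# sample means of a Pareto law with tail index alpha = 1.5 (infinite
# variance) versus an exponential with the same mean. Parameters arbitrary.
import numpy as np

rng = np.random.default_rng(42)
n = 100_000
alpha = 1.5                               # tail index; mean = alpha/(alpha-1) = 3
pareto = 1.0 + rng.pareto(alpha, n)       # classical Pareto on [1, inf), mean 3
expo = rng.exponential(scale=3.0, size=n)

def running_mean(x):
    return np.cumsum(x) / np.arange(1, len(x) + 1)

for k in (1_000, 10_000, 100_000):
    print(f"n={k:>7}: pareto mean ~ {running_mean(pareto)[k-1]:.2f}, "
          f"exponential mean ~ {running_mean(expo)[k-1]:.2f}  (true mean 3)")
# The exponential column settles near 3 quickly; the Pareto column keeps
# jumping whenever a single extreme observation lands in the sample.
```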
Photonics has long been considered an attractive substrate for next generation implementations of machine-learning concepts. Reservoir Computing tremendously facilitated the realization of recurrent neural networks in analogue hardware. This concept exploits the properties of complex nonlinear dynamical systems, giving rise to photonic reservoirs implemented by semiconductor lasers, telecommunication modulators and integrated photonic chips.
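A minimal software sketch of the reservoir computing concept mentioned above (a generic echo state network in NumPy, with arbitrary sizes, scalings, and a toy task; photonic reservoirs replace the random recurrent network below with a physical nonlinear dynamical system):

```python
# Minimal echo state network: a fixed random recurrent "reservoir" is driven
# by the input, and only a linear readout is trained (here by ridge
# regression). Sizes, scaling factors, and the toy task are arbitrary.
import numpy as np

rng = np.random.default_rng(0)
n_res, washout, ridge = 200, 100, 1e-6

# Toy task: predict sin(t + 0.2) from sin(t).
t = np.linspace(0, 60, 3000)
u, y_target = np.sin(t), np.sin(t + 0.2)

# Fixed random reservoir, rescaled to spectral radius 0.9 for stable dynamics.
W = rng.normal(size=(n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))
W_in = rng.uniform(-0.5, 0.5, size=n_res)

# Drive the reservoir and collect its states.
states = np.zeros((len(u), n_res))
x = np.zeros(n_res)
for k, u_k in enumerate(u):
    x = np.tanh(W @ x + W_in * u_k)
    states[k] = x

# Train only the linear readout (ridge regression), discarding the washout.
S, Y = states[washout:], y_target[washout:]
W_out = np.linalg.solve(S.T @ S + ridge * np.eye(n_res), S.T @ Y)

pred = states @ W_out
print("normalised MSE:", np.mean((pred[washout:] - Y) ** 2) / np.var(Y))
```

The design point is that only the readout weights are learned; the reservoir itself stays fixed, which is what makes analogue and photonic implementations attractive.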
This book tells the story of radical transparency in a datafied world. It is a story that not only includes the beginnings of WikiLeaks and its endings as a weapon of the GRU, but also exposes numerous other decentralised disclosure networks designed to crack open democracy - for good or ill - that followed in its wake. This is a story that can only be understood through rethinking how technologies of government, practices of media, and assumptions of democracy interact. By combining literatures of governmentality, media studies, and democracy, this illuminating account offers novel insights and critiques of the transparency ideal through its material-political practice. Case studies uncover evolving media practices that, regardless of being scraped from public records or leaked from internal sources, still divulge secrets. The narrative also traces new corporate players such as Clearview AI, the civic-minded ICIJ, and state-based public health disclosures in times of pandemic to reveal how they all form unique proto-institutional instances of disclosure as a technology of government. The analysis of novel forms of digital radical transparency - from a trickle of paper-based leaks to the modern digital .torrent - is grounded in analogues from the analogue past, which combine to tell the whole story of how transparency functions in and helps form democracy.
The production and consumption of information and communication technologies (or ICTs) are becoming deeply embedded within our societies. The influence and implications of this have an impact at a macro level, in the way our governments, economies, and businesses operate, and at a micro level in our everyday lives. This handbook is about the many challenges presented by ICTs. It sets out an intellectual agenda that examines the implications of ICTs for individuals, organizations, democracy, and the economy. Explicitly interdisciplinary, and combining empirical research with theoretical work, it is organised around four themes covering the knowledge economy; organizational dynamics, strategy, and design; governance and democracy; and culture, community, and new media literacies. It provides a comprehensive resource for those working in the social sciences, and in the physical sciences and engineering fields, with leading contemporary research informed principally by the disciplines of anthropology, economics, philosophy, politics, and sociology.
In this volume, author Tim Gorichanaz seeks to re-frame the discussion of information engagement through the lens of information experience, an exciting emerging area within information science. Unlike traditional information behavior research, which is limited to how people need, seek, and search for information, information experience looks at how people understand, use, and are shaped by information. In this way, information experience connects with other human-centered areas of information research and design, including information literacy and human-computer interaction. Split into three parts, Information Experience in Theory and Design presents a multifaceted investigation of information experience, centered around the themes of understanding, self, and meaning. Part One (Understanding) explores the link between information, understanding and questioning; how moral change arises from information; and how to design for understanding. Part Two (Self) explores the concept of the human self as information; the links between information, identity and society; and how to design for self-care. Finally, Part Three (Meaning) explores the connection between information and meaning; how meaning and craft contribute to the good life; and how to design for meaning. Offering a rigorous theoretical foundation for information experience and insights for design, Gorichanaz brings together research from across the information field as well as philosophy. For researchers or students in any area of the information field, from librarianship to human-computer interaction, this is an exciting new text investigating a fascinating new field of study.
Conti examines presidential rhetoric on trade, providing a detailed analysis of presidential trade arguments and strategies throughout American history. She then concentrates on the rhetoric of contemporary presidents, who have had to contend with both the burgeoning trade deficit and the displacement of military competitiveness with post-cold war economic competitiveness. Despite vast disparities in governing philosophies and strategies, Presidents Reagan, Bush, and Clinton all preached the virtues of free trade while continuing a policy of select protectionist actions. As Conti suggests, the arcane details of trade policy, the continuing pervasiveness of nontariff barriers, and the impending negotiation of international trade agreements combine to make presidential leadership on economic issues critical. How effective that leadership can be is, in large part, dependent upon the effectiveness of presidential rhetoric. Students, scholars, and researchers in the field of speech communication and rhetoric, political communication, public affairs, and the presidency will find this a stimulating survey.
With the development of Big Data platforms for managing massive amounts of data and the wide availability of tools for processing these data, the biggest limitation is the lack of trained experts who are qualified to process and interpret the results. This textbook is intended for graduate students and experts using methods of cluster analysis and applications in various fields. Suitable for an introductory course on cluster analysis or data mining, with an in-depth mathematical treatment that includes discussions of different measures, primitives (points, lines, etc.), and optimization-based clustering methods, Cluster Analysis and Applications also includes coverage of deep-learning-based clustering methods. With clear explanations of ideas and precise definitions of concepts, accompanied by numerous examples and exercises together with Mathematica programs and modules, Cluster Analysis and Applications may be used by students and researchers in various disciplines, working in data analysis or data science.
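The book's own examples are given as Mathematica programs and modules; as a rough Python analogue of one optimization-based method of the kind it covers, here is a plain NumPy k-means sketch (synthetic data and k = 3 are invented for illustration):

```python
# Plain NumPy k-means (Lloyd's algorithm) as a small analogue of the
# optimization-based clustering methods mentioned above; the synthetic
# blobs and k = 3 are arbitrary illustration choices.
import numpy as np

def kmeans(X, k, n_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iter):
        # Assignment step: nearest centre for every point.
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Update step: each centre moves to the mean of its assigned points.
        new_centers = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                                else centers[j] for j in range(k)])
        if np.allclose(new_centers, centers):
            break
        centers = new_centers
    return labels, centers

# Three well-separated Gaussian blobs as toy data.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(loc=c, scale=0.3, size=(100, 2))
               for c in ((0, 0), (3, 3), (0, 4))])
labels, centers = kmeans(X, k=3)
print(np.round(centers, 2))   # recovered cluster centres near (0,0), (3,3), (0,4)
```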
First book on the subject, illustrative examples, some original results, self-contained material, a reference book.
This new edition of a well-received textbook provides a concise introduction to both the theoretical and experimental aspects of quantum information at the graduate level. While the previous edition focused on theory, the book now incorporates discussions of experimental platforms. Several chapters on experimental implementations of quantum information protocols have been added: implementations using neutral atoms, trapped ions, optics, and solid-state systems are each presented in their own chapter. Previous chapters on entanglement, quantum measurements, quantum dynamics, quantum cryptography, and quantum algorithms have been thoroughly updated, and new additions include chapters on the stabilizer formalism and the Gottesman-Knill theorem as well as aspects of classical and quantum information theory. To facilitate learning, each chapter starts with a clear motivation for the topic and closes with exercises and a recommended reading list. Quantum Information Processing: Theory and Implementation will be essential to graduate students studying quantum information as well as researchers in other areas of physics who wish to gain knowledge in the field.
This book offers a comprehensive overview of information theory and error control coding, using a different approach from that found in the existing literature. The chapters are organized according to the Shannon system model, where one block affects the others. A relatively brief theoretical introduction is provided at the beginning of every chapter, including a few additional examples and explanations but without any proofs, and a short overview of some aspects of abstract algebra is given at the end of the corresponding chapters. Characteristic complex examples with many illustrations and tables are chosen to provide detailed insights into the nature of the problem. Some limiting cases are presented to illustrate the connections with the theoretical bounds. The numerical values are carefully selected to provide in-depth explanations of the described algorithms. Although the examples in the different chapters can be considered separately, they are mutually connected and the conclusions for one considered problem relate to the others in the book.
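To make the Shannon system model concrete (a generic textbook calculation, not one of the book's own worked examples): the entropy of a discrete source and the capacity of a binary symmetric channel follow directly from their definitions.

```python
# Generic illustration of two quantities central to the Shannon system
# model: source entropy H(X) and the capacity of a binary symmetric
# channel, C = 1 - H_b(p). The numbers are arbitrary examples.
import math

def entropy(probs):
    """Shannon entropy in bits, H(X) = -sum p log2 p."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

source = [0.5, 0.25, 0.125, 0.125]                # a 4-symbol source
print("H(X) =", entropy(source), "bits/symbol")   # 1.75

def bsc_capacity(p):
    """Capacity of a binary symmetric channel with crossover probability p."""
    return 1.0 - entropy([p, 1.0 - p])

print("C(p=0.11) =", round(bsc_capacity(0.11), 4), "bits/use")
```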
The phenomenal international bestseller that shows us how to stop trying to predict everything - and take advantage of uncertainty What have the invention of the wheel, Pompeii, the Wall Street Crash, Harry Potter and the internet got in common? Why are all forecasters con-artists? Why should you never run for a train or read a newspaper? This book is all about Black Swans: the random events that underlie our lives, from bestsellers to world disasters. Their impact is huge; they're impossible to predict; yet after they happen we always try to rationalize them. 'Taleb is a bouncy and even exhilarating guide ... I came to relish what he said, and even develop a sneaking affection for him as a person' Will Self, Independent on Sunday 'He leaps like some superhero of the mind' Boyd Tonkin, Independent
We will never know the precise identity of America's first political consultant. It is likely that candidates were seeking favorable coverage in colonial newspapers as early as 1704; it is also likely that by 1745 candidates were using handbills and pamphlets to augment press coverage of campaigns; and we know that one successful candidate, George Washington in 1758, purchased refreshments for potential voters. These traditional approaches to winning votes have in recent years been amplified by consultants who have shown how cable networks, videocassettes, modems, faxes, focus groups, and other means of communication can be put to partisan use. In this book, Robert V. Friedenberg examines all of the communication techniques used in contemporary political campaigning. After providing a history of political consulting, Friedenberg examines the principal communication specialities used in contemporary campaigns. Throughout, political consultants discuss their approaches and evaluate the benefits and shortcomings of these methods. An invaluable text for what is arguably the most rapidly changing field of applied communication, this work is must reading for students and researchers of American politics, applied communication, and contemporary political theory.
This book proposes tools for the analysis of multidimensional and metric data, establishing a state of the art of existing solutions and developing new ones. It mainly focuses on visual exploration of these data by a human analyst, relying on a 2D or 3D scatter plot display obtained through dimensionality reduction. Performing diagnosis of an energy system requires identifying relations between observed monitoring variables and the associated internal state of the system. Dimensionality reduction, which makes it possible to represent a multidimensional dataset visually, constitutes a promising tool to help domain experts analyse these relations. This book reviews existing techniques for visual data exploration and dimensionality reduction, such as tSNE and Isomap, and proposes new solutions to challenges in that field. In particular, it presents the new unsupervised technique ASKI and the supervised methods ClassNeRV and ClassJSE. Moreover, MING, a new approach for local map quality evaluation, is also introduced. These methods are then applied to the representation of expert-designed fault indicators for smart buildings, I-V curves for photovoltaic systems, and acoustic signals for Li-ion batteries.
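As a generic example of the 2D scatter-plot workflow described above (using scikit-learn's t-SNE on a standard digits dataset as a stand-in for monitoring data; this does not implement the book's ASKI, ClassNeRV, ClassJSE, or MING methods):

```python
# Generic dimensionality-reduction workflow: embed a multidimensional
# dataset in 2D with t-SNE and inspect the scatter plot. The digits data
# stands in for energy-system monitoring variables.
from sklearn.datasets import load_digits
from sklearn.manifold import TSNE
import matplotlib.pyplot as plt

X, y = load_digits(return_X_y=True)          # 1797 samples, 64 dimensions
emb = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(X)

plt.scatter(emb[:, 0], emb[:, 1], c=y, s=5, cmap="tab10")
plt.title("t-SNE embedding of the digits dataset")
plt.colorbar(label="class label")
plt.show()
```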
Most discussions of the digital divide focus on the gap between African Americans and others when it comes to using, and benefiting from, the technological and business opportunities of the information age. Although many African Americans are locked out of the information revolution, others are an integral part of its development and progress. Barber profiles 26 of those leaders here, engagingly and informatively blending biography with insight and analysis. Documenting history as it is being made, this book features achievers in all fields of relevant endeavor, including scientists, business leaders, power brokers, and community leaders. Among them are Robert Johnson, CEO of Black Entertainment Television; Richard Parsons, CEO of AOL Time-Warner; congressmen and other policymakers in Washington, D.C.; and men and women who are working to bridge the digital divide in satellite radio, web-based portals, and on the ground with IT workshops. This book is not just about business success or technological progress. The African American digerati are solving one of the great social challenges of the 21st century: creating a black community that is prosperous in a society that has changed from being a land-based industrial society to a cyberspace-based information society.
"...a work that provides such a comprehensive reassessment of Information Retrieval (IR) theory, with regards to the user-oriented model." -- Journal of the American Society for Information Science
This book is intended to introduce coding theory and information theory to undergraduate students of mathematics and computer science. It begins with a review of probability theory as applied to finite sample spaces and a general introduction to the nature and types of codes. The two subsequent chapters discuss information theory: efficiency of codes, the entropy of information sources, and Shannon's Noiseless Coding Theorem. The remaining three chapters deal with coding theory: communication channels, decoding in the presence of errors, the general theory of linear codes, and such specific codes as Hamming codes, the simplex codes, and many others.
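A small worked example of the material described (the standard systematic Hamming(7,4) construction over GF(2), written in NumPy as a generic illustration; the matrices below are the usual textbook form, not necessarily the book's notation):

```python
# Systematic Hamming(7,4) code: encode 4 data bits, flip one bit, and locate
# the error from the syndrome. A standard construction shown as a generic
# illustration of the linear codes described above.
import numpy as np

P = np.array([[1, 1, 0],
              [1, 0, 1],
              [0, 1, 1],
              [1, 1, 1]])
G = np.hstack([np.eye(4, dtype=int), P])          # generator matrix [I | P]
H = np.hstack([P.T, np.eye(3, dtype=int)])        # parity-check matrix [P^T | I]

msg = np.array([1, 0, 1, 1])
codeword = msg @ G % 2                            # encode

received = codeword.copy()
received[2] ^= 1                                  # channel flips bit 2

syndrome = H @ received % 2                       # nonzero iff an error occurred
# For a single error, the syndrome equals the column of H at the error position.
err_pos = next(j for j in range(7) if np.array_equal(H[:, j], syndrome))
corrected = received.copy()
corrected[err_pos] ^= 1

print("codeword :", codeword)
print("received :", received)
print("corrected:", corrected)                    # matches the codeword again
```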
This book offers an introduction to ten key topics in quantum information science and quantum coherent phenomena, aimed at graduate-student level. The chapters cover some of the most recent developments in this dynamic research field where theoretical and experimental physics, combined with computer science, provide a fascinating arena for groundbreaking new concepts in information processing. The book addresses both the theoretical and experimental aspects of the subject, and clearly demonstrates how progress in experimental techniques has stimulated a great deal of theoretical effort and vice versa. Experiments are shifting from simply preparing and measuring quantum states to controlling and manipulating them, and the book outlines how the first real applications, notably quantum key distribution for secure communication, are starting to emerge. The chapters cover quantum retrodiction, ultracold quantum gases in optical lattices, optomechanics, quantum algorithms, quantum key distribution, quantum control based on measurement, orbital angular momentum of light, entanglement theory, trapped ions and quantum metrology, and open quantum systems subject to decoherence. The contributing authors have been chosen not just on the basis of their scientific expertise, but also because of their ability to offer pedagogical and well-written contributions which will be of interest to students and established researchers.
Originally published in 1995, Large Deviations for Performance Analysis consists of two synergistic parts. The first half develops the theory of large deviations from the beginning, through recent results on the theory for processes with boundaries, keeping to a very narrow path: continuous-time, discrete-state processes. By developing only what is needed for the applications, the theory is kept to a manageable level, both in terms of length and in terms of difficulty. Within its scope, the treatment is detailed, comprehensive and self-contained. As the book shows, there are sufficiently many interesting applications of jump Markov processes to warrant a special treatment. The second half is a collection of applications developed at Bell Laboratories. The applications cover large areas of the theory of communication networks: circuit switched transmission, packet transmission, multiple access channels, and the M/M/1 queue. Aspects of parallel computation are covered as well, including basics of job allocation, rollback-based parallel simulation, assorted priority queueing models that might be used in performance models of various computer architectures, and asymptotic coupling of processors. These applications are thoroughly analysed using the tools developed in the first half of the book.
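A quick numerical sketch related to the M/M/1 example mentioned above (a generic simulation with invented parameters, not the book's analysis): the Lindley recursion produces waiting times whose tail decays exponentially at rate mu - lambda, the kind of exponential decay rate that large deviations theory characterises.

```python
# Generic M/M/1 illustration: simulate waiting times via the Lindley
# recursion and compare the empirical tail P(W > x) with the exact result
# rho * exp(-(mu - lam) * x), whose decay rate (mu - lam) is the kind of
# exponent large-deviations analysis isolates. Parameters are arbitrary.
import numpy as np

rng = np.random.default_rng(0)
lam, mu, n = 0.7, 1.0, 500_000             # arrival rate, service rate, customers
rho = lam / mu

interarrival = rng.exponential(1 / lam, n)
service = rng.exponential(1 / mu, n)

W = np.zeros(n)
for i in range(1, n):                      # Lindley recursion for FIFO waiting times
    W[i] = max(W[i - 1] + service[i - 1] - interarrival[i], 0.0)

for x in (2.0, 5.0, 10.0):
    empirical = np.mean(W > x)
    exact = rho * np.exp(-(mu - lam) * x)
    print(f"P(W > {x:>4}): simulated {empirical:.4f}, theory {exact:.4f}")
```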