In concurrent and distributed systems, processes can complete tasks together by playing their parts in a joint plan. The plan, or protocol, can be written as a choreography: a formal description of overall behaviour that processes should collaborate to implement, like authenticating a user or purchasing an item online. Formality brings clarity, but not only that: choreographies can contribute to important safety and liveness properties. This book is an ideal introduction to the theory of choreographies for students, researchers, and professionals in computer science and applied mathematics. It covers languages for writing choreographies, their semantics, and principles for implementing choreographies correctly. The text treats the study of choreographies as a discipline in its own right, following a systematic approach that starts from simple foundations and proceeds to more advanced features in incremental steps. Each chapter includes examples and exercises aimed at helping readers understand the theory and its relation to practice.
This book proposes tools for the analysis of multidimensional and metric data, surveying the state of the art of existing solutions and developing new ones. It focuses mainly on visual exploration of these data by a human analyst, relying on a 2D or 3D scatter-plot display obtained through dimensionality reduction. Performing diagnosis of an energy system requires identifying relations between observed monitoring variables and the associated internal state of the system. Dimensionality reduction, which allows a multidimensional dataset to be represented visually, constitutes a promising tool to help domain experts analyse these relations. This book reviews existing techniques for visual data exploration and dimensionality reduction, such as t-SNE and Isomap, and proposes new solutions to challenges in that field. In particular, it presents the new unsupervised technique ASKI and the supervised methods ClassNeRV and ClassJSE. Moreover, MING, a new approach for local map quality evaluation, is also introduced. These methods are then applied to the representation of expert-designed fault indicators for smart buildings, I-V curves for photovoltaic systems, and acoustic signals for Li-ion batteries.
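For readers who want a quick feel for what a dimensionality-reduction scatter plot involves, the following is a minimal sketch using scikit-learn's t-SNE on made-up data; it is an illustrative stand-in only, not the ASKI, ClassNeRV, ClassJSE or MING methods developed in the book.

```python
# Minimal sketch: project a multidimensional dataset to 2D with t-SNE.
# The data, perplexity and other settings are arbitrary illustrative choices.
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 20))        # placeholder dataset: 500 samples, 20 features

embedding = TSNE(n_components=2, perplexity=30, init="pca",
                 random_state=0).fit_transform(X)

print(embedding.shape)                # (500, 2): coordinates for a 2D scatter plot
```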
Networks surround us, from social networks to protein-protein interaction networks within the cells of our bodies. The theory of random graphs provides a necessary framework for understanding their structure and development. This text provides an accessible introduction to this rapidly expanding subject. It covers all the basic features of random graphs - component structure, matchings and Hamilton cycles, connectivity and chromatic number - before discussing models of real-world networks, including intersection graphs, preferential attachment graphs and small-world models. Based on the authors' own teaching experience, it can be used as a textbook for a one-semester course on random graphs and networks at advanced undergraduate or graduate level. The text includes numerous exercises, with a particular focus on developing students' skills in asymptotic analysis. More challenging problems are accompanied by hints or suggestions for further reading.
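As a small illustration of the objects studied here (not code from the book), the sketch below samples an Erdos-Renyi G(n, p) random graph with networkx and inspects its component structure; n and p are arbitrary choices.

```python
# Minimal sketch: sample a G(n, p) random graph and examine its component structure.
import networkx as nx

n, p = 1000, 2.0 / 1000                      # average degree about 2 (illustrative choice)
G = nx.gnp_random_graph(n, p, seed=42)

components = sorted(nx.connected_components(G), key=len, reverse=True)
print("number of components:", len(components))
print("largest component size:", len(components[0]))
```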
This book discusses machine learning and artificial intelligence (AI) for agricultural economics. It is written with a view towards bringing the benefits of advanced analytics and prognostics capabilities to small scale farmers worldwide. This volume provides data science and software engineering teams with the skills and tools to fully utilize economic models to develop the software capabilities necessary for creating lifesaving applications. The book introduces essential agricultural economic concepts from the perspective of full-scale software development with the emphasis on creating niche blue ocean products. Chapters detail several agricultural economic and AI reference architectures with a focus on data integration, algorithm development, regression, prognostics model development and mathematical optimization. Upgrading traditional AI software development paradigms to function in dynamic agricultural and economic markets, this volume will be of great use to researchers and students in agricultural economics, data science, engineering, and machine learning as well as engineers and industry professionals in the public and private sectors.
This book presents a unified approach to a rich and rapidly evolving research domain at the interface between statistical physics, theoretical computer science/discrete mathematics, and coding/information theory. It is accessible to graduate students and researchers without specific training in any of these fields. The selected topics include spin glasses, error-correcting codes, and satisfiability, all of which are central to their respective fields. The approach focuses on large random instances and adopts a common probabilistic formulation in terms of graphical models. It presents message-passing algorithms like belief propagation and survey propagation, and their use in decoding and constraint satisfaction solving. It also explains analysis techniques like density evolution and the cavity method, and uses them to study phase transitions.
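To give a concrete flavour of the message-passing algorithms mentioned above, here is a minimal sum-product (belief propagation) sketch on a three-variable chain; the potentials are made-up numbers, and because the graph is a tree the result matches brute-force marginalization.

```python
# Minimal sketch: sum-product (belief propagation) on a chain x0 -- x1 -- x2
# of binary variables; all potentials below are made-up illustrative numbers.
import numpy as np

phi = [np.array([0.6, 0.4]),        # unary potentials phi_i(x_i)
       np.array([0.5, 0.5]),
       np.array([0.3, 0.7])]
psi01 = np.array([[1.0, 0.2],       # pairwise potential psi(x0, x1)
                  [0.2, 1.0]])
psi12 = np.array([[1.0, 0.5],       # pairwise potential psi(x1, x2)
                  [0.5, 1.0]])

# Forward messages along the chain.
m0_to_1 = psi01.T @ phi[0]                      # sum over x0
m1_to_2 = psi12.T @ (phi[1] * m0_to_1)          # sum over x1

belief_x2 = phi[2] * m1_to_2
belief_x2 /= belief_x2.sum()                    # normalized marginal P(x2)

# Brute-force check over all 8 joint configurations.
joint = np.einsum('i,j,k,ij,jk->ijk', phi[0], phi[1], phi[2], psi01, psi12)
print(belief_x2, joint.sum(axis=(0, 1)) / joint.sum())   # the two should agree
```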
The book is a concise, self-contained and fully updated introduction to automata theory, a fundamental topic of computer science and engineering. The material is presented in a rigorous yet convincing way and is supplied with a wealth of examples, exercises and down-to-earth explanatory notes. It is an ideal text for a spectrum of one-term courses in computer science, at both the senior undergraduate and graduate levels.
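As a hedged illustration of the subject matter (not an example from the book), the sketch below simulates a deterministic finite automaton that accepts binary strings containing an even number of 1s.

```python
# Minimal sketch: a DFA over {0, 1} accepting strings with an even number of 1s.
# States, alphabet and transitions are illustrative choices, not from the book.
def dfa_accepts(word: str) -> bool:
    transitions = {
        ("even", "0"): "even", ("even", "1"): "odd",
        ("odd", "0"): "odd",   ("odd", "1"): "even",
    }
    state = "even"                     # start state
    for symbol in word:
        state = transitions[(state, symbol)]
    return state == "even"             # accepting state

print(dfa_accepts("1011"))             # False: three 1s
print(dfa_accepts("1001"))             # True: two 1s
```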
This self-contained introduction to machine learning, designed from the start with engineers in mind, will equip students with everything they need to start applying machine learning principles and algorithms to real-world engineering problems. With a consistent emphasis on the connections between estimation, detection, information theory, and optimization, it includes: an accessible overview of the relationships between machine learning and signal processing, providing a solid foundation for further study; clear explanations of the differences between state-of-the-art techniques and more classical methods, equipping students with all the understanding they need to make informed technique choices; demonstration of the links between information-theoretical concepts and their practical engineering relevance; reproducible examples using Matlab, enabling hands-on student experimentation. Assuming only a basic understanding of probability and linear algebra, and accompanied by lecture slides and solutions for instructors, this is the ideal introduction to machine learning for engineering students of all disciplines.
This book constitutes selected and revised papers presented at the Second International Conference on Communication, Networks and Computing, CNC 2020, held in Gwalior, India, in December 2020. The 23 full papers and 7 short papers were thoroughly reviewed and selected from 102 submissions. They are organized in topical sections on wired and wireless communication systems; high-dimensional data representation and processing; networking and information security; computing techniques for efficient network design; vehicular technology and applications; and electronic circuits for communication systems.
This book constitutes the refereed proceedings of the 18th IMA International Conference on Cryptography and Coding, IMACC 2021, held in December 2021. Due to the COVID-19 pandemic, the conference was held virtually. The 14 papers presented were carefully reviewed and selected from 30 submissions. The conference focuses on a diverse set of topics in both cryptography and coding theory.
Discover the foundations of quantum mechanics, and explore how these principles are powering a new generation of advances in quantum engineering, in this ground-breaking undergraduate textbook. It explains physical and mathematical principles using cutting-edge electronic, optoelectronic and photonic devices, linking underlying theory with real-world applications; focuses on current technologies and avoids historic approaches, getting students quickly up-to-speed to tackle contemporary engineering challenges; provides an introduction to the foundations of quantum information, and a wealth of real-world quantum examples, including quantum well infrared photodetectors, solar cells, quantum teleportation, quantum computing, band gap engineering, quantum cascade lasers, low-dimensional materials, and van der Waals heterostructures; and includes pedagogical features such as objectives and end-of-chapter homework problems to consolidate student understanding, and solutions for instructors. Designed to inspire the development of future quantum devices and systems, this is the perfect introduction to quantum mechanics for undergraduate electrical engineers and materials scientists.
The three-volume set LNCS 13042, LNCS 13043 and LNCS 13044 constitutes the refereed proceedings of the 19th International Conference on Theory of Cryptography, TCC 2021, held in Raleigh, NC, USA, in November 2021. The total of 66 full papers presented in this three-volume set was carefully reviewed and selected from 161 submissions. They cover topics on proof systems, attribute-based and functional encryption, obfuscation, key management and secure communication.
This book constitutes the thoroughly refereed post-conference proceedings of the 21st Japanese Conference on Discrete and Computational Geometry and Graphs, JCDCGGG 2018, held in Quezon City, Philippines, in September 2018. The total of 14 papers included in this volume was carefully reviewed and selected from 25 submissions. The papers feature advances made in the field of computational geometry and focus on emerging technologies, new methodology and applications, graph theory and dynamics.
This compact course is written for the mathematically literate reader who wants to learn to analyze data in a principled fashion. The language of mathematics enables clear exposition that can go quite deep, quite quickly, and naturally supports an axiomatic and inductive approach to data analysis. Starting with a good grounding in probability, the reader moves to statistical inference via topics of great practical importance - simulation and sampling, as well as experimental design and data collection - that are typically displaced from introductory accounts. The core of the book then covers both standard methods and such advanced topics as multiple testing, meta-analysis, and causal inference.
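As a small illustration of one of the advanced topics listed above, multiple testing, the following sketch applies a Benjamini-Hochberg adjustment with statsmodels to a handful of made-up p-values; it is not taken from the book.

```python
# Minimal sketch: multiple-testing adjustment (Benjamini-Hochberg FDR control)
# applied to made-up p-values.
from statsmodels.stats.multitest import multipletests

p_values = [0.001, 0.008, 0.039, 0.041, 0.20, 0.74]
reject, p_adjusted, _, _ = multipletests(p_values, alpha=0.05, method="fdr_bh")

for p, p_adj, r in zip(p_values, p_adjusted, reject):
    print(f"p={p:.3f}  adjusted={p_adj:.3f}  reject H0: {r}")
```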
With the development of Big Data platforms for managing massive amounts of data and the wide availability of tools for processing these data, the biggest limitation is the lack of trained experts who are qualified to process and interpret the results. This textbook is intended for graduate students and experts using methods of cluster analysis and applications in various fields. Suitable for an introductory course on cluster analysis or data mining, with an in-depth mathematical treatment that includes discussions of different measures, primitives (points, lines, etc.) and optimization-based clustering methods, Cluster Analysis and Applications also includes coverage of deep-learning-based clustering methods. With clear explanations of ideas and precise definitions of concepts, accompanied by numerous examples and exercises together with Mathematica programs and modules, Cluster Analysis and Applications may be used by students and researchers in various disciplines working in data analysis or data science.
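For orientation only (the book's own examples use Mathematica programs and modules), here is a minimal sketch of an optimization-based clustering step, k-means with scikit-learn, on synthetic data.

```python
# Minimal sketch: k-means clustering on synthetic 2D data (all choices here
# are illustrative; the book's examples use Mathematica).
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(loc=(0, 0), scale=0.5, size=(100, 2)),
               rng.normal(loc=(3, 3), scale=0.5, size=(100, 2))])

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print("cluster centers:\n", km.cluster_centers_)
print("inertia (within-cluster sum of squares):", km.inertia_)
```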
This book covers the signal-theoretic foundations of irregular sampling, starting from an exact functional-analytic treatment and from the theory of frames also used in wavelet theory. It then derives the reconstruction algorithms required for applying irregular sampling in signal processing for various classes of band-limited signals. A discussion of practically relevant example applications of irregular sampling rounds off the presentation. The book is distinguished by a precise and complete approach: all required functional-analytic foundations are presented in detail, and all derivations are given.
This book is specially designed to refresh and elevate the level of understanding of the foundational background in probability and distributional theory required to be successful in a graduate-level statistics program. Advanced undergraduate students and introductory graduate students from a variety of quantitative backgrounds will benefit from the transitional bridge that this volume offers, from a more generalized study of undergraduate mathematics and statistics to the career-focused, applied education at the graduate level. In particular, it focuses on growing fields that will be of potential interest to future M.S. and Ph.D. students, as well as advanced undergraduates heading directly into the workplace: data analytics, statistics and biostatistics, and related areas.
As digital transformations continue to accelerate in the world, discourses of big data have come to dominate in a number of fields, from politics and economics, to media and education. But how can we really understand the digital world when so much of the writing through which we grapple with it remains deeply problematic? In a compelling new work of feminist critical theory, Bassett, Kember and O'Riordan scrutinise many of the assumptions of a masculinist digital world, highlighting the tendency of digital humanities scholarship to venerate and essentialise technical forms, and to adopt gendered writing and citation practices. Contesting these writings, practices and politics, the authors foreground feminist traditions and contributions to the field, offering alternative modes of knowledge production, and a radically different, poetic writing style. Through this prism, Furious brings into focus themes including the automation of home and domestic work, the Anthropocene, and intersectional feminist technofutures.
Not so if the book has been translated into Arabic. Now the reader can discern no meaning in the letters. The text conveys almost no information to the reader, yet the linguistic information contained by the book is virtually the same as in the English original. The reader, familiar with books, will still recognise two things, however: first, that the book is a book; second, that the squiggles on the page represent a pattern of abstractions which probably makes sense to someone who understands the meaning of those squiggles. Therefore, the book as such will still have some meaning for the English reader, even if the content of the text has none. Let us go to a more extreme case. Not a book, but a stone, or a rock with engravings in an ancient language no longer understood by anyone alive. Does such a stone not contain human information even if it is not decipherable? Suppose at some point in the future, basic knowledge about linguistics and clever computer aids allow us to decipher it? Or suppose someone discovers the equivalent of a Rosetta stone which allows us to translate it into a known language, and then into English? Can one really say that the stone contained no information prior to translation? It is possible to argue that the stone, prior to deciphering, contained only latent information.
Searching for Trust explores the intersection of trust, disinformation, and blockchain technology in an age of heightened institutional and epistemic mistrust. It adopts a unique archival theoretic lens to delve into how computational information processing has gradually supplanted traditional record keeping, putting at risk a centuries-old tradition of the 'moral defense of the record' and replacing it with a dominant ethos of information-processing efficiency. The author argues that focusing on information-processing efficiency over the defense of records against manipulation and corruption (the ancient task of the recordkeeper) has contributed to a diminution of the trustworthiness of information and a rise of disinformation, with attendant destabilization of the epistemic trust fabric of societies. Readers are asked to consider the potential and limitations of blockchains as the technological embodiment of the moral defense of the record and as means to restoring societal trust in an age of disinformation.
This book constitutes the proceedings of the 17th Conference on Computability in Europe, CiE 2021, organized by the University of Ghent in July 2021. Due to the COVID-19 pandemic, the conference was held virtually. The 48 full papers presented in this volume were carefully reviewed and selected from 50 submissions. CiE promotes the development of computability-related science, ranging over mathematics, computer science and applications in various natural and engineering sciences, such as physics and biology, as well as related fields, such as philosophy and the history of computing. CiE 2021 had as its motto "Connecting with Computability", a clear acknowledgement of the connecting and interdisciplinary nature of the conference series, which is all the more important at a time when people are more than ever disconnected from one another due to the COVID-19 pandemic.
This publication centres on the question of whether the BGB (the German Civil Code) can still meet the challenges of digitalisation. To get to the bottom of this, the author examines the term "digital content", which has appeared in the BGB since the implementation of the Consumer Rights Directive. With the help of a categorisation of the forms in which digital content appears, he assigns them to contract types. On the basis of this result, the author makes legislative proposals for an alternative legal treatment. In doing so, he also considers the proposal for a dedicated directive on digital content presented by the European Commission, and associates with it the confidence that it could be a sensible update for the BGB.
This book constitutes revised selected papers from the workshops of the 4th Asia-Pacific Web and Web-Age Information Management International Joint Conference on Web and Big Data, APWeb-WAIM 2020: the Third International Workshop on Knowledge Graph Management and Applications, KGMA 2020; the Second International Workshop on Semi-structured Big Data Management and Applications, SemiBDMA 2020; and the First International Workshop on Deep Learning in Large-scale Unstructured Data Analytics, DeepLUDA 2020, held in Tianjin, China, in September 2020. Due to the COVID-19 pandemic, the conference was held online. The 13 papers were thoroughly reviewed and selected from the numerous submissions and present recent research on the theory, design, and implementation of data management systems.