The creation of a new public realm through the use of the Internet and ICT may positively promote political liberties and freedom of speech, but could also threaten the political and public autonomy of the individual. Human Rights and the Impact of ICT in the Public Sphere: Participation, Democracy, and Political Autonomy examines the new technological era as an innovative way to initiate democratic dialogue, but one that can also endanger individual rights to freedom, privacy, and autonomy. This reference work focuses on the new opportunities technology offers for political expression and will be of use to academic and legal audiences alike, including academics, students, independent authorities, legislative bodies, and lawyers.
This self-contained essay collection is published to commemorate half a century of Bell's theorem. Like its much acclaimed predecessor "Quantum [Un]Speakables: From Bell to Quantum Information" (published in 2002), it comprises essays by many of the world's leading quantum physicists and philosophers. These essays revisit the foundations of quantum theory and elucidate the remarkable progress in quantum technologies achieved in the last couple of decades. Fundamental concepts such as entanglement, nonlocality and contextuality are described in an accessible manner and, alongside lively descriptions of the various theoretical and experimental approaches, the book also delivers interesting philosophical insights. The collection as a whole will serve as a broad introduction for students and newcomers as well as delighting the scientifically literate general reader.
Cloud service benchmarking can provide important, sometimes surprising insights into the quality of services and leads to a more quality-driven design and engineering of complex software architectures that use such services. Starting with a broad introduction to the field, this book guides readers step-by-step through the process of designing, implementing and executing a cloud service benchmark, as well as understanding and dealing with its results. It covers all aspects of cloud service benchmarking, i.e., both benchmarking the cloud and benchmarking in the cloud, at a basic level. The book is divided into five parts: Part I discusses what cloud benchmarking is, provides an overview of cloud services and their key properties, and describes the notion of a cloud system and cloud-service quality. It also addresses the benchmarking lifecycle and the motivations behind running benchmarks in particular phases of an application lifecycle. Part II then focuses on benchmark design by discussing key objectives (e.g., repeatability, fairness, or understandability) and defining metrics and measurement methods, and by giving advice on developing one's own measurement methods and metrics. Next, Part III explores benchmark execution and implementation challenges and objectives as well as aspects like runtime monitoring and result collection. Subsequently, Part IV addresses benchmark results, covering topics such as an abstract process for turning data into insights, data preprocessing, and basic data analysis methods. Lastly, Part V concludes the book with a summary, suggestions for further reading and pointers to benchmarking tools available on the Web.
The book is intended for researchers and graduate students of computer science and related subjects looking for an introduction to benchmarking cloud services, but also for industry practitioners who are interested in evaluating the quality of cloud services or who want to assess key qualities of their own implementations through cloud-based experiments.
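The design-execute-analyze lifecycle described above can be illustrated with a minimal sketch. Everything here is hypothetical and not taken from the book: `call_service` merely simulates a cloud-service request with random latency, and the summary statistics stand in for the "turning data into insights" step.

```python
import random
import statistics
import time

def call_service():
    """Stand-in for a real cloud-service request (hypothetical)."""
    time.sleep(random.uniform(0.001, 0.005))  # simulated network latency

def run_benchmark(requests=100):
    """Execute the benchmark: issue requests and collect latencies in ms."""
    latencies = []
    for _ in range(requests):
        start = time.perf_counter()
        call_service()
        latencies.append((time.perf_counter() - start) * 1000)
    return latencies

def analyze(latencies):
    """Turn raw measurements into insights: basic summary statistics."""
    latencies = sorted(latencies)
    return {
        "mean_ms": statistics.mean(latencies),
        "median_ms": statistics.median(latencies),
        "p95_ms": latencies[int(0.95 * (len(latencies) - 1))],
    }

results = analyze(run_benchmark())
print(results)
```

A real benchmark would of course replace the simulated call with requests against an actual service endpoint and add the runtime monitoring the book discusses in Part III.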
This volume presents a selection of reports from scientific projects requiring high end computing resources on the Hitachi SR8000-F1 supercomputer operated by Leibniz Computing Center in Munich. All reports were presented at the joint HLRB and KONWIHR workshop at the Technical University of Munich in October 2002. The following areas of scientific research are covered: Applied Mathematics, Biosciences, Chemistry, Computational Fluid Dynamics, Cosmology, Geosciences, High-Energy Physics, Informatics, Nuclear Physics, Solid-State Physics. Moreover, projects from interdisciplinary research within the KONWIHR framework (Competence Network for Scientific High Performance Computing in Bavaria) are also included. Each report summarizes its scientific background and discusses the results with special consideration of the quantity and quality of Hitachi SR8000 resources needed to complete the research.
Numbering with Colors is tutorial in nature, with many practical examples given throughout the presentation. It is heavily illustrated with gray-scale images, and also includes an 8-page signature of 4-color illustrations to support the presentation. While the organization is somewhat similar to that found in "The Data Handbook," there is little overlap with the content of that publication. The first section of the book discusses Color Physics, Physiology and Psychology, covering the details of the eye, the visual pathway, and how the brain converts colors into perceptions of hues. This is followed by the second section, in which Color Technologies are explained, i.e. how we describe colors using the CIE diagram, and how colors can be reproduced using various technologies such as offset printing and video screens. The third section of the book, Using Colors, relates how scientists and engineers can use color to help gain insight into their data sets through true color, false color, and pseudocolor imaging.
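The pseudocolor imaging mentioned above has a simple core idea: map each scalar data value to a color through a colormap. A minimal sketch, not taken from the book, using a hypothetical blue-to-red ramp:

```python
def pseudocolor(value, vmin, vmax):
    """Map a scalar to an (r, g, b) triple via a simple blue->red ramp,
    the basic idea behind pseudocolor imaging of scalar data."""
    t = max(0.0, min(1.0, (value - vmin) / (vmax - vmin)))  # normalize, clamp
    return (t, 0.0, 1.0 - t)  # red grows as blue fades

# Example: temperatures rendered as colors, cold -> blue, hot -> red.
for temp in (0.0, 50.0, 100.0):
    print(temp, pseudocolor(temp, 0.0, 100.0))
```

Real scientific colormaps are designed far more carefully (perceptual uniformity is a chief concern), but the mapping from data to color follows this same pattern.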
This book contains extended and revised versions of the best papers presented at the 28th IFIP WG 10.5/IEEE International Conference on Very Large Scale Integration, VLSI-SoC 2020, held in Salt Lake City, UT, USA, in October 2020 (the conference was held virtually). The 16 full papers included in this volume were carefully reviewed and selected from the 38 papers (out of 74 submissions) presented at the conference. The papers discuss the latest academic and industrial results and developments as well as future trends in the field of System-on-Chip (SoC) design, considering the challenges of nano-scale, state-of-the-art and emerging manufacturing technologies. In particular, they address cutting-edge research fields such as low-power design of RF, analog and mixed-signal circuits; EDA tools for the synthesis and verification of heterogeneous SoCs; accelerators for cryptography and deep learning; on-chip interconnection systems; reliability and testing; and integration of 3D-ICs.
This pioneering book presents new models for the thermomechanical behavior of composite materials and structures taking into account internal physico-chemical transformations such as thermodecomposition, sublimation and melting at high temperatures (up to 3000 K). It is of great importance for the design of new thermostable materials and for the investigation of the reliability and fire safety of composite structures. It also supports the investigation of the interaction of composites with laser irradiation and the design of heat-shield systems. Structural methods are presented for calculating the effective mechanical and thermal properties of matrices and fibres, and of unidirectional, particle-reinforced and textile composites, in terms of the properties of their constituent phases. Useful calculation methods are developed for characteristics such as the rate of thermomechanical erosion of composites under high-speed flow and the heat deformation of composites, taking chemical shrinkage into account. The author expansively compares modeling results with experimental data, and readers will find unique experimental results on mechanical and thermal properties of composites at temperatures up to 3000 K. Chapters show how the behavior of composite shells under high temperatures is simulated by the finite-element method, and cylindrical and axisymmetric composite shells and composite plates are investigated under local high-temperature heating. The book will be of interest to researchers and to engineers designing composite structures, and invaluable to materials scientists developing advanced performance thermostable materials.
This unique text/reference provides an overview of crossbar-based interconnection networks, offering novel perspectives on these important components of high-performance, parallel-processor systems. A particular focus is placed on solutions to the blocking and scalability problems. Topics and features: introduces the fundamental concepts in interconnection networks in multi-processor systems, including issues of blocking, scalability, and crossbar networks; presents a classification of interconnection networks, and provides information on recognizing each of the networks; examines the challenges of blocking and scalability, and analyzes the different solutions that have been proposed; reviews a variety of different approaches to improve fault tolerance in multistage interconnection networks; discusses the scalable crossbar network, which is a non-blocking interconnection network that uses small-sized crossbar switches as switching elements. This invaluable work will be of great benefit to students, researchers and practitioners interested in computer networks, parallel processing and reliability engineering. The text is also essential reading for course modules on interconnection network design and reliability.
Here's the first focused discussion of issues and technology in developing networked multimedia systems. This book includes a unique explanation of color specification and its role in achieving high picture quality, high compression ratio and high information retrieval performance, plus valuable coverage of principles and techniques of multimedia information indexing and retrieval critical for future systems.
Alfred Tarski was one of the two giants of the twentieth-century development of logic, along with Kurt Gödel. The four volumes of this collection contain all of Tarski's papers and abstracts published during his lifetime, as well as a comprehensive bibliography. Here will be found many of the works, spanning the period 1921 through 1979, which are the bedrock of contemporary areas of logic, whether in mathematics or philosophy. These areas include the theory of truth in formalized languages, decision methods and undecidable theories, the foundations of geometry and set theory, model theory, algebraic logic, and universal algebra.
This book discusses applications of blockchain in the healthcare sector. The security of confidential and sensitive data is of utmost importance in the healthcare industry, and introducing blockchain methods in an effective manner can bring secure transactions to a peer-to-peer network. The book also addresses gaps in the currently available literature on use cases of Distributed Ledger Technology (DLT) in healthcare. The information and applications discussed in the book are immensely helpful for researchers, database professionals, and practitioners. The book also discusses protocols, standards, and government regulations, which are very useful for policymakers.
The book provides a comprehensive introduction and a novel mathematical foundation of the field of information geometry, with complete proofs and detailed background material on measure theory, Riemannian geometry and Banach space theory. Parametrised measure models are defined as fundamental geometric objects, which can be either finite or infinite dimensional. Based on these models, canonical tensor fields are introduced and further studied, including the Fisher metric and the Amari-Chentsov tensor, and embeddings of statistical manifolds are investigated. This novel foundation then leads to application highlights, such as generalizations and extensions of the classical uniqueness result of Chentsov or the Cramér-Rao inequality. Additionally, several new application fields of information geometry are highlighted, for instance hierarchical and graphical models, complexity theory, population genetics, or Markov Chain Monte Carlo. The book will be of interest to mathematicians who are interested in geometry, information theory, or the foundations of statistics, to statisticians as well as to scientists interested in the mathematical foundations of complex systems.
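The two central objects named above can be stated concretely. For a parametrised model $p(x;\theta)$, the Fisher metric and the Cramér-Rao inequality (for an unbiased estimator $\hat\theta$) read:

```latex
% Fisher information metric on a parametrised model p(x; \theta)
g_{ij}(\theta) = \mathbb{E}_\theta\!\left[
  \frac{\partial \log p(x;\theta)}{\partial \theta^i}\,
  \frac{\partial \log p(x;\theta)}{\partial \theta^j}
\right]

% Cramér-Rao inequality: for any unbiased estimator \hat\theta,
% the covariance matrix dominates the inverse Fisher information
\operatorname{Cov}_\theta\bigl(\hat\theta\bigr) \succeq g(\theta)^{-1}
```

The book's contribution, as described above, is extending such classical statements to the infinite-dimensional setting.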
Everything you know about the future is wrong. Presumptive Design: Design Provocations for Innovation is for people "inventing" the future: future products, services, companies, strategies and policies. It introduces a design-research method that shortens time to insights from months to days. Presumptive Design is a fundamentally agile approach to identifying your audiences' key needs. Offering rapidly crafted artifacts, your teams collaborate with your customers to identify preferred and profitable elements of your desired outcome. Presumptive Design focuses on your users' problem space, informing your business strategy, your project's early stage definition, and your innovation pipeline. Comprising discussions of design theory with case studies and how-to's, the book offers business leadership, management and innovators the benefits of design thinking and user experience in the context of early stage problem definition. Presumptive Design is an advanced technique and quick to use: within days of reading this book, your research and design teams can apply the approach to capture a risk-reduced view of your future.
This book details the conceptual foundations, design and implementation of the domain-specific language (DSL) development system DjDSL. DjDSL facilitates design-decision-making on and implementation of reusable DSL and DSL-product lines, and represents the state-of-the-art in language-based and composition-based DSL development. As such, it unites elements at the crossroads between software-language engineering, model-driven software engineering, and feature-oriented software engineering. The book is divided into six chapters. Chapter 1 ("DSL as Variable Software") explains the notion of DSL as variable software in greater detail and introduces readers to the idea of software-product line engineering for DSL-based software systems. Chapter 2 ("Variability Support in DSL Development") sheds light on a number of interrelated dimensions of DSL variability: variable development processes, variable design-decisions, and variability-implementation techniques for DSL. The three subsequent chapters are devoted to the key conceptual and technical contributions of DjDSL: Chapter 3 ("Variable Language Models") explains how to design and implement the abstract syntax of a DSL in a variable manner. Chapter 4 ("Variable Context Conditions") then provides the means to refine an abstract syntax (language model) by using composable context conditions (invariants). Next, Chapter 5 ("Variable Textual Syntaxes") details solutions to implementing variable textual syntaxes for different types of DSL. In closing, Chapter 6 ("A Story of a DSL Family") shows how to develop a mixed DSL in a step-by-step manner, demonstrating how the previously introduced techniques can be employed in an advanced example of developing a DSL family. The book is intended for readers interested in language-oriented as well as model-driven software development, including software-engineering researchers and advanced software developers alike. 
An understanding of software-engineering basics (architecture, design, implementation, testing) and software patterns is essential. Readers should especially be familiar with the basics of object-oriented modelling (UML, MOF, Ecore) and programming (e.g., Java).
This treatise presents an integrated perspective on the interplay of set theory and graph theory, providing an extensive selection of examples that highlight how methods from one theory can be used to better solve problems originating in the other. Features: explores the interrelationships between sets and graphs and their applications to finite combinatorics; introduces the fundamental graph-theoretical notions from the standpoint of both set theory and dyadic logic, and presents a discussion on set universes; explains how sets can conveniently model graphs, discussing set graphs and set-theoretic representations of claw-free graphs; investigates when it is convenient to represent sets by graphs, covering counting and encoding problems, the random generation of sets, and the analysis of infinite sets; presents excerpts of formal proofs concerning graphs, whose correctness was verified by means of an automated proof-assistant; contains numerous exercises, examples, definitions, problems and insight panels.
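The idea of sets modelling graphs can be illustrated with a small sketch (not taken from the book): identify each vertex of a directed graph with the set of its out-neighbours, and check the extensionality condition under which this identification is faithful. The graph and function names here are illustrative only.

```python
# A directed graph modelled set-theoretically: each vertex is identified
# with the set of vertices it points to (its out-neighbourhood).
graph = {
    "a": {"b", "c"},
    "b": {"c"},
    "c": set(),
}

def is_extensional(g):
    """True if no two distinct vertices share the same out-neighbourhood.
    This is the condition under which vertices can be faithfully
    identified with the sets of their out-neighbours."""
    neighbourhoods = list(g.values())
    return len(neighbourhoods) == len({frozenset(n) for n in neighbourhoods})

print(is_extensional(graph))  # True: all three out-sets differ
```

Extensionality mirrors the set-theoretic axiom of the same name: two sets are equal exactly when they have the same elements, so a vertex-as-set encoding only works when distinct vertices have distinct neighbourhoods.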
This book presents the proceedings of the EAI International Conference on Computer Science: Applications in Engineering and Health Services (COMPSE 2019). The conference highlighted the latest research innovations and applications of algorithms designed for optimization applications within the fields of Science, Computer Science, Engineering, Information Technology, Management, Finance and Economics, and Health Systems. Focusing on a variety of methods and systems as well as practical examples, this collection is a significant resource for postgraduate students, decision makers, and researchers in both public and private sectors who are seeking research-based methods for modelling uncertain and unpredictable real-world problems.
In today's technology-crazed environment, distance learning is touted as a cost-effective option for delivering employee training and higher education programs, such as bachelor's, master's and even doctoral degrees. Distance Learning Technologies: Issues, Trends and Opportunities provides readers with an in-depth understanding of distance learning and the technologies available for this innovative medium of learning and instruction. It traces the development of distance learning from its history and includes suggestions for a solid strategic implementation plan to ensure its successful and effective deployment.
This book discusses the fusion of mobile and WiFi network data with semantic technologies and diverse context sources for offering semantically enriched context-aware services in the telecommunications domain. It presents the OpenMobileNetwork as a platform for providing estimated and semantically enriched mobile and WiFi network topology data using the principles of Linked Data. This platform is based on the OpenMobileNetwork Ontology consisting of a set of network context ontology facets that describe mobile network cells as well as WiFi access points from a topological perspective and geographically relate their coverage areas to other context sources. The book also introduces Linked Crowdsourced Data and its corresponding Context Data Cloud Ontology, which is a crowdsourced dataset combining static location data with dynamic context information. Linked Crowdsourced Data supports the OpenMobileNetwork by providing the necessary context data richness for more sophisticated semantically enriched context-aware services. Various application scenarios and proof of concept services as well as two separate evaluations are part of the book. As the usability of the provided services closely depends on the quality of the approximated network topologies, it compares the estimated positions for mobile network cells within the OpenMobileNetwork to a small set of real-world cell positions. The results prove that context-aware services based on the OpenMobileNetwork rely on a solid and accurate network topology dataset. The book also evaluates the performance of the exemplary Semantic Tracking as well as Semantic Geocoding services, verifying the applicability and added value of semantically enriched mobile and WiFi network data.
The study of network theory is a highly interdisciplinary field, which has emerged as a major topic of interest in various disciplines ranging from physics and mathematics, to biology and sociology. This book promotes the diverse nature of the study of complex networks by balancing the needs of students from very different backgrounds. It references the most commonly used concepts in network theory, provides examples of their applications in solving practical problems, and gives clear indications on how to analyse their results. In the first part of the book, students and researchers will discover the quantitative and analytical tools necessary to work with complex networks, including the most basic concepts in network and graph theory, linear and matrix algebra, as well as the physical concepts most frequently used for studying networks. They will also find instruction on some key skills, such as how to prove analytic results and how to manipulate empirical network data. The bulk of the text is focused on instructing readers on the most useful tools for modern practitioners of network theory. These include degree distributions, random networks, network fragments, centrality measures, clusters and communities, communicability, and local and global properties of networks. The combination of theory, example and method presented in this text should ready students to conduct their own analysis of networks with confidence, and allow teachers to select appropriate examples and problems to teach this subject in the classroom.
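Two of the tools named above, degree distributions and local clustering, are easy to compute from an adjacency list. A minimal sketch on a hypothetical four-node network (none of this is taken from the book):

```python
from collections import Counter

# A small undirected network as an adjacency list (illustrative only).
network = {
    1: {2, 3},
    2: {1, 3},
    3: {1, 2, 4},
    4: {3},
}

def degree_distribution(g):
    """Fraction of nodes having each degree k."""
    degrees = [len(neigh) for neigh in g.values()]
    n = len(g)
    return {k: c / n for k, c in Counter(degrees).items()}

def clustering(g, v):
    """Local clustering coefficient: the fraction of a node's
    neighbour pairs that are themselves connected."""
    neigh = list(g[v])
    k = len(neigh)
    if k < 2:
        return 0.0
    links = sum(1 for i in range(k) for j in range(i + 1, k)
                if neigh[j] in g[neigh[i]])
    return 2 * links / (k * (k - 1))

print(degree_distribution(network))  # {2: 0.5, 3: 0.25, 1: 0.25}
print(clustering(network, 3))        # 1/3: one of three neighbour pairs linked
```

Centrality measures and community detection build on exactly this kind of traversal of the adjacency structure.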
This book discusses examples in parametric inference with R. Combining basic theory with modern approaches, it presents the latest developments and trends in statistical inference for students who do not have an advanced mathematical and statistical background. The topics discussed in the book are fundamental and common to many fields of statistical inference and thus serve as a point of departure for in-depth study. The book is divided into eight chapters: Chapter 1 provides an overview of topics on sufficiency and completeness, while Chapter 2 briefly discusses unbiased estimation. Chapter 3 focuses on the study of moment and maximum likelihood estimators, and Chapter 4 presents bounds for the variance. In Chapter 5, topics on consistent estimators are discussed. Chapter 6 discusses Bayes methods, while Chapter 7 studies some more powerful tests. Lastly, Chapter 8 examines unbiased and other tests. Senior undergraduate and graduate students in statistics and mathematics, and those who have taken an introductory course in probability, will greatly benefit from this book. Students are expected to know matrix algebra, calculus, probability and distribution theory before beginning this course. Presenting a wealth of relevant solved and unsolved problems, the book offers an excellent tool for teachers and instructors, who can assign homework problems from the exercises, and students will find the solved examples hugely beneficial in solving the exercise problems.
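The maximum likelihood estimators covered in Chapter 3 can be sketched with a standard textbook example. The book's own examples are in R; the sketch below is an independent Python illustration of the same classical fact: for a normal sample, the MLEs of the mean and variance are the sample mean and the uncentred (divisor-n) sample variance.

```python
import math
import random

random.seed(0)

# Simulate a sample from N(mu=5, sigma=2), then recover the parameters
# by maximum likelihood.
sample = [random.gauss(5.0, 2.0) for _ in range(1000)]

def mle_normal(xs):
    """MLEs for a normal sample: sample mean and divisor-n variance."""
    n = len(xs)
    mu_hat = sum(xs) / n
    var_hat = sum((x - mu_hat) ** 2 for x in xs) / n  # divisor n, not n - 1
    return mu_hat, var_hat

mu_hat, var_hat = mle_normal(sample)
print(mu_hat, math.sqrt(var_hat))  # close to the true values 5.0 and 2.0
```

Note that the divisor-n variance estimator is biased but consistent, which ties the example to the unbiasedness and consistency topics of Chapters 2 and 5.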
The state of the art in supercomputing is summarized in this volume. The book presents selected results of the projects of the High Performance Computing Center Stuttgart (HLRS) for the year 2001. Together these contributions provide an overview of recent developments in high performance computing and simulation. Reflecting the close cooperation of the HLRS with industry, special emphasis has been put on the industrial relevance of the presented results and methods. The book therefore becomes a collection of showcases for an innovative usage of state-of-the-art modeling, novel numerical algorithms and the use of leading edge high performance computing systems in a GRID-like environment.
This book offers readers an easy introduction into quantum computing as well as into the design of corresponding devices. The authors cover several design tasks which are important for quantum computing and introduce corresponding solutions. A special feature of the book is that those tasks and solutions are explicitly discussed from a design automation perspective, i.e., utilizing clever algorithms and data structures which have been developed by the design automation community for conventional logic (i.e., for electronic devices and systems) and are now applied to this new technology. In this way, relevant design tasks can be conducted in a much more efficient fashion than before, leading to improvements of several orders of magnitude (with respect to runtime and other design objectives). The book describes the current state of the art for designing quantum circuits, for simulating them, and for mapping them to real hardware; provides a first comprehensive introduction to design automation for quantum computing that tackles practically relevant tasks; and targets the quantum computing community as well as the design automation community, showing both perspectives on quantum computing and the impressive improvements that are possible when combining the knowledge of both communities.
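To make "simulating quantum circuits" concrete, here is a toy dense state-vector simulator, an independent sketch rather than anything from the book: real design-automation tools use far more compact data structures precisely because this dense representation grows as 2^n. The gate-application scheme below is the standard bit-indexing trick.

```python
import math

# Hadamard gate as a 2x2 real matrix.
H = [[1 / math.sqrt(2),  1 / math.sqrt(2)],
     [1 / math.sqrt(2), -1 / math.sqrt(2)]]

def apply_1q(state, gate, target, n_qubits):
    """Apply a 2x2 gate to the `target` qubit of an n-qubit state vector.

    Each basis index i is split on the target bit; the gate mixes the
    amplitude pairs that differ only in that bit."""
    new = [0.0] * len(state)
    for i, amp in enumerate(state):
        if amp == 0.0:
            continue
        bit = (i >> target) & 1
        for new_bit in (0, 1):
            j = i ^ ((bit ^ new_bit) << target)  # flip target bit if needed
            new[j] += gate[new_bit][bit] * amp
    return new

# Start in |00>, apply H to qubit 0: equal superposition on that qubit.
state = [1.0, 0.0, 0.0, 0.0]
state = apply_1q(state, H, 0, 2)
print(state)  # [0.707..., 0.707..., 0.0, 0.0]
```

Since H is its own inverse, applying it twice returns the state to |00>, which makes a handy sanity check for any simulator of this kind.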