This treatise presents an integrated perspective on the interplay of set theory and graph theory, providing an extensive selection of examples that highlight how methods from one theory can be used to better solve problems originating in the other. Features: explores the interrelationships between sets and graphs and their applications to finite combinatorics; introduces the fundamental graph-theoretical notions from the standpoint of both set theory and dyadic logic, and presents a discussion on set universes; explains how sets can conveniently model graphs, discussing set graphs and set-theoretic representations of claw-free graphs; investigates when it is convenient to represent sets by graphs, covering counting and encoding problems, the random generation of sets, and the analysis of infinite sets; presents excerpts of formal proofs concerning graphs, whose correctness was verified by means of an automated proof assistant; contains numerous exercises, examples, definitions, problems and insight panels.
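To make the set-graph correspondence concrete, here is a minimal sketch of ours, not taken from the book, of one direction of that modeling: an extensional acyclic digraph (no two vertices share the same out-neighbourhood) can be encoded by hereditarily finite sets so that its edge relation becomes set membership.

    # Minimal sketch (assumptions: the digraph is acyclic and extensional,
    # i.e. no two vertices have identical out-neighbourhoods).
    # Each vertex is mapped to the frozenset of its out-neighbours' sets;
    # an edge u -> v then holds iff sets[v] is a member of sets[u].

    dag = {"a": ["b", "c"], "b": ["c"], "c": []}  # hypothetical example

    def encode(dag):
        """Map each vertex to a hereditarily finite set."""
        memo = {}
        def build(v):
            if v not in memo:
                memo[v] = frozenset(build(w) for w in dag[v])
            return memo[v]
        for v in dag:
            build(v)
        return memo

    sets = encode(dag)
    # Membership now mirrors the edge relation exactly:
    assert all((sets[v] in sets[u]) == (v in dag[u])
               for u in dag for v in dag)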
This book describes practical ASIC design scenarios, from simple to complex, using Verilog. It builds a story from the fundamentals of ASIC design to advanced RTL design concepts using Verilog. In view of current trends towards miniaturization, the contents provide practical information on the issues in ASIC design and synthesis using Synopsys DC, and their solutions. The book explains how to write efficient RTL using Verilog and how to improve design performance. It also covers architecture design strategies, multiple clock domain designs, low-power design techniques, DFT, pre-layout STA and the overall ASIC design flow with case studies. The contents of this book will be useful to practicing hardware engineers, students, and hobbyists looking to learn about ASIC design and synthesis.
In this work we review the main techniques for enumeration algorithms and show four examples of enumeration algorithms that can be applied to efficiently deal with some biological problems modelled using biological networks: enumerating central and peripheral nodes of a network, enumerating stories, enumerating paths or cycles, and enumerating bubbles. Notice that the corresponding computational problems we define are of more general interest and our results hold in the case of arbitrary graphs. Enumerating the most and least central vertices in a network according to their eccentricity is an example of an enumeration problem whose solutions are polynomially many and can be listed in polynomial time, very often in linear or almost linear time in practice. Enumerating stories, i.e. all maximal directed acyclic subgraphs of a graph G whose sources and targets belong to a predefined subset of the vertices, is on the other hand an example of an enumeration problem with an exponential number of solutions, which can be solved by a non-trivial brute-force approach. Given a metabolic network, each individual story should explain how some interesting metabolites are derived from some others through a chain of reactions, by keeping all alternative pathways between sources and targets. Enumerating cycles or paths in an undirected graph, such as a protein-protein interaction network, is an example of an enumeration problem in which all the solutions can be listed through an optimal algorithm, i.e. the time required to list all the solutions is dominated by the time to read the graph plus the time required to print all of them. By extending this result to directed graphs, it would be possible to deal more efficiently with feedback loop and signed path analysis in signed or interaction directed graphs, such as gene regulatory networks. Finally, enumerating mouths or bubbles with a source s in a directed graph, that is, enumerating all pairs of vertex-disjoint directed paths between the source s and all possible targets, is an example of an enumeration problem in which all the solutions can be listed through a linear-delay algorithm, meaning that the delay between any two consecutive solutions is linear, by turning the problem into a constrained cycle enumeration problem. Such patterns, in a de Bruijn graph representation of the reads obtained by sequencing, are related to polymorphisms in DNA- or RNA-seq data.
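As a baseline for the first of these problems, here is a minimal sketch of ours (not the authors' algorithms) that fixes the definitions: it computes every vertex's eccentricity with one BFS per vertex, then lists the centre (minimum eccentricity) and periphery (maximum eccentricity). The work described above is precisely about doing much better than this quadratic baseline in practice.

    # Assumptions: unweighted, connected, undirected graph as an adjacency dict.
    from collections import deque

    def eccentricities(adj):
        """Return {vertex: eccentricity} via one BFS per vertex."""
        ecc = {}
        for s in adj:
            dist = {s: 0}
            queue = deque([s])
            while queue:
                u = queue.popleft()
                for v in adj[u]:
                    if v not in dist:
                        dist[v] = dist[u] + 1
                        queue.append(v)
            ecc[s] = max(dist.values())
        return ecc

    adj = {1: [2], 2: [1, 3, 4], 3: [2], 4: [2, 5], 5: [4]}
    ecc = eccentricities(adj)
    center = [v for v, e in ecc.items() if e == min(ecc.values())]
    periphery = [v for v, e in ecc.items() if e == max(ecc.values())]
    print(center, periphery)  # [2, 4] and [1, 3, 5]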
This book discusses efficient prediction techniques for the current state-of-the-art High Efficiency Video Coding (HEVC) standard, focusing on the compression of a wide range of video signals, such as 3D video, Light Fields and natural images. The authors begin with a review of the state-of-the-art predictive coding methods and compression technologies for both 2D and 3D multimedia contents, which provides a good starting point for new researchers in the field of image and video compression. New prediction techniques that go beyond the standardized compression technologies are then presented and discussed. In the context of 3D video, the authors describe a new predictive algorithm for the compression of depth maps, which combines intra-directional prediction with flexible block partitioning and linear residue fitting. New approaches are described for the compression of Light Field and still images, which enforce sparsity constraints on linear models. The Locally Linear Embedding-based prediction method is investigated for compression of Light Field images based on the HEVC technology. A new linear prediction method using sparse constraints is also described, enabling improved coding performance of the HEVC standard, particularly for images with complex textures based on repeated structures. Finally, the authors present a new, generalized intra-prediction framework for the HEVC standard, which unifies the directional prediction methods used in the current video compression standards with linear prediction methods using sparse constraints. Experimental results for the compression of natural images are provided, demonstrating the advantage of the unified prediction framework over the traditional directional prediction modes used in the HEVC standard.
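For readers new to predictive coding, the following toy sketch, ours and far simpler than anything in the book, shows the basic mechanism that all of these methods refine: predict a block from its already-reconstructed neighbours and code only the residual. It implements plain DC prediction with NumPy; the data and the "neighbour" samples are fabricated for illustration.

    import numpy as np

    def dc_predict(top_row, left_col, n):
        """Predict an n x n block as the mean of the neighbouring samples."""
        dc = (top_row.sum() + left_col.sum()) / (len(top_row) + len(left_col))
        return np.full((n, n), dc)

    rng = np.random.default_rng(0)
    block = rng.integers(90, 110, (4, 4)).astype(float)  # a smooth 4x4 block
    top = block[0] + rng.normal(0, 1, 4)    # fake reconstructed neighbours
    left = block[:, 0] + rng.normal(0, 1, 4)

    prediction = dc_predict(top, left, 4)
    residual = block - prediction  # this, not the block, gets transformed/coded
    print(abs(residual).mean(), "vs", abs(block).mean())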
This book highlights some of the unique aspects of spatio-temporal graph data from the perspectives of modeling and developing scalable algorithms. In the first part of the book, the authors discuss the semantic aspects of spatio-temporal graph data in two application domains, viz., urban transportation and social networks. They then present representational models and data structures which can effectively capture these semantics, while ensuring support for computationally scalable algorithms. In the second part of the book, the authors describe algorithmic development issues in spatio-temporal graph data. These algorithms internally use the semantically rich data structures developed in the earlier part of the book. Finally, the authors introduce some upcoming spatio-temporal graph datasets, such as engine measurement data, and discuss some open research problems in the area. This book will be useful as a secondary text for advanced-level students entering relevant fields of computer science, such as transportation and urban planning. It may also be useful for researchers and practitioners in the field of navigational algorithms.
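One classic representational choice in this space, offered as our own illustration rather than as the book's model, is to treat a transportation network as a set of timetabled connections and answer earliest-arrival queries with a single scan of the connections sorted by departure time:

    def earliest_arrival(connections, source, t0):
        """Earliest arrival at every node, scanning connections by departure."""
        best = {source: t0}
        for u, v, dep, arr in sorted(connections, key=lambda c: c[2]):
            # Usable only if we can be at u by its departure time.
            if best.get(u, float("inf")) <= dep and arr < best.get(v, float("inf")):
                best[v] = arr
        return best

    # Hypothetical connections: (from, to, departure, arrival).
    conns = [("a", "b", 0, 2), ("b", "c", 3, 5), ("a", "c", 1, 9), ("b", "c", 1, 4)]
    print(earliest_arrival(conns, "a", 0))  # {'a': 0, 'b': 2, 'c': 5}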
This volume collects contributions written by different experts in honor of Prof. Jaime Munoz Masque. It covers a wide variety of research topics, from differential geometry to algebra, but particularly focuses on the geometric formulation of variational calculus; geometric mechanics and field theories; symmetries and conservation laws of differential equations; and pseudo-Riemannian geometry of homogeneous spaces. It also discusses algebraic applications to cryptography and number theory. It offers state-of-the-art contributions in the context of current research trends. The final result is a challenging panoramic view of connecting problems that initially appear distant.
The volume contains the latest research on software reliability assessment, testing, quality management, inventory management, mathematical modeling, analysis using soft computing techniques, and management analytics. It links researcher and practitioner perspectives from different branches of engineering and management, and from around the world, for a bird's-eye view of the topics. The interdisciplinarity of engineering and management research is widely recognized and is especially significant in today's fast-changing environment. With insights from the volume, companies looking to drive decision making gain actionable insight at each level and for every role, using key indicators to generate mobile-enabled scorecards, time-series-based analysis using charts, and dashboards. At the same time, the book provides scholars with a platform to derive maximum utility in the area by subscribing to the idea of managing business through performance and business analytics.
Pattern Recognition on Oriented Matroids covers a range of innovative problems in combinatorics, poset and graph theories, optimization, and number theory that constitute a far-reaching extension of the arsenal of committee methods in pattern recognition. The groundwork for modern committee theory was laid in the mid-1960s, when it was shown that the familiar notion of a solution to a feasible system of linear inequalities has ingenious analogues which can serve as collective solutions to infeasible systems. A hierarchy of dialects in the language of mathematics - for instance, open cones in the context of linear inequality systems, regions of hyperplane arrangements, and maximal covectors (or topes) of oriented matroids - provides an excellent opportunity to take a fresh look at the infeasible system of homogeneous strict linear inequalities, the standard working model for the contradictory two-class pattern recognition problem in its geometric setting. The universal language of oriented matroid theory considerably simplifies a structural and enumerative analysis of applied aspects of the infeasibility phenomenon. The present book is devoted to several selected topics in the emerging theory of pattern recognition on oriented matroids: the questions of existence and applicability of matroidal generalizations of committee decision rules and related graph-theoretic constructions to oriented matroids with very weak restrictions on their structural properties; a study (in which, in particular, interesting subsequences of the Farey sequence appear naturally) of the hierarchy of the corresponding tope committees; a description of the three-tope committees that are the most attractive approximation to the notion of solution to an infeasible system of linear constraints; an application of convexity in oriented matroids as well as blocker constructions in combinatorial optimization and in poset theory to enumerative problems on tope committees; an attempt to clarify how elementary changes (one-element reorientations) in an oriented matroid affect the family of its tope committees; a discrete Fourier analysis of the important family of critical tope committees through rank and distance relations in the tope poset and the tope graph; and the characterization of a key combinatorial role played by the symmetric cycles in hypercube graphs. Contents: Oriented Matroids, the Pattern Recognition Problem, and Tope Committees; Boolean Intervals; Dehn-Sommerville Type Relations; Farey Subsequences; Blocking Sets of Set Families, and Absolute Blocking Constructions in Posets; Committees of Set Families, and Relative Blocking Constructions in Posets; Layers of Tope Committees; Three-Tope Committees; Halfspaces, Convex Sets, and Tope Committees; Tope Committees and Reorientations of Oriented Matroids; Topes and Critical Committees; Critical Committees and Distance Signals; Symmetric Cycles in the Hypercube Graphs.
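As a small aside on the Farey material: the Farey sequence F_n lists all reduced fractions in [0, 1] with denominator at most n, and can be generated with the classical next-term recurrence on consecutive neighbours. The sketch below is ours, purely to fix the definition; the book studies particular subsequences of F_n.

    from fractions import Fraction

    def farey(n):
        """Yield the terms of F_n in increasing order."""
        a, b, c, d = 0, 1, 1, n       # two consecutive terms a/b < c/d
        yield Fraction(a, b)
        while c <= n:
            k = (n + b) // d          # mediant-based next-term recurrence
            a, b, c, d = c, d, k * c - a, k * d - b
            yield Fraction(a, b)

    print([str(f) for f in farey(4)])
    # ['0', '1/4', '1/3', '1/2', '2/3', '3/4', '1']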
Based on the latest version of the language, this book offers a self-contained, concise and coherent introduction to programming with Python. The book's primary focus is on realistic case study applications of Python. Each practical example is accompanied by a brief explanation of the problem, its terminology and concepts, followed by the necessary program development in Python using its constructs, and simulated testing. Given the open and participatory nature of its development, Python has accumulated a variety of built-in data structures, which makes it difficult to present the language in a coherent manner. Further, some advanced concepts (super, yield, generator, decorator, etc.) are not easy to explain. The book specifically addresses these challenges; starting with a minimal subset of the core, it offers users a step-by-step guide to achieving proficiency.
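To give a flavour of two of the "not easy to explain" features named above, here is a short, self-contained illustration of ours (not an excerpt from the book): a generator built with yield, and a decorator that times whatever function it wraps.

    import functools
    import time

    def timed(func):
        """Decorator: report how long each call to func takes."""
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            start = time.perf_counter()
            result = func(*args, **kwargs)
            print(f"{func.__name__} took {time.perf_counter() - start:.6f}s")
            return result
        return wrapper

    def fibonacci():
        """Generator: lazily yield the Fibonacci numbers, one per next()."""
        a, b = 0, 1
        while True:
            yield a
            a, b = b, a + b

    @timed
    def first(n, gen):
        """Consume the first n values of a generator."""
        return [x for _, x in zip(range(n), gen)]

    print(first(10, fibonacci()))  # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]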
This book explains the most prominent and some promising new general techniques that combine metaheuristics with other optimization methods. A first introductory chapter reviews the basic principles of local search and prominent metaheuristics, as well as tree search, dynamic programming, mixed integer linear programming, and constraint programming for combinatorial optimization purposes. The chapters that follow present five generally applicable hybridization strategies, with exemplary case studies on selected problems: incomplete solution representations and decoders; problem instance reduction; large neighborhood search; parallel non-independent construction of solutions within metaheuristics; and hybridization based on complete solution archives. The authors are among the leading researchers in the hybridization of metaheuristics with other techniques for optimization, and their work reflects the broad shift to problem-oriented rather than algorithm-oriented approaches, enabling faster and more effective implementation in real-life applications. This hybridization is not restricted to different variants of metaheuristics but includes, for example, the combination of mathematical programming, dynamic programming, or constraint programming with metaheuristics, reflecting cross-fertilization in fields such as optimization, algorithmics, mathematical modeling, operations research, statistics, and simulation. The book is a valuable introduction and reference for researchers and graduate students in these domains.
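To give a feel for one of these strategies, large neighborhood search, here is a minimal sketch of ours (the book's case studies are far more elaborate): repeatedly destroy a random part of the incumbent solution, repair it with a constructive heuristic, and keep improvements. The toy problem, a 0/1 knapsack instance, is our choice for illustration only.

    import random

    random.seed(1)
    values  = [random.randint(1, 100) for _ in range(50)]
    weights = [random.randint(1, 30) for _ in range(50)]
    CAP = 300

    def repair(chosen):
        """Greedy repair: add the densest remaining items that still fit."""
        load = sum(weights[i] for i in chosen)
        for i in sorted(range(len(values)), key=lambda j: -values[j] / weights[j]):
            if i not in chosen and load + weights[i] <= CAP:
                chosen.add(i)
                load += weights[i]
        return chosen

    best = repair(set())
    best_val = sum(values[i] for i in best)
    for _ in range(200):
        trial = set(best)
        for i in random.sample(sorted(trial), k=len(trial) // 4):  # destroy
            trial.remove(i)
        trial = repair(trial)                                      # repair
        val = sum(values[i] for i in trial)
        if val > best_val:                                         # accept
            best, best_val = trial, val
    print(best_val)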
With the growing popularity of "big data", the potential value of personal data has attracted more and more attention. Applications built on personal data can create tremendous social and economic benefits. Meanwhile, they bring serious threats to individual privacy. The extensive collection, analysis and trading of personal data make it difficult for an individual to keep their privacy safe. People now show more concern about privacy than ever before. How to strike a balance between the exploitation of personal information and the protection of individual privacy has become an urgent issue. In this book, the authors use methodologies from economics, especially game theory, to investigate solutions to this balance issue. They investigate the strategies of stakeholders involved in the use of personal data, and try to find the equilibrium. The book proposes a user-role-based methodology to investigate the privacy issues in data mining, identifying four different types of users, i.e. four user roles, involved in data mining applications. For each user role, the authors discuss its privacy concerns and the strategies that it can adopt to solve the privacy problems. The book also proposes a simple game model to analyze the interactions among data provider, data collector and data miner. By solving the equilibria of the proposed game, readers can get useful guidance on how to deal with the trade-off between privacy and data utility. Moreover, to elaborate the analysis of the data collector's strategies, the authors propose a contract model and a multi-armed bandit model. The authors also discuss how the owners of data (e.g. an individual or a data miner) deal with the trade-off between privacy and utility in data mining. Specifically, they study users' strategies in a collaborative filtering based recommendation system and a distributed classification system. They build game models to formulate the interactions among data owners, and propose learning algorithms to find the equilibria.
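The multi-armed bandit view mentioned above can be caricatured in a few lines: a data collector repeatedly chooses among privacy/utility trade-off levels ("arms") with unknown payoffs, learning which one to favour. Epsilon-greedy is our illustrative policy and all numbers are fabricated; the book's concrete model may differ.

    import random

    random.seed(0)
    true_payoff = [0.3, 0.5, 0.7]   # hypothetical expected reward per arm
    counts = [0, 0, 0]
    estimates = [0.0, 0.0, 0.0]
    EPS = 0.1

    for t in range(10_000):
        if random.random() < EPS:                     # explore
            arm = random.randrange(3)
        else:                                         # exploit the best estimate
            arm = max(range(3), key=lambda a: estimates[a])
        reward = 1.0 if random.random() < true_payoff[arm] else 0.0
        counts[arm] += 1
        estimates[arm] += (reward - estimates[arm]) / counts[arm]  # running mean

    print(counts)     # most pulls should go to the best arm (index 2)
    print(estimates)  # estimates converge toward the true payoffs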
This book explores the future of cyber technologies and cyber operations, which will influence advances in social media, cyber security, cyber physical systems, ethics, law, media, economics, infrastructure, military operations and other elements of societal interaction in the upcoming decades. It provides a review of future disruptive technologies and innovations in cyber security. It also serves as a resource for wargame planning and provides a strategic vision of the future direction of cyber operations, informing military strategists about the future of cyber warfare. Written by leading experts in the field, its chapters explore how future technical innovations will vastly increase the interconnectivity of our physical and social systems, and the growing need for resiliency in this vast and dynamic cyber infrastructure. The future of social media, autonomy, stateless finance, quantum information systems, the internet of things, the dark web, space satellite operations, and global network connectivity is explored, along with the transformation of the legal and ethical considerations that surround them. The international challenges of cyber alliances, capabilities, and interoperability are examined alongside the growing need for new laws, international oversight, and regulation that informs cybersecurity studies. The authors take a multi-disciplinary scope arranged in a big-picture framework, allowing both deep exploration of important topics and a high-level understanding of the subject. Evolution of Cyber Technologies and Operations to 2035 is an excellent reference for professionals and researchers working in the security field, as well as for those in government, the military, economics, law and more. Students will also find this book useful as a reference guide or secondary textbook.
Transactions are a concept related to the logical database as seen from the perspective of database application programmers: a transaction is a sequence of database actions that is to be executed as an atomic unit of work. The processing of transactions on databases is a well-established area with many of its foundations having already been laid in the late 1970s and early 1980s. The unique feature of this textbook is that it bridges the gap between the theory of transactions on the logical database and the implementation of the related actions on the underlying physical database. The authors relate the logical database, which is composed of a dynamically changing set of data items with unique keys, and the underlying physical database with a set of fixed-size data and index pages on disk. Their treatment of transaction processing builds on the "do-redo-undo" recovery paradigm, and all methods and algorithms presented are carefully designed to be compatible with this paradigm as well as with write-ahead logging, steal-and-no-force buffering, and fine-grained concurrency control. Chapters 1 to 6 address the basics needed to fully appreciate transaction processing on a centralized database system within the context of our transaction model, covering topics like ACID properties, database integrity, buffering, rollbacks, isolation, and the interplay of logical locks and physical latches. Chapters 7 and 8 present advanced features including deadlock-free algorithms for reading, inserting and deleting tuples, while the remaining chapters cover additional advanced topics that extend the preceding foundational chapters, including multi-granular locking, bulk actions, versioning, distributed updates, and write-intensive transactions. This book is primarily intended as a text for advanced undergraduate or graduate courses on database management in general or transaction processing in particular.
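The flavour of the "do-redo-undo" paradigm can be conveyed with a deliberately schematic sketch of ours (the book's algorithms handle pages, latches and fine-grained locking, none of which appear here): every update is logged before it is applied, and recovery first redoes all logged work, then undoes the updates of transactions that never committed.

    db = {"x": 0, "y": 0}          # the "physical database"
    log = []                       # the write-ahead log, appended before writes

    def write(tid, key, new):
        log.append(("update", tid, key, db[key], new))  # log old and new value
        db[key] = new

    def commit(tid):
        log.append(("commit", tid))

    def recover(db, log):
        """Redo everything, then undo updates of uncommitted transactions."""
        committed = {rec[1] for rec in log if rec[0] == "commit"}
        for rec in log:                           # redo pass (forward)
            if rec[0] == "update":
                _, tid, key, old, new = rec
                db[key] = new
        for rec in reversed(log):                 # undo pass (backward)
            if rec[0] == "update" and rec[1] not in committed:
                _, tid, key, old, new = rec
                db[key] = old

    write("T1", "x", 1); commit("T1")
    write("T2", "y", 9)                 # T2 crashes before committing
    recover(db, log)
    print(db)                           # {'x': 1, 'y': 0}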
This book presents two practical physical attacks. It shows how attackers can reveal the secret key of symmetric as well as asymmetric cryptographic algorithms based on these attacks, and presents countermeasures on the software and the hardware level that can help to prevent them in the future. Though their theory has been known for several years, neither attack had previously been successfully implemented in practice, so they have generally not been considered a serious threat. In short, their physical attack complexity has been overestimated and the implied security threat has been underestimated. First, the book introduces the photonic side channel, which offers not only temporal resolution, but also the highest possible spatial resolution. Due to the high cost of its initial implementation, it has not been taken seriously. The work shows both simple and differential photonic side channel analyses. Then, it presents a fault attack against pairing-based cryptography. Due to the need for at least two independent precise faults in a single pairing computation, it has not been taken seriously either. Based on these two attacks, the book demonstrates that the assessment of physical attack complexity is error-prone, and as such cryptography should not rely on it. Cryptographic technologies have to be protected against all physical attacks, whether they have already been successfully implemented or not. The development of countermeasures does not require the successful execution of an attack but can already be carried out as soon as the principle of a side channel or a fault attack is sufficiently understood.
This book provides developers, engineers, researchers and students with detailed knowledge about the High Efficiency Video Coding (HEVC) standard. HEVC is the successor to the widely successful H.264/AVC video compression standard, and it provides around twice as much compression as H.264/AVC for the same level of quality. The applications for HEVC will not only cover the space of the well-known current uses and capabilities of digital video; they will also include the deployment of new services and the delivery of enhanced video quality, such as ultra-high-definition television (UHDTV) and video with higher dynamic range, a wider range of representable color, and greater representation precision than what is typically found today. HEVC is the next major generation of video coding design: a flexible, reliable and robust solution that will support the next decade of video applications and ease the burden of video on world-wide network traffic. This book provides a detailed explanation of the various parts of the standard, insight into how it was developed, and in-depth discussion of the algorithms and architectures for its implementation.
This book constitutes the refereed proceedings of the 40th International Conference on Current Trends in Theory and Practice of Computer Science, SOFSEM 2014, held in Novy Smokovec, Slovakia, in January 2014. The 40 revised full papers presented in this volume were carefully reviewed and selected from 104 submissions. The book also contains 6 invited talks. The contributions cover topics such as foundations of computer science; software and web engineering; data, information and knowledge engineering; and cryptography, security and verification.
This book provides formal and informal definitions and taxonomies for self-aware computing systems, and explains how self-aware computing relates to many existing subfields of computer science, especially software engineering. It describes architectures and algorithms for self-aware systems as well as the benefits and pitfalls of self-awareness, and reviews much of the latest relevant research across a wide array of disciplines, including open research challenges. The chapters of this book are organized into five parts: Introduction, System Architectures, Methods and Algorithms, Applications and Case Studies, and Outlook. Part I offers an introduction that defines self-aware computing systems from multiple perspectives, and establishes a formal definition, a taxonomy and a set of reference scenarios that help to unify the remaining chapters. Next, Part II explores architectures for self-aware computing systems, such as generic concepts and notations that allow a wide range of self-aware system architectures, covering both isolated and interacting systems, to be described and compared. It also reviews the current state of reference architectures, architectural frameworks, and languages for self-aware systems. Part III focuses on methods and algorithms for self-aware computing systems by addressing issues pertaining to system design, like modeling, synthesis and verification. It also examines topics such as adaptation, benchmarks and metrics. Part IV then presents applications and case studies in various domains including cloud computing, data centers, cyber-physical systems, and the degree to which self-aware computing approaches have been adopted within those domains. Lastly, Part V surveys open challenges and future research directions for self-aware computing systems. The book can be used as a handbook for professionals and researchers working in areas related to self-aware computing, and can also serve as an advanced textbook for lecturers and postgraduate students studying subjects like advanced software engineering, autonomic computing, self-adaptive systems, and data-center resource management. Each chapter is largely self-contained, and offers plenty of references for anyone wishing to pursue the topic more deeply.
This book presents essential studies and applications in the context of sliding mode control, highlighting the latest findings from interdisciplinary theoretical studies, ranging from computational algorithm development to representative applications. Readers will learn how to easily tailor the techniques to accommodate their ad hoc applications. To make the content as accessible as possible, the book employs a clear route in each chapter, moving from background to motivation, to quantitative development (equations), and lastly to case studies/illustrations/tutorials (simulations, experiments, curves, tables, etc.). Though primarily intended for graduate students, professors and researchers from related fields, the book will also benefit engineers and scientists from industry.
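For orientation, the core sliding mode idea fits in a few lines. The toy below is ours, not drawn from any chapter: for a double integrator x_ddot = u, the switching law u = -K * sign(s) with sliding variable s = v + lam * x first drives the state onto the surface s = 0, after which the error decays exponentially along it.

    import math

    lam, K, dt = 2.0, 5.0, 1e-3
    x, v = 1.0, 0.0                            # initial position error, velocity

    for step in range(int(5 / dt)):            # simulate 5 seconds
        s = v + lam * x                        # sliding variable
        u = -K * math.copysign(1.0, s)         # switching control
        v += u * dt                            # x_ddot = u (Euler integration)
        x += v * dt

    print(f"x = {x:.4f}, v = {v:.4f}")         # both near 0 after 5 s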
Physically unclonable functions (PUFs) are innovative physical security primitives that produce unclonable and inherent instance-specific measurements of physical objects; in many ways they are the inanimate equivalent of biometrics for human beings. Since they are able to securely generate and store secrets, they allow us to bootstrap the physical implementation of an information security system. In this book the author discusses PUFs in all their facets: the multitude of their physical constructions, the algorithmic and physical properties which describe them, and the techniques required to deploy them in security applications. The author first presents an extensive overview and classification of PUF constructions, with a focus on so-called intrinsic PUFs. He identifies subclasses, implementation properties, and design techniques used to amplify submicroscopic physical distinctions into observable digital response vectors. He lists the useful qualities attributed to PUFs and captures them in descriptive definitions, identifying the truly PUF-defining properties in the process, and he also presents the details of a formal framework for deploying PUFs and similar physical primitives in cryptographic reductions. The author then describes a silicon test platform carrying different intrinsic PUF structures which was used to objectively compare their reliability, uniqueness, and unpredictability based on experimental data. In the final chapters, the author explains techniques for PUF-based entity identification, entity authentication, and secure key generation. He proposes practical schemes that implement these techniques, and derives and calculates measures for assessing different PUF constructions in these applications based on the quality of their response statistics. Finally, he presents a fully functional prototype implementation of a PUF-based cryptographic key generator, demonstrating the full benefit of using PUFs and the efficiency of the processing techniques described. This is a suitable introduction and reference for security researchers and engineers, and graduate students in information security and cryptography.
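The identification technique mentioned above can be caricatured in a few lines; the sketch and all parameters below are illustrative assumptions of ours, not the author's schemes. A PUF response is a noisy device-specific bit vector, so a device matches an enrolled reference whenever the Hamming distance between a fresh re-measurement and the reference stays below a threshold.

    import random

    random.seed(42)
    N, NOISE, THRESHOLD = 128, 0.05, 32  # response bits, bit-flip rate, cutoff

    def enroll():
        """Enrollment: record a device's reference response."""
        return [random.randrange(2) for _ in range(N)]

    def measure(reference):
        """Re-measurement: the same device, with per-bit noise."""
        return [b ^ (random.random() < NOISE) for b in reference]

    def hamming(a, b):
        return sum(x != y for x, y in zip(a, b))

    device_a, device_b = enroll(), enroll()
    d_same = hamming(device_a, measure(device_a))
    d_other = hamming(device_a, device_b)
    print(d_same, d_same < THRESHOLD)    # small distance (~N*NOISE): accepted
    print(d_other, d_other < THRESHOLD)  # ~N/2 for a different device: rejected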
The book presents laboratory experiments concerning ARM microcontrollers, and discusses the architecture of the Tiva Cortex-M4 ARM microcontrollers from Texas Instruments, describing various ways of programming them. Given the meager peripherals and sensors available on the standard kit, the authors describe the design of Padma, a circuit board with a large set of peripherals and sensors that connects to the Tiva Launchpad and exploits the Tiva microcontroller family's on-chip features. ARM microcontrollers, which are classified as 32-bit devices, are currently the most popular of all microcontrollers, covering a wide range of applications that extends from those of traditional 8-bit devices to fully 32-bit applications. Of the various ARM subfamilies, Cortex-M4 is a middle-level microcontroller that lends itself well to data acquisition and control as well as digital signal manipulation applications. Given the prominence of ARM microcontrollers, it is important that they be incorporated in academic curricula. However, there is a lack of up-to-date teaching material - textbooks and comprehensive laboratory manuals. In this book each of the microcontroller's resources - digital input and output, timers and counters, serial communication channels, analog-to-digital conversion, interrupt structure and power management features - is addressed in a set of more than 70 experiments to help teach a full semester course on these microcontrollers. Beyond these physical interfacing exercises, it describes an inexpensive BoB (break-out board) that allows students to learn how to design and build standalone projects, as well as a number of illustrative projects.
With the proliferation of Software-as-a-Service (SaaS) offerings, it is becoming increasingly important for individual SaaS providers to operate their services at a low cost. This book investigates SaaS from the perspective of the provider and shows how operational costs can be reduced by using "multi-tenancy," a technique for consolidating a large number of customers onto a small number of servers. Specifically, the book addresses multi-tenancy on the database level, focusing on in-memory column databases, which are the backbone of many important new enterprise applications. For efficiently implementing multi-tenancy in a farm of databases, two fundamental challenges must be addressed: (i) workload modeling and (ii) data placement. The first involves estimating the (shared) resource consumption for multi-tenancy on a single in-memory database server. The second consists of assigning tenants to servers in a way that minimizes the number of required servers (and thus costs) based on the assumed workload model. This step also entails replicating tenants for performance and high availability. This book presents novel solutions to both problems.
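At its simplest, the placement step can be viewed as one-dimensional bin packing: assign tenants, with load estimates coming from the workload model, to as few fixed-capacity servers as possible. The sketch below is our illustration of that framing using the first-fit decreasing heuristic; the book's placement algorithms, which also handle replication, are more sophisticated.

    def place(loads, capacity):
        """First-fit decreasing: returns a list of servers (lists of loads)."""
        servers, free = [], []
        for load in sorted(loads, reverse=True):
            for i, slack in enumerate(free):
                if load <= slack:              # first server it fits on
                    servers[i].append(load)
                    free[i] -= load
                    break
            else:                              # no fit: open a new server
                servers.append([load])
                free.append(capacity - load)
        return servers

    # Hypothetical tenant load estimates, in percent of one server.
    tenant_loads = [51, 27, 25, 48, 12, 9, 33, 40]
    for i, s in enumerate(place(tenant_loads, capacity=100)):
        print(f"server {i}: {s} (load {sum(s)})")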
This book introduces new logic primitives for electronic design automation tools. The author approaches fundamental EDA problems from a different, unconventional perspective, in order to demonstrate the key role of rethinking EDA solutions in overcoming the technological limitations of present and future technologies. The author discusses techniques that improve the efficiency of logic representation, manipulation and optimization tasks by taking advantage of majority and biconditional logic primitives. Readers will learn how to accelerate formal methods by studying core properties of logic circuits and developing new frameworks for logic reasoning engines.
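For readers unfamiliar with the majority primitive: MAJ(a, b, c) returns 1 exactly when at least two inputs are 1, and it subsumes both AND and OR, which is part of what makes it attractive as a single building block. A tiny self-contained check of ours:

    def maj(a, b, c):
        """Majority of three bits: 1 iff at least two inputs are 1."""
        return (a & b) | (a & c) | (b & c)

    for a in (0, 1):
        for b in (0, 1):
            # Fixing the third input specializes majority to AND or OR:
            assert maj(a, b, 0) == a & b
            assert maj(a, b, 1) == a | b
    print("MAJ(1, 0, 1) =", maj(1, 0, 1))  # 1: two of the three inputs are set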
Evolutionary algorithms constitute a class of well-known algorithms, which are designed based on the Darwinian theory of evolution and the Mendelian theory of heredity. They are based partly on random and partly on deterministic principles. Due to this nature, it is challenging to predict and control their performance in solving complex nonlinear problems. Recently, the study of evolutionary dynamics has focused not only on the traditional investigations but also on understanding and analyzing new principles, with the intention of controlling and utilizing their properties and performance toward more effective real-world applications. This book, based on many years of intensive research by the authors, proposes novel ideas about advancing evolutionary dynamics towards new phenomena, including many new topics, even the dynamics of equivalent social networks. In fact, it covers more advanced complex networks and combines them with coupled map lattices (CMLs), which are usually used for the simulation and analysis of spatiotemporal complex systems, based on the observation that, just as chaos in a CML can be controlled, so can evolutionary dynamics. All the chapter authors are, to the best of our knowledge, originators of the ideas mentioned above and researchers on evolutionary algorithms, chaotic dynamics and complex networks, whose chapters will benefit readers interested in modern scientific research on related subjects.
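To make the CML component concrete, here is a compact sketch of ours with purely illustrative parameters: a ring of logistic maps, each diffusively coupled to its two neighbours, the standard testbed for spatiotemporal chaos.

    N, EPS, R, STEPS = 16, 0.3, 3.9, 1000   # sites, coupling, map parameter

    def logistic(x):
        return R * x * (1.0 - x)

    state = [(i + 1) / (N + 1) for i in range(N)]   # simple initial profile
    for _ in range(STEPS):
        f = [logistic(x) for x in state]            # local chaotic update
        state = [(1 - EPS) * f[i] + EPS / 2 * (f[i - 1] + f[(i + 1) % N])
                 for i in range(N)]                 # diffusive coupling on a ring

    print(["%.3f" % x for x in state])   # a spatiotemporally chaotic pattern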
In this monograph we introduce and examine four new temporal logic formalisms that can be used as specification languages for the automated verification of the reliability of hardware and software designs with respect to a desired behavior. The work is organized in two parts. In the first part, two logics for computations are discussed: graded computation tree logic and computation tree logic with minimal model quantifiers. These have proved to be useful in describing correct executions of monolithic closed systems. The second part focuses on logics for strategies, strategy logic and memoryful alternating-time temporal logic, which have been successfully applied to formalize several properties of interactive plays in multi-entity systems modeled as multi-agent games.
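Model checking for such branching-time logics ultimately rests on fixed-point computations over finite structures. As a plain-vanilla illustration of ours, much simpler than the graded and strategic extensions studied in the monograph, the following computes the states of a small Kripke structure satisfying EF p (some path eventually reaches a p-state) as a least fixed point:

    succ = {0: [1], 1: [2, 3], 2: [2], 3: [0]}   # hypothetical transitions
    p_states = {2}                                # states labelled with p

    def ef(succ, target):
        """States from which some path reaches target (EF target)."""
        reach = set(target)
        changed = True
        while changed:                  # iterate pre-image to a fixed point
            changed = False
            for s, ts in succ.items():
                if s not in reach and any(t in reach for t in ts):
                    reach.add(s)
                    changed = True
        return reach

    print(sorted(ef(succ, p_states)))  # [0, 1, 2, 3]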
This book presents the mathematical background underlying security modeling in the context of next-generation cryptography. By introducing new mathematical results in order to strengthen information security, while simultaneously presenting fresh insights and developing the respective areas of mathematics, it is the first-ever book to focus on areas that have not yet been fully exploited for cryptographic applications, such as representation theory and mathematical physics, among others. Recent advances in cryptanalysis, brought about in particular by quantum computation and physical attacks on cryptographic devices, such as side-channel analysis or power analysis, have revealed the growing security risks for state-of-the-art cryptographic schemes. To address these risks, high-performance, next-generation cryptosystems must be studied, which requires the further development of the mathematical background of modern cryptography. More specifically, in order to avoid the security risks posed by adversaries with advanced attack capabilities, cryptosystems must be upgraded, which in turn relies on a wide range of mathematical theories. This book is suitable for use in an advanced graduate course in mathematical cryptography, while also offering a valuable reference guide for experts.