Although integrating security into the design of applications has proven to deliver resilient products, there are few books available that provide guidance on how to incorporate security into the design of an application. Filling this need, Security for Service Oriented Architectures examines both application and security architectures and illustrates the relationship between the two. Supplying authoritative guidance on how to design distributed and resilient applications, the book provides an overview of the various standards that service oriented and distributed applications leverage, including SOAP, HTML 5, SAML, XML Encryption, XML Signature, WS-Security, and WS-SecureConversation. It examines emerging issues of privacy and discusses how to design applications within a secure context, building the understanding of these technologies that you need to make intelligent decisions regarding their design. This complete guide to security for web services and SOA considers the malicious user story of the abuses and attacks against applications as examples of how design flaws and oversights have subverted the goals of providing resilient business functionality. It reviews recent research on access control for simple and conversation-based web services, advanced digital identity management techniques, and access control for web-based workflows. Filled with illustrative examples and analyses of critical issues, this book provides both security and software architects with a bridge between software and service-oriented architectures and security architectures, with the goal of providing a means to develop software architectures that leverage security architectures. It is also a reliable source of reference on Web services standards. Coverage includes the four types of architectures, implementing and securing SOA, Web 2.0, other SOA platforms, auditing SOAs, and defending against and detecting attacks.
Most of the papers in this volume were presented at the NATO Advanced Research Workshop High Performance Computing: Technology and Application, held in Cetraro, Italy, from 24 to 26 June 1996. The main purpose of the Workshop was to discuss some key scientific and technological developments in high performance computing, identify significant trends and define desirable research objectives. The volume structure corresponds, in general, to the outline of the workshop technical agenda: general concepts and emerging systems, software technology, algorithms and applications. One of the Workshop innovations was an effort to extend slightly the scope of the meeting from scientific/engineering computing to enterprise-wide computing. The papers on performance and scalability of database servers, and on the Oracle DBMS, reflect this attempt. We hope that after reading this collection of papers the readers will have a good idea about some important research and technological issues in high performance computing. We wish to give our thanks to the NATO Scientific and Environmental Affairs Division for being the principal sponsor for the Workshop. Also we are pleased to acknowledge other institutions and companies that supported the Workshop: European Union: European Commission DGIII-Industry, CNR: National Research Council of Italy, University of Calabria, Alenia Spazio, Centro Italiano Ricerche Aerospaziali, ENEA: Italian National Agency for New Technology, Energy and the Environment, Fujitsu, Hewlett Packard-Convex, Hitachi, NEC, Oracle, and Silicon Graphics-Cray Research. The Editors, January 1997
IC designers currently weigh MOS transistor geometries and currents to trade off objectives like gain-bandwidth, slew-rate, dynamic range, noise, non-linear distortion, etc. Making optimal choices is a difficult task. How, for instance, to minimize the power consumption of an operational amplifier without too much penalty in area while keeping the gain-bandwidth unaffected at the same time? Moderate inversion yields high gains, but the concomitant area increase adds parasitics that restrict bandwidth. Which methodology should be used to find the best compromise(s)? Is synthesis a matter of design experience combined with cut-and-try, a constrained multivariate optimization problem, or a mixture of both? Optimization algorithms are attractive from a system perspective of course, but what about low-voltage low-power circuits, which require a more physical approach? The connections between transistor physics and circuits are intricate and their interactions not always easy to describe in terms of existing software packages. The gm/ID synthesis methodology is well suited to CMOS analog circuits because the transconductance over drain current ratio combines most of the ingredients needed to determine transistor sizes and DC currents.
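As a rough illustration of the kind of sizing flow the gm/ID methodology enables (a minimal sketch, not code from the book), the following Python snippet derives the drain current and width of an input transistor from a gain-bandwidth specification. The chosen gm/ID value and the current-density figure are hypothetical stand-ins for real transistor characterization data.

```python
# Minimal gm/ID sizing sketch for the input pair of a single-stage OTA.
# The gm/ID value and current-density figure are illustrative assumptions;
# in practice they come from measured or simulated transistor characterization.
import math

def size_input_pair(gbw_hz, c_load_f, gm_over_id, id_per_width):
    """Return (gm, drain current, width) for one input transistor."""
    gm = 2 * math.pi * gbw_hz * c_load_f   # gm is fixed by the gain-bandwidth spec
    i_d = gm / gm_over_id                  # drain current follows from the chosen gm/ID
    width = i_d / id_per_width             # width from the current-density curve ID/W
    return gm, i_d, width

# Example: 10 MHz GBW into 5 pF, moderate inversion (gm/ID = 15 S/A),
# assumed current density of 0.5 A/m at that gm/ID (hypothetical technology data).
gm, i_d, w = size_input_pair(10e6, 5e-12, 15.0, 0.5)
print(f"gm = {gm*1e6:.0f} uS, ID = {i_d*1e6:.1f} uA, W = {w*1e6:.1f} um")
```

Picking a larger gm/ID (deeper into moderate or weak inversion) lowers the current needed for the same gm but enlarges the device, which is exactly the gain-bandwidth versus area/parasitics trade-off described above.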
Instruction-Level Parallelism presents a collection of papers that attempts to capture the most significant work that took place during the 1980s in the area of instruction-level parallel (ILP) processing. The papers in this book discuss both compiler techniques and actual implementation experience on very long instruction word (VLIW) and superscalar architectures.
Computer systems research is heavily influenced by changes in computer technology. As technology changes alter the characteristics of the underlying hardware components of the system, the algorithms used to manage the system need to be re-examined and new techniques need to be developed. Technological influences are particularly evident in the design of storage management systems such as disk storage managers and file systems. The influences have been so pronounced that techniques developed as recently as ten years ago are being made obsolete. The basic problem for disk storage managers is the unbalanced scaling of hardware component technologies. Disk storage manager design depends on the technology for processors, main memory, and magnetic disks. During the 1980s, processors and main memories benefited from the rapid improvements in semiconductor technology and improved by several orders of magnitude in performance and capacity. This improvement has not been matched by disk technology, which is bounded by the mechanics of rotating magnetic media. Magnetic disks of the 1980s have improved by a factor of 10 in capacity but only a factor of 2 in performance. This unbalanced scaling of the hardware components challenges the disk storage manager to compensate for the slower disks and allow performance to scale with the processor and main memory technology. Unless the performance of file systems can be improved over that of the disks, I/O-bound applications will be unable to use the rapid improvements in processor speeds to improve performance for computer users. Disk storage managers must break this bottleneck and decouple application performance from the disk.
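To make the bottleneck argument concrete, here is a small Amdahl-style estimate (an illustration under assumed numbers, not material from the book) of how little an I/O-bound application gains from a faster processor when disk time does not improve.

```python
# Illustrative Amdahl-style estimate: only the CPU-bound fraction of runtime
# is accelerated, while disk I/O time stays fixed. The 60/40 split is an
# assumed example workload, not data from the book.
def overall_speedup(cpu_fraction, cpu_speedup):
    """Total speedup when only the CPU-bound fraction is accelerated."""
    io_fraction = 1.0 - cpu_fraction
    return 1.0 / (io_fraction + cpu_fraction / cpu_speedup)

# A job spending 40% of its time on disk gains only about 2.2x from a 10x faster CPU.
print(round(overall_speedup(cpu_fraction=0.6, cpu_speedup=10), 2))
```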
This book constitutes the refereed proceedings of the 4th International Workshop on Reversible Computation, RC 2012, held in Copenhagen, Denmark, in July 2012. The 19 contributions presented in this volume were carefully reviewed and selected from 46 submissions. The papers cover theoretical considerations, reversible software and reversible hardware, and physical realizations and applications in quantum computing.
Software architectures have gained wide popularity in the last decade. They generally play a fundamental role in coping with the inherent difficulties of the development of large-scale and complex software systems. Component-oriented and aspect-oriented programming enables software engineers to implement complex applications from a set of pre-defined components. Software Architectures and Component Technology collects excellent chapters on software architectures and component technologies from well-known authors, who not only explain the advantages, but also present the shortcomings of the current approaches while introducing novel solutions to overcome the shortcomings. The unique features of this book are that it:
- evaluates the current architecture design methods and component composition techniques and explains their shortcomings;
- presents three practical architecture design methods in detail;
- gives four industrial architecture design examples;
- presents conceptual models for distributed message-based architectures;
- explains techniques for refining architectures into components;
- presents the recent developments in component and aspect-oriented techniques;
- explains the status of research on Piccola, Hyper/J(R), Pluggable Composite Adapters and Composition Filters.
Software Architectures and Component Technology is a suitable text for graduate-level students in computer science and engineering, and a reference for researchers and practitioners in industry.
In 1968 the Advanced Research Projects Agency (ARPA) of the U.S. Department of Defense began implementation of a computer communication network which permits the interconnection of heterogeneous computers at geographically distributed centres throughout the United States. This network has come to be known as the ARPANET and has grown from the initial four node configuration in 1969 to almost forty nodes (including satellite nodes in Hawaii, Norway, and London) in late 1973. The major goal of ARPANET is to achieve resource sharing among the network users. The resources to be shared include not only programs, but also unique facilities such as the powerful ILLIAC IV computer and large global weather data bases that are economically feasible when widely shared. The ARPANET employs a distributed store-and-forward packet switching approach that is much better suited for computer communications networks than the more conventional circuit-switching approach. Reasons favouring packet switching include lower cost, higher capacity, greater reliability and minimal delay. All of these factors are discussed in these Proceedings.
Multicore Processors and Systems provides a comprehensive overview of emerging multicore processors and systems. It covers technology trends affecting multicores, multicore architecture innovations, multicore software innovations, and case studies of state-of-the-art commercial multicore systems. A cross-cutting theme of the book is the challenges associated with scaling up multicore systems to hundreds of cores. The book provides an overview of significant developments in the architectures for multicore processors and systems. It includes chapters on fundamental requirements for multicore systems, including processing, memory systems, and interconnect. It also includes several case studies on commercial multicore systems that have recently been developed and deployed across multiple application domains. The architecture chapters focus on innovative multicore execution models as well as infrastructure for multicores, including memory systems and on-chip interconnections. The case studies examine multicore implementations across different application domains, including general purpose, server, media/broadband, network processing, and signal processing. Multicore Processors and Systems is the first book that focuses solely on multicore processors and systems, and in particular on the unique technology implications, architectures, and implementations. The contributing authors come from both the academic and industrial communities.
This book describes research performed in the context of trust/distrust propagation and aggregation, and their use in recommender systems. This is a hot research topic with important implications for various application areas. The main innovative contributions of the work are:
- a new bilattice-based model for trust and distrust, allowing for ignorance and inconsistency;
- proposals for various propagation and aggregation operators, including an analysis of their mathematical properties;
- an evaluation of these operators on real data, including a discussion of the data sets and their characteristics;
- a novel approach for identifying controversial items in a recommender system;
- an analysis of the utility of including distrust in recommender systems;
- various approaches for trust-based recommendations (among others, based on collaborative filtering), an in-depth experimental analysis, and a proposal for a hybrid approach;
- an analysis of various user types in recommender systems to optimize bootstrapping of cold-start users.
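To give a flavour of what a propagation operator can look like, the sketch below pushes a (trust, distrust) pair along a two-link chain. This particular operator is only one possible choice, used here for illustration; it is not necessarily one of the operators analysed in the book.

```python
# One possible trust/distrust propagation operator over a chain a -> b -> c,
# where each link carries a (trust, distrust) pair with values in [0, 1].
# Chosen for illustration only; the book studies several such operators.
def propagate(link_ab, link_bc):
    t1, d1 = link_ab
    t2, d2 = link_bc
    # a's view of c: trust requires trusting b and b trusting c;
    # distrust arises when a trusts b and b distrusts c.
    return (t1 * t2, t1 * d2)

print(propagate((0.8, 0.0), (0.5, 0.3)))   # approximately (0.4, 0.24)
```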
This monograph studies the logical aspects of domains as used in denotational semantics of programming languages. Frameworks of domain logics are introduced; these serve as foundations for systematic derivations of proof systems from denotational semantics of programming languages. Any proof system so derived is guaranteed to agree with denotational semantics in the sense that the denotation of any program coincides with the set of assertions true of it. The study focuses on two categories for denotational semantics: SFP domains, and the less standard, but important, category of stable domains. The intended readership of this monograph includes researchers and graduate students interested in the relation between semantics of programming languages and formal means of reasoning about programs. A basic knowledge of denotational semantics, mathematical logic, general topology, and category theory is helpful for a full understanding of the material. Part I covers SFP domains. Chapter 1, the introduction, provides a brief exposition of domain theory, denotational semantics, program logics, and proof systems. It discusses the importance of ideas and results on logic and topology to the understanding of the relation between denotational semantics and program logics. It also describes the motivation for the work presented in this monograph, and how that work fits into a more general program. Finally, it gives a short summary of the results of each chapter. Section 1.1, Domain Theory, opens with the observation that programming languages are languages with which to perform computation.
This book has been written for practitioners, researchers and students in the fields of parallel and distributed computing. Its objective is to provide detailed coverage of the applications of graph theoretic techniques to the problems of matching resources and requirements in multiple computer systems. There has been considerable research in this area over the last decade and intense work continues even as this is being written. For the practitioner, this book serves as a rich source of solution techniques for problems that are routinely encountered in the real world. Algorithms are presented in sufficient detail to permit easy implementation; background material and fundamental concepts are covered in full. The researcher will find a clear exposition of graph theoretic techniques applied to parallel and distributed computing. Research results spanning the last decade are covered, and many hitherto unpublished results by the author are included. There are many unsolved problems in this field; it is hoped that this book will stimulate further research.
Compact Models and Measurement Techniques for High-Speed Interconnects provides detailed analysis of issues related to high-speed interconnects from the perspective of modeling approaches and measurement techniques. Particular focus is laid on the unified approach (variational method combined with the transverse transmission line technique) to develop efficient compact models for planar interconnects. This book will give a qualitative summary of the various reported modeling techniques and approaches and will help researchers and graduate students with deeper insights into interconnect models in particular and interconnect in general. Time domain and frequency domain measurement techniques and simulation methodology are also explained in this book.
In contemporary research, the supercomputer now ranks, along with radio telescopes, particle accelerators and the other apparatus of "big science", as an expensive resource, which is nevertheless essential for state of the art research. Supercomputers are usually provided as shared central facilities. However, unlike telescopes and accelerators, they find a wide range of applications which extends across a broad spectrum of research activity. The difference in performance between a "good" and a "bad" computer program on a traditional serial computer may be a factor of two or three, but on a contemporary supercomputer it can easily be a factor of one hundred or even more! Furthermore, this factor is likely to increase with future generations of machines. In keeping with the large capital and recurrent costs of these machines, it is appropriate to devote effort to training and familiarization so that supercomputers are employed to best effect. This volume records the lectures delivered at a Summer School held at The Coseners House in Abingdon, which was an attempt to disseminate research methods in the different areas in which supercomputers are used. It is hoped that the publication of the lectures in this form will enable the experiences and achievements of supercomputer users to be shared with a larger audience. We thank all the lecturers and participants for making the Summer School an enjoyable and profitable experience. Finally, we thank the Science and Engineering Research Council and The Computer Board for supporting the Summer School.
A new advanced textbook/reference providing a comprehensive survey of hardware and software architectural principles and methods of computer systems organization and design. The book is suitable for a first course in computer organization. The style is similar to that of the author's book on assembly language in that it strongly supports self-study by students. This organization facilitates compressed presentation of material. Emphasis is also placed on relating concepts to practical designs/chips. Topics: material presentation suitable for self-study; concepts related to practical designs and implementations; extensive examples and figures; details provided on several digital logic simulation packages; free MASM download instructions provided; and end-of-chapter exercises.
This volume contains a selection of papers that focus on the state-of-the-art in real-time scheduling and resource management. Preliminary versions of these papers were presented at a workshop on the foundations of real-time computing sponsored by the Office of Naval Research in October 1990 in Washington, D.C. A companion volume by the title Foundations of Real-Time Computing: Formal Specifications and Methods complements this book by addressing many of the most advanced approaches currently being investigated in the arena of formal specification and verification of real-time systems. Together, these two texts provide a comprehensive snapshot of current insights into the process of designing and building real-time computing systems on a scientific basis. Many of the papers in this book take care to define the notion of real-time system precisely, because it is often easy to misunderstand what is meant by that term. Different communities of researchers variously use the term real-time to refer to either very fast computing, or immediate on-line data acquisition, or deadline-driven computing. This text is concerned with the very difficult problems of scheduling tasks and resource management in computer systems whose performance is inextricably fused with the achievement of deadlines. Such systems have been enabled for a rapidly increasing set of diverse end-uses by the unremitting advances in computing power per constant-dollar cost and per constant-unit-volume of space. End-use applications of deadline-driven real-time computers span a spectrum that includes transportation systems, robotics and manufacturing, aerospace and defense, industrial process control, and telecommunications.
Compilers and Operating Systems for Low Power focuses on both application-level compiler-directed energy optimization and low-power operating systems. Chapters have been written exclusively for this volume by several of the leading researchers and application developers active in the field. The first six chapters focus on low energy operating systems, or more generally, energy-aware middleware services. The next five chapters are centered on compilation and code optimization. Finally, the last chapter takes a more general viewpoint on mobile computing. The material demonstrates the state-of-the-art work and proves that to obtain the best energy/performance characteristics, compilers, system software, and architecture must work together. The relationship between energy-aware middleware and wireless microsensors, mobile computing and other wireless applications is covered. This work will be of interest to researchers in the areas of low-power computing, embedded systems, compiler optimizations, and operating systems.
This book contains the papers that were accepted for presentation at the 1988 NATO Advanced Study Institute on Underwater Acoustic Data Processing, held at the Royal Military College of Canada from 18 to 29 July, 1988. Approximately 110 participants from various NATO countries were in attendance during this two week period. Their research interests range from underwater acoustics to signal processing and computer science; some are renowned scientists and some are recent Ph.D. graduates. The purpose of the ASI was to provide an authoritative summing up of the various research activities related to sonar technology. The exposition on each subject began with one or two tutorials prepared by invited lecturers, followed by research papers which provided indications of the state of development in that specific area. I have broadly classified the papers into three sections under the titles of I. Propagation and Noise, II. Signal Processing and III. Post Processing. The reader will find in Section I papers on low frequency acoustic sources and effects of the medium on underwater acoustic propagation. Problems such as coherence loss due to boundary interaction, wavefront distortion and multipath transmission were addressed. Besides the medium, corrupting noise sources also have a strong influence on the performance of a sonar system and several researchers described methods of modeling these sources.
For real-time systems, the worst-case execution time (WCET) is the key objective to be considered. Traditionally, code for real-time systems is generated without taking this objective into account and the WCET is computed only after code generation. Worst-Case Execution Time Aware Compilation Techniques for Real-Time Systems presents the first comprehensive approach integrating WCET considerations into the code generation process. Based on the proposed reconciliation between a compiler and a timing analyzer, a wide range of novel optimization techniques is provided. Among others, the techniques cover source code and assembly level optimizations, exploit machine learning techniques and address the design of modern systems that have to meet multiple objectives. Using these optimizations, the WCET of real-time applications can be reduced by about 30% to 45% on average. This opens opportunities for decreasing clock speeds, costs and energy consumption of embedded processors. The proposed techniques can be used for all types of real-time systems, including automotive and avionics IT systems.
As we continue to build faster and faster computers, their performance is becoming increasingly dependent on the memory hierarchy. Both the clock speed of the machine and its throughput per clock depend heavily on the memory hierarchy. The time to complete a cache access is often the factor that determines the cycle time. The effectiveness of the hierarchy in keeping the average cost of a reference down has a major impact on how close the sustained performance is to the peak performance. Small changes in the performance of the memory hierarchy cause large changes in overall system performance. The strong growth of RISC machines, whose performance is more tightly coupled to the memory hierarchy, has created increasing demand for high performance memory systems. This trend is likely to accelerate: the improvements in main memory performance will be small compared to the improvements in processor performance. This difference will lead to an increasing gap between processor cycle time and main memory access time. This gap must be closed by improving the memory hierarchy. Computer architects have attacked this gap by designing machines with cache sizes an order of magnitude larger than those appearing five years ago. Microprocessor-based RISC systems now have caches that rival the size of those in mainframes and supercomputers.
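The sensitivity of sustained performance to the hierarchy can be illustrated with the standard average memory access time (AMAT) formula; the cycle counts below are assumed example values, not figures from the book.

```python
# Average memory access time for a single-level cache. The hit time,
# miss rates and miss penalty are illustrative assumptions.
def amat(hit_time, miss_rate, miss_penalty):
    """Average cycles per memory reference."""
    return hit_time + miss_rate * miss_penalty

# With a 50-cycle miss penalty, moving the miss rate from 2% to 4% adds a
# full cycle to every reference: a small hierarchy change with a large
# effect on overall system performance.
print(amat(1, 0.02, 50))   # 2.0 cycles
print(amat(1, 0.04, 50))   # 3.0 cycles
```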
Computer vision is one of the most complex and computationally intensive problems. As with other computationally intensive problems, parallel processing has been suggested as an approach to solving the problems in computer vision. Computer vision employs algorithms from a wide range of areas such as image and signal processing, advanced mathematics, graph theory, databases and artificial intelligence. Hence, not only are the computing requirements for solving vision problems tremendous, but they also demand computers that are efficient at solving problems exhibiting vastly different characteristics. With recent advances in VLSI design technology, Single Instruction Multiple Data (SIMD) massively parallel computers have been proposed and built. However, such architectures have been shown to be useful for solving only a very limited subset of the problems in vision. Specifically, algorithms from low-level vision that involve computations closely mimicking the architecture and require simple control and computations are suitable for massively parallel SIMD computers. An Integrated Vision System (IVS) involves computations from low- to high-level vision to be executed in a systematic fashion and repeatedly. The interaction between computations and the information-dependent nature of the computations suggests that the architectural requirements for computer vision systems cannot be satisfied by massively parallel SIMD computers.
This book is on dependence concepts and general methods for dependence testing. Here, dependence means data dependence and the tests are compile-time tests. We felt the time was ripe to create a solid theory of the subject, to provide the research community with a uniform conceptual framework in which things fit together nicely. How successful we have been in meeting these goals, of course, remains to be seen. We do not try to include all the minute details that are known, nor do we deal with clever tricks that all good programmers would want to use. We do try to convince the reader that there is a mathematical basis consisting of theories of bounds of linear functions and linear diophantine equations, that levels and direction vectors are concepts that arise rather naturally, that different dependence tests are really special cases of some general tests, and so on. Some mathematical maturity is needed for a good understanding of the book: mainly calculus and linear algebra. We have covered diophantine equations rather thoroughly and given a description of some matrix theory ideas that are not very widely known. A reader familiar with linear programming would quickly recognize several concepts. We have learned a great deal from the works of M. Wolfe, and K. Kennedy and R. Allen. Wolfe's Ph.D. thesis at the University of Illinois and Kennedy & Allen's paper on vectorization of Fortran programs are still very useful sources on this subject.
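As an example of the kind of test the book places in a general framework, the classical GCD test checks whether the linear diophantine equation behind two array references can have an integer solution at all. The sketch below is a textbook version of that test, not code taken from the book.

```python
# Classical GCD dependence test for references A[a*i + b] and A[c*j + d]:
# a dependence requires integer solutions of a*i - c*j = d - b, which exist
# iff gcd(a, c) divides (d - b). Assumes non-zero coefficients a and c.
from math import gcd

def gcd_test(a, b, c, d):
    """Return False when the references are provably independent,
    True when a dependence cannot be ruled out by this test."""
    return (d - b) % gcd(a, c) == 0

# A[2*i] vs A[2*j + 1]: gcd(2, 2) = 2 does not divide 1 -> independent.
print(gcd_test(2, 0, 2, 1))   # False
# A[2*i] vs A[4*j + 2]: gcd(2, 4) = 2 divides 2 -> dependence possible.
print(gcd_test(2, 0, 4, 2))   # True
```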
Enterprise developers face several challenges when it comes to building serverless applications, such as integrating applications and building container images from source. With more than 60 practical recipes, this cookbook helps you solve these issues with Knative--the first serverless platform natively designed for Kubernetes. Each recipe contains detailed examples and exercises, along with a discussion of how and why it works. If you have a good understanding of serverless computing and Kubernetes core resources such as deployment, services, routes, and replicas, the recipes in this cookbook show you how to apply Knative in real enterprise application development. Authors Kamesh Sampath and Burr Sutter include chapters on autoscaling, build and eventing, observability, Knative on OpenShift, and more. With this cookbook, you'll learn how to: Efficiently build, deploy, and manage modern serverless workloads Apply Knative in real enterprise scenarios, including advanced eventing Monitor your Knative serverless applications effectively Integrate Knative with CI/CD principles, such as using pipelines for faster, more successful production deployments Deploy a rich ecosystem of enterprise integration patterns and connectors in Apache Camel K as Kubernetes and Knative components
Tremendous achievements in the area of semiconductor electronics have turned microelectronics into nanoelectronics. Actually, we observe a real technical boom connected with achievements in nanoelectronics. It results in the development of very complex integrated circuits, particularly the field programmable logic devices (FPLD). Present-day FPLD chips are so huge that a single chip is enough to implement a really complex digital system including a datapath and a control unit. Because of the extreme complexity of modern microchips, it is very important to develop effective design methods oriented towards the particular properties of logic elements. The development of digital systems with use of FPLD microchips is not possible without use of different hardware description languages (HDL), such as VHDL and Verilog. Different computer-aided design (CAD) tools are widely used to develop digital system hardware. As the majority of researchers point out, the design process is now very similar to the process of program development. It allows a researcher to pay more attention to some specific problems for which there are no standard formal methods of solution. But application of all these achievements does not per se guarantee the development of a competitive electronic product, especially within an acceptable time-to-market. This problem can be solved only if a researcher possesses fundamental knowledge of the design process and knows exactly the mode of operation of the industrial CAD tools in use. As is known, any digital system can be represented as a composition of a datapath and a control unit.
This volume consists of a collection of 28 papers presented at the NATO Advanced Study Institute held July 14-27, 1985 in the beautiful resort at Les Arcs, France. The director of this ASI was A. K. Sood and A. H. Qureshi was the co-director. Since its introduction in the early 1970s the relational data model has been widely accepted. Several research and industrial efforts are being undertaken to develop special purpose database machines to implement the relational model. In addition, database machines are being explored for applications such as image processing and information retrieval. In this NATO-ASI the lecturers discussed special purpose database machine architectures from the viewpoint of architecture and hardware detail, software, user needs, theoretical framework and applications. The papers presented were of two types - regular papers and short papers. The research in database machines is being conducted in several countries. This fact is underscored when it is noted that papers in this volume are authored by researchers in France, Germany, Italy, Japan, Portugal, Turkey, U.K. and U.S.A. The first paper discusses the experience and applications of users with a commercially available database machine. In the following eight papers the characteristics of six database machines are discussed. The second, third and fourth papers deal with the RDBM project at the Technical University of Braunschweig (Germany). Zeidler discusses the design objectives, architecture and system design of RDBM. Teich presents the hardware utilized for sorting.