This work presents a study of cache replacement strategies designed for static web content. Proxy servers can improve performance by caching static web content such as cascading style sheets and JavaScript source files, and large files such as images. This topic is particularly important in wireless ad hoc networks, in which mobile devices act as proxy servers for a group of other mobile devices. Opening chapters present an introduction to web requests and the characteristics of web objects, web proxy servers and Squid, and artificial neural networks. This is followed by a comprehensive review of cache replacement strategies simulated against different performance metrics. The work then describes a novel approach to web proxy cache replacement that uses neural networks for decision making, evaluates its performance and decision structures, and examines its implementation in a real environment, namely, in the Squid proxy server.
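To give a concrete flavor of such a strategy, here is a minimal sketch (our illustration, not the book's model or the Squid implementation) of a proxy cache whose eviction decision is delegated to a scoring function; in the approach the book describes, a trained neural network would play that role. All names and the stand-in heuristic are hypothetical.

```python
# Sketch of score-based web proxy cache replacement. The scoring
# function stands in for a trained neural network that predicts how
# likely an object is to be re-requested; names are illustrative.
import time

class ScoredCache:
    def __init__(self, capacity_bytes, score_fn):
        self.capacity = capacity_bytes
        self.used = 0
        self.objects = {}  # url -> (size, last_access, hits)
        self.score_fn = score_fn

    def get(self, url):
        entry = self.objects.get(url)
        if entry:
            size, _, hits = entry
            self.objects[url] = (size, time.time(), hits + 1)
        return entry

    def put(self, url, size):
        # Evict the lowest-scoring objects until the new one fits.
        while self.used + size > self.capacity and self.objects:
            victim = min(self.objects,
                         key=lambda u: self.score_fn(*self.objects[u]))
            self.used -= self.objects.pop(victim)[0]
        if size <= self.capacity:
            self.objects[url] = (size, time.time(), 1)
            self.used += size

# Stand-in recency/frequency heuristic; a neural network trained on
# proxy request logs would replace this function.
def score(size, last_access, hits):
    age = time.time() - last_access
    return hits / (1.0 + age)

cache = ScoredCache(capacity_bytes=10_000, score_fn=score)
cache.put("http://example.com/style.css", 4_000)
cache.put("http://example.com/app.js", 7_000)  # evicts the CSS file
```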
This book constitutes the refereed proceedings of the International Conference on Modern Probabilistic Methods for Analysis of Telecommunication Networks, Belarusian Winter Workshop in Queueing Theory, BWWQT 2013, held in Minsk, Belarus, in January 2013. The 23 revised full papers presented were carefully reviewed and selected from numerous submissions. The papers present new results in the study and optimization of information transmission models in telecommunication networks using different approaches, mainly based on the theories of queueing systems and queueing networks.
This book constitutes the refereed proceedings of the 16th National Conference on Computer Engineering and Technology, NCCET 2012, held in Shanghai, China, in August 2012. The 27 papers presented were carefully reviewed and selected from 108 submissions. They are organized in topical sections named: microprocessor and implementation; integrated circuit design; I/O interconnect; and measurement, verification, and others.
Readership: This book is devoted to the study of compiler transformations that are needed to expose the parallelism hidden in a program. This book is not an introductory book to parallel processing, nor is it an introductory book to parallelizing compilers. We assume that readers are familiar with the books High Performance Compilers for Parallel Computing by Wolfe [121] and Supercompilers for Parallel and Vector Computers by Zima and Chapman [125], and that they want to know more about scheduling transformations. In this book we describe both task graph scheduling and loop nest scheduling. Task graph scheduling aims at executing tasks linked by precedence constraints; it is a run-time activity. Loop nest scheduling aims at executing statement instances linked by data dependences; it is a compile-time activity. We are mostly interested in loop nest scheduling, but we also deal with task graph scheduling for two main reasons: (i) beautiful algorithms and heuristics have been reported in the literature recently; and (ii) several task graph scheduling techniques, like list scheduling, are the basis of the loop transformations implemented in loop nest scheduling. As for loop nest scheduling, our goal is to capture in a single place the fantastic developments of the last decade or so. Dozens of loop transformations have been introduced (loop interchange, skewing, fusion, distribution, etc.) before a unifying theory emerged. The theory builds upon the pioneering papers of Karp, Miller, and Winograd [65] and of Lamport [75], and it relies on sophisticated mathematical tools (unimodular transformations, parametric integer linear programming, Hermite decomposition, Smith decomposition, etc.).
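As a concrete taste of the loop transformations mentioned above, the following sketch (our illustration, not an excerpt from the book) shows loop interchange, which corresponds to the unimodular transformation matrix ((0,1),(1,0)), applied to a loop nest with no cross-iteration dependences.

```python
# Loop interchange on a doubly nested loop. Interchange is legal here
# because each iteration writes a distinct element of c, so no data
# dependence crosses iterations. Sizes and arrays are illustrative.
N, M = 4, 3
a = [[i + j for j in range(M)] for i in range(N)]

# Original (i outer, j inner) iteration order.
c = [[0] * M for _ in range(N)]
for i in range(N):
    for j in range(M):
        c[i][j] = 2 * a[i][j]

# Interchanged (j outer, i inner) order: the iteration vector (i, j)
# is mapped to (j, i) by the unimodular matrix ((0,1),(1,0)).
c2 = [[0] * M for _ in range(N)]
for j in range(M):
    for i in range(N):
        c2[i][j] = 2 * a[i][j]

assert c == c2  # same result, different traversal of the index space
```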
This book constitutes the refereed proceedings of the 4th TPC Technology Conference, TPCTC 2012, held in Istanbul, Turkey, in August 2012. It contains 10 selected peer-reviewed papers, 2 invited talks, a report from the TPC Public Relations Committee, and a report from the workshop on Big Data Benchmarking, WBDB 2012. The papers present novel ideas and methodologies in performance evaluation, measurement, and characterization.
This book constitutes the refereed post-proceedings of the 9th European Performance Engineering Workshop, EPEW 2012, held in Munich, Germany, and the 28th UK Performance Engineering Workshop, UKPEW 2012, held in Edinburgh, UK, in July 2012. The 15 regular papers and one poster presentation paper presented together with 2 invited talks were carefully reviewed and selected from numerous submissions. The papers cover a wide range of topics from classical performance modeling areas such as wireless network protocols and parallel execution of scientific codes to hot topics such as energy-aware computing to unexpected ventures into ranking professional tennis players. In addition to new case studies, the papers also present new techniques for dealing with the modeling challenges brought about by the increasing complexity and scale of systems today.
This book constitutes the refereed proceedings of the 17th National Conference on Computer Engineering and Technology, NCCET 2013, held in Xining, China, in July 2013. The 26 papers presented were carefully reviewed and selected from 234 submissions. They are organized in topical sections named: Application Specific Processors; Communication Architecture; Computer Application and Software Optimization; IC Design and Test; Processor Architecture; Technology on the Horizon.
Formal Methods for Open Object-Based Distributed Systems presents the leading edge in several related fields, specifically object-oriented programming, open distributed systems and formal methods for object-oriented systems. With increased support within industry regarding these areas, this book captures the most up-to-date information on the subject. Many topics are discussed, including the following important areas: object-oriented design and programming; formal specification of distributed systems; open distributed platforms; types, interfaces and behaviour; formalisation of object-oriented methods. This volume comprises the proceedings of the International Workshop on Formal Methods for Open Object-based Distributed Systems (FMOODS), sponsored by the International Federation for Information Processing (IFIP), which was held in Florence, Italy, in February 1999. Formal Methods for Open Object-Based Distributed Systems is suitable as a secondary text for graduate-level courses in computer science and telecommunications, and as a reference for researchers and practitioners in industry, commerce and government.
Parallel processing is seen today as the means to improve the power of computing facilities by breaking the Von Neumann bottleneck of conventional sequential computer architectures. By defining appropriate parallel computation models, definite advantages can be obtained. Parallel processing is at the center of research in Europe in the field of Information Processing Systems, so the CEC funded the ESPRIT Supernode project to develop a low-cost, high-performance multiprocessor machine. The result of this project is a modular, reconfigurable architecture based on INMOS transputers: the T.Node. This machine can be considered a research, industrial and commercial success. The CEC has decided to continue to encourage manufacturers as well as researchers and end-users of transputers by funding other projects in this field. This book presents course papers of the Eurocourse given at the Joint Research Centre in Ispra (Italy) from 4 to 8 November 1991. First we present an overview of various trends in the design of parallel architectures, especially of the T.Node with its software development environments, new distributed system aspects and also new hardware extensions based on the INMOS T9000 processor. In a second part, we review some real case applications in the fields of image synthesis, image processing, signal processing, terrain modeling, particle physics simulation, and also enhanced parallel and distributed numerical methods on the T.Node.
This book contains a selection of thoroughly refereed and revised papers from the Third International ICST Conference on Digital Forensics and Cyber Crime, ICDF2C 2011, held October 26-28 in Dublin, Ireland. The field of digital forensics is becoming increasingly important for law enforcement, network security, and information assurance. It is a multidisciplinary area that encompasses a number of fields, including law, computer science, finance, networking, data mining, and criminal justice. The 24 papers in this volume cover a variety of topics ranging from tactics of cyber crime investigations to digital forensic education, network forensics, and the use of formal methods in digital investigations. There is a large section addressing forensics of mobile digital devices.
This book constitutes the proceedings of the 8th International ICST Conference, TridentCom 2012, held in Thessaloniki, Greece, in June 2012. Out of numerous submissions the Program Committee finally selected 51 full papers. These papers cover topics such as future Internet testbeds, wireless testbeds, federated and large scale testbeds, network and resource virtualization, overlay network testbeds, management provisioning and tools for networking research, and experimentally driven research and user experience evaluation.
At the beginning of the 1990s research started in how to combine soft computing with reconfigurable hardware in a quite unique way. One of the methods that was developed has been called evolvable hardware. Thanks to evolutionary algorithms researchers have started to evolve electronic circuits routinely. A number of interesting circuits - with features unreachable by means of conventional techniques - have been developed. Evolvable hardware is quite popular right now; more than fifty research groups are spread out over the world. Evolvable hardware has become a part of the curriculum at some universities. Evolvable hardware is being commercialized and there are specialized conferences devoted to evolvable hardware. On the other hand, surprisingly, we can feel the lack of a theoretical background and consistent design methodology in the area. Furthermore, it is quite difficult to implement really innovative and practically successful evolvable systems using contemporary digital reconfigurable technology.
High Performance Computing Systems and Applications contains a selection of fully refereed papers presented at the 14th International Conference on High Performance Computing Systems and Applications held in Victoria, Canada, in June 2000. This book presents the latest research in HPC Systems and Applications, including distributed systems and architecture, numerical methods and simulation, network algorithms and protocols, computer architecture, distributed memory, and parallel algorithms. It also covers such topics as applications in astrophysics and space physics, cluster computing, numerical simulations for fluid dynamics, electromagnetics and crystal growth, networks and the Grid, and biology and Monte Carlo techniques. High Performance Computing Systems and Applications is suitable as a secondary text for graduate level courses, and as a reference for researchers and practitioners in industry.
This book constitutes the refereed proceedings of the 16th International Conference on Model Driven Engineering Languages and Systems, MODELS 2013, held in Miami, FL, USA, in September/October 2013. The 47 full papers presented in this volume were carefully reviewed and selected from a total of 180 submissions. They are organized in topical sections named: tool support; dependability; comprehensibility; testing; evolution; verification; product lines; semantics; domain-specific modeling languages; models@RT; design and architecture; model transformation; model analysis; and system synthesis.
Research into grid computing has been driven by the need to solve large-scale, increasingly complex problems for scientific applications. Yet the applications of grid computing for business and casual users did not begin to emerge until the development of the concept of cloud computing, fueled by advances in virtualization techniques, coupled with the increased availability of ever-greater Internet bandwidth. The appeal of this new paradigm is mainly based on its simplicity, and the affordable price for seamless access to both computational and storage resources. This timely text/reference introduces the fundamental principles and techniques underlying grids, clouds and virtualization technologies, as well as reviewing the latest research and expected future developments in the field. Readers are guided through the key topics by internationally recognized experts, enabling them to develop their understanding of an area likely to play an ever more significant role in coming years. Topics and features: presents contributions from an international selection of experts in the field; provides a thorough introduction and overview of existing technologies in grids, clouds and virtualization, including a brief history of the field; examines the basic requirements for performance isolation of virtual machines on multi-core servers, analyzing a selection of system virtualization technologies; examines both business and scientific applications of grids and clouds, including their use in the life sciences and for high-performance computing; explores cloud building technologies, architectures for enhancing grid infrastructures with cloud computing, and cloud performance; discusses energy aware grids and clouds, workflows on grids and clouds, and cloud and grid programming models. This useful text will enable interested readers to familiarize themselves with the key topics of grids, clouds and virtualization, and to contribute to new advances in the field. Researchers, undergraduate and graduate students, system designers and programmers, and IT policy makers will all benefit from the material covered.
This book constitutes the refereed proceedings of the 9th International Workshop on Economics of Grids, Clouds, Systems, and Services, GECON 2012, held in Berlin, Germany, in November 2012. The 12 revised full papers presented together with 6 work in progress papers were carefully reviewed and selected from more than 36 submissions. The papers are organized in the following topical sections: market mechanisms, pricing and negotiation; resource allocation, scheduling and admission control; work in progress on tools and techniques for cost-efficient service selection; market modeling; trust; cloud computing in education; and work in progress on cloud adoption and business models.
This book constitutes the proceedings of the Second International Conference on Network Computing and Information Security, NCIS 2012, held in Shanghai, China, in December 2012. The 104 revised papers presented in this volume were carefully reviewed and selected from 517 submissions. They are organized in topical sections named: applications of cryptography; authentication and non-repudiation; cloud computing; communication and information systems; design and analysis of cryptographic algorithms; information hiding and watermarking; intelligent networked systems; multimedia computing and intelligence; network and wireless network security; network communication; parallel and distributed systems; security modeling and architectures; sensor network; signal and information processing; virtualization techniques and applications; and wireless network.
Database and Application Security XV provides a forum for original research results, practical experiences, and innovative ideas in database and application security. With the rapid growth of large databases and the application systems that manage them, security issues have become a primary concern in business, industry, government and society. These concerns are compounded by the expanding use of the Internet and wireless communication technologies. This volume covers a wide variety of topics related to security and privacy of information in systems and applications, including:
* Access control models;
* Role and constraint-based access control;
* Distributed systems;
* Information warfare and intrusion detection;
* Relational databases;
* Implementation issues;
* Multilevel systems;
* New application areas including XML.
Database and Application Security XV contains papers, keynote addresses, and panel discussions from the Fifteenth Annual Working Conference on Database and Application Security, organized by the International Federation for Information Processing (IFIP) Working Group 11.3 and held July 15-18, 2001 in Niagara-on-the-Lake, Ontario, Canada.
Motivation: It is now possible to build powerful single-processor and multiprocessor systems and use them efficiently for data processing, which has seen an explosive expansion in many areas of computer science and engineering. One approach to meeting the performance requirements of the applications has been to utilize the most powerful single-processor system that is available. When such a system does not provide the performance requirements, pipelined and parallel processing structures can be employed. The concept of parallel processing is a departure from sequential processing. In sequential computation one processor is involved and performs one operation at a time. On the other hand, in parallel computation several processors cooperate to solve a problem, which reduces computing time because several operations can be carried out simultaneously. Using several processors that work together on a given computation illustrates a new paradigm in computer problem solving which is completely different from sequential processing. From the practical point of view, this provides sufficient justification to investigate the concept of parallel processing and related issues, such as parallel algorithms. Parallel processing involves utilizing several factors, such as parallel architectures, parallel algorithms, parallel programming languages and performance analysis, which are strongly interrelated. In general, four steps are involved in performing a computational problem in parallel. The first step is to understand the nature of computations in the specific application domain.
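As a toy illustration of the sequential-versus-parallel contrast drawn above, the sketch below (our example, with illustrative names and sizes) sums a list once on a single processor and once with several cooperating worker processes.

```python
# Toy contrast between sequential and parallel summation: several
# worker processes each reduce a slice of the data, and the partial
# results are combined. Purely illustrative of the paradigm.
from multiprocessing import Pool

def partial_sum(chunk):
    return sum(chunk)

if __name__ == "__main__":
    data = list(range(1_000_000))

    # Sequential: one processor performs one operation at a time.
    seq = sum(data)

    # Parallel: four cooperating workers, one strided slice each.
    chunks = [data[i::4] for i in range(4)]
    with Pool(4) as pool:
        par = sum(pool.map(partial_sum, chunks))

    assert seq == par
```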
Developing correct and efficient software is far more complex for parallel and distributed systems than it is for sequential processors. Some of the reasons for this added complexity are: the lack of a universally acceptable parallel and distributed programming paradigm, the criticality of achieving high performance, and the difficulty of writing correct parallel and distributed programs. These factors collectively influence the current status of parallel and distributed software development tools efforts. Tools and Environments for Parallel and Distributed Systems addresses the above issues by describing working tools and environments, and gives a solid overview of some of the fundamental research being done worldwide. Topics covered in this collection are: mainstream program development tools, performance prediction tools and studies; debugging tools and research; and nontraditional tools. Audience: Suitable as a secondary text for graduate level courses in software engineering and parallel and distributed systems, and as a reference for researchers and practitioners in industry.
Domain Modelling for Interactive Systems Design brings together in one place important contributions and up-to-date research results in this fast moving area. Domain Modelling for Interactive Systems Design serves as an excellent reference, providing insight into some of the most challenging research issues in the field.
Logic and Complexity looks at basic logic as it is used in Computer Science, and provides students with a logical approach to Complexity theory. With plenty of exercises, this book presents classical notions of mathematical logic, such as decidability, completeness and incompleteness, as well as new ideas brought by complexity theory such as NP-completeness, randomness and approximations, providing a better understanding for efficient algorithmic solutions to problems. Divided into three parts, it covers:
- Model Theory and Recursive Functions - introducing the basic model theory of propositional, 1st order, inductive definitions and 2nd order logic. Recursive functions, Turing computability and decidability are also examined.
- Descriptive Complexity - looking at the relationship between definitions of problems, queries, properties of programs and their computational complexity.
- Approximation - explaining how some optimization problems and counting problems can be approximated according to their logical form.
Logic is important in Computer Science, particularly for verification problems and database query languages such as SQL. Students and researchers in this field will find this book of great interest.
Multithreaded computer architecture has emerged as one of the most promising and exciting avenues for the exploitation of parallelism. This new field represents the confluence of several independent research directions which have united over a common set of issues and techniques. Multithreading draws on recent advances in dataflow, RISC, compiling for fine-grained parallel execution, and dynamic resource management. It offers the hope of dramatic performance increases through parallel execution for a broad spectrum of significant applications based on extensions to `traditional' approaches. Multithreaded Computer Architecture is divided into four parts, reflecting four major perspectives on the topic. Part I provides the reader with basic background information, definitions, and surveys of work which have in one way or another been pivotal in defining and shaping multithreading as an architectural discipline. Part II examines key elements of multithreading, highlighting the fundamental nature of latency and synchronization. This section presents clever techniques for hiding latency and supporting large synchronization name spaces. Part III looks at three major multithreaded systems, considering issues of machine organization and compilation strategy. Part IV concludes the volume with an analysis of multithreaded architectures, showcasing methodologies and actual measurements. Multithreaded Computer Architecture: A Summary of the State of the Art is an excellent reference source and may be used as a text for advanced courses on the subject.
In writing this book, our goal was to produce a text suitable for a first course in mathematical logic more attuned than the traditional textbooks to the recent dramatic growth in the applications of logic to computer science. Thus, our choice of topics has been heavily influenced by such applications. Of course, we cover the basic traditional topics: syntax, semantics, soundness, completeness and compactness as well as a few more advanced results such as the theorems of Skolem-Lowenheim and Herbrand. Much of our book, however, deals with other less traditional topics. Resolution theorem proving plays a major role in our treatment of logic, especially in its application to Logic Programming and PROLOG. We deal extensively with the mathematical foundations of all three of these subjects. In addition, we include two chapters on nonclassical logics - modal and intuitionistic - that are becoming increasingly important in computer science. We develop the basic material on the syntax and semantics (via Kripke frames) for each of these logics. In both cases, our approach to formal proofs, soundness and completeness uses modifications of the same tableau method introduced for classical logic. We indicate how it can easily be adapted to various other special types of modal logics. A number of more advanced topics (including nonmonotonic logic) are also briefly introduced both in the nonclassical logic chapters and in the material on Logic Programming and PROLOG.
This book is a compilation of research accomplishments in the fields of modeling, simulation, and their applications, as presented at AsiaSim 2011 (Asia Simulation Conference 2011). The conference, held in Seoul, Korea, November 16-18, was organized by ASIASIM (Federation of Asian Simulation Societies), KSS (Korea Society for Simulation), CASS (Chinese Association for System Simulation), and JSST (Japan Society for Simulation Technology). AsiaSim 2011 provided a forum for scientists, academicians, and professionals from the Asia-Pacific region and other parts of the world to share their latest exciting research findings in modeling and simulation methodologies, techniques, and their tools and applications in military, communication network, industry, and general engineering problems.