This book constitutes the refereed proceedings of the 16th National Conference on Computer Engineering and Technology, NCCET 2012, held in Shanghai, China, in August 2012. The 27 papers presented were carefully reviewed and selected from 108 submissions. They are organized in topical sections named: microprocessor and implementation; design of integration circuit; I/O interconnect; and measurement, verification, and others.
This book constitutes the refereed proceedings of the International Conference on Modern Probabilistic Methods for Analysis of Telecommunication Networks, Belarusian Winter Workshop in Queueing Theory, BWWQT 2013, held in Minsk, Belarus, in January 2013. The 23 revised full papers presented were carefully reviewed and selected from numerous submissions. The papers present new results in study and optimization of information transmission models in telecommunication networks using different approaches, mainly based on theories of queueing systems and queueing networks.
This book constitutes the refereed proceedings of the 4th TPC Technology Conference, TPCTC 2012, held in Istanbul, Turkey, in August 2012. It contains 10 selected peer-reviewed papers, 2 invited talks, a report from the TPC Public Relations Committee, and a report from the workshop on Big Data Benchmarking, WBDB 2012. The papers present novel ideas and methodologies in performance evaluation, measurement, and characterization.
This book constitutes the refereed post-proceedings of the 9th European Performance Engineering Workshop, EPEW 2012, held in Munich, Germany, and the 28th UK Performance Engineering Workshop, UKPEW 2012, held in Edinburgh, UK, in July 2012. The 15 regular papers and one poster presentation paper presented together with 2 invited talks were carefully reviewed and selected from numerous submissions. The papers cover a wide range of topics from classical performance modeling areas such as wireless network protocols and parallel execution of scientific codes to hot topics such as energy-aware computing to unexpected ventures into ranking professional tennis players. In addition to new case studies, the papers also present new techniques for dealing with the modeling challenges brought about by the increasing complexity and scale of systems today.
Readership: This book is devoted to the study of compiler transformations that are needed to expose the parallelism hidden in a program. It is not an introductory book to parallel processing, nor is it an introductory book to parallelizing compilers. We assume that readers are familiar with the books High Performance Compilers for Parallel Computing by Wolfe [121] and Supercompilers for Parallel and Vector Computers by Zima and Chapman [125], and that they want to know more about scheduling transformations. In this book we describe both task graph scheduling and loop nest scheduling. Task graph scheduling aims at executing tasks linked by precedence constraints; it is a run-time activity. Loop nest scheduling aims at executing statement instances linked by data dependences; it is a compile-time activity. We are mostly interested in loop nest scheduling, but we also deal with task graph scheduling for two main reasons: (i) beautiful algorithms and heuristics have been reported in the literature recently; and (ii) several task graph scheduling techniques, like list scheduling, are the basis of the loop transformations implemented in loop nest scheduling. As for loop nest scheduling, our goal is to capture in a single place the fantastic developments of the last decade or so. Dozens of loop transformations have been introduced (loop interchange, skewing, fusion, distribution, etc.) before a unifying theory emerged. The theory builds upon the pioneering papers of Karp, Miller, and Winograd [65] and of Lamport [75], and it relies on sophisticated mathematical tools (unimodular transformations, parametric integer linear programming, Hermite decomposition, Smith decomposition, etc.).
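The preface above singles out list scheduling as a task graph scheduling technique underlying many loop transformations. The following is a minimal, illustrative sketch of greedy list scheduling for tasks linked by precedence constraints; it is not taken from the book, and the example task graph, durations, and priority rule (longest duration first) are assumptions made purely for illustration.

```python
# Minimal list-scheduling sketch (illustrative only, not from the book).
# Tasks are linked by precedence constraints; durations and the priority
# rule (longest duration first) are assumptions for this example.

def list_schedule(tasks, durations, preds, num_procs):
    """Greedily schedule a task graph on num_procs identical processors."""
    finish = {}                      # task -> finish time
    proc_free = [0.0] * num_procs    # next free time of each processor
    scheduled = set()
    schedule = []                    # (task, processor, start, end)

    while len(scheduled) < len(tasks):
        # Ready tasks: all predecessors already scheduled.
        ready = [t for t in tasks
                 if t not in scheduled
                 and all(p in scheduled for p in preds.get(t, []))]
        # Priority rule: pick the ready task with the longest duration.
        task = max(ready, key=lambda t: durations[t])
        # Earliest start: after predecessors finish and a processor is free.
        ready_time = max((finish[p] for p in preds.get(task, [])), default=0.0)
        proc = min(range(num_procs), key=lambda i: proc_free[i])
        start = max(ready_time, proc_free[proc])
        end = start + durations[task]
        proc_free[proc] = end
        finish[task] = end
        scheduled.add(task)
        schedule.append((task, proc, start, end))
    return schedule


if __name__ == "__main__":
    tasks = ["a", "b", "c", "d"]
    durations = {"a": 2, "b": 3, "c": 1, "d": 2}
    preds = {"c": ["a"], "d": ["a", "b"]}   # c after a; d after a and b
    for task, proc, start, end in list_schedule(tasks, durations, preds, 2):
        print(f"{task}: P{proc} [{start}, {end}]")
```

Running the sketch prints one (processor, start, end) assignment per task; practical list schedulers differ mainly in the priority function used to order the ready tasks.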
This state-of-the-art survey features topics related to the impact of multicore, manycore, and coprocessor technologies in science and large-scale applications in an interdisciplinary environment. The papers included in this survey cover research in mathematical modeling, design of parallel algorithms, aspects of microprocessor architecture, parallel programming languages, hardware-aware computing, heterogeneous platforms, manycore technologies, performance tuning, and requirements for large-scale applications. The contributions presented in this volume are an outcome of an inspiring conference conceived and organized by the editors at the University of Applied Sciences (HfT) in Stuttgart, Germany, in September 2012. The 10 revised full papers selected from 21 submissions are presented together with the 12 poster abstracts and focus on the combination of new aspects of microprocessor technologies, parallel applications, numerical simulation, and software development; thus they clearly show the potential of emerging technologies in the area of multicore and manycore processors that are paving the way towards personal supercomputing and very likely towards exascale computing.
Formal Methods for Open Object-Based Distributed Systems presents the leading edge in several related fields, specifically object-oriented programming, open distributed systems and formal methods for object-oriented systems. With increased support within industry regarding these areas, this book captures the most up-to-date information on the subject. Many topics are discussed, including the following important areas: object-oriented design and programming; formal specification of distributed systems; open distributed platforms; types, interfaces and behaviour; formalisation of object-oriented methods. This volume comprises the proceedings of the International Workshop on Formal Methods for Open Object-based Distributed Systems (FMOODS), sponsored by the International Federation for Information Processing (IFIP), which was held in Florence, Italy, in February 1999. Formal Methods for Open Object-Based Distributed Systems is suitable as a secondary text for graduate-level courses in computer science and telecommunications, and as a reference for researchers and practitioners in industry, commerce and government.
Semiotics, the science of signs, has long been recognised as an important discipline for understanding information and communications. Moreover it has found wide application in other areas of computer science, as it offers an effective insight into organisations and the computer systems that support them. An organisation may be viewed as a system of information and communication in which human actors, with the assistance of information technology, are able to process, represent, store and consume information. Computer systems that fit into an organisation and that support and enhance its performance and competitiveness, can be better delivered if semiotic principles are understood and applied. In this book, first published in 2000, semiotic methods are introduced and illustrated through three major case studies, which demonstrate how information systems can be developed to meet business requirements and support business objectives. It will appeal to academics, systems developers and analysts.
Parallel processing is seen today as the means to improve the power of computing facilities by breaking the Von Neumann bottleneck of conventional sequential computer architectures. By defining appropriate parallel computation models definite advantages can be obtained. Parallel processing is at the center of research in Europe in the field of Information Processing Systems, so the CEC has funded the ESPRIT Supernode project to develop a low-cost, high-performance, multiprocessor machine. The result of this project is a modular, reconfigurable architecture based on INMOS transputers: T.Node. This machine can be considered a research, industrial and commercial success. The CEC has decided to continue to encourage manufacturers as well as researchers and end-users of transputers by funding other projects in this field. This book presents course papers of the Eurocourse given at the Joint Research Centre in ISPRA (Italy) from the 4th to the 8th of November 1991. First we present an overview of various trends in the design of parallel architectures, especially of the T.Node with its software development environments, new distributed system aspects and also new hardware extensions based on the INMOS T9000 processor. In a second part, we review some real case applications in the field of image synthesis, image processing, signal processing, terrain modeling, particle physics simulation and also enhanced parallel and distributed numerical methods on T.Node.
This book contains a selection of thoroughly refereed and revised papers from the Third International ICST Conference on Digital Forensics and Cyber Crime, ICDF2C 2011, held October 26-28 in Dublin, Ireland. The field of digital forensics is becoming increasingly important for law enforcement, network security, and information assurance. It is a multidisciplinary area that encompasses a number of fields, including law, computer science, finance, networking, data mining, and criminal justice. The 24 papers in this volume cover a variety of topics ranging from tactics of cyber crime investigations to digital forensic education, network forensics, and the use of formal methods in digital investigations. There is a large section addressing forensics of mobile digital devices.
This book constitutes the proceedings of the 8th International ICST Conference, TridentCom 2012, held in Thessaloniki, Greece, in June 2012. Out of numerous submissions the Program Committee finally selected 51 full papers. These papers cover topics such as future Internet testbeds, wireless testbeds, federated and large scale testbeds, network and resource virtualization, overlay network testbeds, management provisioning and tools for networking research, and experimentally driven research and user experience evaluation.
High Performance Computing Systems and Applications contains a selection of fully refereed papers presented at the 14th International Conference on High Performance Computing Systems and Applications held in Victoria, Canada, in June 2000. This book presents the latest research in HPC Systems and Applications, including distributed systems and architecture, numerical methods and simulation, network algorithms and protocols, computer architecture, distributed memory, and parallel algorithms. It also covers such topics as applications in astrophysics and space physics, cluster computing, numerical simulations for fluid dynamics, electromagnetics and crystal growth, networks and the Grid, and biology and Monte Carlo techniques. High Performance Computing Systems and Applications is suitable as a secondary text for graduate level courses, and as a reference for researchers and practitioners in industry.
Research into grid computing has been driven by the need to solve large-scale, increasingly complex problems for scientific applications. Yet the applications of grid computing for business and casual users did not begin to emerge until the development of the concept of cloud computing, fueled by advances in virtualization techniques, coupled with the increased availability of ever-greater Internet bandwidth. The appeal of this new paradigm is mainly based on its simplicity, and the affordable price for seamless access to both computational and storage resources. This timely text/reference introduces the fundamental principles and techniques underlying grids, clouds and virtualization technologies, as well as reviewing the latest research and expected future developments in the field. Readers are guided through the key topics by internationally recognized experts, enabling them to develop their understanding of an area likely to play an ever more significant role in coming years. Topics and features: presents contributions from an international selection of experts in the field; provides a thorough introduction and overview of existing technologies in grids, clouds and virtualization, including a brief history of the field; examines the basic requirements for performance isolation of virtual machines on multi-core servers, analyzing a selection of system virtualization technologies; examines both business and scientific applications of grids and clouds, including their use in the life sciences and for high-performance computing; explores cloud building technologies, architectures for enhancing grid infrastructures with cloud computing, and cloud performance; discusses energy aware grids and clouds, workflows on grids and clouds, and cloud and grid programming models. This useful text will enable interested readers to familiarize themselves with the key topics of grids, clouds and virtualization, and to contribute to new advances in the field. Researchers, undergraduate and graduate students, system designers and programmers, and IT policy makers will all benefit from the material covered.
"Analysis of Turbulent Flows" is written by one of the most prolific authors in the field of CFD. Professor of Aerodynamics at SUPAERO and Director of DMAE at ONERA, Professor Tuncer Cebeci calls on both his academic and industrial experience when presenting this work. Each chapter has been specifically constructed to provide a comprehensive overview of turbulent flow and its measurement. "Analysis of Turbulent Flows" serves as an advanced textbook for PhD candidates working in the field of CFD and is essential reading for researchers, practitioners in industry and MSc and MEng students. The field of CFD is strongly represented by the following
corporate organizations: Boeing, Airbus, Thales, United
Technologies and General Electric. Government bodies and academic
institutions also have a strong interest in this exciting
field.
Database and Application Security XV provides a forum for original research results, practical experiences, and innovative ideas in database and application security. With the rapid growth of large databases and the application systems that manage them, security issues have become a primary concern in business, industry, government and society. These concerns are compounded by the expanding use of the Internet and wireless communication technologies. This volume covers a wide variety of topics related to security and privacy of information in systems and applications, including: * Access control models; * Role and constraint-based access control; * Distributed systems; * Information warfare and intrusion detection; * Relational databases; * Implementation issues; * Multilevel systems; * New application areas including XML. Database and Application Security XV contains papers, keynote addresses, and panel discussions from the Fifteenth Annual Working Conference on Database and Application Security, organized by the International Federation for Information Processing (IFIP) Working Group 11.3 and held July 15-18, 2001 in Niagara on the Lake, Ontario, Canada.
This book constitutes the refereed proceedings of the 9th International Workshop on Economics of Grids, Clouds, Systems, and Services, GECON 2012, held in Berlin, Germany, in November 2012. The 12 revised full papers presented together with 6 work in progress papers were carefully reviewed and selected from more than 36 submissions. The papers are organized in the following topical sections: market mechanisms, pricing and negotiation; resource allocation, scheduling and admission control; work in progress on tools and techniques for cost-efficient service selection; market modeling; trust; cloud computing in education; and work in progress on cloud adoption and business models.
This book constitutes the proceedings of the Second International Conference on Network Computing and Information Security, NCIS 2012, held in Shanghai, China, in December 2012. The 104 revised papers presented in this volume were carefully reviewed and selected from 517 submissions. They are organized in topical sections named: applications of cryptography; authentication and non-repudiation; cloud computing; communication and information systems; design and analysis of cryptographic algorithms; information hiding and watermarking; intelligent networked systems; multimedia computing and intelligence; network and wireless network security; network communication; parallel and distributed systems; security modeling and architectures; sensor network; signal and information processing; virtualization techniques and applications; and wireless network.
This state-of-the-art survey gives a systematic presentation of recent advances in the design and validation of computer architectures. The book covers a comprehensive range of architecture design and validation methods, from computer aided high-level design of VLSI circuits and systems to layout and testable design, including the modeling and synthesis of behavior and dataflow, cell-based logic optimization, machine assisted verification, and virtual machine design.
Motivation: It is now possible to build powerful single-processor and multiprocessor systems and use them efficiently for data processing, which has seen an explosive expansion in many areas of computer science and engineering. One approach to meeting the performance requirements of the applications has been to utilize the most powerful single-processor system that is available. When such a system does not provide the performance requirements, pipelined and parallel processing structures can be employed. The concept of parallel processing is a departure from sequential processing. In sequential computation one processor is involved and performs one operation at a time. On the other hand, in parallel computation several processors cooperate to solve a problem, which reduces computing time because several operations can be carried out simultaneously. Using several processors that work together on a given computation illustrates a new paradigm in computer problem solving which is completely different from sequential processing. From the practical point of view, this provides sufficient justification to investigate the concept of parallel processing and related issues, such as parallel algorithms. Parallel processing involves utilizing several factors, such as parallel architectures, parallel algorithms, parallel programming languages and performance analysis, which are strongly interrelated. In general, four steps are involved in performing a computational problem in parallel. The first step is to understand the nature of computations in the specific application domain.
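As a rough illustration of the sequential versus parallel contrast described above (not an example from the book), the sketch below splits a simple computation across several cooperating processes; the workload (a sum of squares), the chunking scheme, and the use of Python's multiprocessing pool are assumptions made for the example.

```python
# Illustrative sketch of parallel computation on cooperating processes
# (not from the book); the workload and the chunking are assumptions.
from multiprocessing import Pool

def partial_sum(bounds):
    """Sum of squares over a half-open range [lo, hi)."""
    lo, hi = bounds
    return sum(i * i for i in range(lo, hi))

def parallel_sum(n, num_procs=4):
    # Partition the computation into independent chunks.
    step = n // num_procs
    chunks = [(i * step, n if i == num_procs - 1 else (i + 1) * step)
              for i in range(num_procs)]
    # Execute the chunks on cooperating processes and combine the results.
    with Pool(num_procs) as pool:
        return sum(pool.map(partial_sum, chunks))

if __name__ == "__main__":
    n = 1_000_000
    assert parallel_sum(n) == sum(i * i for i in range(n))
    print(parallel_sum(n))
```

The partial sums are independent, so the processes need no communication beyond returning their results, which is what makes this particular decomposition straightforward.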
The Marktoberdorf Summer School 1995 'Logic of Computation' was the 16th in a series of Advanced Study Institutes under the sponsorship of the NATO Scientific Affairs Division held in Marktoberdorf. Its scientific goal was to survey recent progress on the impact of logical methods in software development. The courses dealt with many different aspects of this interplay, where major progress has been made. Of particular importance were the following. * The proofs-as-programs paradigm, which makes it possible to extract verified programs directly from proofs. Here a higher order logic or type theoretic setup of the underlying language has developed into a standard. * Extensions of logic programming, e.g. by allowing more general formulas and/or higher order languages. * Proof theoretic methods, which provide tools to deal with questions of feasibility of computations and also to develop a general mathematical understanding of complexity questions. * Rewrite systems and unification, again in a higher order context. Closely related is the now well-established Gröbner basis theory, which recently has found interesting applications. * Category theoretic and more generally algebraic methods and techniques to analyze the semantics of programming languages. All these issues were covered by a team of leading researchers. Their courses were grouped under the following headings.
The success of VHDL since it was balloted in 1987 as an IEEE standard may look incomprehensible to the large population of hardware designers, who had never heard of Hardware Description Languages before (for at least 90% of them), as well as to the few hundred specialists who had been working on these languages for a long time (25 years for some of them). Until 1988, only a very small subset of designers, in a few large companies, were used to describing their designs using a proprietary HDL, or sometimes an HDL inherited from a university when some software environment happened to be developed around it, allowing usability by third parties. A number of benefits were definitely attributed to this practice, such as functional verification of a specification through simulation, first performance evaluation of a tentative design, and sometimes automatic microprogram generation or even automatic high-level synthesis. As there was apparently no market for HDLs, the ECAD vendors did not care about them, start-up companies were seldom able to survive in this area, and large users of proprietary tools were spending more and more people and money just to maintain their internal systems.
Developing correct and efficient software is far more complex for parallel and distributed systems than it is for sequential processors. Some of the reasons for this added complexity are: the lack of a universally acceptable parallel and distributed programming paradigm, the criticality of achieving high performance, and the difficulty of writing correct parallel and distributed programs. These factors collectively influence the current status of parallel and distributed software development tools efforts. Tools and Environments for Parallel and Distributed Systems addresses the above issues by describing working tools and environments, and gives a solid overview of some of the fundamental research being done worldwide. Topics covered in this collection are: mainstream program development tools, performance prediction tools and studies; debugging tools and research; and nontraditional tools. Audience: Suitable as a secondary text for graduate level courses in software engineering and parallel and distributed systems, and as a reference for researchers and practitioners in industry.
Domain Modelling for Interactive Systems Design brings together in one place important contributions and up-to-date research results in this fast moving area. Domain Modelling for Interactive Systems Design serves as an excellent reference, providing insight into some of the most challenging research issues in the field.
Logic and Complexity looks at basic logic as it is used in Computer Science, and provides students with a logical approach to Complexity theory. With plenty of exercises, this book presents classical notions of mathematical logic, such as decidability, completeness and incompleteness, as well as new ideas brought by complexity theory such as NP-completeness, randomness and approximations, providing a better understanding for efficient algorithmic solutions to problems. Divided into three parts, it covers: - Model Theory and Recursive Functions - introducing the basic model theory of propositional, 1st order, inductive definitions and 2nd order logic. Recursive functions, Turing computability and decidability are also examined. - Descriptive Complexity - looking at the relationship between definitions of problems, queries, properties of programs and their computational complexity. - Approximation - explaining how some optimization problems and counting problems can be approximated according to their logical form. Logic is important in Computer Science, particularly for verification problems and database query languages such as SQL. Students and researchers in this field will find this book of great interest.
Multithreaded computer architecture has emerged as one of the most promising and exciting avenues for the exploitation of parallelism. This new field represents the confluence of several independent research directions which have united over a common set of issues and techniques. Multithreading draws on recent advances in dataflow, RISC, compiling for fine-grained parallel execution, and dynamic resource management. It offers the hope of dramatic performance increases through parallel execution for a broad spectrum of significant applications based on extensions to `traditional' approaches. Multithreaded Computer Architecture is divided into four parts, reflecting four major perspectives on the topic. Part I provides the reader with basic background information, definitions, and surveys of work which have in one way or another been pivotal in defining and shaping multithreading as an architectural discipline. Part II examines key elements of multithreading, highlighting the fundamental nature of latency and synchronization. This section presents clever techniques for hiding latency and supporting large synchronization name spaces. Part III looks at three major multithreaded systems, considering issues of machine organization and compilation strategy. Part IV concludes the volume with an analysis of multithreaded architectures, showcasing methodologies and actual measurements. Multithreaded Computer Architecture: A Summary of the State of the Art is an excellent reference source and may be used as a text for advanced courses on the subject.
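The volume description above points to latency hiding as a central benefit of multithreading. As a loose, language-level illustration only (not drawn from the book), the sketch below overlaps a simulated long-latency operation with independent computation using Python threads; the latency value and the workload are assumptions made for the example.

```python
# Rough illustration of latency hiding with threads (not from the book):
# a long-latency operation (simulated with sleep) runs while independent
# computation proceeds, so elapsed time is roughly the maximum of the two
# rather than their sum.
import threading
import time

def long_latency_load(result):
    time.sleep(0.5)               # stands in for a remote memory or I/O access
    result["data"] = 42

def independent_work():
    return sum(i * i for i in range(100_000))

if __name__ == "__main__":
    result = {}
    t = threading.Thread(target=long_latency_load, args=(result,))
    start = time.time()
    t.start()                     # issue the long-latency operation
    local = independent_work()    # useful work proceeds while it is outstanding
    t.join()                      # synchronize before using the loaded value
    print(result["data"] + local, f"elapsed: {time.time() - start:.2f}s")
```

Because the long-latency operation and the local computation overlap, the elapsed time is close to the longer of the two rather than their sum, which is the effect hardware multithreading achieves at a much finer grain.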