This book constitutes the thoroughly refereed proceedings of the 22nd International Conference on Computer Networks, CN 2015, held in Brunow, Poland, in June 2015. The 42 revised full papers presented were carefully reviewed and selected from 79 submissions. The papers in these proceedings cover the following topics: computer networks, distributed computer systems, communications and teleinformatics.
This proceedings set contains 85 selected full papers presented at the 3rd International Conference on Modelling, Computation and Optimization in Information Systems and Management Sciences - MCO 2015, held on May 11-13, 2015 at Lorraine University, France. Part I of the two-volume set includes articles devoted to combinatorial optimization and applications; DC programming and DCA: thirty years of developments; dynamic optimization; modelling and optimization in financial engineering; multiobjective programming; numerical optimization; spline approximation and optimization; as well as variational principles and applications.
Computing performance mattered most when hardware was still expensive, because the hardware had to be put to the best possible use. Later this criterion was no longer critical, since hardware had become inexpensive. Meanwhile, however, people have realized that performance again plays a significant role, because of the major drain on system resources involved in developing complex applications. This book distinguishes between three levels of performance optimization: the system level, the application level and the business process level. On each, optimizations can be achieved and cost-cutting potentials can be identified. The book presents the relevant theoretical background and measuring methods as well as proposed solutions. An evaluation of network monitors and checklists rounds out the work.
This textbook serves as an introduction to fault tolerance, intended for upper-division undergraduate students, graduate-level students and practicing engineers in need of an overview of the field. Readers will develop skills in modeling and evaluating fault-tolerant architectures in terms of reliability, availability and safety. They will gain a thorough understanding of fault-tolerant computers, including both the theory of how to design and evaluate them and the practical knowledge of achieving fault tolerance in electronic, communication and software systems. Coverage includes fault-tolerance techniques through hardware, software, information and time redundancy. The content is designed to be highly accessible, including numerous examples and exercises. Solutions and PowerPoint slides are available for instructors.
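As a hedged taste of the kind of reliability evaluation described above, the short sketch below compares a single module with a triple modular redundancy (TMR) arrangement under the standard exponential failure model; the failure rate and mission times are assumed for illustration, not taken from the book.

```python
import math

def simplex_reliability(lam, t):
    """Reliability of a single module with constant failure rate lam (exponential model)."""
    return math.exp(-lam * t)

def tmr_reliability(lam, t):
    """Triple modular redundancy with a perfect voter: survives while at least 2 of 3 modules work."""
    r = simplex_reliability(lam, t)
    return 3 * r**2 - 2 * r**3

lam = 1e-4  # assumed failure rate: one failure per 10,000 hours
for t in (100, 1000, 10000):
    print(f"t={t:>6} h  simplex={simplex_reliability(lam, t):.4f}  TMR={tmr_reliability(lam, t):.4f}")
```

A well-known effect visible in the numbers is that TMR improves reliability while module reliability stays above 0.5, but on very long missions it actually falls below that of a single module.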
This book constitutes the refereed proceedings of the 18th National Conference on Computer Engineering and Technology, NCCET 2014, held in Guiyang, China, during July/August 2014. The 18 papers presented were carefully reviewed and selected from 85 submissions. They are organized in topical sections on processor architecture; computer application and software optimization; technology on the horizon.
This book offers readers a quick, comprehensive and up-to-date overview of the most important methodologies, technologies, APIs and standards related to the portability and interoperability of cloud applications and services, illustrated by a number of use cases representing a variety of interoperability and portability scenarios. The lack of portability and interoperability between cloud platforms at different service levels is the main issue affecting cloud-based services today. The brokering, negotiation, management, monitoring and reconfiguration of cloud resources are challenging tasks for developers and users of cloud applications due to the different business models associated with resource consumption, and to the variety of services and features offered by different cloud providers. In Chapter 1 the concepts of cloud portability and interoperability are introduced, together with the issues and limitations arising when such features are lacking or ignored. Subsequently, Chapter 2 provides an overview of the state-of-the-art methodologies and technologies that are currently used or being explored to enable cloud portability and interoperability. Chapter 3 illustrates the main cross-platform cloud APIs and how they can solve interoperability and portability issues. In turn, Chapter 4 presents a set of ready-to-use solutions which, either because of their broad-scale use in cloud computing scenarios or because they utilize established or emerging standards, play a fundamental part in providing interoperable and portable solutions. Lastly, Chapter 5 presents an overview of emerging standards for cloud interoperability and portability. Researchers and developers of cloud-based services will find here a brief survey of the relevant methodologies, APIs and standards, illustrated by case studies and complemented by an extensive reference list for more detailed descriptions of every topic covered.
This book constitutes the refereed proceedings of the 12th European Conference on Wireless Sensor Networks, EWSN 2015, held in Porto, Portugal, in February 2015. The 14 full papers and 9 short papers presented were carefully reviewed and selected from 85 submissions. They cover a wide range of topics grouped into sessions on services and applications, mobility and delay-tolerance, routing and data dissemination, and human-centric sensing.
The demand for large-scale dependable systems, such as Air Traffic Management, industrial plants and space systems, is attracting the efforts of many world-leading European companies and SMEs in the area, and is expected to increase in the near future. The adoption of Off-The-Shelf (OTS) items plays a key role in such a scenario. OTS items allow mastering complexity and reducing costs and time-to-market; however, achieving these goals while ensuring dependability requirements at the same time is challenging. The CRITICAL STEP project establishes a strategic collaboration between academic and industrial partners, and proposes a framework to support the development of dependable, OTS-based, critical systems. The book introduces methods and tools adopted by the critical systems industry, and surveys key achievements of the CRITICAL STEP project along four directions: fault injection tools, V&V of critical systems, runtime monitoring and evaluation techniques, and security assessment.
This volume constitutes the refereed proceedings of the 10th International Conference on Energy Minimization Methods in Computer Vision and Pattern Recognition, EMMCVPR 2015, held in Hong Kong, China, in January 2015. The 36 revised full papers were carefully reviewed and selected from 45 submissions. The papers are organized in topical sections on discrete and continuous optimization; image restoration and inpainting; segmentation; PDE and variational methods; motion, tracking and multiview reconstruction; statistical methods and learning; and medical image analysis.
This book constitutes the refereed proceedings of the 14th International Conference on Systems Simulation, Asia Simulation 2014, held in Kitakyushu, Japan, in October 2014. The 32 revised full papers presented were carefully reviewed and selected from 69 submissions. The papers are organized in topical sections on modeling and simulation technology; network simulation; high performance computing and cloud simulation; numerical simulation and visualization; simulation of instrumentation and control application; simulation technology in diversified higher education; general purpose simulation.
The two volumes LNCS 8805 and 8806 constitute the thoroughly refereed post-conference proceedings of 18 workshops held at the 20th International Conference on Parallel Computing, Euro-Par 2014, in Porto, Portugal, in August 2014. The 100 revised full papers presented were carefully reviewed and selected from 173 submissions. The volumes include papers from the following workshops: APCI&E (First Workshop on Applications of Parallel Computation in Industry and Engineering), BigDataCloud (Third Workshop on Big Data Management in Clouds), DIHC (Second Workshop on Dependability and Interoperability in Heterogeneous Clouds), FedICI (Second Workshop on Federative and Interoperable Cloud Infrastructures), HeteroPar (12th International Workshop on Algorithms, Models and Tools for Parallel Computing on Heterogeneous Platforms), HiBB (5th Workshop on High Performance Bioinformatics and Biomedicine), LSDVE (Second Workshop on Large Scale Distributed Virtual Environments on Clouds and P2P), MuCoCoS (7th International Workshop on Multi-/Many-core Computing Systems), OMHI (Third Workshop on On-chip Memory Hierarchies and Interconnects), PADAPS (Second Workshop on Parallel and Distributed Agent-Based Simulations), PROPER (7th Workshop on Productivity and Performance), Resilience (7th Workshop on Resiliency in High Performance Computing with Clusters, Clouds, and Grids), REPPAR (First International Workshop on Reproducibility in Parallel Computing), ROME (Second Workshop on Runtime and Operating Systems for the Many Core Era), SPPEXA (Workshop on Software for Exascale Computing), TASUS (First Workshop on Techniques and Applications for Sustainable Ultrascale Computing Systems), UCHPC (7th Workshop on Unconventional High Performance Computing), and VHPC (9th Workshop on Virtualization in High-Performance Cloud Computing).
Enterprise developers face several challenges when it comes to building serverless applications, such as integrating applications and building container images from source. With more than 60 practical recipes, this cookbook helps you solve these issues with Knative, the first serverless platform natively designed for Kubernetes. Each recipe contains detailed examples and exercises, along with a discussion of how and why it works. If you have a good understanding of serverless computing and Kubernetes core resources such as deployments, services, routes, and replicas, the recipes in this cookbook show you how to apply Knative in real enterprise application development. Authors Kamesh Sampath and Burr Sutter include chapters on autoscaling, build and eventing, observability, Knative on OpenShift, and more. With this cookbook, you'll learn how to: efficiently build, deploy, and manage modern serverless workloads; apply Knative in real enterprise scenarios, including advanced eventing; monitor your Knative serverless applications effectively; integrate Knative with CI/CD principles, such as using pipelines for faster, more successful production deployments; and deploy a rich ecosystem of enterprise integration patterns and connectors in Apache Camel K as Kubernetes and Knative components.
To solve performance problems in modern computing infrastructures, often comprising thousands of servers running hundreds of applications spanning multiple tiers, you need tools that go beyond mere reporting. You need tools that enable performance analysis of application workflow across the entire enterprise. That's what PDQ (Pretty Damn Quick) provides. PDQ is an open-source performance analyzer based on the paradigm of queues. Queues are ubiquitous in every computing environment as buffers, and since any application architecture can be represented as a circuit of queueing delays, PDQ is a natural fit for analyzing system performance. Building on the success of the first edition, this considerably expanded second edition now comprises four parts. Part I contains the foundational concepts, as well as a new first chapter that explains the central role of queues in successful performance analysis. Part II provides the basics of queueing theory in a highly intelligible style for the non-mathematician; little more than high-school algebra is required. Part III presents many practical examples of how PDQ can be applied. The PDQ manual has been relegated to an appendix in Part IV, along with solutions to the exercises contained in each chapter. Throughout, the Perl code listings have been newly formatted to improve readability. The PDQ code and updates to the PDQ manual are available from the author's web site at www.perfdynamics.com.
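As a hedged illustration of the "circuit of queueing delays" idea (plain queueing arithmetic in Python, not PDQ's own API), the sketch below sums M/M/1 residence times across a hypothetical three-tier system; the service demands and arrival rate are invented for the example.

```python
# Minimal sketch of the open queueing-network arithmetic that tools like PDQ automate.
# A hypothetical three-tier system is modeled as three M/M/1 queues, each visited once per request.
service_demands = {"web": 0.002, "app": 0.010, "db": 0.015}  # seconds per request (assumed)
arrival_rate = 50.0                                          # requests per second (assumed)

total_response = 0.0
for tier, demand in service_demands.items():
    utilization = arrival_rate * demand            # rho = lambda * D
    assert utilization < 1.0, f"{tier} tier is saturated"
    residence = demand / (1.0 - utilization)       # M/M/1 residence time: R = D / (1 - rho)
    total_response += residence
    print(f"{tier:>3}: utilization {utilization:.0%}, residence {residence * 1000:.2f} ms")

print(f"end-to-end response time: {total_response * 1000:.2f} ms")
```

The slowest, most heavily utilized queue (here the database) dominates the end-to-end response time, which is exactly the kind of bottleneck analysis a queueing model makes cheap to perform.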
Data mining is the process of posing queries and extracting useful information, patterns and trends previously unknown from large quantities of data [Thu, 00]. It is the process where intelligent tools are applied in order to extract data patterns [JM, 01]. This encompasses a number of different technical approaches, such as cluster analysis, learning classification and association rules, and finding dependencies. Agents are defined as software entities that perform some set of tasks on behalf of users with some degree of autonomy. This research work deals with developing an automated data mining system that applies familiar data mining algorithms to object-oriented databases by means of intelligent agents, and it proposes a supporting framework. Because the system uses intelligent agents, a new user is able to interact with it without much technical data mining knowledge. The system automatically selects the appropriate data mining technique and the necessary fields from the database at the appropriate time, without expecting users to specify the technique or its parameters. A new framework is also proposed for incorporating intelligent agents into automated data mining. One of the major goals in developing this system is to hand control to the computer so that it learns automatically by using intelligent agents.
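A hypothetical sketch of what such agent-style technique selection could look like: when no labels accompany the records, a clustering algorithm is chosen; otherwise a classifier is trained. The function, the random dataset, and the choice of scikit-learn models are illustrative assumptions, not the system described in the book.

```python
# Hypothetical agent-style technique selection (illustrative only):
# cluster when no labels are supplied, classify when they are.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.tree import DecisionTreeClassifier

def mine(records, labels=None):
    """Pick a mining technique automatically based on the available data."""
    if labels is None:
        model = KMeans(n_clusters=3, n_init=10).fit(records)     # unsupervised: cluster analysis
        return "clustering", model.labels_
    model = DecisionTreeClassifier().fit(records, labels)        # supervised: classification rules
    return "classification", model.predict(records)

rng = np.random.default_rng(0)
data = rng.normal(size=(30, 4))
print(mine(data)[0])                                   # -> clustering
print(mine(data, rng.integers(0, 2, size=30))[0])      # -> classification
```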
Architecture Description Languages is an essential reference for both academic and professional researchers in the field of system engineering and design. The papers presented in this volume were selected from the workshop of the same name that was held as part of the World Computer Congress 2004 conference, held in Toulouse, France, in August 2004. This collection presents significant research and innovative developments and applications from both academic researchers and industry practitioners on topics ranging from semantics to tool and development environments. The aim of an ADL is to formally describe software and hardware architectures. Usually, an ADL describes components, their interfaces, their structures, their interactions (structure of data flow and control flow) and the mappings to hardware systems. A major goal of such a description is to allow analysis with respect to several aspects such as timing, safety and reliability. The papers in this state-of-the-art volume cover such topics of interest as components, connectors and composition; semantics and formalization; verification, simulation and test; tools and development environments; standardization; and industrial projects. To encourage closer interaction between the academic and industrial networking research communities, the workshop welcomed academic research papers as well as industrial contributions, and both are included here. This makes the collection important not only for ADL experts and researchers, but also for all teachers and administrators interested in ADLs.
Making the most efficient use of computer systems has rapidly become a leading topic of interest for the computer industry and its customers alike. However, the focus of these discussions is often on single, isolated, and specific architectural and technological improvements for power reduction and conservation, while ignoring the fact that power efficiency as a ratio of performance to power consumption is equally influenced by performance improvements and architectural power reduction. Furthermore, efficiency can be influenced on all levels of today's system hierarchies, from single cores all the way to distributed Grid environments. To improve execution and power efficiency requires progress in such diverse fields as program optimization, optimization of program scheduling, and power reduction of idling system components for all levels of the system hierarchy. Improving computer system efficiency requires improving system performance and reducing system power consumption. To research and reach reasonable conclusions about system performance we need to not only understand the architectures of our computer systems and the available array of code transformations for performance optimizations, but we also need to be able to express this understanding in performance models good enough to guide decisions about code optimizations for specific systems. This understanding is necessary on all levels of the system hierarchy, from single cores to nodes to full high performance computing (HPC) systems, and eventually to Grid environments with multiple systems and resources.
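A tiny numerical illustration of the ratio mentioned above, with assumed figures: performance-per-watt responds equally to a performance gain and to a proportional power reduction.

```python
# Assumed baseline figures for illustration only.
baseline_perf_gflops = 500.0   # throughput
baseline_power_watts = 250.0   # power draw

def efficiency(perf, power):
    """Power efficiency as the ratio of performance to power (GFLOPS per watt)."""
    return perf / power

print(efficiency(baseline_perf_gflops, baseline_power_watts))          # 2.0 GFLOPS/W baseline
print(efficiency(baseline_perf_gflops * 1.2, baseline_power_watts))    # +20% performance -> 2.4
print(efficiency(baseline_perf_gflops, baseline_power_watts / 1.2))    # power cut by the same factor -> 2.4
```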
Practical Programming in the Cell Broadband Engine offers a unique programming guide for the Cell Broadband Engine, demonstrating a large number of real-life programs to identify and solve problems in engineering, logic design, VLSI CAD, number theory, graph theory, computational geometry, image processing, and other subjects. Key features include:
At the dawn of the 21st century and the information age, communication and computing power are becoming ever more widely available, virtually pervading almost every aspect of modern socio-economical interactions. Consequently, the potential for realizing a significantly greater number of technology-mediated activities has emerged. Indeed, many of our modern activity fields are heavily dependent upon various underlying systems and software-intensive platforms. Such technologies are commonly used in everyday activities such as commuting, traffic control and management, mobile computing, navigation, and mobile communication. Thus, the correct function of the aforementioned computing systems becomes a major concern. This is all the more important since, in spite of the numerous updates, patches and firmware revisions being constantly issued, newly discovered logical bugs in a wide range of modern software platforms (e.g., operating systems) and software-intensive systems (e.g., embedded systems) are just as frequently being reported. In addition, many of today's products and services are presently being deployed in a highly competitive environment wherein a product or service succeeds in most cases thanks to its quality-to-price ratio for a given set of features. Accordingly, a number of critical aspects have to be considered, such as the ability to pack as many features as needed into a given product or service while concurrently maintaining high quality, reasonable price, and a short time-to-market.
GERAD celebrates its 25th anniversary this year. The Center was created in 1980 by a small group of professors and researchers of HEC Montreal, McGill University and the Ecole Polytechnique de Montreal. GERAD's activities achieved sufficient scope to justify its conversion in June 1988 into a Joint Research Centre of HEC Montreal, the Ecole Polytechnique de Montreal and McGill University. In 1996, the Universite du Quebec a Montreal joined these three institutions. GERAD has fifty members (professors), more than twenty research associates and postdoctoral students, and more than two hundred master's and Ph.D. students. GERAD is a multi-university center and a vital forum for the development of operations research. Its mission is defined around the following four complementary objectives: * the original and expert contribution to all research fields in GERAD's areas of expertise; * the dissemination of research results in the best scientific outlets as well as in society in general; * the training of graduate students and postdoctoral researchers; * the contribution to the economic community by solving important problems and providing transferable tools.
The resilience of computing systems includes their dependability as well as their fault tolerance and security. It defines the ability of a computing system to perform properly in the presence of various kinds of disturbances and to recover from any service degradation. These properties are immensely important in a world where many aspects of our daily life depend on the correct, reliable and secure operation of often large-scale distributed computing systems. Wolter and her co-editors grouped the 20 chapters from leading researchers into seven parts: an introduction and motivating examples, modeling techniques, model-driven prediction, measurement and metrics, testing techniques, case studies, and conclusions. The core is formed by 12 technical papers, which are framed by motivating real-world examples and case studies, thus illustrating the necessity and the application of the presented methods. While the technical chapters are independent of each other and can be read in any order, the reader will benefit more from the case studies if he or she reads them together with the related techniques. The papers combine topics like modeling, benchmarking, testing, performance evaluation, and dependability, and aim at academic and industrial researchers in these areas as well as graduate students and lecturers in related fields. In this volume, they will find a comprehensive overview of the state of the art in a field of continuously growing practical importance.
This book constitutes the proceedings of the Third International Workshop on Foundational and Practical Aspects of Resource Analysis, FOPARA 2013, held in Bertinoro, Italy, in August 2013. The 9 papers presented in this volume were carefully reviewed and selected from 12 submissions. They deal with traditional approaches to complexity analysis, differential privacy, and probabilistic analysis of programs.
This is a new type of edited volume in the Frontiers in Electronic Testing book series devoted to recent advances in electronic circuits testing. The book is a comprehensive elaboration on important topics which capture major research and development efforts today. "Hot" topics of current interest to the test technology community have been selected, and the authors are key contributors in the corresponding topics.
It is widely acknowledged that the cost of validation and testing comprises a significant percentage of the overall development costs for electronic systems today, and is expected to escalate sharply in the future. Many studies have shown that up to 70% of the design development time and resources are spent on functional verification. Functional errors manifest themselves very early in the design flow, and unless they are detected up front, they can result in severe consequences, both financially and from a safety viewpoint. Indeed, several recent instances of high-profile functional errors (e.g., the Pentium FDIV bug) have resulted in increased attention paid to verifying the functional correctness of designs. Recent efforts have proposed augmenting the traditional RTL simulation-based validation methodology with formal techniques in an attempt to uncover hard-to-find corner cases, with the goal of trying to reach RTL functional verification closure. However, what is often not highlighted is the fact that in spite of the tremendous time and effort put into such efforts at the RTL and lower levels of abstraction, the complexity of contemporary embedded systems makes it difficult to guarantee functional correctness at the system level under all possible operational scenarios. The problem is exacerbated in current System-on-Chip (SoC) design methodologies that employ Intellectual Property (IP) blocks composed of processor cores, coprocessors, and memory subsystems. Functional verification becomes one of the major bottlenecks in the design of such systems.
In the field of formal methods in computer science, concurrency theory is receiving a constantly increasing interest. This is especially true for process algebra. Although it had been originally conceived as a means for reasoning about the semantics of concurrent programs, process algebraic formalisms like CCS, CSP, ACP, the π-calculus, and their extensions (see, e.g., [154,119,112,22,155,181,30]) were soon used also for comprehending functional and nonfunctional aspects of the behavior of communicating concurrent systems. The scientific impact of process calculi and behavioral equivalences at the base of process algebra is witnessed not only by a very rich literature. It is in fact worth mentioning the standardization procedure that led to the development of the process algebraic language LOTOS [49], as well as the implementation of several modeling and analysis tools based on process algebra, like CWB [70] and CADP [93], some of which have been used in industrial case studies. Furthermore, process calculi and behavioral equivalences are by now adopted in university-level courses to teach the foundations of concurrent programming as well as the model-driven design of concurrent, distributed, and mobile systems. Nevertheless, after 30 years since its introduction, process algebra is rarely adopted in the practice of software development. On the one hand, its technicalities often obfuscate the way in which systems are modeled. As an example, if a process term comprises numerous occurrences of the parallel composition operator, it is hard to understand the communication scheme among the various subterms. On the other hand, process algebra is perceived as being difficult to learn and use by practitioners, as it is not close enough to the way they think of software systems.
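To make the remark about parallel composition concrete, here is a standard CCS synchronization step (a textbook example, not an excerpt from this book): the two components below communicate on the restricted name a, and the interaction appears externally as a single internal τ step.

```latex
% Standard CCS synchronization: a.P offers action a, \bar{a}.Q offers the co-action,
% and under the restriction on a their only possible move is the joint internal \tau step.
(a.P \mid \bar{a}.Q) \setminus \{a\} \;\xrightarrow{\tau}\; (P \mid Q) \setminus \{a\}
```

With only two components the communication scheme is evident; once a term contains many nested parallel compositions and restrictions, working out which pairs of subterms can actually synchronize becomes exactly the readability problem noted above.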
You may like...
Tools for High Performance Computing… by Hartmut Mix, Christoph Niethammer, … (Hardcover, R2,815)
Handbook of Research on Modern Systems… by Mahbubur Rahman Syed, Sharifun Nessa Syed (Hardcover, R6,992)
Large-Scale Fuzzy Interconnected Control… by Zhixiong Zhong, Chih-Min Lin (Hardcover, R4,591)
Cases on Lean Thinking Applications in… by Eduardo Guilherme Satolo, Robisom Damasceno Calado (Hardcover, R6,281)
Handbook of Research on 5G Networks and… by Augustine O Nwajana, Isibor Kennedy Ihianle (Hardcover, R8,415)
Information Systems, International… by Ralph Stair, George Reynolds (Paperback)