This guide demonstrates how virtual build and test can be supported by the Discrete Event System Specification (DEVS) simulation modeling formalism and the System Entity Structure (SES) simulation model ontology. The book examines a wide variety of Systems of Systems (SoS) problems, ranging from cloud computing systems to biological systems in agricultural food crops. Features: includes numerous exercises, examples and case studies throughout the text; presents a step-by-step introduction to DEVS concepts, encouraging hands-on practice in building sophisticated SoS models; illustrates virtual build and test for a variety of SoS applications using both commercial and open-source DEVS simulation environments; introduces an approach, based on activity concepts intrinsic to DEVS-based system design, that integrates both energy and information processing requirements; describes co-design modeling concepts and methods to capture separate and integrated software and hardware systems.
This timely text presents a comprehensive overview of fault tolerance techniques for high-performance computing (HPC). The text opens with a detailed introduction to the concepts of checkpoint protocols and scheduling algorithms, prediction, replication, silent error detection and correction, together with some application-specific techniques such as ABFT. Emphasis is placed on analytical performance models. This is then followed by a review of general-purpose techniques, including several checkpoint and rollback recovery protocols. Relevant execution scenarios are also evaluated and compared through quantitative models. Features: provides a survey of resilience methods and performance models; examines the various sources for errors and faults in large-scale systems; reviews the spectrum of techniques that can be applied to design a fault-tolerant MPI; investigates different approaches to replication; discusses the challenge of energy consumption of fault-tolerance methods in extreme-scale systems.
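The blurb's emphasis on analytical performance models for checkpointing can be made concrete with the classic first-order Young/Daly approximation for the optimal checkpoint period. The sketch below is illustrative only; the function name and the platform figures are invented for the example and are not taken from the book:

```python
import math

def young_daly_period(mtbf: float, checkpoint_cost: float) -> float:
    """First-order Young/Daly approximation of the optimal checkpoint
    period: W_opt = sqrt(2 * MTBF * C), valid when C << MTBF."""
    return math.sqrt(2.0 * mtbf * checkpoint_cost)

# Hypothetical platform: MTBF of 24 hours (86400 s), checkpoint cost of 60 s.
w = young_daly_period(mtbf=86400.0, checkpoint_cost=60.0)
print(f"checkpoint every {w:.0f} s")  # ~3220 s, i.e. roughly every 54 minutes
```

Checkpointing more often than this wastes time writing checkpoints; checkpointing less often loses too much work per failure, which is exactly the trade-off such quantitative models capture.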
Written in a unique style, this book is a valuable resource for faculty, graduate students, and researchers in the communications and networking area whose work interfaces with optimization. It teaches you how various optimization methods can be applied to solve complex problems in wireless networks. Each chapter reviews a specific optimization method and then demonstrates how to apply the theory in practice through a detailed case study taken from state-of-the-art research. You will learn various tips and step-by-step instructions for developing optimization models, reformulations, and transformations, particularly in the context of cross-layer optimization problems in wireless networks involving flow routing (network layer), scheduling (link layer), and power control (physical layer). Throughout, a combination of techniques from both operations research and computer science disciplines provides a holistic treatment of optimization methods and their applications. Each chapter includes homework exercises, with PowerPoint slides and a solutions manual for instructors available online.
This proceedings set contains 85 selected full papers presented at the 3rd International Conference on Modelling, Computation and Optimization in Information Systems and Management Sciences - MCO 2015, held on May 11-13, 2015 at Lorraine University, France. The present part I of the 2 volume set includes articles devoted to Combinatorial optimization and applications, DC programming and DCA: thirty years of Developments, Dynamic Optimization, Modelling and Optimization in financial engineering, Multiobjective programming, Numerical Optimization, Spline Approximation and Optimization, as well as Variational Principles and Applications.
This book constitutes the refereed proceedings of the 15th International Scientific Conference on Information Technologies and Mathematical Modeling, named after A. F. Terpugov, ITMM 2016, held in Katun, Russia, in September 2016. The 33 full papers presented together with 4 short papers were carefully reviewed and selected from 96 submissions. They are devoted to new results in the queueing theory and its applications, addressing specialists in probability theory, random processes, mathematical modeling as well as engineers dealing with logical and technical design and operational management of telecommunication and computer networks.
Computing power performance was important when hardware was still expensive, because hardware had to be put to the best use. Later on, this criterion was no longer critical, since hardware had become inexpensive. Meanwhile, however, people have realized that performance once again plays a significant role, because of the major drain on system resources involved in developing complex applications. This book distinguishes between three levels of performance optimization: the system level, the application level and the business process level. On each, optimizations can be achieved and cost-cutting potentials can be identified. The book presents the relevant theoretical background and measuring methods as well as proposed solutions. An evaluation of network monitors and checklists rounds out the work.
This book offers readers a quick, comprehensive and up-to-date overview of the most important methodologies, technologies, APIs and standards related to the portability and interoperability of cloud applications and services, illustrated by a number of use cases representing a variety of interoperability and portability scenarios. The lack of portability and interoperability between cloud platforms at different service levels is the main issue affecting cloud-based services today. The brokering, negotiation, management, monitoring and reconfiguration of cloud resources are challenging tasks for developers and users of cloud applications due to the different business models associated with resource consumption, and to the variety of services and features offered by different cloud providers. In chapter 1 the concepts of cloud portability and interoperability are introduced, together with the issues and limitations arising when such features are lacking or ignored. Subsequently, chapter 2 provides an overview of the state-of-the-art methodologies and technologies that are currently used or being explored to enable cloud portability and interoperability. Chapter 3 illustrates the main cross-platform cloud APIs and how they can solve interoperability and portability issues. In turn, chapter 4 presents a set of ready-to-use solutions which, either because of their broad-scale use in cloud computing scenarios or because they utilize established or emerging standards, play a fundamental part in providing interoperable and portable solutions. Lastly, chapter 5 presents an overview of emerging standards for cloud interoperability and portability. Researchers and developers of cloud-based services will find here a brief survey of the relevant methodologies, APIs and standards, illustrated by case studies and complemented by an extensive reference list for more detailed descriptions of every topic covered.
This book constitutes the refereed proceedings of the 18th National Conference on Computer Engineering and Technology, NCCET 2014, held in Guiyang, China, during July/August 2014. The 18 papers presented were carefully reviewed and selected from 85 submissions. They are organized in topical sections on processor architecture; computer application and software optimization; technology on the horizon.
This comprehensive summary of the state of the art in Ultra Wideband (UWB) system engineering takes you through all aspects of UWB design, from components through the propagation channel to system engineering aspects. Mathematical tools and basics are covered, allowing for a complete characterisation and description of the UWB scenario, in both the time and the frequency domains. UWB MMICs, antennas, antenna arrays, and filters are described, as well as quality measurement parameters and design methods for specific applications. The UWB propagation channel is discussed, including a complete mathematical description together with modeling tools. A system analysis is offered, addressing both radio and radar systems, and techniques for optimization and calibration. Finally, an overview of future applications of UWB technology is presented. Ideal for scientists as well as RF system and component engineers working in short range wireless technologies.
This volume presents selected papers from the International Conference on Reliability, Safety, and Hazard. It presents the latest developments in reliability engineering and probabilistic safety assessment, and brings together contributions from a diverse international community and covers all aspects of safety, reliability, and hazard assessment across a host of interdisciplinary applications. This book will be of interest to researchers in both academia and the industry.
This book constitutes the refereed proceedings of the 14th International Conference on Systems Simulation, Asia Simulation 2014, held in Kitakyushu, Japan, in October 2014. The 32 revised full papers presented were carefully reviewed and selected from 69 submissions. The papers are organized in topical sections on modeling and simulation technology; network simulation; high performance computing and cloud simulation; numerical simulation and visualization; simulation of instrumentation and control application; simulation technology in diversified higher education; general purpose simulation.
This book constitutes the refereed proceedings of the 13th National Conference on Embedded System Technology, ESTC 2015, held in Beijing, China, in October 2015. The 18 revised full papers presented were carefully reviewed and selected from 63 papers. The topics cover a broad range of fields focusing on research about embedded system technologies, such as smart hardware, system and network, applications and algorithm.
To solve performance problems in modern computing infrastructures, often comprising thousands of servers running hundreds of applications, spanning multiple tiers, you need tools that go beyond mere reporting. You need tools that enable performance analysis of application workflow across the entire enterprise. That's what PDQ (Pretty Damn Quick) provides. PDQ is an open-source performance analyzer based on the paradigm of queues. Queues are ubiquitous in every computing environment as buffers, and since any application architecture can be represented as a circuit of queueing delays, PDQ is a natural fit for analyzing system performance. Building on the success of the first edition, this considerably expanded second edition now comprises four parts. Part I contains the foundational concepts, as well as a new first chapter that explains the central role of queues in successful performance analysis. Part II provides the basics of queueing theory in a highly intelligible style for the non-mathematician; little more than high-school algebra being required. Part III presents many practical examples of how PDQ can be applied. The PDQ manual has been relegated to an appendix in Part IV, along with solutions to the exercises contained in each chapter. Throughout, the Perl code listings have been newly formatted to improve readability. The PDQ code and updates to the PDQ manual are available from the author's web site at www.perfdynamics.com
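To give a flavor of the queueing arithmetic that underpins this style of analysis (a hand-rolled illustration, not PDQ's actual API), the sketch below models a hypothetical two-tier application as a "circuit" of open M/M/1 queues, summing the per-queue residence times:

```python
def mm1_response_time(arrival_rate: float, service_time: float) -> float:
    """Mean response time R = S / (1 - rho) of an open M/M/1 queue,
    where rho = arrival_rate * service_time is the server utilization."""
    rho = arrival_rate * service_time
    if rho >= 1.0:
        raise ValueError("unstable queue: utilization >= 1")
    return service_time / (1.0 - rho)

# Two-tier circuit of queues: end-to-end delay is the sum of the
# per-queue residence times (arrival/service figures are invented).
tiers = [(0.5, 0.4), (0.5, 1.0)]  # (arrivals per second, service time in s)
total = sum(mm1_response_time(lam, s) for lam, s in tiers)
print(f"end-to-end response time: {total:.2f} s")  # 0.50 + 2.00 = 2.50 s
```

The same decomposition into queueing delays is what lets a tool like PDQ represent an arbitrary application architecture and predict its response time analytically rather than by measurement alone.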
Data Mining is the process of posing queries and extracting useful information, patterns and trends previously unknown from large quantities of data [Thu, 00]. It is the process where intelligent tools are applied in order to extract data patterns [JM, 01]. This encompasses a number of different technical approaches, such as cluster analysis, learning classification and association rules, and finding dependencies. Agents are defined as software entities that perform some set of tasks on behalf of users with some degree of autonomy. This research work deals with developing an automated data mining system that applies the familiar data mining algorithms to object-oriented databases using intelligent agents, and proposes a supporting framework. Because the data mining system uses intelligent agents, a new user will be able to interact with it without much technical data mining knowledge. The system will automatically select the appropriate data mining technique, and the necessary fields from the database, at the appropriate time, without expecting users to specify the specific technique and parameters. A new framework is also proposed for incorporating intelligent agents with automated data mining. One of the major goals in developing this system is to give control to the computer, which learns automatically by using intelligent agents.
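As a hypothetical illustration of one of the techniques named above, association rules are typically scored by support and confidence over a set of transactions. The toy market-basket data and function names below are invented for the example:

```python
def support(transactions, itemset):
    """Fraction of transactions that contain every item in itemset."""
    return sum(1 for t in transactions if itemset <= t) / len(transactions)

def confidence(transactions, antecedent, consequent):
    """Confidence of the rule antecedent -> consequent:
    support(antecedent and consequent) / support(antecedent)."""
    return (support(transactions, antecedent | consequent)
            / support(transactions, antecedent))

# Invented toy market-basket data.
transactions = [
    {"bread", "milk"},
    {"bread", "butter"},
    {"bread", "milk", "butter"},
    {"milk"},
]
# Rule {bread} -> {milk}: support 2/4 = 0.5, confidence (2/4)/(3/4) = 2/3.
print(support(transactions, {"bread", "milk"}))
print(confidence(transactions, {"bread"}, {"milk"}))
```

An agent-driven system of the kind described would choose thresholds on these scores (and the mining technique itself) on the user's behalf.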
This book constitutes the proceedings of the 22nd International Conference on Tools and Algorithms for the Construction and Analysis of Systems, TACAS 2016, which took place in Eindhoven, The Netherlands, in April 2016, held as Part of the European Joint Conferences on Theory and Practice of Software, ETAPS 2016. The 44 full papers presented in this volume were carefully reviewed and selected from 175 submissions. They were organized in topical sections named: abstraction and verification; probabilistic and stochastic systems; synthesis; tool papers; concurrency; tool demos; languages and automata; security; optimization; and competition on software verification - SV-COMP.
Making the most efficient use of computer systems has rapidly become a leading topic of interest for the computer industry and its customers alike. However, the focus of these discussions is often on single, isolated, and specific architectural and technological improvements for power reduction and conservation, while ignoring the fact that power efficiency as a ratio of performance to power consumption is equally influenced by performance improvements and architectural power reduction. Furthermore, efficiency can be influenced on all levels of today's system hierarchies, from single cores all the way to distributed Grid environments. To improve execution and power efficiency requires progress in such diverse fields as program optimization, optimization of program scheduling, and power reduction of idling system components for all levels of the system hierarchy. Improving computer system efficiency requires improving system performance and reducing system power consumption. To research and reach reasonable conclusions about system performance we need to not only understand the architectures of our computer systems and the available array of code transformations for performance optimizations, but we also need to be able to express this understanding in performance models good enough to guide decisions about code optimizations for specific systems. This understanding is necessary on all levels of the system hierarchy, from single cores to nodes to full high performance computing (HPC) systems, and eventually to Grid environments with multiple systems and resources.
This work presents link prediction similarity measures for social networks that exploit the degree distribution of the networks. In the context of link prediction in dense networks, the text proposes similarity measures based on Markov inequality degree thresholding (MIDTs), which only consider nodes whose degree is above a threshold for a possible link. Also presented are similarity measures based on cliques (CNC, AAC, RAC), which assign extra weight between nodes sharing a greater number of cliques. Additionally, a locally adaptive (LA) similarity measure is proposed that assigns different weights to common nodes based on the degree distribution of the local neighborhood and the degree distribution of the network. In the context of link prediction in sparse networks, the text introduces a novel two-phase framework that adds edges to the sparse graph to form a boost graph.
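A minimal sketch of the common-neighbor and degree-weighted (Adamic-Adar-style) similarity scores that clique-based measures such as CNC/AAC/RAC build on; the toy graph and helper names below are invented for the example and are not the book's notation:

```python
import math

def common_neighbors(adj, u, v):
    """Number of neighbors shared by u and v."""
    return len(adj[u] & adj[v])

def adamic_adar(adj, u, v):
    """Shared neighbors weighted by 1 / log(degree): rare, low-degree
    common neighbors contribute more than highly connected hubs."""
    return sum(1.0 / math.log(len(adj[w])) for w in adj[u] & adj[v])

# Invented toy graph as an adjacency-set dictionary.
adj = {
    "a": {"b", "c"},
    "b": {"a", "c", "d"},
    "c": {"a", "b", "d"},
    "d": {"b", "c"},
}
print(common_neighbors(adj, "a", "d"))  # 2 (nodes b and c)
print(adamic_adar(adj, "a", "d"))       # 2 / log(3), about 1.82
```

Degree-distribution-aware measures like those in the book refine exactly this kind of score, e.g. by thresholding which candidate nodes are considered at all.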
This book constitutes the thoroughly refereed proceedings of the 22nd International Conference on Computer Networks, CN 2015, held in Brunow, Poland, in June 2015. The 42 revised full papers presented were carefully reviewed and selected from 79 submissions. The papers in these proceedings cover the following topics: computer networks, distributed computer systems, communications and teleinformatics.
At the dawn of the 21st century and the information age, communication and computing power are becoming ever more widely available, virtually pervading almost every aspect of modern socio-economic interactions. Consequently, the potential for realizing a significantly greater number of technology-mediated activities has emerged. Indeed, many of our modern activity fields are heavily dependent upon various underlying systems and software-intensive platforms. Such technologies are commonly used in everyday activities such as commuting, traffic control and management, mobile computing, navigation, and mobile communication. Thus, the correct function of the aforementioned computing systems becomes a major concern. This is all the more important since, in spite of the numerous updates, patches and firmware revisions being constantly issued, newly discovered logical bugs in a wide range of modern software platforms (e.g., operating systems) and software-intensive systems (e.g., embedded systems) are just as frequently being reported. In addition, many of today's products and services are presently being deployed in a highly competitive environment wherein a product or service succeeds in most cases thanks to its quality-to-price ratio for a given set of features. Accordingly, a number of critical aspects have to be considered, such as the ability to pack as many features as needed into a given product or service while concurrently maintaining high quality, reasonable price, and short time-to-market.
GERAD celebrates this year its 25th anniversary. The Center was created in 1980 by a small group of professors and researchers of HEC Montreal, McGill University and the Ecole Polytechnique de Montreal. GERAD's activities achieved sufficient scope to justify its conversion in June 1988 into a Joint Research Centre of HEC Montreal, the Ecole Polytechnique de Montreal and McGill University. In 1996, the Universite du Quebec a Montreal joined these three institutions. GERAD has fifty members (professors), more than twenty research associates and postdoctoral students, and more than two hundred master's and Ph.D. students. GERAD is a multi-university center and a vital forum for the development of operations research. Its mission is defined around the following four complementary objectives: * The original and expert contribution to all research fields in GERAD's area of expertise; * The dissemination of research results in the best scientific outlets as well as in the society in general; * The training of graduate students and post doctoral researchers; * The contribution to the economic community by solving important problems and providing transferable tools.
This book constitutes the proceedings of the Third International Workshop on Foundational and Practical Aspects of Resource Analysis, FOPARA 2013, held in Bertinoro, Italy, in August 2013. The 9 papers presented in this volume were carefully reviewed and selected from 12 submissions. They deal with traditional approaches to complexity analysis, differential privacy, and probabilistic analysis of programs.
It is widely acknowledged that the cost of validation and testing comprises a significant percentage of the overall development costs for electronic systems today, and is expected to escalate sharply in the future. Many studies have shown that up to 70% of the design development time and resources are spent on functional verification. Functional errors manifest themselves very early in the design flow, and unless they are detected up front, they can result in severe consequences, both financially and from a safety viewpoint. Indeed, several recent instances of high-profile functional errors (e.g., the Pentium FDIV bug) have resulted in increased attention paid to verifying the functional correctness of designs. Recent efforts have proposed augmenting the traditional RTL simulation-based validation methodology with formal techniques in an attempt to uncover hard-to-find corner cases, with the goal of trying to reach RTL functional verification closure. However, what is often not highlighted is the fact that in spite of the tremendous time and effort put into such efforts at the RTL and lower levels of abstraction, the complexity of contemporary embedded systems makes it difficult to guarantee functional correctness at the system level under all possible operational scenarios. The problem is exacerbated in current System-on-Chip (SOC) design methodologies that employ Intellectual Property (IP) blocks composed of processor cores, coprocessors, and memory subsystems. Functional verification becomes one of the major bottlenecks in the design of such systems.
This is a new type of edited volume in the Frontiers in Electronic Testing book series devoted to recent advances in electronic circuits testing. The book is a comprehensive elaboration on important topics which capture major research and development efforts today. "Hot" topics of current interest to the test technology community have been selected, and the authors are key contributors to the corresponding topics.
This book constitutes the refereed proceedings of the 18th International Conference on Distributed Computer and Communication Networks, DCCN 2015, held in Moscow, Russia, in October 2015. The 38 revised full papers presented were carefully reviewed and selected from 94 submissions. The papers cover the following topics: computer and communication networks architecture optimization; control in computer and communication networks; performance and QoS evaluation in wireless networks; modeling and simulation of network protocols; queuing and reliability theory; wireless IEEE 802.11, IEEE 802.15, IEEE 802.16, and UMTS (LTE) networks; RFID technology and its application in intellectual transportation networks; protocol design (MAC, routing) for centimeter and millimeter wave mesh networks; internet and web applications and services; application integration in distributed information systems; big data in communication networks.
In the field of formal methods in computer science, concurrency theory is receiving a constantly increasing interest. This is especially true for process algebra. Although it had been originally conceived as a means for reasoning about the semantics of concurrent programs, process algebraic formalisms like CCS, CSP, ACP, the pi-calculus, and their extensions (see, e.g., [154,119,112,22,155,181,30]) were soon used also for comprehending functional and nonfunctional aspects of the behavior of communicating concurrent systems. The scientific impact of process calculi and behavioral equivalences at the base of process algebra is witnessed not only by a very rich literature. It is in fact worth mentioning the standardization procedure that led to the development of the process algebraic language LOTOS [49], as well as the implementation of several modeling and analysis tools based on process algebra, like CWB [70] and CADP [93], some of which have been used in industrial case studies. Furthermore, process calculi and behavioral equivalences are by now adopted in university-level courses to teach the foundations of concurrent programming as well as the model-driven design of concurrent, distributed, and mobile systems. Nevertheless, after 30 years since its introduction, process algebra is rarely adopted in the practice of software development. On the one hand, its technicalities often obfuscate the way in which systems are modeled. As an example, if a process term comprises numerous occurrences of the parallel composition operator, it is hard to understand the communication scheme among the various subterms. On the other hand, process algebra is perceived as being difficult to learn and use by practitioners, as it is not close enough to the way they think of software systems.
You may like...
Dynamics and Design of Space Nets for…
Leping Yang, Qingbin Zhang, …
Hardcover
R3,285
Discovery Miles 32 850
Big Data Processing Using Spark in Cloud
Mamta Mittal, Valentina E. Balas, …
Hardcover
R2,677
Discovery Miles 26 770
Introducing Delphi Programming - Theory…
John Barrow, Linda Miller, …
Paperback
(1) R751
Discovery Miles 7 510
From Local to Global Optimization
A. Migdalas, Panos M. Pardalos, …
Hardcover
R4,204
Discovery Miles 42 040
Reference for Modern Instrumentation…
R.N. Thurston, Allan D. Pierce
Hardcover
R4,086
Discovery Miles 40 860
Cyclostationarity: Theory and Methods…
Fakher Chaari, Jacek Leskow, …
Hardcover
R4,601
Discovery Miles 46 010