Computing performance mattered in the days when hardware was still expensive, because hardware had to be put to the best possible use. Later this criterion was no longer critical, since hardware had become inexpensive. Meanwhile, however, people have realized that performance again plays a significant role, because developing complex applications places a major drain on system resources. This book distinguishes between three levels of performance optimization: the system level, the application level and the business-process level. On each, optimizations can be achieved and cost-cutting potential can be identified. The book presents the relevant theoretical background and measuring methods as well as proposed solutions. An evaluation of network monitors and checklists rounds out the work.
This book offers readers a quick, comprehensive and up-to-date overview of the most important methodologies, technologies, APIs and standards related to the portability and interoperability of cloud applications and services, illustrated by a number of use cases representing a variety of interoperability and portability scenarios. The lack of portability and interoperability between cloud platforms at different service levels is the main issue affecting cloud-based services today. The brokering, negotiation, management, monitoring and reconfiguration of cloud resources are challenging tasks for developers and users of cloud applications due to the different business models associated with resource consumption, and to the variety of services and features offered by different cloud providers. Chapter 1 introduces the concepts of cloud portability and interoperability, together with the issues and limitations arising when such features are lacking or ignored. Subsequently, Chapter 2 provides an overview of the state-of-the-art methodologies and technologies that are currently used or being explored to enable cloud portability and interoperability. Chapter 3 illustrates the main cross-platform cloud APIs and how they can solve interoperability and portability issues. In turn, Chapter 4 presents a set of ready-to-use solutions which, either because of their broad-scale use in cloud computing scenarios or because they utilize established or emerging standards, play a fundamental part in providing interoperable and portable solutions. Lastly, Chapter 5 presents an overview of emerging standards for cloud interoperability and portability. Researchers and developers of cloud-based services will find here a brief survey of the relevant methodologies, APIs and standards, illustrated by case studies and complemented by an extensive reference list for more detailed descriptions of every topic covered.
This book constitutes the refereed proceedings of the 18th National Conference on Computer Engineering and Technology, NCCET 2014, held in Guiyang, China, during July/August 2014. The 18 papers presented were carefully reviewed and selected from 85 submissions. They are organized in topical sections on processor architecture; computer application and software optimization; and technology on the horizon.
This volume presents selected papers from the International Conference on Reliability, Safety, and Hazard. It covers the latest developments in reliability engineering and probabilistic safety assessment, bringing together contributions from a diverse international community across a host of interdisciplinary applications of safety, reliability, and hazard assessment. This book will be of interest to researchers in both academia and industry.
This book constitutes the refereed proceedings of the 14th International Conference on Systems Simulation, Asia Simulation 2014, held in Kitakyushu, Japan, in October 2014. The 32 revised full papers presented were carefully reviewed and selected from 69 submissions. The papers are organized in topical sections on modeling and simulation technology; network simulation; high performance computing and cloud simulation; numerical simulation and visualization; simulation of instrumentation and control applications; simulation technology in diversified higher education; and general purpose simulation.
This book constitutes the refereed proceedings of the 13th National Conference on Embedded System Technology, ESTC 2015, held in Beijing, China, in October 2015. The 18 revised full papers presented were carefully reviewed and selected from 63 papers. The topics cover a broad range of research on embedded system technologies, such as smart hardware, systems and networks, and applications and algorithms.
To solve performance problems in modern computing infrastructures, often comprising thousands of servers running hundreds of applications, spanning multiple tiers, you need tools that go beyond mere reporting. You need tools that enable performance analysis of application workflow across the entire enterprise. That's what PDQ (Pretty Damn Quick) provides. PDQ is an open-source performance analyzer based on the paradigm of queues. Queues are ubiquitous in every computing environment as buffers, and since any application architecture can be represented as a circuit of queueing delays, PDQ is a natural fit for analyzing system performance. Building on the success of the first edition, this considerably expanded second edition now comprises four parts. Part I contains the foundational concepts, as well as a new first chapter that explains the central role of queues in successful performance analysis. Part II provides the basics of queueing theory in a highly intelligible style for the non-mathematician, requiring little more than high-school algebra. Part III presents many practical examples of how PDQ can be applied. The PDQ manual has been relegated to an appendix in Part IV, along with solutions to the exercises contained in each chapter. Throughout, the Perl code listings have been newly formatted to improve readability. The PDQ code and updates to the PDQ manual are available from the author's web site at www.perfdynamics.com.
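The queueing paradigm this blurb describes can be made concrete with the textbook open M/M/1 queue, the simplest instance of the circuits of queueing delays that tools like PDQ analyze. A minimal sketch, assuming a single queue; the function below is illustrative and is not PDQ's actual API:

```python
# Textbook open M/M/1 queue metrics, the kind of result queue-based
# analyzers such as PDQ compute; names here are illustrative, not PDQ's API.

def mm1_metrics(arrival_rate, service_time):
    """Return utilization, mean residence time, and mean queue length."""
    rho = arrival_rate * service_time       # server utilization, must be < 1
    if rho >= 1.0:
        raise ValueError("unstable queue: utilization >= 1")
    residence = service_time / (1.0 - rho)  # mean time in system
    queue_len = rho / (1.0 - rho)           # mean number in system
    return rho, residence, queue_len

# 0.75 requests/s arriving at a server with a 1-second service time:
print(mm1_metrics(0.75, 1.0))  # (0.75, 4.0, 3.0)
```

Note how the residence time grows nonlinearly as utilization approaches 1, which is exactly the behavior that makes queueing models more informative than raw throughput reports.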
Data mining is the process of posing queries and extracting useful information, patterns and trends previously unknown from large quantities of data [Thu, 00]. It is the process whereby intelligent tools are applied in order to extract data patterns [JM, 01]. This encompasses a number of different technical approaches, such as cluster analysis, learning classification and association rules, and finding dependencies. Agents are defined as software entities that perform some set of tasks on behalf of users with some degree of autonomy. This research work deals with developing an automated data mining system that applies the familiar data mining algorithms to object-oriented databases using intelligent agents, and with proposing a framework for it. Because the data mining system uses intelligent agents, a new user will be able to interact with it without much technical knowledge of data mining. The system automatically selects the appropriate data mining technique, and the necessary fields from the database, at the appropriate time, without expecting the users to specify the specific technique and parameters. A new framework is also proposed for incorporating intelligent agents with automated data mining. One of the major goals in developing this system is to hand control to the computer, letting it learn automatically by using intelligent agents.
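As a toy illustration of the idea above, an agent can map simple data characteristics to a mining technique so the user never has to choose one. This is a hypothetical sketch; the dispatch rules below are purely illustrative assumptions, not the framework proposed in the book:

```python
# Hypothetical sketch of an agent that picks a mining technique from simple
# data characteristics, so users need not specify it themselves. The dispatch
# rules are illustrative assumptions, not the book's actual framework.

def choose_technique(has_class_labels: bool, wants_item_rules: bool) -> str:
    """Crude agent policy mapping data traits to a mining technique."""
    if wants_item_rules:
        return "association rules"
    if has_class_labels:
        return "classification"
    return "cluster analysis"

print(choose_technique(has_class_labels=False, wants_item_rules=False))
# -> cluster analysis: unlabeled data with no rule request gets clustered
```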
This book constitutes the proceedings of the 22nd International Conference on Tools and Algorithms for the Construction and Analysis of Systems, TACAS 2016, which took place in Eindhoven, The Netherlands, in April 2016, as part of the European Joint Conferences on Theory and Practice of Software, ETAPS 2016. The 44 full papers presented in this volume were carefully reviewed and selected from 175 submissions. They are organized in topical sections named: abstraction and verification; probabilistic and stochastic systems; synthesis; tool papers; concurrency; tool demos; languages and automata; security; optimization; and competition on software verification - SV-COMP.
Making the most efficient use of computer systems has rapidly become a leading topic of interest for the computer industry and its customers alike. However, the focus of these discussions is often on single, isolated, and specific architectural and technological improvements for power reduction and conservation, while ignoring the fact that power efficiency as a ratio of performance to power consumption is equally influenced by performance improvements and architectural power reduction. Furthermore, efficiency can be influenced on all levels of today's system hierarchies, from single cores all the way to distributed Grid environments. To improve execution and power efficiency requires progress in such diverse fields as program optimization, optimization of program scheduling, and power reduction of idling system components for all levels of the system hierarchy. Improving computer system efficiency requires improving system performance and reducing system power consumption. To research and reach reasonable conclusions about system performance we need to not only understand the architectures of our computer systems and the available array of code transformations for performance optimizations, but we also need to be able to express this understanding in performance models good enough to guide decisions about code optimizations for specific systems. This understanding is necessary on all levels of the system hierarchy, from single cores to nodes to full high performance computing (HPC) systems, and eventually to Grid environments with multiple systems and resources.
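The ratio this preface refers to can be stated directly: efficiency = performance / power, so the ratio rises whether performance goes up or power draw goes down. A small illustrative computation, with assumed numbers:

```python
# Efficiency as the ratio of performance to power consumption; the numbers
# below are assumed for illustration only.

def perf_per_watt(ops_per_sec: float, watts: float) -> float:
    return ops_per_sec / watts

baseline = perf_per_watt(2.0e9, 100.0)  # 2 GOPS at 100 W -> 2.0e7 ops/J
faster   = perf_per_watt(3.0e9, 100.0)  # performance improvement -> 3.0e7
leaner   = perf_per_watt(2.0e9,  80.0)  # power reduction         -> 2.5e7
print(baseline, faster, leaner)         # either route raises the ratio
```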
This work presents link prediction similarity measures for social networks that exploit the degree distribution of the networks. In the context of link prediction in dense networks, the text proposes similarity measures based on Markov inequality degree thresholding (MIDTs), which consider a possible link only between nodes whose degree is above a threshold. Also presented are similarity measures based on cliques (CNC, AAC, RAC), which assign extra weight between nodes sharing a greater number of cliques. Additionally, a locally adaptive (LA) similarity measure is proposed that assigns different weights to common nodes based on the degree distribution of the local neighborhood and the degree distribution of the network. In the context of link prediction in sparse networks, the text introduces a novel two-phase framework that adds edges to the sparse graph to form a boost graph.
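As a rough sketch of the degree-thresholding idea (the exact MIDT definitions are the book's; this common-neighbour variant is an assumption for illustration):

```python
# Illustrative degree-thresholded common-neighbour score: a pair of nodes is
# considered for a link only if both endpoints' degrees exceed a threshold.
# The precise MIDT measures are defined in the book; this variant is assumed.

def thresholded_cn(adj, u, v, min_degree):
    """adj maps each node to its set of neighbours."""
    if len(adj[u]) <= min_degree or len(adj[v]) <= min_degree:
        return 0                   # pair ruled out by the degree threshold
    return len(adj[u] & adj[v])    # common-neighbour count otherwise

adj = {
    "a": {"b", "c", "d"},
    "b": {"a", "c"},
    "c": {"a", "b", "d"},
    "d": {"a", "c"},
}
print(thresholded_cn(adj, "b", "d", min_degree=1))  # 2 (shared: a and c)
```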
This book constitutes the thoroughly refereed proceedings of the 22nd International Conference on Computer Networks, CN 2015, held in Brunow, Poland, in June 2015. The 42 revised full papers presented were carefully reviewed and selected from 79 submissions. The papers in these proceedings cover the following topics: computer networks, distributed computer systems, communications and teleinformatics.
This comprehensive summary of the state of the art in Ultra Wideband (UWB) system engineering takes you through all aspects of UWB design, from components through the propagation channel to system engineering aspects. Mathematical tools and basics are covered, allowing for a complete characterisation and description of the UWB scenario, in both the time and the frequency domains. UWB MMICs, antennas, antenna arrays, and filters are described, as well as quality measurement parameters and design methods for specific applications. The UWB propagation channel is discussed, including a complete mathematical description together with modeling tools. A system analysis is offered, addressing both radio and radar systems, and techniques for optimization and calibration. Finally, an overview of future applications of UWB technology is presented. Ideal for scientists as well as RF system and component engineers working in short range wireless technologies.
At the dawn of the 21st century and the information age, communication and computing power are becoming ever more available, virtually pervading almost every aspect of modern socio-economical interactions. Consequently, the potential for realizing a significantly greater number of technology-mediated activities has emerged. Indeed, many of our modern activity fields are heavily dependent upon various underlying systems and software-intensive platforms. Such technologies are commonly used in everyday activities such as commuting, traffic control and management, mobile computing, navigation, and mobile communication. Thus, the correct function of the aforementioned computing systems becomes a major concern. This is all the more important since, in spite of the numerous updates, patches and firmware revisions being constantly issued, newly discovered logical bugs in a wide range of modern software platforms (e.g., operating systems) and software-intensive systems (e.g., embedded systems) are just as frequently being reported. In addition, many of today's products and services are presently being deployed in a highly competitive environment wherein a product or service succeeds in most cases thanks to its quality-to-price ratio for a given set of features. Accordingly, a number of critical aspects have to be considered, such as the ability to pack as many features as needed into a given product or service while concurrently maintaining high quality, reasonable price, and short time-to-market.
This book constitutes the proceedings of the Third International Workshop on Foundational and Practical Aspects of Resource Analysis, FOPARA 2013, held in Bertinoro, Italy, in August 2013. The 9 papers presented in this volume were carefully reviewed and selected from 12 submissions. They deal with traditional approaches to complexity analysis, differential privacy, and probabilistic analysis of programs.
It is widely acknowledged that the cost of validation and testing comprises a significant percentage of the overall development costs for electronic systems today, and is expected to escalate sharply in the future. Many studies have shown that up to 70% of the design development time and resources are spent on functional verification. Functional errors manifest themselves very early in the design flow, and unless they are detected up front, they can result in severe consequences, both financially and from a safety viewpoint. Indeed, several recent instances of high-profile functional errors (e.g., the Pentium FDIV bug) have resulted in increased attention paid to verifying the functional correctness of designs. Recent efforts have proposed augmenting the traditional RTL simulation-based validation methodology with formal techniques in an attempt to uncover hard-to-find corner cases, with the goal of trying to reach RTL functional verification closure. However, what is often not highlighted is the fact that in spite of the tremendous time and effort put into such efforts at the RTL and lower levels of abstraction, the complexity of contemporary embedded systems makes it difficult to guarantee functional correctness at the system level under all possible operational scenarios. The problem is exacerbated in current System-on-Chip (SOC) design methodologies that employ Intellectual Property (IP) blocks composed of processor cores, coprocessors, and memory subsystems. Functional verification becomes one of the major bottlenecks in the design of such systems.
This is a new type of edited volume in the Frontiers in Electronic Testing book series devoted to recent advances in electronic circuits testing. The book is a comprehensive elaboration on important topics which capture major research and development efforts today. "Hot" topics of current interest to the test technology community have been selected, and the authors are key contributors to the corresponding topics.
This book constitutes the refereed proceedings of the 18th International Conference on Distributed Computer and Communication Networks, DCCN 2015, held in Moscow, Russia, in October 2015. The 38 revised full papers presented were carefully reviewed and selected from 94 submissions. The papers cover the following topics: computer and communication network architecture optimization; control in computer and communication networks; performance and QoS evaluation in wireless networks; modeling and simulation of network protocols; queuing and reliability theory; wireless IEEE 802.11, IEEE 802.15, IEEE 802.16, and UMTS (LTE) networks; RFID technology and its application in intelligent transportation networks; protocol design (MAC, routing) for centimeter- and millimeter-wave mesh networks; internet and web applications and services; application integration in distributed information systems; and big data in communication networks.
In the field of formal methods in computer science, concurrency theory is receiving constantly increasing interest. This is especially true for process algebra. Although it had originally been conceived as a means for reasoning about the semantics of concurrent programs, process algebraic formalisms like CCS, CSP, ACP, the pi-calculus, and their extensions (see, e.g., [154,119,112,22,155,181,30]) were soon also used for comprehending functional and nonfunctional aspects of the behavior of communicating concurrent systems. The scientific impact of process calculi and behavioral equivalences at the base of process algebra is witnessed not only by a very rich literature. It is in fact worth mentioning the standardization procedure that led to the development of the process algebraic language LOTOS [49], as well as the implementation of several modeling and analysis tools based on process algebra, like CWB [70] and CADP [93], some of which have been used in industrial case studies. Furthermore, process calculi and behavioral equivalences are by now adopted in university-level courses to teach the foundations of concurrent programming as well as the model-driven design of concurrent, distributed, and mobile systems. Nevertheless, 30 years after its introduction, process algebra is rarely adopted in the practice of software development. On the one hand, its technicalities often obfuscate the way in which systems are modeled. As an example, if a process term comprises numerous occurrences of the parallel composition operator, it is hard to understand the communication scheme among the various subterms. On the other hand, process algebra is perceived as being difficult to learn and use by practitioners, as it is not close enough to the way they think of software systems.
This book constitutes the proceedings of the First International Conference on Future Access Enablers for Ubiquitous and Intelligent Infrastructures, FABULOUS 2015, held in Ohrid, Republic of Macedonia, in September 2015. The 39 revised papers cover the broad areas of future wireless networks, ambient and assisted living, smart infrastructures, and security, and reflect the fast-developing and vibrant penetration of IoT technologies into diverse areas of human life.
A quality-driven design and verification flow for digital systems is developed and presented in Quality-Driven SystemC Design. Two major enhancements characterize the new flow: first, dedicated verification techniques are integrated which target the different levels of abstraction; second, each verification technique is complemented by an approach to measure the achieved verification quality. The new flow distinguishes three levels of abstraction (namely system level, top level and block level) and can be incorporated into existing approaches. After reviewing the preliminary concepts, the following chapters consider the three levels for modeling and verification in detail. At each level the verification quality is measured. In summary, following the new design and verification flow yields a high overall quality.
Rapidly growing demand for telecommunication services and information interchange has made communication one of the most dynamic branches of modern society's infrastructure. The book introduces the foundations of classical MDP (Markov decision process) theory; problems of finding optimal strategies in these models are investigated, and various problems of improving the characteristics of traditional and multimedia wireless communication networks are considered, together with both classical and new MDP methods that allow the optimal access strategy in teletraffic systems to be defined. The book will be useful to specialists in the field of telecommunication systems, as well as to students and post-graduate students in the corresponding specialties.
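The classical MDP machinery the book builds on can be illustrated with standard value iteration; a minimal sketch, in which the toy admission-control model is an assumed example, not one from the book:

```python
# Standard value iteration for a finite Markov decision process; the toy
# admission-control model below is an assumed example, not one from the book.

def value_iteration(states, actions, P, R, gamma=0.9, tol=1e-6):
    """P[s][a] is a list of (probability, next_state); R[s][a] is a reward."""
    V = {s: 0.0 for s in states}
    while True:
        delta = 0.0
        for s in states:
            best = max(R[s][a] + gamma * sum(p * V[t] for p, t in P[s][a])
                       for a in actions)
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < tol:
            return V

states, actions = ["idle", "busy"], ["accept", "reject"]
P = {"idle": {"accept": [(1.0, "busy")], "reject": [(1.0, "idle")]},
     "busy": {"accept": [(0.5, "busy"), (0.5, "idle")],
              "reject": [(1.0, "idle")]}}
R = {"idle": {"accept": 1.0, "reject": 0.0},
     "busy": {"accept": 0.5, "reject": 0.0}}
print(value_iteration(states, actions, P, R))  # optimal state values
```

The fixed point of this iteration gives the optimal expected discounted reward per state, from which an optimal access policy can be read off by taking the maximizing action in each state.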
This book constitutes the second part of the refereed proceedings of the International Conference on Life System Modeling and Simulation, LSMS 2014, and of the International Conference on Intelligent Computing for Sustainable Energy and Environment, ICSEE 2014, held in Shanghai, China, in September 2014. The 159 revised full papers presented in the three volumes of CCIS 461-463 were carefully reviewed and selected from 572 submissions. The papers of this volume are organized in topical sections on advanced neural network theory and algorithms; advanced evolutionary computing theory and algorithms, such as particle swarm optimization, differential evolution, ant colonies, artificial life, artificial immune systems and genetic algorithms; fuzzy, neural, and fuzzy-neuro hybrids; intelligent modeling, monitoring, and control of complex nonlinear systems; intelligent modeling and simulation of climate change; and communication and control for distributed networked systems.
As more and more hardware platforms support parallelism, parallel programming is gaining momentum. Applications can only leverage the performance of multi-core processors or graphics processing units if they are able to split a problem into smaller ones that can be solved in parallel. The challenges emerging from the development of parallel applications have led to the development of a great number of tools for debugging, performance analysis and other tasks. The proceedings of the 3rd International Workshop on Parallel Tools for High Performance Computing provide a technical overview in order to help engineers, developers and computer scientists decide which tools are best suited to enhancing their current development processes.
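The split-a-problem-into-smaller-ones pattern described above looks like this in practice; a minimal sketch using Python's standard library, where the sum-of-squares workload is an assumed example:

```python
# The split-and-solve-in-parallel pattern: decompose one large computation
# into independent chunks and let a process pool solve them concurrently.
from concurrent.futures import ProcessPoolExecutor

def partial_sum(bounds):
    lo, hi = bounds
    return sum(i * i for i in range(lo, hi))

if __name__ == "__main__":
    # Split one large sum of squares into four independent chunks.
    chunks = [(0, 250_000), (250_000, 500_000),
              (500_000, 750_000), (750_000, 1_000_000)]
    with ProcessPoolExecutor() as pool:
        total = sum(pool.map(partial_sum, chunks))
    print(total)  # equals sum(i*i for i in range(1_000_000))
```

Deciding where such decompositions are correct, and where they fail to scale, is precisely what the debugging and performance-analysis tools surveyed in these proceedings are for.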
Written for those who want to develop their knowledge of the requirements engineering process, whether practitioners or students. Using the latest research and driven by practical experience from industry, Requirements Engineering gives useful hints to practitioners on how to write and structure requirements. It explains the importance of Systems Engineering and the creation of effective solutions to problems. It describes the underlying representations used in system modeling, introduces UML2, and considers the relationship between requirements and modeling. Covering a generic multi-layer requirements process, the book discusses the key elements of effective requirements management. The latest version of DOORS (Version 7), a software tool which serves as an enabler of a requirements management process, is also introduced to the reader here. Additional material and links are available at: http://www.requirementsengineering.info
You may like...
Classical Potential Theory
David H. Armitage, Stephen J. Gardiner
Hardcover
R2,841
Discovery Miles 28 410
From Holomorphic Functions to Complex…
Klaus Fritzsche, Hans Grauert
Hardcover
R3,077
Discovery Miles 30 770
The Arithmetic of Hyperbolic 3-Manifolds
Colin MacLachlan, Alan W. Reid
Hardcover
R2,406
Discovery Miles 24 060
Management and Applications of Complex…
G. Rzevski, S. Syngellakis
Hardcover
R2,290
Discovery Miles 22 900
Advances in the Complex Variable…
Theodore V Hromadka, Robert J Whitley
Hardcover
R4,228
Discovery Miles 42 280