Introductory textbook covering all the main features of the 'web programming' languages PHP and MySQL together with detailed examples that will enable readers (whether students on a taught course or independent learners) to use them to create their own applications or understand existing ones. A particular focus is the use of PHP to generate MySQL commands from a script as it is executed. Each chapter includes aims, a summary and practical exercises (with solutions) to support learning. Chapters are designed to stand alone as far as possible, so that they can be studied independently of the rest of the text by those with some previous knowledge of the languages. There is a comprehensive glossary of technical terms, together with extensive appendices for quick reference of language features.
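As a quick illustration of the technique this blurb highlights, a script generating database commands as it executes, here is a minimal sketch in Python with the standard-library sqlite3 module standing in for PHP and MySQL; the books table and its contents are made-up examples:

```python
import sqlite3

# A minimal sketch, in Python/sqlite3 rather than PHP/MySQL, of a script that
# builds and issues database commands as it runs. The 'books' table and the
# sample rows are hypothetical.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE books (title TEXT, year INTEGER)")

new_books = [("Web Programming with PHP and MySQL", 2015),
             ("Queueing Networks", 2016)]

for title, year in new_books:
    # The INSERT command is generated at run time; '?' placeholders keep the
    # generated command safe from injection, much like prepared statements
    # in PHP's mysqli or PDO extensions.
    conn.execute("INSERT INTO books (title, year) VALUES (?, ?)", (title, year))

for row in conn.execute("SELECT title, year FROM books ORDER BY year"):
    print(row)
```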
This book constitutes the refereed proceedings of the 14th International Scientific Conference on Information Technologies and Mathematical Modeling, named after A. F. Terpugov, ITMM 2015, held in Anzhero-Sudzhensk, Russia, in November 2015. The 35 full papers included in this volume were carefully reviewed and selected from 89 submissions. They are devoted to new results in queueing theory and its applications, addressing specialists in probability theory, random processes, mathematical modeling as well as engineers dealing with logical and technical design and operational management of telecommunication and computer networks.
This book constitutes the refereed proceedings of the 15th International Scientific Conference on Information Technologies and Mathematical Modeling, named after A. F. Terpugov, ITMM 2016, held in Katun, Russia, in September 2016. The 33 full papers presented together with 4 short papers were carefully reviewed and selected from 96 submissions. They are devoted to new results in queueing theory and its applications, addressing specialists in probability theory, random processes, mathematical modeling as well as engineers dealing with logical and technical design and operational management of telecommunication and computer networks.
This important text addresses the latest issues in end-to-end resilient routing in communication networks. The work highlights the main causes of failures of network nodes and links, and presents an overview of resilient routing mechanisms, covering issues related to the Future Internet (FI), wireless mesh networks (WMNs), and vehicular ad-hoc networks (VANETs). Features:
- discusses FI architecture for network virtualization;
- introduces proposals for dedicated and shared protection in random failure scenarios and against malicious activities;
- describes measures for WMN survivability that allow for evaluation of performance under multiple failures;
- proposes a new scheme to enable proactive updates of WMN antenna alignment;
- includes a detailed analysis of the differentiated reliability requirements for VANET applications, with a focus on issues of multi-hop data delivery;
- reviews techniques for improving the stability of end-to-end VANET communication paths based on multipath routing and anycast forwarding.
Systems for Online Transaction Processing (OLTP) and Online Analytical Processing (OLAP) are currently separate. The potential of the latest technologies and changes in operational and analytical applications over the last decade have given rise to the unification of these systems, which can be of benefit for both workloads. Research and industry have reacted and prototypes of hybrid database systems are now appearing. Benchmarks are the standard method for evaluating, comparing and supporting the development of new database systems. Because of the separation of OLTP and OLAP systems, existing benchmarks are only focused on one or the other. With the rise of hybrid database systems, benchmarks to assess these systems will be needed as well. Based on the examination of existing benchmarks, a new benchmark for hybrid database systems is introduced in this book. It is furthermore used to determine the effect of adding OLAP to an OLTP workload and is applied to analyze the impact of typically used optimizations in the historically separate OLTP and OLAP domains in mixed-workload scenarios.
Innovations in hardware architecture, like hyper-threading or multicore processors, mean that parallel computing resources are available for inexpensive desktop computers. In only a few years, many standard software products will be based on concepts of parallel programming implemented on such hardware, and the range of applications will be much broader than that of scientific computing, up to now the main application area for parallel computing. Rauber and Rünger take up these recent developments in processor architecture by giving detailed descriptions of parallel programming techniques that are necessary for developing efficient programs for multicore processors as well as for parallel cluster systems and supercomputers. Their book is structured in three main parts, covering all areas of parallel computing: the architecture of parallel systems, parallel programming models and environments, and the implementation of efficient application algorithms. The emphasis lies on parallel programming techniques needed for different architectures. For this second edition, all chapters have been carefully revised. The chapter on architecture of parallel systems has been updated considerably, with a greater emphasis on the architecture of multicore systems and new material on the latest developments in computer architecture. Lastly, a completely new chapter on general-purpose GPUs and the corresponding programming techniques has been added. The main goal of the book is to present parallel programming techniques that can be used in many situations for a broad range of application areas and which enable the reader to develop correct and efficient parallel programs. Many examples and exercises are provided to show how to apply the techniques. The book can be used as both a textbook for students and a reference book for professionals. The material presented has been used for courses in parallel programming at different universities for many years.
Practical Programming in the Cell Broadband Engine offers a unique programming guide for the Cell Broadband Engine, demonstrating a large number of real-life programs to identify and solve problems in engineering, logic design, VLSI CAD, number theory, graph theory, computational geometry, image processing, and other subjects. Key features include:
- Numerous diagrams, mnemonics, tables, charts and code samples making program development on the CBE as accessible as possible
- A comprehensive reading list for introductory material to the subject matter
- A website providing all source code and sample data for the examples presented in this text
This book constitutes the refereed proceedings of the 30th International Conference ISC High Performance 2015 (formerly known as the International Supercomputing Conference), held in Frankfurt, Germany, in July 2015. The 27 revised full papers presented together with 10 short papers were carefully reviewed and selected from 67 submissions. The papers cover the following topics: cost-efficient data centers, scalable applications, advances in algorithms, scientific libraries, programming models, architectures, performance models and analysis, automatic performance optimization, parallel I/O and energy efficiency.
This book constitutes the refereed proceedings of the 13th National Conference on Embedded System Technology, ESTC 2015, held in Beijing, China, in October 2015. The 18 revised full papers presented were carefully reviewed and selected from 63 submissions. The topics cover a broad range of research on embedded system technologies, such as smart hardware, systems and networks, and applications and algorithms.
Community structure is a salient structural characteristic of many real-world networks. Communities are generally hierarchical, overlapping, multi-scale and coexist with other types of structural regularities of networks. This poses major challenges for conventional methods of community detection. This book will comprehensively introduce the latest advances in community detection, especially the detection of overlapping and hierarchical community structures, the detection of multi-scale communities in heterogeneous networks, and the exploration of multiple types of structural regularities. These advances have been successfully applied to analyze large-scale online social networks, such as Facebook and Twitter. This book provides readers a convenient way to grasp the cutting edge of community detection in complex networks. The thesis on which this book is based was honored with the "Top 100 Excellent Doctoral Dissertations Award" from the Chinese Academy of Sciences and was nominated as the "Outstanding Doctoral Dissertation" by the Chinese Computer Federation.
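For a concrete taste of the overlapping community detection problem the book addresses, here is a minimal sketch using the clique-percolation method shipped with NetworkX; this is a generic illustration, not one of the book's own algorithms, and the toy graph is invented:

```python
import networkx as nx
from networkx.algorithms.community import k_clique_communities

# Three triangles chained at shared nodes: a toy example of overlapping
# communities (this uses clique percolation, not the book's methods).
G = nx.Graph()
G.add_edges_from([(0, 1), (0, 2), (1, 2),    # triangle A
                  (2, 3), (2, 4), (3, 4),    # triangle B, overlapping A at node 2
                  (4, 5), (4, 6), (5, 6)])   # triangle C, overlapping B at node 4

# With k=3 the method percolates triangles; nodes 2 and 4 end up belonging
# to two communities at once, which a disjoint partitioning cannot express.
for community in k_clique_communities(G, 3):
    print(sorted(community))   # [0, 1, 2], [2, 3, 4], [4, 5, 6]
```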
This book constitutes the thoroughly refereed post-conference proceedings of the 7th TPC Technology Conference on Performance Evaluation and Benchmarking, TPCTC 2015, held in conjunction with the 41st International Conference on Very Large Databases (VLDB 2015) in Kohala Coast, Hawaii, USA, in August/September 2015. The 8 papers presented, together with 1 keynote and 1 vision paper, were carefully reviewed and selected from 24 submissions. Many buyers use TPC benchmark results as points of comparison when purchasing new computing systems. The information technology landscape is evolving at a rapid pace, challenging industry experts and researchers to develop innovative techniques for evaluation, measurement and characterization of complex systems. The TPC remains committed to developing new benchmark standards to keep pace, and one vehicle for achieving this objective is the sponsorship of the Technology Conference on Performance Evaluation and Benchmarking (TPCTC).
This proceedings set contains 85 selected full papers presented at the 3rd International Conference on Modelling, Computation and Optimization in Information Systems and Management Sciences - MCO 2015, held on May 11-13, 2015 at Lorraine University, France. The present part I of the 2-volume set includes articles devoted to Combinatorial Optimization and Applications; DC Programming and DCA: Thirty Years of Development; Dynamic Optimization; Modelling and Optimization in Financial Engineering; Multiobjective Programming; Numerical Optimization; Spline Approximation and Optimization; as well as Variational Principles and Applications.
This proceedings set contains 85 selected full papers presented at the 3rd International Conference on Modelling, Computation and Optimization in Information Systems and Management Sciences - MCO 2015, held on May 11-13, 2015 at Lorraine University, France. The present part II of the 2-volume set includes articles devoted to Data Analysis and Data Mining; Heuristic/Metaheuristic Methods for Operational Research Applications; Optimization Applied to Surveillance and Threat Detection; Maintenance and Scheduling; Post-Crisis Banking and Eco-finance Modelling; Transportation; as well as Technologies and Methods for Multi-stakeholder Decision Analysis in Public Settings.
This textbook serves as an introduction to fault tolerance, intended for upper-division undergraduate students, graduate-level students and practicing engineers in need of an overview of the field. Readers will develop skills in modeling and evaluating fault-tolerant architectures in terms of reliability, availability and safety. They will gain a thorough understanding of fault-tolerant computers, including both the theory of how to design and evaluate them and the practical knowledge of achieving fault tolerance in electronic, communication and software systems. Coverage includes fault-tolerance techniques through hardware, software, information and time redundancy. The content is designed to be highly accessible, including numerous examples and exercises. Solutions and PowerPoint slides are available for instructors.
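The simplest of the redundancy techniques listed above, hardware redundancy with majority voting (triple modular redundancy), can be sketched in a few lines of Python; this is a generic illustration rather than an example from the book, and the replicas are hypothetical:

```python
from collections import Counter

def majority_vote(results):
    # The voter masks a single faulty replica by returning the most common value.
    value, count = Counter(results).most_common(1)[0]
    if count < 2:
        raise RuntimeError("no majority: more than one replica failed")
    return value

# Three replicas of the same hypothetical computation; replica_b is faulty.
replica_a = lambda x: x * x
replica_b = lambda x: 0          # stuck-at fault: always returns 0
replica_c = lambda x: x * x

def tmr(x):
    return majority_vote([replica_a(x), replica_b(x), replica_c(x)])

print(tmr(7))  # 49: the 2-of-3 vote masks the single fault
```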
This book offers readers a quick, comprehensive and up-to-date overview of the most important methodologies, technologies, APIs and standards related to the portability and interoperability of cloud applications and services, illustrated by a number of use cases representing a variety of interoperability and portability scenarios. The lack of portability and interoperability between cloud platforms at different service levels is the main issue affecting cloud-based services today. The brokering, negotiation, management, monitoring and reconfiguration of cloud resources are challenging tasks for developers and users of cloud applications due to the different business models associated with resource consumption, and to the variety of services and features offered by different cloud providers. In chapter 1 the concepts of cloud portability and interoperability are introduced, together with the issues and limitations arising when such features are lacking or ignored. Subsequently, chapter 2 provides an overview of the state-of-the-art methodologies and technologies that are currently used or being explored to enable cloud portability and interoperability. Chapter 3 illustrates the main cross-platform cloud APIs and how they can solve interoperability and portability issues. In turn, chapter 4 presents a set of ready-to-use solutions which, either because of their broad-scale use in cloud computing scenarios or because they utilize established or emerging standards, play a fundamental part in providing interoperable and portable solutions. Lastly, chapter 5 presents an overview of emerging standards for cloud interoperability and portability. Researchers and developers of cloud-based services will find here a brief survey of the relevant methodologies, APIs and standards, illustrated by case studies and complemented by an extensive reference list for more detailed descriptions of every topic covered.
Computing performance was important in the days when hardware was still expensive, because hardware had to be put to the best possible use. Later this criterion was no longer critical, since hardware had become inexpensive. Meanwhile, however, people have realized that performance again plays a significant role, because of the major drain on system resources involved in developing complex applications. This book distinguishes between three levels of performance optimization: the system level, the application level and the business-process level. On each level, optimizations can be achieved and cost-cutting potentials can be identified. The book presents the relevant theoretical background and measuring methods as well as proposed solutions. An evaluation of network monitors and checklists rounds out the work.
This book constitutes the refereed proceedings of the 18th National Conference on Computer Engineering and Technology, NCCET 2014, held in Guiyang, China, during July/August 2014. The 18 papers presented were carefully reviewed and selected from 85 submissions. They are organized in topical sections on processor architecture; computer application and software optimization; technology on the horizon.
This book constitutes the refereed proceedings of the 12th European Conference on Wireless Sensor Networks, EWSN 2015, held in Porto, Portugal, in February 2015. The 14 full papers and 9 short papers presented were carefully reviewed and selected from 85 submissions. They cover a wide range of topics, grouped into sessions on services and applications, mobility and delay-tolerance, routing and data dissemination, and human-centric sensing.
The demand for large-scale dependable systems, such as Air Traffic Management, industrial plants and space systems, is attracting the efforts of many world-leading European companies and SMEs in the area, and is expected to increase in the near future. The adoption of Off-The-Shelf (OTS) items plays a key role in such a scenario. OTS items allow mastering complexity and reducing costs and time-to-market; however, achieving these goals while ensuring dependability requirements at the same time is challenging. The CRITICAL STEP project establishes a strategic collaboration between academic and industrial partners, and proposes a framework to support the development of dependable, OTS-based, critical systems. The book introduces methods and tools adopted by the critical systems industry, and surveys key achievements of the CRITICAL STEP project along four directions: fault injection tools, V&V of critical systems, runtime monitoring and evaluation techniques, and security assessment.
This volume presents selected papers from the International Conference on Reliability, Safety, and Hazard. It presents the latest developments in reliability engineering and probabilistic safety assessment, brings together contributions from a diverse international community, and covers all aspects of safety, reliability, and hazard assessment across a host of interdisciplinary applications. This book will be of interest to researchers in both academia and industry.
This book constitutes the refereed proceedings of the 14th International Conference on Systems Simulation, Asia Simulation 2014, held in Kitakyushu, Japan, in October 2014. The 32 revised full papers presented were carefully reviewed and selected from 69 submissions. The papers are organized in topical sections on modeling and simulation technology; network simulation; high performance computing and cloud simulation; numerical simulation and visualization; simulation of instrumentation and control application; simulation technology in diversified higher education; general purpose simulation.
This book describes a model-based development approach for globally-asynchronous locally-synchronous distributed embedded controllers. This approach uses Petri nets as the modeling formalism to create platform- and network-independent models supporting the use of design automation tools. To support this development approach, the Petri nets class in use is extended with time-domains and asynchronous-channels. The authors' approach uses models not only to provide a better understanding of the distributed controller and to improve communication among the stakeholders, but also to support the entire lifecycle, including simulation, verification (using model-checking tools), implementation (relying on automatic code generators), and deployment of the distributed controller onto specific platforms. The approach:
- uses a graphical and intuitive modeling formalism supported by design automation tools;
- enables verification, ensuring that the distributed controller was correctly specified;
- provides flexibility in the implementation and maintenance phases to achieve desired constraints (high performance, low power consumption, reduced costs), enabling porting to different platforms using different communication nodes, without changing the underlying behavioral model.
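To make the formalism concrete, here is a minimal sketch of the basic place/transition token game that the book's extended Petri net class (with time-domains and asynchronous-channels) builds upon; the net below, a toy request/grant handshake, is invented for illustration:

```python
# A minimal place/transition Petri net: places hold tokens, and a transition
# fires by consuming a token from each input place and producing one in each
# output place. This sketches only the core semantics the book extends.
class PetriNet:
    def __init__(self, marking):
        self.marking = dict(marking)    # place name -> token count
        self.transitions = {}           # transition name -> (inputs, outputs)

    def add_transition(self, name, inputs, outputs):
        self.transitions[name] = (inputs, outputs)

    def enabled(self, name):
        inputs, _ = self.transitions[name]
        return all(self.marking.get(p, 0) >= 1 for p in inputs)

    def fire(self, name):
        assert self.enabled(name), f"transition {name} is not enabled"
        inputs, outputs = self.transitions[name]
        for p in inputs:
            self.marking[p] -= 1
        for p in outputs:
            self.marking[p] = self.marking.get(p, 0) + 1

# Hypothetical handshake between two controllers; 'request' and 'grant' play
# the role of channel places linking locally synchronous components.
net = PetriNet({"idle": 1, "request": 0, "grant": 0, "busy": 0})
net.add_transition("send_request", ["idle"], ["request"])
net.add_transition("grant_request", ["request"], ["grant"])
net.add_transition("start_work", ["grant"], ["busy"])

for t in ("send_request", "grant_request", "start_work"):
    net.fire(t)
print(net.marking)   # {'idle': 0, 'request': 0, 'grant': 0, 'busy': 1}
```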
To solve performance problems in modern computing infrastructures, often comprising thousands of servers running hundreds of applications, spanning multiple tiers, you need tools that go beyond mere reporting. You need tools that enable performance analysis of application workflow across the entire enterprise. That's what PDQ (Pretty Damn Quick) provides. PDQ is an open-source performance analyzer based on the paradigm of queues. Queues are ubiquitous in every computing environment as buffers, and since any application architecture can be represented as a circuit of queueing delays, PDQ is a natural fit for analyzing system performance. Building on the success of the first edition, this considerably expanded second edition now comprises four parts. Part I contains the foundational concepts, as well as a new first chapter that explains the central role of queues in successful performance analysis. Part II provides the basics of queueing theory in a highly intelligible style for the non-mathematician, requiring little more than high-school algebra. Part III presents many practical examples of how PDQ can be applied. The PDQ manual has been relegated to an appendix in Part IV, along with solutions to the exercises contained in each chapter. Throughout, the Perl code listings have been newly formatted to improve readability. The PDQ code and updates to the PDQ manual are available from the author's web site at www.perfdynamics.com.
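The queueing paradigm described above can be sketched with the textbook relations for an open M/M/1 queue; the numbers below are invented, and plain Python is used rather than PDQ's own API:

```python
# Back-of-the-envelope M/M/1 relations underlying queueing analyzers like PDQ
# (plain Python, not the PDQ API; arrival rate and service time are made up).
arrival_rate = 0.75   # requests per second entering the queue
service_time = 1.0    # seconds of service each request needs

rho = arrival_rate * service_time      # utilization; must stay below 1
assert rho < 1, "unstable: work arrives faster than it can be served"

residence = service_time / (1 - rho)   # time in system: service plus queueing
number_in_system = arrival_rate * residence   # Little's law: N = lambda * R

print(f"utilization {rho:.0%}, residence time {residence:.1f} s, "
      f"mean number in system {number_in_system:.1f}")
```

At 75% utilization the residence time is already four times the bare service time, the kind of nonlinearity that makes queueing models more informative than raw resource reporting.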
Data mining is the process of posing queries and extracting useful information, patterns and trends previously unknown from large quantities of data [Thu, 00]. It is the process whereby intelligent tools are applied in order to extract data patterns [JM, 01]. This encompasses a number of different technical approaches, such as cluster analysis, learning classification and association rules, and finding dependencies. Agents are defined as software entities that perform some set of tasks on behalf of users with some degree of autonomy. This research work deals with developing an automated data mining system that encompasses the familiar data mining algorithms using intelligent agents in object-oriented databases, and with proposing a framework for it. Because the data mining system uses intelligent agents, a new user will be able to interact with it without much technical knowledge of data mining. The system will automatically select the appropriate data mining technique and the necessary fields from the database at the appropriate time, without expecting the user to specify the technique and its parameters. A new framework is also proposed for incorporating intelligent agents into automated data mining. One of the major goals in developing this system is to hand control to the computer, letting it learn automatically by using intelligent agents.
Architecture Description Languages is an essential reference for both academic and professional researchers in the field of system engineering and design. The papers presented in this volume were selected from the workshop of the same name that was held as part of the World Computer Congress 2004 Conference, held in Toulouse, France, in August 2004. This collection presents significant research and innovative developments and applications from both academic researchers and industry practitioners on topics ranging from semantics to tool and development environments. The aim of an ADL is to formally describe software and hardware architectures. Usually, an ADL describes components, their interfaces, their structures, their interactions (structure of data flow and control flow) and the mappings to hardware systems. A major goal of such a description is to allow analysis with respect to several aspects such as timing, safety and reliability. The papers in this state-of-the-art volume cover such topics of interest as components, connectors and composition; semantics and formalization; verification, simulation and test; tools and development environments; standardization; and industrial projects. To encourage closer interaction between the academic and industrial networking research communities, the workshop welcomed academic research papers as well as industrial contributions, and both are included here, which makes this collection important not only for ADL experts and researchers, but also for all teachers and administrators interested in ADLs.
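What an ADL captures, components, typed interfaces and the connectors composing them, can be hinted at with a small generic sketch; this models no particular ADL from the volume, and all names are hypothetical:

```python
from dataclasses import dataclass, field

# A toy stand-in for an ADL description: components expose named ports,
# and connectors map data flow between them. Purely illustrative.
@dataclass
class Component:
    name: str
    in_ports: list = field(default_factory=list)
    out_ports: list = field(default_factory=list)

@dataclass
class Connector:
    source: tuple   # (component name, output port)
    target: tuple   # (component name, input port)

sensor = Component("sensor", out_ports=["reading"])
filt = Component("filter", in_ports=["raw"], out_ports=["smoothed"])
actuator = Component("actuator", in_ports=["command"])

architecture = [
    Connector(("sensor", "reading"), ("filter", "raw")),
    Connector(("filter", "smoothed"), ("actuator", "command")),
]

# A trivial static check: every connector must bind ports the components
# actually declare. Real ADL toolchains perform far richer analyses
# (timing, safety, reliability) over such descriptions.
components = {c.name: c for c in (sensor, filt, actuator)}
for conn in architecture:
    (src, out), (dst, inp) = conn.source, conn.target
    assert out in components[src].out_ports and inp in components[dst].in_ports
print("all connectors bind declared ports")
```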
You may like...
Reviews of Physiology, Biochemistry and… by Stine Helene Falsig Pedersen (Hardcover): R4,095 (Discovery Miles 40 950)
Amstrad Games Book - Cpc464 & Cpc664 by Kevin Bergin, Andrew Lacey (Hardcover): R689 (Discovery Miles 6 890)