This is a new type of edited volume in the Frontiers in Electronic Testing book series devoted to recent advances in electronic circuit testing. The book is a comprehensive treatment of important topics that capture major research and development efforts today. "Hot" topics of current interest to the test technology community have been selected, and the authors are key contributors to the corresponding topics.
Control system design is a challenging task for practicing engineers. It requires knowledge of different engineering fields, a good understanding of technical specifications, and good communication skills. This book introduces the reader to practical control system design, bridging the gap between theory and practice. The control design techniques presented in the book are all model based, considering the needs and possibilities of practicing engineers. Classical control design techniques are reviewed, and methods for verifying the robustness of a design are presented. It is shown how the designed control algorithm can be implemented in real time and tested, fulfilling different safety requirements. Good design practices and a systematic software development process, following the generic standard IEC 61508, are emphasized throughout. The book is mainly addressed to practicing control and embedded software engineers working in research and development, as well as graduate students who face the challenge of designing control systems and implementing them in real time.
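As a flavor of the classical, model-based techniques such a book reviews, here is a minimal sketch of a discrete-time PID control law closed around a simple first-order plant. Everything in it (the gains, the sample time, and the plant model) is an illustrative assumption, not material from the book.

```python
# Minimal discrete-time PID controller sketch (illustrative only).
# The gains kp, ki, kd and the sample time dt are arbitrary assumptions,
# not values taken from the book.

class DiscretePID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt          # accumulate integral term
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Example: regulate a first-order plant x' = -x + u toward setpoint 1.0.
pid = DiscretePID(kp=2.0, ki=1.0, kd=0.1, dt=0.01)
x = 0.0
for _ in range(1000):
    u = pid.update(1.0, x)
    x += (-x + u) * 0.01   # forward-Euler step of the assumed plant
print(round(x, 3))         # settles near 1.0
```

In practice, and in the spirit of the book's emphasis on safety and real-time implementation, such a controller would still need anti-windup, verified timing, and testing against the applicable safety requirements before deployment.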
This book constitutes the thoroughly refereed proceedings of the 21st International Conference on Computer Networks, CN 2014, held in Brunów, Poland, in June 2014. The 34 revised full papers presented were carefully reviewed and selected for inclusion in the book. The papers in these proceedings cover the following topics: computer networks, teleinformatics and communications, new technologies, queueing theory, innovative applications, and networked and IT-related aspects of e-business.
This book constitutes the proceedings of the 14th IFIP International Conference on Distributed Applications and Interoperable Systems, DAIS 2014, held in Berlin, Germany, in June 2014. The 12 papers presented in this volume were carefully reviewed and selected from 53 submissions. They deal with cloud computing, replicated storage, and large-scale systems.
This book constitutes the thoroughly refereed post-conference proceedings of the Second International Workshop on Energy Efficient Data Centers, E(2)DC 2013, held in Berkeley, CA, USA, in May 2013, co-located with SIGCOMM e-Energy 2013. The 8 revised full papers presented were carefully reviewed and selected from numerous submissions. The papers are organized in topical sections on energy and workload measurement; energy management; and simulators and control.
Ternary means "based on three." This book deals with reliability investigations of networks whose failure-prone components can be in three states - up, down, and middle (mid) - in contrast to traditionally considered networks whose components are only binary (up/down). Extending the binary case to the ternary one allows more realistic and flexible models of communication, flow, and supply networks to be considered.
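To make the ternary idea concrete, the toy sketch below enumerates the states of a small series network whose components are up, mid, or down, and computes the distribution of the system state. The per-component probabilities and the rule that a series system takes the minimum of its components' states are assumptions for illustration, not the book's actual models.

```python
# Hypothetical illustration of a ternary reliability model: each component
# is UP (2), MID (1) or DOWN (0); for a series connection the system state
# is assumed to be the minimum of the component states. Probabilities are
# made up for the example.

from itertools import product

P = {2: 0.7, 1: 0.2, 0: 0.1}  # per-component state probabilities (assumed)

def series_state_dist(n):
    """Distribution of the system state for n i.i.d. components in series."""
    dist = {0: 0.0, 1: 0.0, 2: 0.0}
    for states in product(P, repeat=n):
        prob = 1.0
        for s in states:
            prob *= P[s]
        dist[min(states)] += prob  # series system: worst component governs
    return dist

dist = series_state_dist(3)
print(dist)               # P(system down / mid / up) for 3 components
print(dist[1] + dist[2])  # P(system at least partially operational)
```

A binary model would collapse the mid state into up or down; keeping it separate is exactly the extra flexibility the blurb describes.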
This textbook describes the approaches used by software engineers to build quality into their software. The fundamental principles of software quality management and software process improvement are discussed in detail, with a particular focus on the CMMI framework. Features: includes review questions at the end of each chapter; covers both theory and practice, and provides guidance on applying the theory in an industrial environment; examines all aspects of the software development process, including project planning and tracking, software lifecycles, software inspections and testing, configuration management, and software quality assurance; provides detailed coverage of software metrics and problem solving; describes SCAMPI appraisals and how they form part of the continuous improvement cycle; presents an introduction to formal methods and the Z specification language; discusses UML, which is used to describe the architecture of the system; and reviews the history of the field of software quality.
Why care about hardware/firmware interaction? These interfaces are critical: a solid hardware design married with adaptive firmware can access all the capabilities of an application and overcome limitations caused by poor communication. For the first time, a book has come along that will help hardware engineers and firmware engineers work together to mitigate or eliminate problems that occur when hardware and firmware are not optimally compatible. Solving these issues will save time and money, getting products to market sooner to create more revenue.
This book constitutes the thoroughly refereed post-conference proceedings of the workshops of the 19th International Conference on Parallel Computing, Euro-Par 2013, held in Aachen, Germany, in August 2013. The 99 papers presented were carefully reviewed and selected from 145 submissions. The papers come from seven workshops that have been co-located with Euro-Par in previous years: BigDataCloud (Second Workshop on Big Data Management in Clouds), HeteroPar (11th Workshop on Algorithms, Models and Tools for Parallel Computing on Heterogeneous Platforms), HiBB (Fourth Workshop on High Performance Bioinformatics and Biomedicine), OMHI (Second Workshop on On-chip Memory Hierarchies and Interconnects), PROPER (Sixth Workshop on Productivity and Performance), Resilience (Sixth Workshop on Resiliency in High Performance Computing with Clusters, Clouds, and Grids), and UCHPC (Sixth Workshop on Unconventional High Performance Computing), as well as six newcomers: DIHC (First Workshop on Dependability and Interoperability in Heterogeneous Clouds), FedICI (First Workshop on Federative and Interoperable Cloud Infrastructures), LSDVE (First Workshop on Large Scale Distributed Virtual Environments on Clouds and P2P), MHPC (Workshop on Middleware for HPC and Big Data Systems), PADABS (First Workshop on Parallel and Distributed Agent Based Simulations), and ROME (First Workshop on Runtime and Operating Systems for the Many-core Era). All these workshops focus on the promotion and advancement of all aspects of parallel and distributed computing.
Performance of Web Services provides innovative techniques to improve the performance of Web Services and to meet QoS (Quality of Service) requirements, including performance, reliability, and security. The author presents two levels of Web Services: the "kernel" (the SOAP engine, which delivers messages from one point to another through various networks) and the "server side" (which processes heavy loads of requests). The primary objective of this book is the timely execution of the applications delivered. Case studies and examples are provided throughout this book.
This book constitutes the refereed proceedings of the 7th China Conference on Wireless Sensor Networks, held in Qingdao, China, in October 2013. The 35 revised full papers were carefully reviewed and selected from 191 submissions. The papers cover a wide range of topics in the wireless sensor network field, such as node systems, infrastructures, communication protocols, and data management.
Classes of socio-technical hazards allow a characterization of the risk in technology innovation and clarify the mechanisms underpinning emergent technological risk. "Emerging Technological Risk" provides an interdisciplinary account of risk in socio-technical systems, including hazards which highlight: how technological risk crosses organizational boundaries; how technological trajectories and evolution develop from resolving tensions emerging between the social aspects of organisations and technologies; and how social behaviour shapes, and is shaped by, technology. Addressing an audience from a range of academic and professional backgrounds, "Emerging Technological Risk" is a key source for those who wish to benefit from a detailed and methodical exposure to multiple perspectives on technological risk. By providing a synthesis of recent work on risk that captures the complex mechanisms characterizing the emergence of risk in technology innovation, "Emerging Technological Risk" bridges contributions from many disciplines in order to sustain a fruitful debate. It is one of a series of books developed by the Dependability Interdisciplinary Research Collaboration, funded by the UK Engineering and Physical Sciences Research Council.
This book constitutes the proceedings of the 6th International Workshop on Traffic Monitoring and Analysis, TMA 2014, held in London, UK, in April 2014. The thoroughly refereed 11 full papers presented in this volume were carefully reviewed and selected from 30 submissions. The contributions are organized in topical sections on tools and lessons learned from passive measurement, performance at the edge and Web, content, and inter-domain.
This book constitutes the refereed proceedings of the 25th International Conference on Parallel Computational Fluid Dynamics, ParCFD 2013, held in Changsha, China, in May 2013. The 35 revised full papers presented were carefully reviewed and selected from more than 240 submissions. The papers address issues such as parallel algorithms, developments in software tools and environments, unstructured adaptive mesh applications, industrial applications, atmospheric and oceanic global simulation, interdisciplinary applications and evaluation of computer architectures and software environments.
This volume constitutes the refereed proceedings of the 10th International Conference on Energy Minimization Methods in Computer Vision and Pattern Recognition, EMMCVPR 2015, held in Hong Kong, China, in January 2015. The 36 revised full papers were carefully reviewed and selected from 45 submissions. The papers are organized in topical sections on discrete and continuous optimization; image restoration and inpainting; segmentation; PDE and variational methods; motion, tracking and multiview reconstruction; statistical methods and learning; and medical image analysis.
The two volumes LNCS 8805 and 8806 constitute the thoroughly refereed post-conference proceedings of 18 workshops held at the 20th International Conference on Parallel Computing, Euro-Par 2014, in Porto, Portugal, in August 2014. The 100 revised full papers presented were carefully reviewed and selected from 173 submissions. The volumes include papers from the following workshops: APCI&E (First Workshop on Applications of Parallel Computation in Industry and Engineering), BigDataCloud (Third Workshop on Big Data Management in Clouds), DIHC (Second Workshop on Dependability and Interoperability in Heterogeneous Clouds), FedICI (Second Workshop on Federative and Interoperable Cloud Infrastructures), HeteroPar (12th International Workshop on Algorithms, Models and Tools for Parallel Computing on Heterogeneous Platforms), HiBB (5th Workshop on High Performance Bioinformatics and Biomedicine), LSDVE (Second Workshop on Large Scale Distributed Virtual Environments on Clouds and P2P), MuCoCoS (7th International Workshop on Multi-/Many-core Computing Systems), OMHI (Third Workshop on On-chip Memory Hierarchies and Interconnects), PADAPS (Second Workshop on Parallel and Distributed Agent-Based Simulations), PROPER (7th Workshop on Productivity and Performance), Resilience (7th Workshop on Resiliency in High Performance Computing with Clusters, Clouds, and Grids), REPPAR (First International Workshop on Reproducibility in Parallel Computing), ROME (Second Workshop on Runtime and Operating Systems for the Many-core Era), SPPEXA (Workshop on Software for Exascale Computing), TASUS (First Workshop on Techniques and Applications for Sustainable Ultrascale Computing Systems), UCHPC (7th Workshop on Unconventional High Performance Computing), and VHPC (9th Workshop on Virtualization in High-Performance Cloud Computing).
The resilience of computing systems includes their dependability as well as their fault tolerance and security. It defines the ability of a computing system to perform properly in the presence of various kinds of disturbances and to recover from any service degradation. These properties are immensely important in a world where many aspects of our daily life depend on the correct, reliable and secure operation of often large-scale distributed computing systems. Wolter and her co-editors grouped the 20 chapters from leading researchers into seven parts: an introduction and motivating examples, modeling techniques, model-driven prediction, measurement and metrics, testing techniques, case studies, and conclusions. The core is formed by 12 technical papers, which are framed by motivating real-world examples and case studies, thus illustrating the necessity and the application of the presented methods. While the technical chapters are independent of each other and can be read in any order, the reader will benefit more from the case studies if he or she reads them together with the related techniques. The papers combine topics like modeling, benchmarking, testing, performance evaluation, and dependability, and aim at academic and industrial researchers in these areas as well as graduate students and lecturers in related fields. In this volume, they will find a comprehensive overview of the state of the art in a field of continuously growing practical importance.
Data Intensive Computing refers to capturing, managing, analyzing, and understanding data at volumes and rates that push the frontiers of current technologies. The challenge of data intensive computing is to provide the hardware architectures and related software systems and techniques which are capable of transforming ultra-large data into valuable knowledge. "Handbook of Data Intensive Computing" is written by leading international experts in the field. Experts from academia, research laboratories and private industry address both theory and application. Data intensive computing demands a fundamentally different set of principles than mainstream computing. Data-intensive applications typically are well suited for large-scale parallelism over the data and also require an extremely high degree of fault-tolerance, reliability, and availability. Real-world examples are provided throughout the book. "Handbook of Data Intensive Computing" is designed as a reference for practitioners and researchers, including programmers, computer and system infrastructure designers, and developers. This book can also be beneficial for business managers, entrepreneurs, and investors.
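As a miniature, single-machine analogy to the data parallelism described above, the sketch below splits data into chunks, processes them independently in worker processes, and merges the partial results. Real data-intensive systems do this across clusters with fault tolerance; nothing here is taken from the handbook itself.

```python
# Toy illustration of parallelism over the data: independent chunks are
# processed concurrently and the partial results merged. This is only an
# analogy to cluster-scale data-intensive computing.

from collections import Counter
from multiprocessing import Pool

def count_words(chunk):
    """Process one chunk of the data independently of all others."""
    return Counter(chunk.split())

if __name__ == "__main__":
    data = ["big data big", "data intensive computing", "big computing"]
    with Pool(processes=2) as pool:
        partials = pool.map(count_words, data)  # parallel over the data
    total = sum(partials, Counter())            # merge partial results
    print(total.most_common(3))
```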
This book is dedicated to Prof. Dr. Heinz Gerhäuser on the occasion of his retirement both from the position of Executive Director of the Fraunhofer Institute for Integrated Circuits IIS and from the Endowed Chair of Information Technologies with a Focus on Communication Electronics (LIKE) at the Friedrich-Alexander-Universität Erlangen-Nürnberg. Heinz Gerhäuser's vision and entrepreneurial spirit have made the Fraunhofer IIS one of the most successful and renowned German research institutions. He has been Director of the Fraunhofer IIS since 1993, and under his leadership it has grown to become the largest of Germany's 60 Fraunhofer Institutes, a position it retains to this day, currently employing over 730 staff. Likely his most important scientific as well as application-related contribution was his pivotal role in the development of the mp3 format, which would later become a worldwide success. The contributions to this Festschrift were written by both Fraunhofer IIS staff and external project team members in appreciation of Prof. Dr. Gerhäuser's lifetime academic achievements and his inspiring leadership at the Fraunhofer IIS. The papers reflect the broad spectrum of the institute's research activities and are grouped into sections on circuits, information systems, visual computing, and audio and multimedia. They provide academic and industrial researchers in fields like signal processing, sensor networks, microelectronics, and integrated circuits with an up-to-date overview of research results that have a huge potential for cutting-edge industrial applications.
This book constitutes the thoroughly refereed post-conference proceedings of the 4th International ICST Conference on Sensor Systems and Software, S-Cube 2013, held in Lucca, Italy, in 2013. The 8 revised full papers and 2 invited papers presented cover contributions on different technologies for wireless sensor networks, including security protocols, middleware, analysis tools, and frameworks.
This book constitutes the proceedings of the 27th International Conference on Architecture of Computing Systems, ARCS 2014, held in Lübeck, Germany, in February 2014. The 20 papers presented in this volume were carefully reviewed and selected from 44 submissions. They are organized in topical sections named: parallelization: applications and methods; self-organization and trust; system design; system design and sensor systems; virtualization: I/O, memory, cloud; and dependability: safety, security, and reliability aspects.
This book constitutes the refereed proceedings of the 13th International Scientific Conference on Information Technologies and Mathematical Modeling, named after A.F. Terpugov, ITMM 2014, held in Anzhero-Sudzhensk, Russia, in November 2014. The 50 full papers included in this volume were carefully reviewed and selected from 254 submissions. The papers focus on probabilistic methods and models, queueing theory, telecommunication systems, and software engineering.
This book constitutes the refereed proceedings of the 11th European Conference on Wireless Sensor Networks, EWSN 2014, held in Oxford, UK, in February 2014.
These proceedings represent decades of research, teaching, and application in the field. Image processing, fusion and information technology, digital radio communication, WiMAX, electrical engineering, the VLSI approach to processor design, and embedded systems design are dealt with in detail through models and illustrative techniques.
Queueing theory applications can be found in many walks of life, including transportation, manufacturing, telecommunications, computer systems, and more. However, the most prevalent applications of queueing theory are in the telecommunications field. Queueing Theory for Telecommunications: Discrete Time Modelling of a Single Node System focuses on discrete-time modeling and illustrates that most queueing systems encountered in real life can be set up as Markov chains. This feature is unique because the models are set up in such a way that matrix-analytic methods can be used to analyze them. Queueing Theory for Telecommunications is the most relevant book available on queueing models designed for applications to telecommunications. It presents clear, concise theories behind how to model and analyze key single-node queues in discrete time, using special tools presented in the second chapter. The text also delves into the types of single-node queues that are very frequently encountered in telecommunication systems modeling, and provides simple methods for analyzing them. Where appropriate, alternative analysis methods are also presented. This book is intended for advanced-level students and researchers concentrating on engineering, computer science, and mathematics as a secondary text or reference book. Professionals who work in the related industries of telecommunications, industrial engineering, and communications engineering will find this book useful as well.
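To illustrate the discrete-time Markov chain viewpoint the book takes, the sketch below sets up a toy Geo/Geo/1 single-node queue as a transition matrix and iterates to its stationary distribution. The arrival and service probabilities and the state-space truncation are assumptions for the example; the book's matrix-analytic methods handle the infinite state space directly rather than truncating it.

```python
# Toy discrete-time Geo/Geo/1 queue set up as a Markov chain, in the spirit
# of the book's approach. Per-slot arrival probability p and service-
# completion probability q are assumed values; the state space is truncated
# at N for illustration only.

import numpy as np

p, q, N = 0.3, 0.5, 50           # assumed parameters and truncation level

T = np.zeros((N + 1, N + 1))
T[0, 0], T[0, 1] = 1 - p, p      # empty queue: no arrival, or one arrival
for n in range(1, N):
    T[n, n - 1] = q * (1 - p)    # departure and no arrival
    T[n, n + 1] = p * (1 - q)    # arrival and no departure
    T[n, n] = 1 - T[n, n - 1] - T[n, n + 1]
T[N, N - 1] = q * (1 - p)        # reflecting upper boundary
T[N, N] = 1 - T[N, N - 1]

pi = np.ones(N + 1) / (N + 1)    # power-iterate toward the stationary vector
for _ in range(5000):
    pi = pi @ T
print(round(pi[0], 4))                        # P(queue empty)
print(round(float(pi @ np.arange(N + 1)), 4)) # mean queue length
```

The same chain could be analyzed exactly with the matrix-analytic machinery the book develops; power iteration on a truncated matrix is just the simplest way to see the Markov chain structure at work.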