This book covers reliability assessment and prediction of new technologies such as next generation networks that use cloud computing, Network Function Virtualization (NFV), Software Defined Networking (SDN), Next Generation Transport, evolving wireless systems, digital VoIP telephony, and reliability testing techniques specific to Next Generation Networks (NGN). The book introduces each technology to the reader first, followed by advanced reliability techniques applicable to both hardware and software reliability analysis. It covers methodologies that can predict system-level downtime from component failure rates. The book's goal is to familiarize the reader with the analytical techniques, tools and methods necessary for analyzing very complex networks built from very different technologies. The book lets readers quickly learn the technologies behind currently evolving NGNs and apply advanced Markov modeling and Software Reliability Engineering (SRE) techniques for assessing their operational reliability. Covers reliability analysis of advanced networks and provides basic mathematical tools and analysis techniques and methodology for reliability and quality assessment; develops Markov and software engineering models to predict reliability; covers both hardware and software reliability for next generation technologies.
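To give a flavor of the Markov techniques involved, the sketch below (illustrative only, not taken from the book) computes the steady-state availability of a single repairable component modeled as a two-state Markov chain; the failure and repair rates are assumed values.

```python
# Minimal sketch: steady-state availability of one repairable component
# as a two-state Markov chain (Up <-> Down). Rates are assumed, in events/hour.

lam = 1.0 / 10_000.0   # failure rate: one failure per 10,000 hours (assumed MTBF)
mu = 1.0 / 4.0         # repair rate: four-hour mean time to repair (assumed MTTR)

# Balance equation pi_up * lam = pi_down * mu, with pi_up + pi_down = 1, gives:
availability = mu / (lam + mu)
annual_downtime_min = (1.0 - availability) * 365 * 24 * 60

print(f"Steady-state availability: {availability:.6f}")
print(f"Expected downtime: {annual_downtime_min:.1f} minutes/year")
```

Full system models chain many such states together; the single-component case above is the smallest instance of the kind of analysis the book generalizes.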
Managing Complexity is the first book that clearly defines the concept of Complexity, explains how Complexity can be measured and tuned, and describes the seven key features of Complex Systems: 1. Connectivity 2. Autonomy 3. Emergence 4. Nonequilibrium 5. Non-linearity 6. Self-organisation 7. Co-evolution. The thesis of the book is that the complexity of the environment in which we work and live offers new opportunities and that the best strategy for surviving and prospering under conditions of complexity is to develop adaptability to perpetually changing conditions. An effective method for designing adaptability into business processes using multi-agent technology is presented and illustrated by several extensive examples, including adaptive, real-time scheduling of taxis, sea-going tankers, road transport, supply chains, railway trains, production processes and swarms of small space satellites. Additional case studies include adaptive servicing of the International Space Station; adaptive processing of design changes of large structures such as wings of the largest airliner in the world; and dynamic data mining, knowledge discovery and distributed semantic processing. Finally, the book provides a foretaste of the next generation of complex issues, notably the Internet of Things, Smart Cities, Digital Enterprises and Smart Logistics.
This book covers the important aspects involved in making cognitive radio devices portable, mobile and green, while also extending their service life. At the same time, it presents a variety of established theories and practices concerning cognitive radio from academia and industry. Cognitive radio can be utilized as a backbone communication medium for wireless devices. To effectively achieve its commercial application, various aspects of quality of service and energy management need to be addressed. The topics covered in the book include energy management and quality of service provisioning at Layer 2 of the protocol stack from the perspectives of medium access control, spectrum selection, and self-coexistence for cognitive radio networks.
Healthcare Informatics: Improving Efficiency and Productivity examines the complexities involved in managing resources in our healthcare system and explains how management theory and informatics applications can increase efficiencies in various functional areas of healthcare services. Delving into data and project management and advanced analytics, this book details, and provides supporting evidence for, the strategic concepts that are critical to achieving successful healthcare information technology (HIT), information management, and electronic health record (EHR) applications. These include the vital importance of involving nursing staff in rollouts, engaging physicians early in any process, and developing an organizational culture more receptive to digital information and systems adoption. "We owe it to ourselves and future generations to do all we can to make our healthcare systems work smarter, be more effective, and reach more people. The power to know is at our fingertips; we need only embrace it." - From the foreword by James H. Goodnight, PhD, CEO, SAS. Bridging the gap from theory to practice, the book discusses actual informatics applications that have been incorporated by various healthcare organizations and the corresponding management strategies that led to their successful employment. It describes several working projects, including: a computer physician order entry (CPOE) system project at a North Carolina hospital; e-commerce self-service patient check-in at a New Jersey hospital; the informatics project that turned a healthcare system's paper-based resources into digital assets; projects at one hospital that helped reduce excess length of stay, improve patient safety, and improve efficiency with an ADE alert system; and a healthcare system's use of algorithms to identify patients at risk for hepatitis. Offering the guidance that healthcare specialists need to make use of various informatics platforms, this book provides the motivation and the proven methods that can be adapted and applied to any number of staff, patient, or regulatory concerns.
To solve performance problems in modern computing infrastructures, often comprising thousands of servers running hundreds of applications, spanning multiple tiers, you need tools that go beyond mere reporting. You need tools that enable performance analysis of application workflow across the entire enterprise. That's what PDQ (Pretty Damn Quick) provides. PDQ is an open-source performance analyzer based on the paradigm of queues. Queues are ubiquitous in every computing environment as buffers, and since any application architecture can be represented as a circuit of queueing delays, PDQ is a natural fit for analyzing system performance. Building on the success of the first edition, this considerably expanded second edition now comprises four parts. Part I contains the foundational concepts, as well as a new first chapter that explains the central role of queues in successful performance analysis. Part II provides the basics of queueing theory in a highly intelligible style for the non-mathematician, with little more than high-school algebra required. Part III presents many practical examples of how PDQ can be applied. The PDQ manual has been relegated to an appendix in Part IV, along with solutions to the exercises contained in each chapter. Throughout, the Perl code listings have been newly formatted to improve readability. The PDQ code and updates to the PDQ manual are available from the author's web site at www.perfdynamics.com.
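To illustrate the queueing paradigm PDQ is built on, here is the standard open M/M/1 arithmetic for a single service center (this is textbook queueing theory, not the PDQ API itself, and the workload numbers are made up):

```python
# Illustrative open M/M/1 queue arithmetic (standard theory, not the PDQ API):
# response time of a single service center under increasing load.

def mm1_residence_time(arrival_rate: float, service_time: float) -> float:
    """R = S / (1 - rho) for an open M/M/1 queue, where rho = lambda * S."""
    rho = arrival_rate * service_time
    if rho >= 1.0:
        raise ValueError(f"Unstable queue: utilization {rho:.2f} >= 1")
    return service_time / (1.0 - rho)

# Example: 0.75 requests/ms arriving at a center with 1 ms service demand.
lam, S = 0.75, 1.0
R = mm1_residence_time(lam, S)
print(f"Utilization: {lam * S:.0%}, residence time: {R:.1f} ms")
print(f"Queue length (Little's law): {lam * R:.2f} requests")
```

A PDQ model composes many such service centers into a circuit and solves them together; the nonlinear blow-up of R as utilization approaches 100% is exactly the effect such models expose.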
"Discrete-Time Linear Systems: Theory and Design with Applications "combines system theory and design in order to show the importance of system theory and its role in system design. The book focuses on system theory (including optimal state feedback and optimal state estimation) and system design (with applications to feedback control systems and wireless transceivers, plus system identification and channel estimation).
Modern embedded systems require high performance, low cost and low power consumption. Such systems typically consist of a heterogeneous collection of processors, specialized memory subsystems, and partially programmable or fixed-function components. This heterogeneity, coupled with issues such as hardware/software partitioning, mapping, scheduling, etc., leads to a large number of design possibilities, making performance debugging and validation of such systems a difficult problem. Embedded systems are used to control safety-critical applications such as flight control, automotive electronics and healthcare monitoring. Clearly, developing reliable software/systems for such applications is of utmost importance. This book describes a host of debugging and verification methods which can help to achieve this goal.
The innovation process of open source software is driven in large part by its end-users, so this aspect of open source software remains significant beyond the realm of traditional software development. Open Source Software Dynamics, Processes, and Applications is a multidisciplinary collection of research and approaches on the applications and processes of open source software. Highlighting the development processes performed by software programmers, the motivations of its participants, and the legal and economic issues that have been raised, this book is essential for scholars, students, and practitioners in the fields of software engineering and management as well as sociology.
Fundamental Problems in Computing is published in honor of Professor Daniel J. Rosenkrantz, a distinguished researcher in Computer Science. Professor Rosenkrantz has made seminal contributions to many subareas of Computer Science, including formal languages and compilers, automata theory, algorithms, database systems, very large scale integrated systems, fault-tolerant computing and discrete dynamical systems. For many years, Professor Rosenkrantz served as the Editor-in-Chief of the Journal of the Association for Computing Machinery (JACM), a very prestigious archival journal in Computer Science. His contributions to Computer Science have earned him many awards, including ACM Fellowship and the ACM SIGMOD Contributions Award.
Computing performance was important in the days when hardware was still expensive, because hardware had to be put to the best possible use. Later this criterion was no longer critical, since hardware had become inexpensive. Meanwhile, however, people have realized that performance again plays a significant role, because of the major drain on system resources involved in developing complex applications. This book distinguishes between three levels of performance optimization: the system level, the application level and the business process level. On each, optimizations can be achieved and cost-cutting potential can be identified. The book presents the relevant theoretical background and measuring methods as well as proposed solutions. An evaluation of network monitors and checklists rounds out the work.
Systems for Online Transaction Processing (OLTP) and Online Analytical Processing (OLAP) are currently separate. The potential of the latest technologies and changes in operational and analytical applications over the last decade have given rise to the unification of these systems, which can be of benefit for both workloads. Research and industry have reacted and prototypes of hybrid database systems are now appearing. Benchmarks are the standard method for evaluating, comparing and supporting the development of new database systems. Because of the separation of OLTP and OLAP systems, existing benchmarks are only focused on one or the other. With the rise of hybrid database systems, benchmarks to assess these systems will be needed as well. Based on the examination of existing benchmarks, a new benchmark for hybrid database systems is introduced in this book. It is furthermore used to determine the effect of adding OLAP to an OLTP workload and is applied to analyze the impact of typically used optimizations in the historically separate OLTP and OLAP domains in mixed-workload scenarios.
Jack Ganssle has been shaping the careers of embedded engineers for more than 20 years, through four books, over 500 articles, a weekly column, and continuous lecturing. Technology moves fast, and much has changed since the first edition of this best-selling classic. The new edition reflects the author's new and ever-evolving philosophy in the face of new technology and realities.
This book presents intuitive explanations of the principles and applications of power system resiliency, as well as a number of straightforward and practical methods for analyzing the impact of risk events on power system operations. It also describes the challenges of modelling distribution networks, optimal scheduling, multi-stage planning, deliberate attacks, cyber-physical systems and SCADA-based smart grids, and how to overcome these challenges. Further, it highlights resiliency issues using various methods, including strengthening the system against high-impact, low-frequency events and fast recovery of system properties. A large number of specialists have collaborated to provide innovative solutions and research in power systems resiliency. They discuss the fundamentals and contemporary materials of power systems resiliency, theoretical and practical issues, as well as current issues and methods for controlling the risk of attacks and other threats to AC power systems. The book includes theoretical research, significant results, case studies, and practical implementation processes to offer insights into electric power engineering and energy systems. Showing how systems should respond in case of malicious attacks, and helping readers to decide on the best approaches, this book is essential reading for electrical engineers, researchers and specialists. The book is also useful as a reference for undergraduate and graduate students studying the resiliency and reliability of power systems.
This book presents cutting-edge research contributions that address various aspects of network design, optimization, implementation, and application of cognitive radio technologies. It demonstrates how to make better utilization of the available spectrum, cognitive radios and spectrum access to achieve effective spectrum sharing between licensed and unlicensed users. The book provides academics and researchers essential information on current developments and future trends in cognitive radios for possible integration with the upcoming 5G networks. In addition, it includes a brief introduction to cognitive radio networks for newcomers to the field.
With the rise of mobile and wireless technologies, more sustainable networks are necessary to support communication. These next-generation networks can now be utilized to extend the growing era of the Internet of Things. Enabling Technologies and Architectures for Next-Generation Networking Capabilities is an essential reference source that explores the latest research and trends in large-scale 5G technologies deployment, software-defined networking, and other emerging network technologies. Featuring research on topics such as data management, heterogeneous networks, and spectrum sensing, this book is ideally designed for computer engineers, technology developers, network administrators and researchers, professionals, and graduate-level students seeking coverage on current and future network technologies.
The creation and consumption of content, especially visual content, is ingrained into our modern world. This book contains a collection of texts centered on the evaluation of image retrieval systems. To enable reproducible evaluation, we must create standardized benchmarks and evaluation methodologies. The individual chapters in this book highlight major issues and challenges in evaluating image retrieval systems and describe various initiatives that provide researchers with the necessary evaluation resources. In particular, they describe activities within ImageCLEF, an initiative to evaluate cross-language image retrieval systems which has been running as part of the Cross Language Evaluation Forum (CLEF) since 2003. To this end, the editors collected contributions from a range of people: those involved directly with ImageCLEF, such as the organizers of specific image retrieval or annotation tasks; participants who have developed techniques to tackle the challenges set forth by the organizers; and people from industry and academia involved with image retrieval and evaluation generally. Mostly written for researchers in academia and industry, the book stresses the importance of combining textual and visual information - a multimodal approach - for effective retrieval. It provides the reader with clear ideas about information retrieval and its evaluation in contexts and domains such as healthcare, robot vision, press photography, and the Web.
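As a concrete taste of such evaluation methodology, the sketch below computes average precision, a standard retrieval measure of the kind reported in CLEF-style benchmarks (the ranking and relevance counts are hypothetical, not taken from the book):

```python
# Illustrative: average precision (AP) of one ranked result list.

def average_precision(ranked_relevance: list[bool], total_relevant: int) -> float:
    """Mean of precision@k taken over the ranks k where relevant items appear."""
    hits, precision_sum = 0, 0.0
    for k, is_relevant in enumerate(ranked_relevance, start=1):
        if is_relevant:
            hits += 1
            precision_sum += hits / k
    return precision_sum / total_relevant if total_relevant else 0.0

# A system returns 6 images; ranks 1, 3 and 6 are relevant, and the collection
# holds 4 relevant images in total (hypothetical numbers).
print(average_precision([True, False, True, False, False, True], total_relevant=4))
# -> about 0.54
```

Mean average precision (MAP), commonly reported in such evaluations, is simply this value averaged over a set of query topics.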
This book presents the technical program of the International Embedded Systems Symposium (IESS) 2009. Timely topics, techniques and trends in embedded system design are covered by the chapters in this volume, including modelling, simulation, verification, test, scheduling, platforms and processors. Particular emphasis is paid to automotive systems and wireless sensor networks. Sets of actual case studies in the area of embedded system design are also included. Over recent years, embedded systems have gained an enormous amount of processing power and functionality and now enter numerous application areas, due to the fact that many of the formerly external components can now be integrated into a single System-on-Chip. This tendency has resulted in a dramatic reduction in the size and cost of embedded systems. As a unique technology, the design of embedded systems is an essential element of many innovations. Embedded systems meet their performance goals, including real-time constraints, through a combination of special-purpose hardware and software components tailored to the system requirements. Both the development of new features and the reuse of existing intellectual property components are essential to keeping up with ever more demanding customer requirements. Furthermore, design complexities are steadily growing with an increasing number of components that have to cooperate properly. Embedded system designers have to cope with multiple goals and constraints simultaneously, including timing, power, reliability, dependability, maintenance, packaging and, last but not least, price.
Requirements Management has demonstrated enormous potential for the optimization of development projects over the last few years. Especially in the climate of an increasingly competitive market, Requirements Management helps in carrying out developments faster, cheaper and with higher quality. This book focuses on the interfaces of Requirements Management to the other disciplines of Systems Engineering, for example Project Management, Change Management, and Configuration and Version Management. To this end, an introduction to Requirements Management and Requirements Development is given, along with a short sketch of Systems Engineering, and the necessary inputs and resulting outputs of Requirements Management are explained. Using these flows of information, it is shown how Requirements Management can support and optimize the other project disciplines, and how important a functioning Requirements Management process therefore is for all areas of development.
This volume presents the most significant articles from each of the fields represented at the 1992 conference on Work with Display Units.
As systems being developed by industry and government grow larger and more complex, the need for superior specification and verification approaches and tools becomes increasingly vital. The developer and customer must have complete confidence that the design produced is correct, and that it meets formal development and verification standards. In this text, UML expert Dr. Doron Drusinsky compiles the latest information on the application of UML (Unified Modeling Language) statecharts, temporal logic, automata, and other advanced tools for run-time monitoring and verification. This is the first book that deals specifically with UML verification techniques. This important information is introduced within the context of real-life examples and solutions, particularly focusing on national defense applications. A practical text, as opposed to a high-level theoretical one, it emphasizes getting the system developer up to speed on using the tools necessary for daily practice.
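To make the idea of run-time monitoring concrete, here is a minimal sketch (not Dr. Drusinsky's tooling; event names and the property are hypothetical) that checks the temporal property "every request is eventually followed by a response" over a finite execution trace:

```python
# Minimal run-time monitor for the bounded-trace property
# "every 'request' is eventually matched by a 'response'".

def monitor(trace: list[str]) -> bool:
    pending = 0                       # requests still awaiting a response
    for event in trace:
        if event == "request":
            pending += 1
        elif event == "response" and pending > 0:
            pending -= 1
    return pending == 0               # property holds iff nothing remains pending

print(monitor(["request", "response", "request", "response"]))  # True
print(monitor(["request", "request", "response"]))              # False: one unmatched
```

Statechart- and temporal-logic-based monitors of the kind the book describes generalize this pattern: the monitor runs alongside the system and flags executions that violate the specified property.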
This book is dedicated to Prof. Dr. Heinz Gerhäuser on the occasion of his retirement both from the position of Executive Director of the Fraunhofer Institute for Integrated Circuits IIS and from the Endowed Chair of Information Technologies with a Focus on Communication Electronics (LIKE) at the Friedrich-Alexander-Universität Erlangen-Nürnberg. Heinz Gerhäuser's vision and entrepreneurial spirit have made the Fraunhofer IIS one of the most successful and renowned German research institutions. He has been Director of the Fraunhofer IIS since 1993, and under his leadership it has grown to become the largest of Germany's 60 Fraunhofer Institutes, a position it retains to this day, currently employing over 730 staff. Likely his most important scientific as well as application-related contribution was his pivotal role in the development of the mp3 format, which would later become a worldwide success. The contributions to this Festschrift were written by both Fraunhofer IIS staff and external project team members in appreciation of Prof. Dr. Gerhäuser's lifetime academic achievements and his inspiring leadership at the Fraunhofer IIS. The papers reflect the broad spectrum of the institute's research activities and are grouped into sections on circuits, information systems, visual computing, and audio and multimedia. They provide academic and industrial researchers in fields like signal processing, sensor networks, microelectronics, and integrated circuits with an up-to-date overview of research results that have a huge potential for cutting-edge industrial applications.
This book aims to deconstruct ethnography to alert systems designers, and other stakeholders, to the issues presented by new approaches that move beyond the studies of 'work' and 'work practice' within the social sciences (in particular anthropology and sociology). The theoretical and methodological apparatus of the social sciences distort the social and cultural world as lived in and understood by ordinary members, whose common-sense understandings shape the actual milieu into which systems are placed and used. In Deconstructing Ethnography the authors show how 'new' calls are returning systems design to 'old' and problematic ways of understanding the social. They argue that systems design can be appropriately grounded in the social through the ordinary methods that members use to order their actions and interactions. This work is written for post-graduate students and researchers alike, as well as design practitioners who have an interest in bringing the social to bear on design in a systematic rather than a piecemeal way. This is not a 'how to' book, but instead elaborates the foundations upon which the social can be systematically built into the design of ubiquitous and interactive systems.