The advent of multimedia technology is creating a number of new problems in the fields of computer and communication systems. Perhaps the most important of these problems in communication, and certainly the most interesting, is that of designing networks to carry multimedia traffic, including digital audio and video, with acceptable quality. The main challenge in integrating the different services needed by the different types of traffic into the same network (an objective that is made worthwhile by its obvious economic advantages) is to satisfy the performance requirements of continuous media applications, as the quality of audio and video streams at the receiver can be guaranteed only if bounds on delay, delay jitter, bandwidth, and reliability are guaranteed by the network. Since such guarantees cannot be provided by traditional packet-switching technology, a number of researchers and research groups during the last several years have tried to meet the challenge by proposing new protocols, or modifications of old ones, to make packet-switching networks capable of delivering audio and video with good quality while carrying all sorts of other traffic. The focus of this book is on HeiTS (the Heidelberg Transport System), and its contributions to integrated services network design. The HeiTS architecture is based on using the Internet Stream Protocol Version 2 (ST-II) at the network layer. The Heidelberg researchers were the first to implement ST-II. The author documents this activity in the book and provides thorough coverage of the improvements made to the protocol. The book also includes coverage of HeiTP as used in error handling, error control, congestion control, and the full specification of ST2+, a new version of ST-II. The ideas and techniques implemented by the Heidelberg group and their coverage in this volume apply to many other approaches to multimedia networking.
Computer Aided Design (CAD) is today a widely used expression referring to the study of ways in which computers can be used to expedite the design process. This can include the design of physical systems, architectural environments, manufacturing processes, and many other areas. This book concentrates on one area of CAD: the design of computer systems. Within this area, it focuses on just two aspects of computer design, the specification and the simulation of digital systems. VLSI design requires support in many other CAD areas, including automatic layout, IC fabrication analysis, test generation, and others. The problem of specification is unique, however, in that it is often the first one encountered in large chip designs, and one that is unlikely ever to be completely automated. This is true because until a design's objectives are specified in a machine-readable form, there is no way for other CAD tools to verify that the target system meets them. And unless the specifications can be simulated, it is unlikely that designers will have confidence in them, since specifications are potentially erroneous themselves. (In this context the term target system refers to the hardware and/or software that will ultimately be fabricated.) On the other hand, since the functionality of a VLSI chip is ultimately determined by its layout geometry, one might question the need for CAD tools that work with areas other than layout.
This book contains the papers from the IFIP Working Group 8.1 conference on Situational Method Engineering. Over the last decade, Method Engineering, defined as the engineering discipline to design, construct and adapt methods, including supportive tools, has emerged as the research and application area for using methods for systems development.
This handbook provides design considerations and rules-of-thumb to ensure the functionality you want will work. It brings together all the information needed by systems designers to develop applications that include configurability, from the simplest implementations to the most complicated.
The design of digital (computer) systems requires several design phases: from the behavioural design, through the logical structural design, to the physical design, where the logical structure is implemented in the physical structure of the system (the chip). Due to the ever increasing demands on computer system performance, the physical design phase has become one of the most complex design steps in the entire process. The major goal of this book is to develop a priori wire length estimation methods that can help the designer find a good layout of a circuit in fewer iterations of physical design steps, and that are useful for comparing different physical architectures. For modelling digital circuits, the interconnection complexity is of major importance. It can be described by the so-called Rent's rule and the Rent exponent. A Priori Wire Length Estimates for Digital Design provides the reader with more insight into this rule and clearly outlines when and where the rule can be used and when and where it fails. Also, for the first time, a comprehensive model for the partitioning behaviour of multi-terminal nets is developed. This leads to a new parameter for circuits that describes the distribution of net degrees over the nets in the circuit. This multi-terminal net model is used throughout the book for the wire length estimates, but it also induces a method for the generation of synthetic benchmark circuits that has major advantages over existing benchmark generators. In the domain of wire length estimations, the most important contributions of this work are (i) a new model for placement optimization in a physical (computer) architecture and (ii) the inclusion of the multi-terminal net model in the wire length estimates. The combination of the placement optimization model with Donath's model for hierarchical partitioning and placement results in more accurate wire length estimates.
The multi-terminal net model allows accurate assessments of the impact of multi-terminal nets on wire length estimates. We distinguish between delay-related applications, for which the length of source-sink pairs is important, and routing-related applications, for which the entire (Steiner) length of the multi-terminal net has to be taken into account. The wire length models are further extended by taking into account the interconnections between internal components and the chip boundary. The application of the models to three-dimensional systems broadens the scope to more exotic architectures and to opto-electronic design techniques. We focus on anisotropic three-dimensional systems and propose a way to estimate wire lengths for opto-electronic systems. The wire length estimates can be used for prediction of circuit characteristics, for improving placement and routing tools in Computer-Aided Design and for evaluating new computer architectures. All new models are validated with experiments on benchmark circuits.
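The blurb above leans on Rent's rule, which relates the number of external terminals of a logic partition to the number of blocks it contains. As a rough illustration only (this is not the book's own code, and the parameter values below are hypothetical), the rule T = t * G^p can be sketched as:

```python
# Rent's rule: T = t * G**p, where T is the expected number of external
# terminals of a sub-circuit containing G logic blocks, t is the average
# number of terminals per block, and p is the Rent exponent.
# t = 4.0 and p = 0.65 are illustrative assumptions, not values from the book.

def rent_terminals(g: int, t: float = 4.0, p: float = 0.65) -> float:
    """Estimate the external terminal count for a partition of g blocks."""
    return t * g ** p

# Because p < 1, terminal count grows sublinearly with partition size,
# which is what makes hierarchical partitioning and wire length
# estimation via Rent's rule pay off.
for g in (1, 16, 256):
    print(g, rent_terminals(g))
```

The Rent exponent p characterizes a circuit family's interconnection complexity; the book's contribution is to refine such estimates with a multi-terminal net model.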
This guidebook on e-science presents real-world examples of practices and applications, demonstrating how a range of computational technologies and tools can be employed to build essential infrastructures supporting next-generation scientific research. Each chapter provides introductory material on core concepts and principles, as well as descriptions and discussions of relevant e-science methodologies, architectures, tools, systems, services and frameworks. Features: includes contributions from an international selection of preeminent e-science experts and practitioners; discusses use of mainstream grid computing and peer-to-peer grid technology for "open" research and resource sharing in scientific research; presents varied methods for data management in data-intensive research; investigates issues of e-infrastructure interoperability, security, trust and privacy for collaborative research; examines workflow technology for the automation of scientific processes; describes applications of e-science.
The availability of effective global communication facilities in the last decade has changed the business goals of many manufacturing enterprises. They need to remain competitive by developing products and processes which are specific to individual requirements, completely packaged and manufactured globally. Networks of enterprises are formed to operate across time and space with world-wide distributed functions such as manufacturing, sales, customer support, engineering, quality assurance, supply chain management and so on. Research and technology development need to address architectures, methodologies, models and tools supporting intra- and inter-enterprise operation and management. Throughout the life cycle of products and enterprises there is the requirement to transform information sourced from globally distributed offices and partners into knowledge for decision and action. Building on the success of previous DIISM conferences (Tokyo 1993, Eindhoven 1996, Fort Worth 1998), the fourth International Conference on Design of Information Infrastructure Systems for Manufacturing (DIISM 2000) aims to: * Establish and manage the dynamics of virtual enterprises, define the information system requirements and develop solutions; * Develop and deploy information management in multi-cultural systems with universal applicability of the proposed architecture and solutions; * Develop enterprise integration architectures, methodologies and information infrastructure support for reconfigurable enterprises; * Explore information transformation into knowledge for decision and action by machines and skilful people. These objectives reflect changes in business processes due to advancements in information and communication technologies (ICT) in the last couple of years.
This book brings together in one place important contributions and state-of-the-art research in the rapidly advancing area of analog VLSI neural networks. The book serves as an excellent reference, providing insights into some of the most important issues in analog VLSI neural networks research efforts.
Here is a comprehensive presentation of methodology for the design and synthesis of an intelligent complex robotic system, connecting formal tools from discrete system theory, artificial intelligence, neural networks, and fuzzy logic. The necessary methods for solving real-time action planning, coordination and control problems are described. A notable chapter presents a new approach to intelligent robotic agent control acting in a real-world environment based on a lifelong learning approach combining cognitive and reactive capabilities. Another key feature is the homogeneous description of all solutions and methods based on system theory formalism.
Targeted audience: * Specialists in numerical computations, especially in numerical optimization, who are interested in designing algorithms with automatic result verification, and who would therefore be interested in knowing how general their algorithms can in principle be. * Mathematicians and computer scientists who are interested in the theory of computing and computational complexity, especially the computational complexity of numerical computations. * Students in applied mathematics and computer science who are interested in the computational complexity of different numerical methods and in learning general techniques for estimating this computational complexity. The book is written with all explanations and definitions added, so that it can be used as a graduate-level textbook. What this book is about: Data processing. In many real-life situations, we are interested in the value of a physical quantity y that is difficult (or even impossible) to measure directly. For example, it is impossible to directly measure the amount of oil in an oil field or the distance to a star. Since we cannot measure such quantities directly, we measure them indirectly, by measuring some other quantities xi and using the known relation between y and the xi's to reconstruct y. The algorithm that transforms the results of measuring the xi into an estimate ŷ for y is called data processing.
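The notion of data processing described above (reconstructing an unmeasurable quantity y from measurable quantities xi via a known relation) can be illustrated with a minimal sketch. The Ohm's-law example below is our own illustration, not taken from the book:

```python
# Data processing in the sense used above: we cannot measure a
# resistance y directly, so we measure voltage x1 and current x2 and
# reconstruct an estimate of y from the known relation y = x1 / x2
# (Ohm's law). The chosen quantities and values are illustrative
# assumptions, not from the book.

def estimate_resistance(voltage_v: float, current_a: float) -> float:
    """The data-processing algorithm: turn measurements x_i into an estimate of y."""
    return voltage_v / current_a

# 12 V across the unknown resistor, 0.5 A through it.
print(estimate_resistance(12.0, 0.5))  # estimated resistance in ohms
```

Result verification, the book's subject, then asks how accurately ŷ can be guaranteed given bounds on the measurement errors in the xi.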
This book presents and discusses the most recent innovations, trends, results, experiences and concerns with regard to information systems. Individual chapters focus on IT for facility management, process management and applications, corporate information systems, design and manufacturing automation. The book includes new findings on software engineering, industrial internet, engineering cloud and advanced BPM methods. It presents the latest research on intelligent information systems, computational intelligence methods in information systems and new trends in Business Process Management, making it a valuable resource for both researchers and practitioners looking to expand their information systems expertise.
For almost four decades, Software Engineering: A Practitioner's Approach (SEPA) has been the world's leading textbook in software engineering. The ninth edition represents a major restructuring and update of previous editions, solidifying the book's position as the most comprehensive guide to this important subject.
This monograph develops a framework for modeling and solving utility maximization problems in nonconvex wireless systems. The first part develops a model for utility optimization in wireless systems. The model is general enough to encompass a wide array of system configurations and performance objectives. Based on the general model, a set of methods for solving utility maximization problems is developed in the second part of the book. The development is based on a careful examination of the properties that are required for the application of each method. This part focuses on problems whose initial formulation does not allow for a solution by standard methods and discusses alternative approaches. The last part presents two case studies to demonstrate the application of the proposed framework. In both cases, utility maximization in multi-antenna broadcast channels is investigated.
One of the fastest growing areas in computer science, granular computing, covers theories, methodologies, techniques, and tools that make use of granules in complex problem solving and reasoning. Novel Developments in Granular Computing: Applications for Advanced Human Reasoning and Soft Computation analyzes developments and current trends of granular computing, reviewing the most influential research and predicting future trends. This book not only presents a comprehensive summary of existing practices, but enhances understanding on human reasoning.
Exam board: OCR Level: A-level Subject: Computer Science First teaching: September 2015 First exams: Summer 2017 Strengthen your students' understanding and upgrade their confidence and exam skills with our OCR Computer Science workbooks, full of self-contained exercises to consolidate knowledge and exam practice questions to improve performance. Written by an experienced Computer Science author, these full colour workbooks provide stimulus materials on all AS and A-level topics, followed by sets of questions designed to develop and test skills in the unit. * Thoroughly prepares students for their examinations as they work through numerous practice questions that cover every question type in the specification. * Helps students identify their revision needs and see how to target the top grades using online answers for each question. * Encourages ongoing revision throughout the course as students progressively develop their skills in class and at home. * Packed full with consolidation and exam practice questions, these workbooks can save valuable preparation time and expense, with self-contained exercises that don't need photocopying and provide instant lesson and homework solutions for specialist and non-specialist teachers. * Ensures that students feel confident tackling their exams as they know what to expect in each section.
The Dynamics program and handbook allow the reader to explore nonlinear dynamics and chaos through illustrated graphics. It is suitable for research and educational needs. This new edition allows the program to run three times faster on the processes that are time-consuming. Other major changes include: 1. An add-your-own-equation facility, which makes a compiler unnecessary. Partial derivatives, Lyapunov exponents and Newton's method for finding periodic orbits can all be carried out numerically without adding specific code for partial derivatives. 2. The program supports color PostScript. 3. A new menu system in which the user is prompted with options when a command is chosen, making the program much easier to learn and to remember in comparison with the current version. 4. Mouse support has been added. 5. The program can use the expanded memory available on modern PCs, so pictures have higher resolution. There are also many minor changes, such as a zoom facility and a help facility. 6. Due to limited space, much of the source code will be available on the web, although some of it will remain on the disk, so Unix users still have to purchase the book. This will allow minor upgrades for Unix users.
Develop a core understanding of the concepts of modern computer science Computer Science: An Overview, 13th edition, Global Edition, by J. Glenn Brookshear, and Dennis Brylow, is written for students from all backgrounds, giving you a bottom-up, concrete-to-abstract foundation in the subject. Its broad coverage encourages a practical and realistic understanding of computer science, covering all the major concepts. The book's broad background exposes beginning computer science students to the breadth of the subject they plan to major in and teaches students from other backgrounds how to relate to the technical society in which they live. Learn in a flexible way with independent chapters you can study in any order with full-colour design to help you engage with the information. The text also uses Python to provide programming tools for exploration and experimentation in your learning. This 13th edition has been corrected and updated in each chapter to refine your learning experience. With more than 1,000 questions and exercises, the book trains your thinking skills with useful chapter review problems and contains questions surrounding social issues to reinforce core concepts. This text is comprehensive and highly accessible, making it ideal for undergraduate studies in computer science. This title has a Companion Website.
This useful book addresses electrothermal problems in modern VLSI systems. It discusses electrothermal phenomena and the fundamental building blocks that electrothermal simulation requires. The authors present three important applications of VLSI electrothermal analysis: temperature-dependent electromigration diagnosis, cell-level thermal placement, and temperature-driven power and timing analysis.
Integrated circuit technology is widely used for the full integration of electronic systems. In general, these systems are realized using digital techniques implemented in CMOS technology. The low power dissipation, high packing density, high noise immunity, ease of design and the relative ease of scaling are the driving forces of CMOS technology for digital applications. Parts of these systems cannot be implemented in the digital domain and will remain analog. In order to achieve complete system integration these analog functions are preferably integrated in the same CMOS technology. An important class of analog circuits that need to be integrated in CMOS are analog filters. This book deals with very high frequency (VHF) filters, which are filters with cut-off frequencies ranging from the low megahertz range to several hundreds of megahertz. Until recently the maximal cut-off frequencies of CMOS filters were limited to the low megahertz range. By applying the techniques presented in this book the limit could be pushed into the true VHF domain, and integrated VHF filters become feasible. Application of these VHF filters can be found in the field of communication, instrumentation and control systems. For example, pre- and post-filtering for high-speed AD and DA converters, signal reconstruction, signal decoding, etc. The general design philosophy used in this book is to allow only the absolute minimum of signal-carrying nodes throughout the whole filter. This strategy starts at the filter synthesis level and is extended to the level of electronic circuitry. The result is a filter realization in which all capacitors (including parasitics) have a desired function. The advantage of this technique is that high frequency parasitic effects (parasitic poles/zeros) are minimally present. The book is a reference for engineers in research or development, and is suitable for use as a text for advanced courses on the subject.
Data science has always been an effective way of extracting knowledge and insights from information in various forms. One industry that can utilize the benefits from the advances in data science is the healthcare field. The Handbook of Research on Data Science for Effective Healthcare Practice and Administration is a critical reference source that overviews the state of data analysis as it relates to current practices in the health sciences field. Covering innovative topics such as linear programming, simulation modeling, network theory, and predictive analytics, this publication is recommended for all healthcare professionals, graduate students, engineers, and researchers that are seeking to expand their knowledge of efficient techniques for information analysis in the healthcare professions.
Operations Research and Cyber-Infrastructure is the companion volume to the Eleventh INFORMS Computing Society Conference (ICS 2009), held in Charleston, South Carolina, from January 11 to 13, 2009. It includes 24 high-quality refereed research papers. As always, the focus of interest for ICS is the interface between Operations Research and Computer Science, and the papers in this volume reflect that interest. This is naturally an evolving area as computational power increases rapidly while decreasing in cost even more quickly. The papers included here illustrate the wide range of topics at this interface. For convenience, they are grouped in broad categories and subcategories. There are three papers on modeling, reflecting the impact of recent development in computing on that area. Eight papers are on optimization (three on integer programming, two on heuristics, and three on general topics, of which two involve stochastic/probabilistic processes). Finally, there are thirteen papers on applications (three on the conference theme of cyber-infrastructure, four on routing, and six on other interesting topics). Several of the papers could be classified in more than one way, reflecting the interactions between these topic areas.
Memory Issues in Embedded Systems-On-Chip: Optimizations and Explorations is designed for different groups in the embedded systems-on-chip arena. First, it is designed for researchers and graduate students who wish to understand the research issues involved in memory system optimization and exploration for embedded systems-on-chip. Second, it is intended for designers of embedded systems who are migrating from a traditional micro-controllers centered, board-based design methodology to newer design methodologies using IP blocks for processor-core-based embedded systems-on-chip. Also, since Memory Issues in Embedded Systems-on-Chip: Optimization and Explorations illustrates a methodology for optimizing and exploring the memory configuration of embedded systems-on-chip, it is intended for managers and system designers who may be interested in the emerging capabilities of embedded systems-on-chip design methodologies for memory-intensive applications.
This volume contains the invited and regular papers presented at TCS 2010, the 6th IFIP International Conference on Theoretical Computer Science, organised by IFIP Technical Committee 1 (Foundations of Computer Science) and IFIP WG 2.2 (Formal Descriptions of Programming Concepts) in association with SIGACT and EATCS. TCS 2010 was part of the World Computer Congress held in Brisbane, Australia, during September 20-23, 2010. TCS 2010 is composed of two main areas: (A) Algorithms, Complexity and Models of Computation, and (B) Logic, Semantics, Specification and Verification. The selection process led to the acceptance of 23 papers out of 39 submissions, each of which was reviewed by three Programme Committee members. The Programme Committee discussion was held electronically using EasyChair. The invited speakers at TCS 2010 were: Rob van Glabbeek (NICTA, Australia), Bart Jacobs (Nijmegen, The Netherlands), Catuscia Palamidessi (INRIA and LIX, Paris, France) and Sabina Rossi (Venice, Italy). James Harland (Australia) and Barry Jay (Australia) acted as TCS 2010 Chairs. We take this occasion to thank the members of the Programme Committees and the external reviewers for their professional and timely work; the conference Chairs for their support; the invited speakers for their scholarly contribution; and of course the authors for submitting their work to TCS 2010.
This book presents some of the most recent research results in the area of machine learning and robot perception. The chapters represent new ways of solving real-world problems. The book covers topics such as intelligent object detection, foveated vision systems, online learning paradigms, reinforcement learning for a mobile robot, object tracking and motion estimation, 3D model construction, computer vision system and user modelling using dialogue strategies. This book will appeal to researchers, senior undergraduate/postgraduate students, application engineers and scientists.
Introduction. Historical Overview. Databases: Office Information Systems Engineering (J. Palazzo, D. Alcoba). Artificial Intelligence, Logic, and Functional Programming: A HyperIcon Interface to a Blackboard System for Planning Research Projects (P. Charlton, C. Burdorf). Algorithms and Data Structures: Classification of Quadratic Algorithms for Multiplying Polynomials of Small Degree Over Finite Fields (A. Averbuch et al.). Object Oriented Systems: A Graphical Interactive Object Oriented Development System (M. Adar et al.). Distributed Systems: Preserving Distributed Data Coherence Using Asynchronous Broadcasts (J. Piquer). Complexity and Parallel Algorithms: Parallel Algorithms for NP-Complete Problems (M. Robson). Computer Architecture and Networks: The Caracas Multiprocessor System (M. Campo et al.). 30 additional articles. Index.