This book contains the papers from the IFIP Working Group 8.1 conference on Situational Method Engineering. Over the last decade, Method Engineering, defined as the engineering discipline to design, construct and adapt methods, including supportive tools, has emerged as the research and application area concerned with the use of methods for systems development.
This handbook provides design considerations and rules-of-thumb to ensure the functionality you want will work. It brings together all the information needed by systems designers to develop applications that include configurability, from the simplest implementations to the most complicated.
The design of digital (computer) systems involves several design phases: from behavioural design, through logical structural design, to physical design, where the logical structure is implemented in the physical structure of the system (the chip). Due to the ever-increasing demands on computer system performance, the physical design phase has become one of the most complex steps in the entire process. The major goal of this book is to develop a priori wire length estimation methods that help the designer find a good layout of a circuit in fewer iterations of the physical design steps and that are useful for comparing different physical architectures.
For modelling digital circuits, the interconnection complexity is of major importance. It can be described by the so-called Rent's rule and the Rent exponent. A Priori Wire Length Estimates for Digital Design provides the reader with more insight into this rule and clearly outlines when and where the rule can be used and when and where it fails. Also, for the first time, a comprehensive model for the partitioning behaviour of multi-terminal nets is developed. This leads to a new parameter for circuits that describes the distribution of net degrees over the nets in the circuit. This multi-terminal net model is used throughout the book for the wire length estimates, but it also induces a method for generating synthetic benchmark circuits that has major advantages over existing benchmark generators.
In the domain of wire length estimation, the most important contributions of this work are (i) a new model for placement optimization in a physical (computer) architecture and (ii) the inclusion of the multi-terminal net model in the wire length estimates. The combination of the placement optimization model with Donath's model for hierarchical partitioning and placement results in more accurate wire length estimates. The multi-terminal net model allows accurate assessment of the impact of multi-terminal nets on wire length estimates. We distinguish between delay-related applications, for which the length of source-sink pairs is important, and routing-related applications, for which the entire (Steiner) length of the multi-terminal net has to be taken into account. The wire length models are further extended by taking into account the interconnections between internal components and the chip boundary. The application of the models to three-dimensional systems broadens the scope to more exotic architectures and to opto-electronic design techniques. We focus on anisotropic three-dimensional systems and propose a way to estimate wire lengths for opto-electronic systems. The wire length estimates can be used for predicting circuit characteristics, for improving placement and routing tools in Computer-Aided Design, and for evaluating new computer architectures. All new models are validated with experiments on benchmark circuits.
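For readers unfamiliar with it, Rent's rule is commonly stated as T = t * B^p, where B is the number of logic blocks in a partition, t is the average number of terminals per block, and p is the Rent exponent. The short sketch below only illustrates how the rule is applied; the parameter values are hypothetical and are not taken from the book.

# Rent's rule: expected number of external terminals T for a partition
# containing B logic blocks, T = t * B**p.
# The values of t and p are hypothetical example parameters.
def rent_terminals(blocks, t=3.5, p=0.65):
    return t * blocks ** p

for blocks in (1, 16, 256, 4096):
    print(blocks, round(rent_terminals(blocks), 1))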
This guidebook on e-science presents real-world examples of practices and applications, demonstrating how a range of computational technologies and tools can be employed to build essential infrastructures supporting next-generation scientific research. Each chapter provides introductory material on core concepts and principles, as well as descriptions and discussions of relevant e-science methodologies, architectures, tools, systems, services and frameworks. Features: includes contributions from an international selection of preeminent e-science experts and practitioners; discusses use of mainstream grid computing and peer-to-peer grid technology for "open" research and resource sharing in scientific research; presents varied methods for data management in data-intensive research; investigates issues of e-infrastructure interoperability, security, trust and privacy for collaborative research; examines workflow technology for the automation of scientific processes; describes applications of e-science.
The availability of effective global communication facilities in the last decade has changed the business goals of many manufacturing enterprises. They need to remain competitive by developing products and processes which are specific to individual requirements, completely packaged and manufactured globally. Networks of enterprises are formed to operate across time and space with world-wide distributed functions such as manufacturing, sales, customer support, engineering, quality assurance, supply chain management and so on. Research and technology development need to address architectures, methodologies, models and tools supporting intra- and inter-enterprise operation and management. Throughout the life cycle of products and enterprises there is the requirement to transform information sourced from globally distributed offices and partners into knowledge for decision and action. Building on the success of previous DIISM conferences (Tokyo 1993, Eindhoven 1996, Fort Worth 1998), the fourth International Conference on Design of Information Infrastructure Systems for Manufacturing (DIISM 2000) aims to:
* establish and manage the dynamics of virtual enterprises, define the information system requirements and develop solutions;
* develop and deploy information management in multi-cultural systems with universal applicability of the proposed architecture and solutions;
* develop enterprise integration architectures, methodologies and information infrastructure support for reconfigurable enterprises;
* explore information transformation into knowledge for decision and action by machines and skilful people.
These objectives reflect changes in business processes due to advancements in information and communication technologies (ICT) over the last couple of years.
This book brings together in one place important contributions and state-of-the-art research in the rapidly advancing area of analog VLSI neural networks. The book serves as an excellent reference, providing insights into some of the most important issues in analog VLSI neural networks research efforts.
Here is a comprehensive presentation of methodology for the design and synthesis of an intelligent complex robotic system, connecting formal tools from discrete system theory, artificial intelligence, neural networks, and fuzzy logic. The necessary methods for solving real-time action planning, coordination and control problems are described. A notable chapter presents a new approach to intelligent robotic agent control acting in a real-world environment, based on a lifelong learning approach combining cognitive and reactive capabilities. Another key feature is the homogeneous description of all solutions and methods based on system theory formalism.
Targeted audience:
* Specialists in numerical computations, especially in numerical optimization, who are interested in designing algorithms with automatic result verification, and who would therefore be interested in knowing how general their algorithms can in principle be.
* Mathematicians and computer scientists who are interested in the theory of computing and computational complexity, especially the computational complexity of numerical computations.
* Students in applied mathematics and computer science who are interested in the computational complexity of different numerical methods and in learning general techniques for estimating this computational complexity.
The book is written with all explanations and definitions included, so that it can be used as a graduate-level textbook. What this book is about: data processing. In many real-life situations, we are interested in the value of a physical quantity y that is difficult (or even impossible) to measure directly. For example, it is impossible to directly measure the amount of oil in an oil field or the distance to a star. Since we cannot measure such quantities directly, we measure them indirectly, by measuring some other quantities x_i and using the known relation between y and the x_i's to reconstruct y. The algorithm that transforms the results of measuring the x_i into an estimate ỹ for y is called data processing.
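As a minimal sketch of the "data processing" notion described above: measured values of the directly observable quantities x_i are pushed through a known relation f to reconstruct an estimate of y. The relation f and all numbers below are hypothetical, purely for illustration.

# Data processing: estimate a quantity y that cannot be measured directly
# from directly measured quantities x1, x2 via a known relation f.
# The relation f and the measured values are hypothetical examples.
def f(x1, x2):
    # known relation between y and the x_i, e.g. y = x1 / x2
    return x1 / x2

x1_measured = 12.3   # result of measuring x1
x2_measured = 4.1    # result of measuring x2
y_estimate = f(x1_measured, x2_measured)   # the data processing step
print(y_estimate)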
This book presents and discusses the most recent innovations, trends, results, experiences and concerns with regard to information systems. Individual chapters focus on IT for facility management, process management and applications, corporate information systems, and design and manufacturing automation. The book includes new findings on software engineering, the industrial internet, engineering cloud and advanced BPM methods. It presents the latest research on intelligent information systems, computational intelligence methods in information systems and new trends in Business Process Management, making it a valuable resource for both researchers and practitioners looking to expand their information systems expertise.
Automatic biometric recognition techniques are becoming increasingly important in corporate and public security systems, and the range of available methods has grown rapidly as the field develops. "Behavioral Biometrics for Human Identification: Intelligent Applications" discusses classic behavioral biometrics and collects the latest advances in techniques, theoretical approaches, and dynamic applications. A critical mass of research, this innovative collection serves as an important reference tool for researchers, practitioners, academicians, and technologists.
This useful book addresses electrothermal problems in modern VLSI systems. It discusses electrothermal phenomena and the fundamental building blocks that electrothermal simulation requires. The authors present three important applications of VLSI electrothermal analysis: temperature-dependent electromigration diagnosis, cell-level thermal placement, and temperature-driven power and timing analysis.
This monograph develops a framework for modeling and solving utility maximization problems in nonconvex wireless systems. The first part develops a model for utility optimization in wireless systems. The model is general enough to encompass a wide array of system configurations and performance objectives. Based on the general model, a set of methods for solving utility maximization problems is developed in the second part of the book. The development is based on a careful examination of the properties that are required for the application of each method. This part focuses on problems whose initial formulation does not allow for a solution by standard methods and discusses alternative approaches. The last part presents two case studies to demonstrate the application of the proposed framework. In both cases, utility maximization in multi-antenna broadcast channels is investigated.
One of the fastest-growing areas in computer science, granular computing covers theories, methodologies, techniques, and tools that make use of granules in complex problem solving and reasoning. Novel Developments in Granular Computing: Applications for Advanced Human Reasoning and Soft Computation analyzes developments and current trends in granular computing, reviewing the most influential research and predicting future trends. This book not only presents a comprehensive summary of existing practices, but also enhances understanding of human reasoning.
The Dynamics program and handbook allow the reader to explore nonlinear dynamics and chaos through illustrated graphics, and are suitable for research and educational needs. This new edition allows the program to run three times faster on time-consuming processes. Other major changes include:
1. An add-your-own-equation facility, which makes a compiler unnecessary. PD and Lyapunov exponents and Newton's method for finding periodic orbits can all be carried out numerically without adding specific code for partial derivatives (a sketch of this numerical idea follows this description).
2. Support for color PostScript.
3. A new menu system in which the user is prompted with options when a command is chosen, making the program much easier to learn and to remember than the current version.
4. Mouse support.
5. The ability to use the expanded memory available on modern PCs, so pictures can be higher resolution.
There are also many minor changes, such as a zoom facility and a help facility.
6. Due to limited space, much of the source code will be available on the web, although some of it will remain on the disk, so that Unix users still have to purchase the book. This will allow minor upgrades for Unix users.
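The Dynamics program itself is not reproduced here, but the idea of finding periodic orbits with Newton's method while approximating all partial derivatives numerically can be sketched as follows. The Hénon map and every parameter value below are stand-in examples chosen for illustration, not taken from the program or the book.

import numpy as np

def henon(z, a=1.4, b=0.3):
    # One iteration of the Henon map, used here only as an example system.
    x, y = z
    return np.array([1.0 - a * x * x + y, b * x])

def newton_periodic_orbit(f, z0, period=1, tol=1e-12, h=1e-7, max_iter=50):
    # Find z such that applying f 'period' times returns z (a periodic orbit),
    # using Newton's method.  The Jacobian is approximated by central finite
    # differences, so no analytic partial derivatives of the map are needed.
    z = np.asarray(z0, dtype=float)

    def g(w0):
        w = np.array(w0, dtype=float)
        for _ in range(period):
            w = f(w)
        return w - w0

    for _ in range(max_iter):
        r = g(z)
        if np.linalg.norm(r) < tol:
            break
        J = np.empty((z.size, z.size))
        for j in range(z.size):
            dz = np.zeros_like(z)
            dz[j] = h
            J[:, j] = (g(z + dz) - g(z - dz)) / (2.0 * h)   # finite-difference column
        z = z - np.linalg.solve(J, r)
    return z

# Starting near (0.6, 0.2) converges to a fixed point (period-1 orbit) of the map.
print(newton_periodic_orbit(henon, [0.6, 0.2]))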
Integrated circuit technology is widely used for the full integration of electronic systems. In general, these systems are realized using digital techniques implemented in CMOS technology. The low power dissipation, high packing density, high noise immunity, ease of design and the relative ease of scaling are the driving forces of CMOS technology for digital applications. Parts of these systems cannot be implemented in the digital domain and will remain analog. In order to achieve complete system integration, these analog functions are preferably integrated in the same CMOS technology. An important class of analog circuits that need to be integrated in CMOS are analog filters. This book deals with very high frequency (VHF) filters, which are filters with cut-off frequencies ranging from the low megahertz range to several hundreds of megahertz. Until recently the maximal cut-off frequencies of CMOS filters were limited to the low megahertz range. By applying the techniques presented in this book the limit could be pushed into the true VHF domain, and integrated VHF filters become feasible. Applications of these VHF filters can be found in the fields of communication, instrumentation and control systems: for example, pre- and post-filtering for high-speed AD and DA converters, signal reconstruction, signal decoding, etc. The general design philosophy used in this book is to allow only the absolute minimum of signal-carrying nodes throughout the whole filter. This strategy starts at the filter synthesis level and is extended to the level of electronic circuitry. The result is a filter realization in which all capacitors (including parasitics) have a desired function. The advantage of this technique is that high-frequency parasitic effects (parasitic poles/zeros) are minimally present. The book is a reference for engineers in research or development, and is suitable for use as a text for advanced courses on the subject.
Data science has always been an effective way of extracting knowledge and insights from information in various forms. One industry that can benefit from the advances in data science is the healthcare field. The Handbook of Research on Data Science for Effective Healthcare Practice and Administration is a critical reference source that overviews the state of data analysis as it relates to current practices in the health sciences field. Covering innovative topics such as linear programming, simulation modeling, network theory, and predictive analytics, this publication is recommended for all healthcare professionals, graduate students, engineers, and researchers who are seeking to expand their knowledge of efficient techniques for information analysis in the healthcare professions.
Operations Research and Cyber-Infrastructure is the companion volume to the Eleventh INFORMS Computing Society Conference (ICS 2009), held in Charleston, South Carolina, from January 11 to 13, 2009. It includes 24 high-quality refereed research papers. As always, the focus of interest for ICS is the interface between Operations Research and Computer Science, and the papers in this volume reflect that interest. This is naturally an evolving area as computational power increases rapidly while decreasing in cost even more quickly. The papers included here illustrate the wide range of topics at this interface. For convenience, they are grouped in broad categories and subcategories. There are three papers on modeling, reflecting the impact of recent development in computing on that area. Eight papers are on optimization (three on integer programming, two on heuristics, and three on general topics, of which two involve stochastic/probabilistic processes). Finally, there are thirteen papers on applications (three on the conference theme of cyber-infrastructure, four on routing, and six on other interesting topics). Several of the papers could be classified in more than one way, reflecting the interactions between these topic areas.
Memory Issues in Embedded Systems-on-Chip: Optimizations and Explorations is designed for different groups in the embedded systems-on-chip arena. First, it is designed for researchers and graduate students who wish to understand the research issues involved in memory system optimization and exploration for embedded systems-on-chip. Second, it is intended for designers of embedded systems who are migrating from a traditional microcontroller-centered, board-based design methodology to newer design methodologies using IP blocks for processor-core-based embedded systems-on-chip. Also, since the book illustrates a methodology for optimizing and exploring the memory configuration of embedded systems-on-chip, it is intended for managers and system designers who may be interested in the emerging capabilities of embedded systems-on-chip design methodologies for memory-intensive applications.
This volume contains the invited and regular papers presented at TCS 2010, the 6th IFIP International Conference on Theoretical Computer Science, organised by IFIP Technical Committee 1 (Foundations of Computer Science) and IFIP WG 2.2 (Formal Descriptions of Programming Concepts) in association with SIGACT and EATCS. TCS 2010 was part of the World Computer Congress held in Brisbane, Australia, during September 20-23, 2010. TCS 2010 is composed of two main areas: (A) Algorithms, Complexity and Models of Computation, and (B) Logic, Semantics, Specification and Verification. The selection process led to the acceptance of 23 papers out of 39 submissions, each of which was reviewed by three Programme Committee members. The Programme Committee discussion was held electronically using EasyChair. The invited speakers at TCS 2010 are: Rob van Glabbeek (NICTA, Australia), Bart Jacobs (Nijmegen, The Netherlands), Catuscia Palamidessi (INRIA and LIX, Paris, France) and Sabina Rossi (Venice, Italy). James Harland (Australia) and Barry Jay (Australia) acted as TCS 2010 Chairs. We take this occasion to thank the members of the Programme Committees and the external reviewers for the professional and timely work; the conference Chairs for their support; the invited speakers for their scholarly contribution; and of course the authors for submitting their work to TCS 2010.
This book presents some of the most recent research results in the area of machine learning and robot perception. The chapters represent new ways of solving real-world problems. The book covers topics such as intelligent object detection, foveated vision systems, online learning paradigms, reinforcement learning for a mobile robot, object tracking and motion estimation, 3D model construction, computer vision system and user modelling using dialogue strategies. This book will appeal to researchers, senior undergraduate/postgraduate students, application engineers and scientists.
Adult students demand a wider variety of instructional strategies that encompass real-world, interactive, cooperative, and discovery learning experiences. "Designing Instruction for the Traditional, Adult, and Distance Learner: A New Engine for Technology-Based Teaching" explores how technology impacts the process of devising instructional plans as well as learning itself in adult students. Containing research from leading international experts, this publication proposes realistic and accurate archetypes to assist educators in incorporating state-of-the-art technologies into online instruction.
Formal Description Techniques and Protocol Specification, Testing and Verification addresses formal description techniques (FDTs) applicable to distributed systems and communication protocols. It aims to present the state of the art in theory, application, tools and industrialization of FDTs. Among the important features presented are: FDT-based system and protocol engineering; FDT-application to distributed systems; Protocol engineering; Practical experience and case studies. Formal Description Techniques and Protocol Specification, Testing and Verification comprises the proceedings of the Joint International Conference on Formal Description Techniques for Distributed Systems and Communication Protocols and Protocol Specification, Testing and Verification, sponsored by the International Federation for Information Processing, held in November 1998, Paris, France. Formal Description Techniques and Protocol Specification, Testing and Verification is suitable as a secondary text for a graduate-level course on Distributed Systems or Communications, and as a reference for researchers and practitioners in industry.
Since their introduction in 1984, Field-Programmable Gate Arrays (FPGAs) have become one of the most popular implementation media for digital circuits and have grown into a $2 billion per year industry. As process geometries have shrunk into the deep-submicron region, the logic capacity of FPGAs has greatly increased, making FPGAs a viable implementation alternative for larger and larger designs. To make the best use of these new deep-submicron processes, one must re-design one's FPGAs and Computer-Aided Design (CAD) tools. Architecture and CAD for Deep-Submicron FPGAs addresses several key issues in the design of high-performance FPGA architectures and CAD tools, with particular emphasis on issues that are important for FPGAs implemented in deep-submicron processes. Three factors combine to determine the performance of an FPGA: the quality of the CAD tools used to map circuits into the FPGA, the quality of the FPGA architecture, and the electrical (i.e. transistor-level) design of the FPGA. Architecture and CAD for Deep-Submicron FPGAs examines all three of these issues in concert. In order to investigate the quality of different FPGA architectures, one needs CAD tools capable of automatically implementing circuits in each FPGA architecture of interest. Once a circuit has been implemented in an FPGA architecture, one next needs accurate area and delay models to evaluate the quality (speed achieved, area required) of the circuit implementation in the FPGA architecture under test. This book therefore has three major foci: the development of a high-quality and highly flexible CAD infrastructure, the creation of accurate area and delay models for FPGAs, and the study of several important FPGA architectural issues. Architecture and CAD for Deep-Submicron FPGAs is an essential reference for researchers, professionals and students interested in FPGAs.