Formal Equivalence Checking and Design Debugging covers two major topics in design verification: logic equivalence checking and design debugging. The first part of the book reviews the design problems that require logic equivalence checking and describes the underlying technologies that are used to solve them. Some novel approaches to the problems of verifying design revisions after intensive sequential transformations such as retiming are described in detail. The second part of the book gives a thorough survey of previous and recent literature on design error diagnosis and design error correction. This part also provides an in-depth analysis of the algorithms used in two logic debugging software programs, ErrorTracer and AutoFix, developed by the authors. From the Foreword: 'With the adoption of the static sign-off approach to verifying circuit implementations the application-specific integrated circuit (ASIC) industry will experience the first radical methodological revolution since the adoption of logic synthesis. Equivalence checking is one of the two critical elements of this methodological revolution. This book is timely for either the designer seeking to better understand the mechanics of equivalence checking or for the CAD researcher who wishes to investigate well-motivated research problems such as equivalence checking of retimed designs or error diagnosis in sequential circuits.' - Kurt Keutzer, University of California, Berkeley
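As a rough illustration of the core idea behind combinational equivalence checking (not the specific algorithms covered in the book), the Python sketch below compares a 'golden' specification of a 2-bit adder against a revised gate-level implementation by exhaustively checking every input vector and reporting a counterexample on mismatch. Both circuits and all names here are hypothetical toys; real checkers rely on BDD- or SAT-based reasoning over a miter rather than enumeration.

from itertools import product

# Toy "golden" specification: a 2-bit adder described arithmetically.
def spec(a0, a1, b0, b1):
    s = (a0 + 2 * a1) + (b0 + 2 * b1)
    return (s & 1, (s >> 1) & 1, (s >> 2) & 1)   # (sum bit 0, sum bit 1, carry)

# Hypothetical revised gate-level implementation (ripple-carry structure).
def impl(a0, a1, b0, b1):
    s0 = a0 ^ b0
    c0 = a0 & b0
    s1 = a1 ^ b1 ^ c0
    c1 = (a1 & b1) | (c0 & (a1 ^ b1))
    return (s0, s1, c1)

def check_equivalence(f, g, n_inputs):
    # Exhaustively compare f and g on every input vector (a miter in spirit).
    for vec in product((0, 1), repeat=n_inputs):
        if f(*vec) != g(*vec):
            return vec          # counterexample, useful for debugging
    return None                 # functionally equivalent

cex = check_equivalence(spec, impl, 4)
print("equivalent" if cex is None else f"mismatch at input {cex}")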
Operations Research is a field whose major contribution has been to propose a rigorous formulation of often ill-defined problems pertaining to the organization or the design of large-scale systems, such as resource allocation problems, scheduling and the like. While this effort did help a lot in understanding the nature of these problems, the mathematical models have proved only partially satisfactory due to the difficulty in gathering precise data, and in formulating objective functions that reflect the multi-faceted notion of an optimal solution according to human experts. In this respect linear programming is a typical example of an impressive achievement of Operations Research that, in its deterministic form, is not always adapted to real-world decision-making: everything must be expressed in terms of linear constraints; yet the coefficients that appear in these constraints may not be so well-defined, either because their value depends upon other parameters (not accounted for in the model) or because they cannot be precisely assessed, and only qualitative estimates of these coefficients are available. Similarly, the best solution to a linear programming problem may be more a matter of compromise between various criteria than just minimizing or maximizing a linear objective function. Lastly, the constraints, expressed by equalities or inequalities between linear expressions, are often softer in reality than what their mathematical expression might let us believe, and infeasibility as detected by linear programming techniques can often be coped with by making trade-offs with the real world.
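To make the discussion concrete, here is a toy deterministic linear programme of the kind the paragraph critiques, solved with scipy.optimize.linprog. The numbers are invented for illustration; re-solving after perturbing one 'crisp' coefficient hints at why imprecise data undermines the purely deterministic formulation.

from scipy.optimize import linprog

# Maximise profit 3x + 5y subject to resource constraints with crisp coefficients.
c = [-3.0, -5.0]                 # linprog minimises, so negate the profit
A_ub = [[1.0, 0.0],              # x        <= 4   (resource 1)
        [0.0, 2.0],              # 2y       <= 12  (resource 2)
        [3.0, 2.0]]              # 3x + 2y  <= 18  (resource 3)
b_ub = [4.0, 12.0, 18.0]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print("x, y =", res.x, " profit =", -res.fun)

# Perturb one imprecisely known coefficient and re-solve to see how the
# "optimal" decision shifts under a small change in the data.
A_ub[2][0] = 3.5
res2 = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print("x, y =", res2.x, " profit =", -res2.fun)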
Intelligent decision support is based on human knowledge related to a specific part of a real or abstract world. When the knowledge is gained by experience, it is induced from empirical data. The data structure, called an information system, is a record of objects described by a set of attributes. Knowledge is understood here as an ability to classify objects. Objects in the same class are indiscernible by means of the attributes and form elementary building blocks (granules, atoms). In particular, the granularity of knowledge means that some notions cannot be expressed precisely within the available knowledge and can be defined only vaguely. In the rough set theory created by Z. Pawlak, each imprecise concept is replaced by a pair of precise concepts called its lower and upper approximation. These approximations are fundamental tools for reasoning about knowledge. The rough set philosophy has turned out to be a very effective new tool with many successful real-life applications to its credit. It is worth stressing that no auxiliary assumptions about the data, such as probability or membership function values, are needed, which is its great advantage. The present book reveals a wide spectrum of applications of the rough set concept, giving the reader the flavor of, and insight into, the methodology of this newly developed discipline. Although the book emphasizes applications, comparison with other related methods and further developments receive due attention.
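A minimal Python sketch of the approximation machinery described above, assuming a tiny made-up information system: objects with identical attribute values are grouped into granules, and a target concept is bracketed by its lower approximation (granules wholly inside it) and upper approximation (granules that merely intersect it).

from collections import defaultdict

# Information system: object -> attribute values (all values are hypothetical).
table = {
    "o1": ("high", "yes"),
    "o2": ("high", "yes"),
    "o3": ("low",  "no"),
    "o4": ("low",  "no"),
    "o5": ("high", "no"),
}
target = {"o1", "o2", "o3"}      # the (possibly vague) concept to approximate

# Indiscernibility classes: objects with identical attribute values form granules.
granules = defaultdict(set)
for obj, values in table.items():
    granules[values].add(obj)

# Lower approximation: union of granules entirely contained in the target set.
lower = set().union(*(g for g in granules.values() if g <= target))
# Upper approximation: union of granules that intersect the target set at all.
upper = set().union(*(g for g in granules.values() if g & target))

print("lower:   ", sorted(lower))          # objects certainly in the concept
print("upper:   ", sorted(upper))          # objects possibly in the concept
print("boundary:", sorted(upper - lower))  # objects that cannot be classified

Here the granule {o3, o4} overlaps the target without being contained in it, so o3 and o4 end up in the boundary region: precisely the kind of vagueness the approximations are meant to capture.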