This is an English version of the book in two volumes, entitled "Keijo Shori Kogaku (1), (2)" (Nikkan Kogyo Shinbun Co.), written in Japanese. The purpose of the book is a unified and systematic exposition of the wealth of research results in the field of mathematical representation of curves and surfaces for computer aided geometric design that have appeared in the last thirty years. The material for the book started life as a set of notes for computer aided geometric design courses which I taught at the graduate schools of both computer science at the University of Utah in the U.S.A. and the Kyushu Institute of Design in Japan. The book has been used extensively as a standard textbook of curves and surfaces for students, practical engineers and researchers. With the aim of systematic exposition, the author has arranged the book in 8 chapters: Chapter 0: The significance of mathematical representations of curves and surfaces is explained and historical research developments in this field are reviewed. Chapter 1: Basic mathematical theories of curves and surfaces are reviewed and summarized. Chapter 2: A classical interpolation method, the Lagrange interpolation, is discussed. Although its use is uncommon in practice, this chapter is helpful in understanding Chaps. 4 and 6. Chapter 3: This chapter discusses the Coons surface in detail, which is one of the most important contributions in this field. Chapter 4: The fundamentals of spline functions, spline curves and surfaces are discussed in some detail.
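Chapter 2's topic, Lagrange interpolation, is easy to make concrete. The following minimal Python sketch (not from the book; the sample points are invented for illustration) evaluates the Lagrange form p(x) = sum_i y_i * prod_{j != i} (x - x_j)/(x_i - x_j):

```python
def lagrange(xs, ys, x):
    """Evaluate the Lagrange interpolating polynomial at x.

    xs, ys -- sample points (x_i, y_i) with distinct x_i.
    Returns p(x) = sum_i y_i * prod_{j != i} (x - x_j) / (x_i - x_j).
    """
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        basis = 1.0
        for j, xj in enumerate(xs):
            if j != i:
                basis *= (x - xj) / (xi - xj)  # L_i(x): 1 at x_i, 0 at other x_j
        total += yi * basis
    return total

# Illustrative data: the cubic through these four points is unique.
xs, ys = [0.0, 1.0, 2.0, 3.0], [1.0, 2.0, 0.0, 5.0]
print(lagrange(xs, ys, 1.5))  # value of the interpolant between samples
```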
The total integration of the process of designing, manufacturing, and supporting a product from the earliest conceptual phase to the time it is removed from service remains an unfulfilled dream. Yet, when we look at the enormity of the process of integration even for the most simply conceived and manufactured items, we can recognize that substantial progress has been and is being made. It is our nature to be dissatisfied with near-term progress, but when we realize how short a time the tools to do that integration have been available, the progress is clearly noteworthy, considering the multitude of subjects we have to deal with. Most of the integration problems we confront today are multidisciplinary in nature. They require not only knowledge and experience in a variety of fields but also good cooperation from differently disciplined organizations to adequately comprehend and solve such problems. In Volume I we have many examples that reflect the current state of the art in integration of engineering and production processes. The papers for Volume I have been arranged in a more or less logical order of conceptual design, computer-based modeling, analysis, production, and manufacturing. Chapter I is devoted to those with a design and geometric modeling emphasis; Chapter II is devoted to an engineering analysis emphasis; and Chapter III to a production/manufacturing emphasis.
Are memory applications more critical than they have been in the past? Yes, but even more critical are the number of designs and the sheer number of bits on each design. It is assured that catastrophes, which were avoided in the past because memories were small, will easily occur if the design and test engineers do not do their jobs very carefully. High Performance Memory Testing: Design Principles, Fault Modeling and Self Test is based on the author's 20 years of experience in memory design, memory reliability development and memory self test. High Performance Memory Testing: Design Principles, Fault Modeling and Self Test is written for the professional and the researcher, to help them understand the memories that are being tested.
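For readers new to the field, the following Python sketch shows the flavor of one classic memory-test algorithm, March C-, a standard in the memory-testing literature; the list-based memory model and the injected stuck-at-0 fault are invented for illustration and are not taken from the book:

```python
def march_c_minus(mem):
    """Run March C- over a word-oriented memory model.

    Elements: up(w0); up(r0,w1); up(r1,w0); down(r0,w1); down(r1,w0); up(r0).
    Returns a list of (element, address) pairs where a read mismatched.
    """
    n = len(mem)
    errors = []

    def read_expect(elem, addr, expected):
        if mem[addr] != expected:
            errors.append((elem, addr))

    for a in range(n):                 # up(w0)
        mem[a] = 0
    for a in range(n):                 # up(r0, w1)
        read_expect("up r0,w1", a, 0)
        mem[a] = 1
    for a in range(n):                 # up(r1, w0)
        read_expect("up r1,w0", a, 1)
        mem[a] = 0
    for a in reversed(range(n)):       # down(r0, w1)
        read_expect("down r0,w1", a, 0)
        mem[a] = 1
    for a in reversed(range(n)):       # down(r1, w0)
        read_expect("down r1,w0", a, 1)
        mem[a] = 0
    for a in range(n):                 # up(r0)
        read_expect("up r0", a, 0)
    return errors

class StuckAt0(list):
    """Toy memory model with one cell stuck at 0 (hypothetical fault)."""
    def __init__(self, n, bad):
        super().__init__([0] * n)
        self.bad = bad
    def __setitem__(self, addr, value):
        super().__setitem__(addr, 0 if addr == self.bad else value)

print(march_c_minus(StuckAt0(16, bad=5)))  # reports the stuck-at-0 cell
```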
Digital Systems Design and Prototyping: Using Field Programmable Logic and Hardware Description Languages, Second Edition covers the subject of digital systems design using two important technologies: Field Programmable Logic Devices (FPLDs) and Hardware Description Languages (HDLs). These two technologies are combined to aid in the design, prototyping, and implementation of a whole range of digital systems from very simple ones replacing traditional glue logic to very complex ones customized as the applications require. Three HDLs are presented: VHDL and Verilog, the widely used standard languages, and the proprietary Altera HDL (AHDL). The chapters on these languages serve as tutorials and comparisons are made that show the strengths and weaknesses of each language. A large number of examples are used in the description of each language providing insight for the design and implementation of FPLDs. With the addition of the Altera UP-1 prototyping board, all examples can be tested and verified in a real FPLD. Digital Systems Design and Prototyping: Using Field Programmable Logic and Hardware Description Languages, Second Edition is designed as an advanced level textbook as well as a reference for the professional engineer.
The role of arithmetic in datapath design in VLSI has been increasing in importance over the last several years due to the demand for processors that are smaller, faster, and dissipate less power. Unfortunately, this means that many of these datapaths will be complex both algorithmically and circuit-wise. As the complexity of the chips increases, less importance will be placed on understanding how a particular arithmetic datapath design is implemented and more importance will be given to when a product will be placed on the market. This is because many tools that are available today are automated to help the digital system designer maximize their efficiency. Unfortunately, this may lead to problems when implementing particular datapaths. The design of high-performance architectures is becoming more complicated because the level of integration achievable on many of these chips is in the billions. Many engineers rely heavily on software tools to optimize their work; therefore, as designs are getting more complex, less understanding is going into a particular implementation because it can be generated automatically. Although software tools are a highly valuable asset to designers, the value of these tools does not diminish the importance of understanding datapath elements. Therefore, a digital system designer should be aware of how algorithms can be implemented for datapath elements. Unfortunately, due to the complexity of some of these algorithms, it is sometimes difficult to understand how a particular algorithm is implemented without seeing the actual code.
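As one small example of the kind of datapath algorithm meant here, consider the generate/propagate formulation of binary addition that underlies carry-lookahead adders. A hedged Python sketch (illustrative only, not drawn from the book):

```python
def gp_add(a_bits, b_bits, carry_in=0):
    """Add two equal-length bit vectors (LSB first) via generate/propagate.

    g_i = a_i AND b_i (carry generated), p_i = a_i XOR b_i (carry propagated);
    c_{i+1} = g_i OR (p_i AND c_i). A lookahead adder flattens this
    recurrence into wide gates instead of rippling it bit by bit.
    """
    c = carry_in
    out = []
    for a, b in zip(a_bits, b_bits):
        g, p = a & b, a ^ b
        out.append(p ^ c)        # sum bit s_i = p_i XOR c_i
        c = g | (p & c)          # next carry
    return out, c

# 5 + 3 on 4 bits, LSB first: expect 8 -> ([0, 0, 0, 1], 0)
print(gp_add([1, 0, 1, 0], [1, 1, 0, 0]))
```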
The craft of designing mathematical models of dynamic objects offers a large number of methods to solve subproblems in the design, typically parameter estimation, order determination, validation, model reduction, and analysis of identifiability, sensitivity and accuracy. There is also a substantial amount of process identification software available. A typical 'identification package' consists of program modules that implement selections of solution methods, coordinated by supervising programs handling file administration, operator communication, and presentation of results. It is to be run 'interactively', typically on a designer's 'work station'. However, it is generally not obvious how to do that. Using interactive identification packages necessarily leaves the user to decide on quite a number of specifications, including which model structure to use, which subproblems to solve in each particular case, and in what order. The designer is faced with the task of setting up cases on the work station, based on a priori knowledge about the actual physical object, the experiment conditions, and the purpose of the identification. In doing so, he/she will have to cope with two basic difficulties: 1) the computer will be unable to solve most of the tentative identification cases, so the latter will first have to be formulated in a way the computer can handle, and, worse, 2) even in cases where the computer can actually produce a model, the latter will not necessarily be valid for the intended purpose.
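Parameter estimation, the first subproblem named above, is classically solved by linear least squares. A minimal Python/NumPy sketch fitting a hypothetical first-order ARX model y[k] = a*y[k-1] + b*u[k-1] + e[k] (the model structure, data, and true parameter values are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
a_true, b_true = 0.8, 0.5

# Simulate the "true" plant with a noisy random excitation signal.
N = 200
u = rng.standard_normal(N)
y = np.zeros(N)
for k in range(1, N):
    y[k] = a_true * y[k - 1] + b_true * u[k - 1] + 0.05 * rng.standard_normal()

# Stack regressors [y[k-1], u[k-1]] and solve min ||Phi @ theta - Y||^2.
Phi = np.column_stack([y[:-1], u[:-1]])
Y = y[1:]
theta, *_ = np.linalg.lstsq(Phi, Y, rcond=None)
print(theta)  # estimates of (a, b), close to (0.8, 0.5)
```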
System designers, computer scientists and engineers have continuously invented and employed notations for modeling, specifying, simulating, documenting, communicating, teaching, verifying and controlling the designs of digital systems. Initially these systems were represented via electronic and fabrication details. Following C. E. Shannon's revelation of 1948, logic diagrams and Boolean equations were used to represent digital systems in a fashion that de-emphasized electronic and fabrication detail while revealing logical behavior. A small number of circuits were made available to remove the abstraction of these representations when it was desirable to do so. As system complexity grew, block diagrams, timing charts, sequence charts, and other graphic and symbolic notations were found to be useful in summarizing the gross features of a system and describing how it operated. In addition, it always seemed necessary or appropriate to augment these documents with lengthy verbal descriptions in a natural language. While each notation was, and still is, a perfectly valid means of expressing a design, lack of standardization, conciseness, and formal definitions interfered with communication and the understanding between groups of people using different notations. This problem was recognized early and formal languages began to evolve in the 1950s when I. S. Reed discovered that flip-flop input equations were equivalent to a register transfer equation that could be written in a vector-like notation. Expanding these concepts Reed developed a notation that became known as a Register Transfer Language (RTL).
Computer Aided Design (CAD) technology plays a key role in today's advanced manufacturing environment. To reduce the time to market, achieve zero-defect quality the first time, and use available production and logistics resources effectively, product and design process knowledge covering the whole product life-cycle must be used throughout product design. Once generated, this intensive design knowledge should be made available to later life-cycle activities. Due to the increasing concern about global environmental issues and the rapidly changing economic situation worldwide, design must exhibit high performance not only in quality and productivity, but also in life-cycle issues, including extended producer liability. These goals require designers and engineers to use various kinds of design knowledge intensively during product design and to generate design information for use in later stages of the product life-cycle such as production, distribution, operation, maintenance, reclamation, and recycling. Therefore, future CAD systems must incorporate product and design process knowledge, which is not explicitly dealt with in current systems, in their design tools and design object models.
A recent technological advance is the art of designing circuits to test themselves, referred to as Built-In Self-Test (BIST). This book is written from a designer's perspective and describes the major BIST approaches that have been proposed and implemented, along with their advantages and limitations.
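A staple of BIST circuits is the linear-feedback shift register (LFSR) used as an on-chip test-pattern generator. A minimal Python sketch of a 4-bit Fibonacci LFSR (the width, seed, and tap choice are illustrative assumptions, not taken from the book):

```python
def lfsr_patterns(seed=0b0001, width=4, taps=(3, 2)):
    """Yield the test-pattern sequence of a Fibonacci LFSR.

    taps are 0-based bit positions XORed to form the feedback bit;
    (3, 2) on 4 bits realizes a primitive degree-4 polynomial, so the
    register cycles through all 2**4 - 1 nonzero states.
    """
    state = seed
    while True:
        yield state
        fb = 0
        for t in taps:
            fb ^= (state >> t) & 1
        state = ((state << 1) | fb) & ((1 << width) - 1)

gen = lfsr_patterns()
patterns = [next(gen) for _ in range(15)]
print(patterns)                    # 15 distinct nonzero 4-bit patterns
assert len(set(patterns)) == 15    # maximal-length sequence
```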
Motivation for this Book Ontologies have received increasing attention over the last two decades. Their roots can be traced back to the ancient philosophers, who were interested in a conceptualization of the world. In the more recent past, ontologies and ontological engineering have evolved in computer science, building on various roots such as logics, knowledge representation, information modeling and management, and (knowledge-based) information systems. Most recently, largely driven by the next generation internet, the so-called Semantic Web, ontological software engineering has developed into a scientific field of its own, which puts particular emphasis on the theoretical foundations of representation and reasoning, and on the methods and tools required for building ontology-based software applications in diverse domains. Though this field is largely dominated by computer science, close relationships have been established with its diverse areas of application, where researchers are interested in exploiting the results of ontological software engineering, particularly to build large knowledge-intensive applications at high productivity and low maintenance effort. Consequently, a large number of scientific papers and monographs have been published in the very recent past dealing with the theory and practice of ontological software engineering. So far, the majority of those books are dedicated to the theoretical foundations of ontologies, including philosophical treatises and their relationships to established methods in information systems and ontological software engineering.
This book presents a collection of chapters describing the state of the art on computational modelling and fabrication in tissue engineering. Tissue engineering is a multidisciplinary field involving scientists from different fields. The development of mathematical methods is quite relevant to understanding cell biology and human tissues, as well as to modelling, designing and fabricating optimized and smart scaffolds. The chapter authors are the distinguished keynote speakers at the first ECCOMAS thematic conference on Tissue Engineering, where the emphasis was on mathematical and computational modelling for scaffold design and fabrication. This particular area of tissue engineering, whose goal is to obtain substitutes for hard tissues such as bone and cartilage, is growing in importance.
Current multimedia and telecom applications require complex, heterogeneous multiprocessor system-on-chip (MPSoC) architectures with specific communication infrastructure in order to achieve the required performance. A heterogeneous MPSoC includes different types of processing units (DSP, microcontroller, ASIP) and different communication schemes (fast links, non-standard memory organization and access). Programming an MPSoC requires the generation of efficient software running on the MPSoC from a high-level environment, by exploiting the characteristics of the architecture. This task is known to be tedious and error-prone, because it requires a combination of high-level programming environments with low-level software design. This book gives an overview of concepts related to embedded software design for MPSoC. It details a full software design approach, allowing systematic, high-level mapping of software applications on heterogeneous MPSoC. This approach is based on gradual refinement of hardware/software interfaces and simulation models that allow the software to be validated at different abstraction levels. The book combines Simulink for high-level programming and SystemC for low-level software development. The approach is illustrated with multiple examples of application software and MPSoC architectures that can be used for a deep understanding of software design for MPSoC.
Geometry, of all the branches of mathematics, is the one that is most easily visualized by making something. However, it is all too easy to reduce it to reams of formulas to memorize and proofs to replicate. This book aims to take geometry back to its practical roots with 3D printed models and puzzles as well as demonstrations with household objects like flashlights and paper towel tubes. This is not a traditional geometry textbook, but rather builds up understanding of geometry concepts while also bringing in elements of concepts normally learned much later. Some of the models are counterintuitive, and figuring out how and why they work will both entertain and give insights. Two final chapters suggesting open-ended projects in astronomy and physics, and art and architecture, allow for deeper understanding and integration of the learning in the rest of the book.
Modern electronics is driven by the explosive growth of digital communications and multi-media technology. A basic challenge is to design first-time-right complex digital systems that meet stringent constraints on performance and power dissipation. In order to combine this growing system complexity with an increasingly short time-to-market, new system design technologies are emerging based on the paradigm of embedded programmable processors. This concept introduces modularity, flexibility and re-use in the electronic system design process. However, its success will critically depend on the availability of efficient and reliable CAD tools to design, programme and verify the functionality of embedded processors. Recently, new research efforts emerged on the edge between software compilation and hardware synthesis, to develop high-quality code generation tools for embedded processors. Code Generation for Embedded Systems provides a survey of these new developments. Although not limited to these targets, the main emphasis is on code generation for modern DSP processors. Important themes covered by the book include: the scope of general-purpose versus application-specific processors, machine code quality for embedded applications, retargetability of the code generation process, machine description formalisms, and code generation methodologies. Code Generation for Embedded Systems is the essential introduction to this fast developing field of research for students, researchers, and practitioners alike.
VHDL Answers to Frequently Asked Questions is a follow-up to the author's book VHDL Coding Styles and Methodologies (ISBN 0-7923-9598-0). On completion of his first book, the author continued teaching VHDL and actively participated in the comp.lang.vhdl newsgroup. During his experiences, he was enlightened by the many interesting issues and questions relating to VHDL and synthesis. These pertained to: misinterpretations in the use of the language; methods for writing error-free, and simulation-efficient, code for testbench designs and for synthesis; and general principles and guidelines for design verification. As a result of this wealth of public knowledge contributed by a large VHDL community, the author decided to act as a facilitator of this information by collecting different classes of VHDL issues, and by elaborating on these topics through complex simulatable examples. This book is intended for those who are seeking an enhanced proficiency in VHDL. This book differs from other VHDL books in many respects. This book:
* emphasizes real VHDL, rather than philosophical or introductory types of information
* emphasizes application of VHDL for synthesis
* uses complete examples to demonstrate problems and solutions
* provides a disk that includes all the book examples and other useful reference VHDL material
* uses easy-to-remember symbology notation to emphasize language rules, good and poor methodology and coding styles
* identifies obsolete VHDL constructs that must be avoided
* identifies synthesizable/non-synthesizable structures
* uses a question-and-answer format to clarify and emphasize the concerns of VHDL users.
Boundary-Scan, formally known as IEEE/ANSI Standard 1149.1-1990, is a collection of design rules applied principally at the integrated circuit (IC) level that allow software to alleviate the growing cost of designing and producing digital systems. The primary benefit of the standard is its ability to transform extremely difficult printed circuit board testing problems, which could only be attacked with ad-hoc testing methods, into well-structured problems that software can easily and swiftly deal with. The Boundary-Scan Handbook is for professionals in the electronics industry who are concerned with the practical problems of competing successfully in the face of rapid-fire technological change. Since many of these changes affect our ability to do testing, and hence cost-effective production, the advent of the 1149.1 standard is rightly looked upon as a major breakthrough. However, there is a great deal of misunderstanding about what to expect of 1149.1 and how to use it. Because of this, The Boundary-Scan Handbook is not a rehash of the 1149.1 standard, nor does it intend to be a tutorial on the basics of its workings. The standard itself should always be consulted for this, being careful to follow supplements issued by the IEEE that clarify and correct it. Rather, The Boundary-Scan Handbook motivates proper expectations and explains how to use the standard successfully.
This book introduces a design methodology that can help to bridge the productivity gap. Two different types of designs, depending on the design challenge, have been identified. To validate the presented methodologies, the authors have selected and designed accordingly three different industrial-strength applications.
Simulation Methods for Reliability and Availability of Complex Systems discusses the use of computer simulation-based techniques and algorithms to determine reliability and availability (R&A) levels in complex systems. The book: shares theoretical or applied models and decision support systems that make use of simulation to estimate and to improve system R&A levels; forecasts emerging technologies and trends in the use of computer simulation for R&A; and proposes hybrid approaches to the development of efficient methodologies designed to solve R&A-related problems in real-life systems. Dealing with practical issues, Simulation Methods for Reliability and Availability of Complex Systems is designed to support managers and system engineers in the improvement of R&A, as well as providing a thorough exploration of the techniques and algorithms available for researchers, and for advanced undergraduate and postgraduate students.
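As a taste of the simulation techniques in question, the sketch below estimates the steady-state availability of a single repairable component by simulating alternating up/down cycles in Python; the exponential failure and repair distributions and their means are invented for illustration, and the analytic value MTTF/(MTTF + MTTR) serves as a sanity check:

```python
import random

random.seed(1)
mttf, mttr = 100.0, 5.0   # hypothetical mean time to failure / to repair

up_time = down_time = 0.0
for _ in range(100_000):                       # alternating renewal cycles
    up_time += random.expovariate(1 / mttf)    # time until next failure
    down_time += random.expovariate(1 / mttr)  # repair duration

est = up_time / (up_time + down_time)
print(est, mttf / (mttf + mttr))  # simulated vs analytic availability ~0.952
```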
Systematic Design of Sigma-Delta Analog-to-Digital Converters describes the issues related to sigma-delta analog-to-digital converter (ADC) design in a systematic manner: from the top level of abstraction represented by the filters defining signal and noise transfer functions (STF, NTF), passing through the architecture level where topology-related performance is calculated and simulated, and finally down to parameters of circuit elements like resistors, capacitors, and amplifier transconductances used in individual integrators. The systematic approach allows the evaluation of different loop filters (order, aggressiveness, discrete-time or continuous-time implementation) with quantizers varying in resolution. Topologies explored range from simple single loops to multiple cascaded loops with complex structures including additional feedback and feedforward paths. For differential circuits, with switched-capacitor integrators for discrete-time (DT) loop filters and active-RC integrators for continuous-time (CT) ones, the passive integrator components are calculated and the power consumption is estimated, based on top-level requirements like harmonic distortion and noise budget.
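For a flavor of the top abstraction level described above: a first-order discrete-time sigma-delta loop shapes quantization noise with NTF(z) = 1 - z^-1 while passing the signal essentially unchanged. A hedged Python sketch of such a modulator (the input signal and decimation choices are illustrative assumptions, not taken from the book):

```python
import math

def first_order_dsm(u):
    """First-order DT sigma-delta modulator with a 1-bit quantizer.

    Integrator: s[n] = s[n-1] + u[n] - v[n-1]; quantizer: v[n] = sign(s[n]).
    In the additive-noise model this realizes NTF(z) = 1 - z^-1.
    """
    s, v = 0.0, 1.0
    out = []
    for x in u:
        s += x - v                 # integrate the error u - v
        v = 1.0 if s >= 0 else -1.0
        out.append(v)
    return out

# Slow sine, heavily oversampled; the bitstream's local mean tracks the input.
n = 4096
u = [0.5 * math.sin(2 * math.pi * 4 * k / n) for k in range(n)]
v = first_order_dsm(u)
# Crude decimation: average blocks of 64 one-bit samples.
dec = [sum(v[i:i + 64]) / 64 for i in range(0, n, 64)]
print(min(dec), max(dec))  # roughly tracks the +/-0.5 sine amplitude
```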
This book presents a powerful new language and methodology for programming complex reactive systems in a scenario-based manner. The language is live sequence charts (LSCs), a multimodal extension of sequence charts and UML's sequence diagrams, used in the past mainly for requirements. The methodology is play-in/play-out, an unusually convenient means for specifying inter-object scenario-based behavior directly from a GUI or an object model diagram, with the surprising ability to execute that behavior, or those requirements, directly. The language and methodology are supported by a fully implemented tool, the Play-Engine, which is attached to the book in CD form. Comments from experts in the field: The design of reactive systems is one of the most challenging problems in computer science. This book starts with a critical insight to explain the difficulty of this problem: there is a fundamental gap between the scenario-based way in which people think about such systems and the state-based way in which these systems are implemented. The book then offers a radical proposal to bridge this gap by means of playing scenarios. Systems can be specified by playing in scenarios and implemented by means of a Play-Engine that plays out scenarios. This idea is carried out and developed, lucidly, formally and playfully, to its fullest. The result is a compelling proposal, accompanied by a prototype software engine, for reactive systems design, which is bound to cause a splash in the software-engineering community. Moshe Y. Vardi, Rice University, Houston, Texas, USA Scenarios are a primary exchange tool in explaining system behavior to others, but their limited expressive power never made them able to fully describe systems, thus limiting their use. The language of Live Sequence Charts (LSCs) presented in this beautifully written book achieves this goal, and the attached Play-Engine software makes these LSCs really come alive. This is undoubtedly a key breakthrough that will start long-awaited and exciting new directions in systems specification, synthesis, and analysis. Gerard Berry, Esterel Technologies and INRIA, Sophia-Antipolis, France The approach of David Harel and Rami Marelly is a fascinating way of combining prototyping techniques with techniques for identifying behavior and user interfaces. Manfred Broy, Technical University of Munich, Germany
Since their introduction in 1984, Field-Programmable Gate Arrays (FPGAs) have become one of the most popular implementation media for digital circuits and have grown into a $2 billion per year industry. As process geometries have shrunk into the deep-submicron region, the logic capacity of FPGAs has greatly increased, making FPGAs a viable implementation alternative for larger and larger designs. To make the best use of these new deep-submicron processes, one must re-design one's FPGAs and Computer-Aided Design (CAD) tools. Architecture and CAD for Deep-Submicron FPGAs addresses several key issues in the design of high-performance FPGA architectures and CAD tools, with particular emphasis on issues that are important for FPGAs implemented in deep-submicron processes. Three factors combine to determine the performance of an FPGA: the quality of the CAD tools used to map circuits into the FPGA, the quality of the FPGA architecture, and the electrical (i.e. transistor-level) design of the FPGA. Architecture and CAD for Deep-Submicron FPGAs examines all three of these issues in concert. In order to investigate the quality of different FPGA architectures, one needs CAD tools capable of automatically implementing circuits in each FPGA architecture of interest. Once a circuit has been implemented in an FPGA architecture, one next needs accurate area and delay models to evaluate the quality (speed achieved, area required) of the circuit implementation in the FPGA architecture under test. This book therefore has three major foci: the development of a high-quality and highly flexible CAD infrastructure, the creation of accurate area and delay models for FPGAs, and the study of several important FPGA architectural issues. Architecture and CAD for Deep-Submicron FPGAs is an essential reference for researchers, professionals and students interested in FPGAs.
Structural optimization is currently attracting considerable attention. Interest in research in optimal design has grown in connection with the rapid development of aeronautical and space technologies, shipbuilding, and design of precision machinery. A special field in these investigations is devoted to structural optimization with incomplete information (incomplete data). The importance of these investigations is explained as follows. The conventional theory of optimal structural design assumes precise knowledge of material parameters, including damage characteristics and loadings applied to the structure. In practice such precise knowledge is seldom available. Thus, it is important to be able to predict the sensitivity of a designed structure to random fluctuations in the environment and to variations in the material properties. To design reliable structures it is necessary to apply the so-called guaranteed approach, based on a "worst case scenario", or a more optimistic probabilistic approach, if we have additional statistical data. Problems of optimal design with incomplete information also have considerable theoretical importance. The introduction and investigation of new types of mathematical problems are interesting in themselves. Note that some game-theoretical optimization problems arise for which there are no systematic techniques of investigation. This monograph is devoted to the exposition of new ways of formulating and solving problems of structural optimization with incomplete information. We recall some research results concerning the optimum shape and structural properties of bodies subjected to external loadings.
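The guaranteed (worst-case) approach has a simple min-max structure: pick the design that satisfies the constraints under the least favorable admissible data. A toy Python sketch (the one-parameter stress model sigma = k*q/t^2 and all numbers are invented for illustration, not taken from the monograph):

```python
# Guaranteed (worst-case) sizing: find the smallest thickness t such that
# the stress constraint holds for EVERY admissible load q in [q_lo, q_hi].

k = 2.0
q_lo, q_hi = 1.0, 3.0          # incomplete information: only load bounds known
sigma_max = 10.0               # allowable stress

def worst_case_stress(t):
    # Stress grows with q, so the worst admissible case is q = q_hi.
    return max(k * q / t**2 for q in (q_lo, q_hi))

# Grid search over candidate designs; material use grows with t.
candidates = [0.05 * i for i in range(1, 200)]
feasible = [t for t in candidates if worst_case_stress(t) <= sigma_max]
t_star = min(feasible)
print(t_star, (k * q_hi / sigma_max) ** 0.5)  # grid optimum vs closed form
```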
Modeling in Analog Design highlights some of the most pressing issues in the use of modeling techniques for the design of analogue circuits. Using models for circuit design gives designers the power to express directly the behaviour of parts of a circuit in addition to using other pre-defined components. There are numerous advantages to this new category of analog behavioral language. In the short term, by favouring top-down design and raising the level of description abstraction, this approach provides greater freedom of implementation and a higher degree of technology independence. In the longer term, analog synthesis and formal optimisation are targeted. Modeling in Analog Design introduces the reader to two main language standards: VHDL-A and MHDL. It goes on to provide in-depth examples of the use of these languages to model analog devices. The final part is devoted to the very important topic of modeling the thermal and electrothermal aspects of devices. This book is essential reading for analog designers using behavioral languages, and for developers of analog CAD tool environments who have to provide the tools used by the designers.
Leaf Cell and Hierarchical Compaction Techniques presents novel algorithms developed for the compaction of large layouts. These algorithms have been implemented as part of a system that has been used on many industrial designs. The focus of Leaf Cell and Hierarchical Compaction Techniques is three-fold. First, new ideas for compaction of leaf cells are presented. These cells can range from small transistor-level layouts to very large layouts generated by automatic Place and Route tools. Second, new approaches for hierarchical pitchmatching compaction are described and the concept of a Minimum Design is introduced. The system for hierarchical compaction is built on top of the leaf cell compaction engine and uses the algorithms implemented for leaf cell compaction in a modular fashion. Third, a new representation for designs called Virtual Interface, which allows for efficient topological specification and representation of hierarchical layouts, is outlined. The Virtual Interface representation binds all of the algorithms and their implementations for leaf and hierarchical compaction into an intuitive and easy-to-use system. From the Foreword: '...In this book, the authors provide a comprehensive approach to compaction based on carefully conceived abstractions. They describe the design of algorithms that provide true hierarchical compaction based on linear programming, but cut down the complexity of the computations through introduction of innovative representations that capture the provably minimum amount of required information needed for correct compaction. In most compaction algorithms, the complexity goes up with the number of design objects, but in this approach, complexity is due to the irregularity of the design, and hence is often tractable for most designs which incorporate substantial regularity. Here the reader will find an elegant treatment of the many challenges of compaction, and a clear conceptual focus that provides a unified approach to all aspects of the compaction task...' Jonathan Allen, Massachusetts Institute of Technology
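At its core, one-dimensional compaction of the kind treated here is commonly cast as a longest-path computation over a constraint graph, the combinatorial counterpart of the linear programs mentioned in the Foreword. A minimal Python sketch on an invented three-cell example:

```python
# 1-D compaction as longest path in a constraint graph: each edge
# (a, b, d) encodes x[b] >= x[a] + d (minimum spacing or width d).
# Cells and spacings below are invented for illustration.

from collections import defaultdict

def compact(nodes, edges):
    """Return minimal legal x-coordinates; edges must form a DAG."""
    adj = defaultdict(list)
    indeg = {n: 0 for n in nodes}
    for a, b, d in edges:
        adj[a].append((b, d))
        indeg[b] += 1
    x = {n: 0 for n in nodes}
    queue = [n for n in nodes if indeg[n] == 0]   # process in topological order
    while queue:
        a = queue.pop()
        for b, d in adj[a]:
            x[b] = max(x[b], x[a] + d)            # longest-path relaxation
            indeg[b] -= 1
            if indeg[b] == 0:
                queue.append(b)
    return x

edges = [("L", "A", 0), ("A", "B", 4), ("A", "C", 6), ("B", "C", 3)]
print(compact(["L", "A", "B", "C"], edges))
# {'L': 0, 'A': 0, 'B': 4, 'C': 7} -- C is pushed right by the tighter B->C rule
```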
Advanced ASIC Chip Synthesis: Using Synopsys® Design Compiler®, Physical Compiler® and PrimeTime®, Second Edition describes the advanced concepts and techniques used towards ASIC chip synthesis, physical synthesis, formal verification and static timing analysis, using the Synopsys suite of tools. In addition, the entire ASIC design flow methodology targeted for VDSM (Very-Deep-Sub-Micron) technologies is covered in detail. The emphasis of this book is on real-time application of Synopsys tools, used to combat various problems seen at VDSM geometries. Readers will be exposed to an effective design methodology for handling complex, sub-micron ASIC designs. Significance is placed on HDL coding styles, synthesis and optimization, dynamic simulation, formal verification, DFT scan insertion, links to layout, physical synthesis, and static timing analysis. At each step, problems related to each phase of the design flow are identified, with solutions and workarounds described in detail. In addition, crucial issues related to layout, which include clock tree synthesis and back-end integration (links to layout), are also discussed at length. Furthermore, the book contains in-depth discussions on the basics of Synopsys technology libraries and HDL coding styles, targeted towards optimal synthesis solutions. Target audiences for this book are practicing ASIC design engineers and masters-level students undertaking advanced VLSI courses on ASIC chip design and DFT techniques.
You may like...
Mem-elements for Neuromorphic Circuits… by Christos Volos, Viet-Thanh Pham (Paperback) R3,613 (Discovery Miles 36 130)
SolidWorks Electrical 2022 Black Book… by Gaurav Verma, Matt Weber (Hardcover) R1,347 (Discovery Miles 13 470)
SolidWorks CAM 2022 Black Book (Colored) by Gaurav Verma, Matt Weber (Hardcover) R1,477 (Discovery Miles 14 770)
Digital Control Engineering - Analysis… by M. Sami Fadali, Antonio Visioli (Paperback) R2,709 (Discovery Miles 27 090)