Information systems are the foundation of Building Information Modelling (BIM). Linked information and consistently linked model data are the basis of collaborative construction: they enable transparent controlling and reliable risk management. Multimodels are linked information. This book explains the foundations and methods of BIM and multimodels, and shows how process-oriented management with multimodels brings a new quality to the planning and control of construction processes. The end-to-end BIM workflow with linked information makes it possible to carry out construction-sequence simulations in very little time. Alongside the virtual building, a virtual construction site also becomes virtual reality, offering important new insights for construction management. Construction management information suddenly becomes transparent, tangible and comprehensible. Volume 1 concentrates on the foundations of the models and their extension through link models; on methods for BIM and multimodel data, such as filtering and visualization; and on processes, their rapid configuration, process-based planning and management, and information logistics, which gains new approaches and qualities precisely through multimodels. Volume 2 presents illustrative applications in construction-site planning, construction-sequence simulation, and construction project and risk management.
This unique book deals with the migration of existing hard IP from one technology to another, using repeatable procedures. It will allow CAD practitioners to quickly develop methodologies that capitalize on the large volumes of legacy data available within a company today.
Verification technology, by contrast, has seen only incremental improvements during the same period. What is clearly needed in verification techniques and technology is the equivalent of a synthesis productivity breakthrough. In the second edition of Writing Testbenches, Bergeron raises the verification level of abstraction by introducing coverage-driven, constrained-random, transaction-level, self-checking testbenches, all made possible through the introduction of hardware verification languages (HVLs) such as e from Verisity and OpenVera from Synopsys. The state-of-the-art methodologies described in Writing Testbenches will contribute greatly to the much-needed equivalent of a synthesis breakthrough in verification productivity. I not only highly recommend this book; I also think it should be required reading by anyone involved in the design and verification of today's ASICs, SoCs and systems. (Harry Foster, Chief Architect, Verplex Systems, Inc.) From the preface: if you survey hardware design groups, you will learn that between 60% and 80% of their effort is now dedicated to verification.
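As a rough sketch of what a coverage-driven, constrained-random, self-checking testbench does, the following Python fragment is illustrative only: the book works in HVLs such as e and OpenVera, and the 8-bit adder DUT, the biased stimulus constraints, and the single coverage point here are all assumptions made for the example.

```python
# Minimal sketch of the testbench style described above: constrained-random
# stimulus, a golden reference model, automatic self-checking, and sampled
# functional coverage.  The DUT (an 8-bit wrapping adder) is hypothetical.
import random

def dut_add8(a, b):
    """Stand-in for the device under test: an 8-bit wrapping adder."""
    return (a + b) & 0xFF

def reference_add8(a, b):
    """Golden reference model used for self-checking."""
    return (a + b) % 256

coverage = set()   # functional coverage bins actually exercised
random.seed(0)

for _ in range(1000):
    # Constrained-random stimulus: bias toward boundary values.
    a = random.choice([0, 1, 0x7F, 0x80, 0xFF, random.randrange(256)])
    b = random.choice([0, 1, 0x7F, 0x80, 0xFF, random.randrange(256)])

    got, expected = dut_add8(a, b), reference_add8(a, b)
    assert got == expected, f"mismatch: {a}+{b} -> {got}, expected {expected}"

    coverage.add(("overflow", a + b > 0xFF))   # sample a coverage point

print(f"coverage bins hit: {sorted(coverage)}")
```

The essential structure, randomized but constrained stimulus, a reference model, an automatic comparison, and sampled coverage deciding when enough tests have run, is the same regardless of the verification language.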
Advanced ASIC Chip Synthesis: Using Synopsys (R) Design Compiler (R) and PrimeTime (R) describes the advanced concepts and techniques used for ASIC chip synthesis, formal verification and static timing analysis, using the Synopsys suite of tools. In addition, the entire ASIC design flow methodology targeted for VDSM (Very-Deep-Sub-Micron) technologies is covered in detail. The emphasis of this book is on the practical application of Synopsys tools to combat various problems seen at VDSM geometries. Readers will be exposed to an effective design methodology for handling complex, sub-micron ASIC designs. Significance is placed on HDL coding styles, synthesis and optimization, dynamic simulation, formal verification, DFT scan insertion, links to layout, and static timing analysis. At each step, problems related to each phase of the design flow are identified, with solutions and work-arounds described in detail. In addition, crucial issues related to layout, including clock tree synthesis and back-end integration (links to layout), are discussed at length. Furthermore, the book contains in-depth discussions of the basics of Synopsys technology libraries and HDL coding styles, targeted towards optimal synthesis solutions. Advanced ASIC Chip Synthesis is intended for anyone involved in the ASIC design methodology, from RTL synthesis to final tape-out. Target audiences are practicing ASIC design engineers and graduate students undertaking advanced courses in ASIC chip design and DFT techniques. From the Foreword: 'This book, written by Himanshu Bhatnagar, provides a comprehensive overview of the ASIC design flow targeted for VDSM technologies using the Synopsys suite of tools. It emphasizes the practical issues faced by the semiconductor design engineer in terms of synthesis and the integration of front-end and back-end tools. Traditional design methodologies are challenged and unique solutions are offered to help define the next generation of ASIC design flows. The author provides numerous practical examples derived from real-world situations that will prove valuable to practicing ASIC design engineers as well as to students of advanced VLSI courses in ASIC design.' Dr Dwight W. Decker, Chairman and CEO, Conexant Systems, Inc. (formerly Rockwell Semiconductor Systems), Newport Beach, CA, USA.
This book constitutes the refereed proceedings of the 8th International Workshop on Self-Organizing Maps, WSOM 2011, held in Espoo, Finland, in June 2011. The 36 revised full papers presented were carefully reviewed and selected from numerous submissions. The papers are organized in topical sections on plenaries; financial and societal applications; theory and methodology; applications of data mining and analysis; language processing and document analysis; and visualization and image processing.
Principles of Verilog PLI is a 'how-to' text on the Verilog Programming Language Interface. The primary focus of the book is on how to use the PLI for problem solving. Both PLI 1.0 and PLI 2.0 are covered. Particular emphasis has been put on adopting a generic, step-by-step approach to creating fully functional PLI code. Numerous examples were carefully selected so that a variety of problems can be solved through their use. A separate chapter on the Bus Functional Model (BFM), one of the most widely used commercial applications of the PLI, is included. Principles of Verilog PLI is written for the professional engineer who uses Verilog for ASIC design and verification, and will also be of interest to students who are learning Verilog.
What are the design or selection criteria for robots that will be capable of carrying out particular functions? How can robots and machines be installed in work locations to obtain maximum effectiveness? How can their programming be made easier? How can a work location be arranged so as to successfully accommodate automatic machines? Traditionally, these questions have only been answered as a result of long and exhaustive study, involving complex calculations and the use of many sketches and plans. Computers and interactive computer graphics make it possible to automate this type of analysis, thus making the task of robot designers and users easier. This volume is concerned with mathematical modelling and graphical representation of robot performance (e.g. their fields of action or their performance index) as a function of their structure, mechanical parts and memory systems. Used in conjunction with operating specifications, such as movement programs and computer-aided design (CAD) databases that describe parts or tools, these performance models can allow the potential of different robots, or different models of the same type of robot, to be compared; workstations to be organized efficiently; responses to be optimized; and errors to be minimized; and they can make off-line programming by computer a real possibility. In the future, it is certain that the appearance of robots designed to monitor their own performance will allow applications and safety conditions to be considerably improved.
Oversampling techniques based on sigma-delta modulation are widely used to implement the analog/digital interfaces in CMOS VLSI technologies. This approach is relatively insensitive to imperfections in the manufacturing process and offers numerous advantages for the realization of high-resolution analog-to-digital (A/D) converters in the low-voltage environment that is increasingly demanded by advanced VLSI technologies and by portable electronic systems. In The Design of Low-Voltage, Low-Power Sigma-Delta Modulators, an analysis of power dissipation in sigma-delta modulators is presented, and a low-voltage implementation of a digital-audio performance A/D converter based on the results of this analysis is described. Although significant power savings can typically be achieved in digital circuits by reducing the power supply voltage, the power dissipation in analog circuits actually tends to increase with decreasing supply voltages. Oversampling architectures are a potentially power-efficient means of implementing high-resolution A/D converters because they reduce the number and complexity of the analog circuits in comparison with Nyquist-rate converters. In fact, it is shown that the power dissipation of a sigma-delta modulator can approach that of a single integrator with the resolution and bandwidth required for a given application. In this research the influence of various parameters on the power dissipation of the modulator has been evaluated and strategies for the design of a power-efficient implementation have been identified. The Design of Low-Voltage, Low-Power Sigma-Delta Modulators begins with an overview of A/D conversion, emphasizing sigma-delta modulators. It includes a detailed analysis of noise in sigma-delta modulators, analyzes power dissipation in integrator circuits, and addresses practical issues in the circuit design and testing of a high-resolution modulator. The Design of Low-Voltage, Low-Power Sigma-Delta Modulators will be of interest to practicing engineers and researchers in the areas of mixed-signal and analog integrated circuit design.
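To make the oversampling idea concrete, here is a minimal behavioral model of a first-order sigma-delta modulator. This is a hedged Python sketch rather than any circuit from the book; the oversampling ratio, input tone and block-averaging decimator are assumptions made for illustration.

```python
# First-order sigma-delta modulator, behavioral sketch: one integrator and
# a 1-bit quantizer in a feedback loop, followed by a crude averaging
# decimator.  All parameter values are assumed.
import math

OSR, N = 64, 4096            # assumed oversampling ratio and sample count
integrator, y = 0.0, 0.0     # loop state and 1-bit feedback value
bits = []

for n in range(N):
    x = 0.5 * math.sin(2 * math.pi * n / (16 * OSR))  # slow input tone
    integrator += x - y                    # integrate the quantization error
    y = 1.0 if integrator >= 0 else -1.0   # 1-bit quantizer / DAC feedback
    bits.append(y)

# Decimation by simple block averaging: every OSR one-bit samples yield one
# high-resolution output sample that tracks the slow input.
decimated = [sum(bits[i:i + OSR]) / OSR for i in range(0, N, OSR)]
print(["%+.2f" % v for v in decimated[:8]])
```

The noise-shaping property, quantization error pushed to high frequencies where the decimation filter removes it, is what lets such a coarse quantizer achieve high resolution.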
Since the establishment of the CAAD Futures Foundation in 1985, CAAD experts from all over the world have met every two years to present and document the state of the art of research in Computer Aided Architectural Design. Together, the series provides a good record of the evolving state of research in this area over the last fourteen years. This year's proceedings volume is the eighth in the series. The conference, held at the Georgia Institute of Technology in Atlanta, Georgia, includes twenty-five papers presenting new and exciting results and capabilities in areas such as computer graphics, building modeling, digital sketching and drawing systems, and Web-based collaboration and information exchange. An overall reading shows that computing in architecture is still a young field, with many exciting results emerging both from a greater understanding of the human processes and information processing needed to support design and from the continuously expanding capabilities of digital technology.
After long years of work that saw little industrial application, high-level synthesis is finally on the verge of becoming a practical tool. The state of high-level synthesis today is similar to the state of logic synthesis ten years ago. At present, logic-synthesis tools are widely used in digital system design. In the future, high-level synthesis will play a key role in mastering design complexity and in truly exploiting the potential of ASICs and PLDs, which demand extremely short design cycles. Work on high-level synthesis began over twenty years ago. Since then, substantial progress has been made in understanding the basic problems involved, although no single universally accepted theoretical framework has yet emerged. There is a growing number of publications devoted to high-level synthesis, specialized workshops are held regularly, and tutorials on the topic are commonly held at major conferences. This book gives an extensive survey of the research and development in high-level synthesis. In Part I, a short tutorial explains the basic concepts used in high-level synthesis and follows an example design throughout the synthesis process. In Part II, current high-level synthesis systems are surveyed.
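As a taste of the basic concepts such a tutorial covers, scheduling is one core high-level synthesis step. The toy ASAP scheduler below is an assumed illustration: the dataflow graph and operation names are invented, and real schedulers additionally handle resource constraints and operation latencies.

```python
# ASAP (as-soon-as-possible) scheduling of a tiny dataflow graph: each
# operation is assigned the earliest control step its data dependences
# allow.  Graph and operation names are hypothetical.
deps = {
    "mul1": [],                # inputs available at the start
    "mul2": [],
    "add1": ["mul1", "mul2"],  # needs both products
    "sub1": ["add1"],
}

step = {}
def asap(op):
    """Earliest control step for op: one past its latest predecessor."""
    if op not in step:
        step[op] = 1 + max((asap(p) for p in deps[op]), default=0)
    return step[op]

for op in deps:
    asap(op)
print(step)   # {'mul1': 1, 'mul2': 1, 'add1': 2, 'sub1': 3}
```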
This book presents an updated selection of the most representative contributions to the 2nd and 3rd IEEE Workshops on Signal Propagation on Interconnects (SPI), held in Travemünde (on the Baltic Sea), Germany, May 13-15, 1998, and in Titisee-Neustadt (Black Forest), Germany, May 19-21, 1999. This publication addresses the needs of developers and researchers in the field of VLSI chip and package design. It offers a survey of current problems regarding the influence of interconnect effects on the electrical performance of electronic circuits and suggests innovative solutions. In this sense the present book represents a continuation of, and a supplement to, the first book, "Signal Propagation on Interconnects," Kluwer Academic Publishers, 1998. The papers in this book cover a wide range of research directions: besides describing general trends, they deal with the solution of signal integrity problems, the modeling of interconnects, parameter extraction using calculations and measurements, and, last but not least, current problems in the field of optical interconnects.
Logic Synthesis and Optimization presents up-to-date research information in a pedagogical form. The authors are recognized as the leading experts on the subject. The focus of the book is on logic minimization and includes such topics as two-level minimization, multi-level minimization, application of binary decision diagrams, delay optimization, asynchronous circuits, spectral method for logic design, field programmable gate array (FPGA) design, EXOR logic synthesis and technology mapping. Examples and illustrations are included so that each contribution can be read independently. Logic Synthesis and Optimization is an indispensable reference for academic researchers as well as professional CAD engineers.
Switching Theory for Logic Synthesis covers the basic topics of switching theory and logic synthesis in fourteen chapters. Chapters 1 through 5 provide the mathematical foundation. Chapters 6 through 8 include an introduction to sequential circuits, optimization of sequential machines and asynchronous sequential circuits. Chapters 9 through 14 are the main feature of the book. These chapters introduce and explain the various topics that make up the subject of logic synthesis: multi-valued-input, two-valued-output functions; logic design for PLDs/FPGAs; EXOR-based design; and the complexity theory of logic networks. An appendix providing a history of switching theory is included. The reference list consists of over four hundred entries. Switching Theory for Logic Synthesis is based on the author's lectures at the Kyushu Institute of Technology as well as seminars for CAD engineers from various Japanese technology companies. It will be of interest to CAD professionals and students at the advanced level, and is also useful as a textbook, as each chapter contains examples, illustrations, and exercises.
The intense drive for signal integrity has been at the forefront of rapid and new developments in CAD algorithms. With increasing demands for higher signal speeds coupled with decreasing feature sizes, interconnect effects such as signal delay, distortion and crosstalk have become the dominant factors limiting the overall performance of VLSI systems. Although SPICE is used on a daily basis by many engineers for analog simulation and general circuit analysis, current versions of SPICE do not adequately handle the new emerging challenges of interconnect effects. Moment-matching techniques, such as asymptotic waveform evaluation, have recently proven useful in the analysis of large interconnect structures containing elements such as lossy coupled transmission lines with linear or nonlinear terminations. At a CPU cost of little more than one DC analysis, these techniques are two to three orders of magnitude faster than full simulation techniques such as FFT-based convolution. Asymptotic Waveform Evaluation presents an overview of the diverse algorithms and applications of moment-matching techniques. The material is presented systematically and is supported by many examples. Issues such as sensitivity analysis and three-dimensional analysis are also covered. Asymptotic Waveform Evaluation will be of interest to engineers, students and researchers involved in the development and study of circuit simulation and interconnect analysis. It will also interest design engineers dealing with high-speed issues, and graduate students active in the development of CAD tools for electronic systems.
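The moment-matching idea can be shown on the smallest possible case. The sketch below is an illustrative assumption, not the book's algorithm: a single-pole RC low-pass H(s) = 1/(1 + sRC) has exact moments m_k = (-RC)^k, the negated first moment is the familiar Elmore delay, and a one-pole Padé fit to m0 and m1 recovers the pole exactly. For real interconnect nets, the moments are obtained from the circuit equations by repeated DC-like solves.

```python
# Moment matching on a one-pole RC low-pass (assumed element values):
# recover the pole of H(s) = 1/(1 + s*R*C) from its first two moments.
R, C = 1e3, 1e-12                       # assumed 1 kOhm and 1 pF
m = [(-R * C) ** k for k in range(4)]   # exact moments: m_k = (-RC)^k

# One-pole approximation H(s) ~ k0 / (1 - s/p), matched to m0 and m1.
k0 = m[0]
p = m[0] / m[1]                          # pole estimate: -1/(RC)
print(f"Elmore delay  = {-m[1]:.3e} s")  # -m1 equals RC
print(f"pole estimate = {p:.3e} rad/s (exact {-1.0 / (R * C):.3e})")
```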
Codesign for Real-Time Video Applications describes a modern design approach for embedded systems. It combines the design of hardware, software, and algorithms. Traditionally, these design domains are treated separately to reduce the design complexity. Advanced design tools support a codesign of the different domains, which opens an opportunity for exploiting synergetic effects. The design approach is illustrated by the design of a video compression system integrated into the video card of a PC. A VLIW processor architecture is used as the basis of the compression system, and popular video compression algorithms (MPEG, JPEG, H.261) are analyzed. A complete top-down design flow is presented and the design tools for each of the design steps are explained. The tools are integrated into an HTML-based design framework, so the resulting design data can be integrated directly into the WWW. This is a crucial aspect for supporting distributed design groups: the design data can be documented directly, and cross-referencing in an almost arbitrary way is supported. This provides a platform for information sharing among the different design domains. Codesign for Real-Time Video Applications focuses on the multi-disciplinary aspects of embedded system design. It combines design automation and advanced processor design with an important application domain. A quantitative design approach is emphasized which focuses the design time on the most crucial components, thus enabling a fast and cost-efficient design methodology. This book will be of interest to researchers, designers and managers working in embedded system design.
Synthesis of Finite State Machines: Logic Optimization is the second in a set of two monographs devoted to the synthesis of Finite State Machines (FSMs). The first volume, Synthesis of Finite State Machines: Functional Optimization, addresses functional optimization, whereas this one addresses logic optimization. The result of functional optimization is a symbolic description of an FSM which represents a sequential function chosen from a collection of permissible candidates. Logic optimization is the body of techniques for converting a symbolic description of an FSM into a hardware implementation. The mapping of a given symbolic representation into a two-valued logic implementation is called state encoding (or state assignment), and it has a heavy impact on the area, speed, testability and power consumption of the realized circuit. The first part of the book introduces the relevant background, presents results previously scattered in the literature on the computational complexity of encoding problems, and surveys in depth old and new approaches to encoding in logic synthesis. The second part of the book presents two main results about symbolic minimization: a new procedure to find minimal two-level symbolic covers under face, dominance and disjunctive constraints, and a unified frame to check the encodability of encoding constraints and find codes of minimum length that satisfy them. The third part of the book introduces generalized prime implicants (GPIs), which are the counterpart, in the symbolic minimization of two-level logic, of prime implicants in two-valued two-level minimization. GPIs enable the design of an exact procedure for two-level symbolic minimization, based on a covering step which is complicated by the need to guarantee encodability of the final cover. A new efficient algorithm to verify the encodability of a selected cover is presented. If a cover is not encodable, it is shown how to augment it minimally until an encodable superset of GPIs is determined. To handle encodability the authors have extended the frame to satisfy encoding constraints presented in the second part. The covering problems generated in the minimization of GPIs tend to be very large. Recently, large covering problems have been attacked successfully by representing the covering table with binary decision diagrams (BDDs). In the fourth part of the book the authors introduce such techniques and extend them to the case of the implicit minimization of GPIs, where the encodability and augmentation steps are also performed implicitly. Synthesis of Finite State Machines: Logic Optimization will be of interest to researchers and professional engineers who work in the area of computer-aided design of integrated circuits.
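One way to see why state encoding matters, in a deliberately tiny assumed example rather than anything from the book: for a four-state cyclic FSM, a Gray encoding flips one state bit per transition while a straight binary encoding flips up to two, which directly affects the switching power of the realized circuit (different encodings analogously change the next-state logic's area and speed).

```python
# Toy illustration of state-encoding impact (assumed 4-state cyclic FSM):
# count register bit flips over one full cycle for two encodings.
binary = ["00", "01", "10", "11"]
gray   = ["00", "01", "11", "10"]

def toggles(codes):
    """Total bit flips over one full cycle of the state sequence."""
    return sum(sum(a != b for a, b in zip(codes[i], codes[(i + 1) % len(codes)]))
               for i in range(len(codes)))

print("binary encoding toggles:", toggles(binary))  # 6 flips per cycle
print("gray encoding toggles:  ", toggles(gray))    # 4 flips per cycle
```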
A Formal Approach to Hardware Design discusses designing computations to be realised by application specific hardware. It introduces a formal design approach based on a high-level design language called Synchronized Transitions. The models created using Synchronized Transitions enable the designer to perform different kinds of analysis and verification based on descriptions in a single language. It is, for example, possible to use exactly the same design description both for mechanically supported verification and synthesis. Synchronized Transitions is supported by a collection of public domain CAD tools. These tools can be used with the book in presenting a course on the subject. A Formal Approach to Hardware Design illustrates the benefits to be gained from adopting such techniques, but it does so without assuming prior knowledge of formal design methods. The book is thus not only an excellent reference, it is also suitable for use by students and practitioners.
Adopting new fabrication technologies not only provides higher integration and enhanced performance, but also increases the variety of manufacturing defects. With design sizes in millions of gates and operating frequencies in the GHz range, timing-related defects have become a high proportion of the total chip defects. For nanometer-technology designs, stuck-at fault testing alone cannot ensure a high quality level of chips. At-speed tests using the transition fault model have become a requirement in technologies below 180nm. Traditional at-speed test methods cannot guarantee high-quality test results, as they face many new challenges. Supply-noise effects (including IR-drop, ground bounce, and Ldi/dt) on chip performance, high test-pattern volume, low fault/defect coverage, small-delay-defect test pattern generation, the high cost of test implementation and application, and the use of low-cost testers are among these challenges. This book discusses these challenges in detail and proposes new techniques and methodologies to improve the overall quality of the transition fault test.
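To fix ideas, a transition fault needs a two-vector test: one vector to initialize the node, and a second to launch the transition and propagate it to an observation point. The fragment below is a hypothetical one-gate illustration of this pattern-pair requirement, not a methodology from the book.

```python
# Transition-fault test sketch on one gate (hypothetical example): a
# slow-to-rise fault at g = AND(a, b) needs a vector pair -- V1 sets g to 0,
# V2 launches a rising transition that must arrive within the clock period.
def g(a, b):            # combinational node under test
    return a & b

v1, v2 = (0, 1), (1, 1)             # pattern pair: g goes 0 -> 1
assert g(*v1) == 0 and g(*v2) == 1  # launch condition for slow-to-rise

# If the defective gate is too slow, capture at the functional clock period
# still sees the old value 0, so comparison against the expected 1 fails.
captured_faulty, expected = g(*v1), g(*v2)
print("fault detected:", captured_faulty != expected)
```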
This book presents a detailed summary of research on the automatic layout of device-level analog circuits that was undertaken in the late 1980s and early 1990s at Carnegie Mellon University. We focus on the work behind the creation of the tools called KOAN and ANAGRAM II, which form part of the core of the CMU ACACIA analog CAD system. KOAN is a device placer for custom analog cells; ANAGRAM II is a detailed area router for these analog cells. We strive to present the motivations behind the architecture of these tools, including detailed discussion of the subtle technology and circuit concerns that must be addressed in any successful analog or mixed-signal layout tool. Our approach in organizing the chapters of the book has been to present our algorithms as a series of responses to these very real and very difficult analog layout problems. Finally, we present numerous examples of results generated by our algorithms. This research was supported in part by the Semiconductor Research Corporation, by the National Science Foundation, by Harris Semiconductor, and by the International Business Machines Corporation Resident Study Program. Finally, just for the record: John Cohn was the designer of the KOAN placer; David Garrod was the designer of the ANAGRAM II router (and its predecessor, ANAGRAM I). This book was architected by all four authors, edited by John Cohn and Rob Rutenbar, and produced in finished form by John Cohn.
As MOS devices are scaled to meet increasingly demanding circuit specifications, process variations have a greater effect on the reliability of circuit performance. For this reason, statistical techniques are required to design integrated circuits with maximum yield. Statistical Modeling for Computer-Aided Design of MOS VLSI Circuits describes a statistical circuit simulation and optimization environment for VLSI circuit designers. The first step toward accomplishing statistical circuit design and optimization is the development of an accurate CAD tool capable of performing statistical simulation. This tool must be based on a statistical model that captures how device and circuit characteristics under the circuit designer's control, such as device size, bias, and circuit layout, affect the variability of circuit performance. The distinctive feature of the CAD tool described in this book is its ability to accurately model and simulate the effect of both intra- and inter-die process variability on analog/digital circuits, accounting for the effects of the aforementioned device and circuit characteristics. Statistical Modeling for Computer-Aided Design of MOS VLSI Circuits serves as an excellent reference for those working in the field, and may be used as the text for an advanced course on the subject.
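The statistical simulation flow described here can be sketched in miniature as a Monte Carlo loop: draw a shared inter-die parameter shift plus independent intra-die mismatch, evaluate a performance model, and count passes against a spec. The toy delay model, parameter spreads, and spec below are hypothetical placeholders, not the book's model.

```python
# Minimal Monte Carlo yield-estimation sketch (all numbers assumed):
# sample process parameters, evaluate a toy delay model, estimate yield.
import random

random.seed(1)
SPEC_DELAY, TRIALS = 1.10, 10000   # hypothetical spec (ns) and sample count
passes = 0

for _ in range(TRIALS):
    # Inter-die shift shared by all devices, plus intra-die mismatch.
    inter = random.gauss(0.0, 0.05)
    vth   = 0.40 + inter + random.gauss(0.0, 0.02)                # Vth (V)
    leff  = 0.18 * (1 + 0.5 * inter + random.gauss(0.0, 0.01))    # Leff (um)

    delay = 1.0 * (vth / 0.40) * (leff / 0.18)   # toy gate-delay model (ns)
    if delay <= SPEC_DELAY:
        passes += 1

print(f"estimated parametric yield: {passes / TRIALS:.1%}")
```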
It is a great honor to provide a few words of introduction for Dr. Georges Gielen's and Prof. Willy Sansen's book "Symbolic analysis for automated design of analog integrated circuits." The symbolic analysis method presented in this book represents a significant step forward in the area of analog circuit design. As demonstrated in this book, symbolic analysis opens up new possibilities for the development of computer-aided design (CAD) tools that can analyze an analog circuit topology and automatically size the components for a given set of specifications. Symbolic analysis even has the potential to improve the training of young analog circuit designers and to guide more experienced designers through second-order phenomena such as distortion. This book can also serve as an excellent reference for researchers in the analog circuit design area and creators of CAD tools, as it provides a comprehensive overview and comparison of various approaches for analog circuit design automation and an extensive bibliography. The world is essentially analog in nature, hence most electronic systems involve both analog and digital circuitry. As the number of transistors that can be integrated on a single integrated circuit (IC) substrate steadily increases over time, an ever increasing number of systems will be implemented with one, or a few, very complex ICs because of their lower production costs.
This book is an extension of one author's doctoral thesis on the false path problem. The work was begun with the idea of systematizing the various solutions to the false path problem that had been proposed in the literature, with a view to determining the computational expense of each versus the gain in accuracy. However, it became clear that some of the proposed approaches in the literature were wrong, in that they underestimated the critical delay of some circuits under reasonable conditions. Further, some other approaches were vague and so of questionable accuracy. The focus of the research therefore shifted to establishing a theory (the viability theory) and algorithms that could be guaranteed correct, and then using this theory to justify (or not) existing approaches. Our quest was successful enough to justify presenting the full details in a book. After it was discovered that some existing approaches were wrong, it became apparent that the root of the difficulties lay in the attempts to balance computational efficiency and accuracy by separating the temporal and logical (or functional) behaviour of combinational circuits. This separation is the fruit of several unstated assumptions: first, that one can ignore the logical relationships of wires in a network when considering timing behaviour; and second, that one can ignore timing considerations when attempting to discover the values of wires in a circuit.
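The separation the authors criticize can be seen in a two-gate toy case: the topologically longest path may require contradictory logic values on its side inputs and so can never propagate a transition. The check below is a hypothetical illustration of static sensitization on an assumed circuit, not the book's viability theory.

```python
# Static sensitization check for the toy path a -> g1 -> g2, where
# g1 = AND(s, a) and g2 = AND(NOT s, g1).  The path propagates only if
# every side input takes its non-controlling value, i.e. s == 1 (for g1)
# and NOT s == 1 (for g2) simultaneously -- which no value of s achieves.
side_conditions = [
    lambda s: s == 1,        # g1's side input s must be non-controlling (1)
    lambda s: (1 - s) == 1,  # g2's side input NOT s must be non-controlling
]

sensitizable = any(all(cond(s) for cond in side_conditions) for s in (0, 1))
print("longest path sensitizable:", sensitizable)   # False -> a false path
```

A purely topological analysis would report the delay of this path anyway; a purely logical analysis would miss timing altogether, which is exactly the tension the book resolves.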
Powerful new technology has been made available to researchers by an increasingly competitive workstation market. Papers from Canada, Japan, Italy, Germany, and the U.S., to name a few of the countries represented in this volume, discuss how workstations are used in experiments and what impact this new technology will have on experiments. As usual for IFIP workshops, the emphasis in this volume is on the formulation of strategies for future research, the determination of new market areas, and the identification of new areas for workstation research. This is the first volume of a book series reporting the work of IFIP WG 5.10. The mission of this IFIP working group is to promote, develop and encourage advancement of the field of computer graphics as a basic tool, as an enabling technology and as an important part of various application areas.
The development of the 'factory of the future' by major international corporations such as General Motors, IBM, Westinghouse, etc now involves many practising engineers. This book is an attempt to identify and describe some of the building blocks required for computer aided engineering for manufacture. It begins with numerical control and the infrastructure required for the automation of individual 'islands' within existing factories. Computer aided design and computer aided manufacture are then discussed in detail together with their integration to improve manufacturing efficiency and flexibility. Robotics and flexible manufacturing systems are examined, as well as the management of these systems required for production optimization. Finally, there is an overview of the relatively new field of artificial intelligence, which is being increasingly used in most aspects of computer aided engineering for manufacture. There are many topics which could have been included or expanded upon with advantage, but the authors have attempted to strike a balance so that the reader can obtain the maximum usefulness from a reasonably concise volume.
Multi-objective optimization deals with the simultaneous optimization of two or more objectives which are normally in conflict with each other. Since multi-objective optimization problems are relatively common in real-world applications, this area has become a very popular research topic since the 1970s. However, the use of bio-inspired metaheuristics for solving multi-objective optimization problems started in the mid-1980s and did not become popular until the mid-1990s. Nevertheless, the effectiveness of multi-objective evolutionary algorithms has made them very popular in a variety of domains. Swarm intelligence refers to certain population-based metaheuristics that are inspired by the behavior of groups of entities (i.e., living beings) interacting locally with each other and with their environment. Such interactions produce an emergent behavior that is modelled in a computer in order to solve problems. The two most popular metaheuristics within swarm intelligence are particle swarm optimization (which simulates a flock of birds seeking food) and ant colony optimization (which simulates the behavior of colonies of real ants that leave their nest looking for food). These two metaheuristics have become very popular in the last few years, and have been widely used in a variety of optimization tasks, including some related to data mining and knowledge discovery in databases. However, such work has been mainly focused on single-objective optimization models. The use of multi-objective extensions of swarm intelligence techniques in data mining has been relatively scarce, in spite of their great potential, which constituted the main motivation to produce this book.
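For readers new to the area, a single-objective particle swarm optimizer fits in a few lines; the multi-objective extensions that are the book's subject build on this same velocity-update rule. The sketch below is a generic textbook PSO with assumed parameter values, minimizing the 2-D sphere function, and is not taken from the book.

```python
# Minimal single-objective particle swarm optimization (assumed parameters):
# particles track personal and global bests, minimizing f(x) = sum(x_i^2).
import random

random.seed(2)
W, C1, C2 = 0.7, 1.5, 1.5                 # inertia and acceleration weights
f = lambda x: sum(v * v for v in x)       # objective to minimize

particles = [[random.uniform(-5, 5) for _ in range(2)] for _ in range(20)]
velocities = [[0.0, 0.0] for _ in particles]
pbest = [p[:] for p in particles]         # personal best positions
gbest = min(pbest, key=f)[:]              # global best position

for _ in range(100):
    for i, p in enumerate(particles):
        for d in range(2):
            # Pull toward the personal and global bests, with inertia.
            velocities[i][d] = (W * velocities[i][d]
                                + C1 * random.random() * (pbest[i][d] - p[d])
                                + C2 * random.random() * (gbest[d] - p[d]))
            p[d] += velocities[i][d]
        if f(p) < f(pbest[i]):
            pbest[i] = p[:]
            if f(p) < f(gbest):
                gbest = p[:]

print(f"best point found: {gbest}, f = {f(gbest):.2e}")
```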