The purpose of this volume is to present current work of the Intelligent Computer Graphics community, a community that is growing year after year. This volume is a continuation of the previously published Springer volume "Artificial Intelligence Techniques for Computer Graphics". Nowadays, intelligent techniques are increasingly used in Computer Graphics, not only to optimise processing time, but also to find more accurate solutions to many Computer Graphics problems than traditional methods can offer. What are intelligent techniques for Computer Graphics? Mainly, they are techniques based on Artificial Intelligence. So, problem-resolution (especially constraint-satisfaction) techniques, as well as evolutionary techniques, are used in declarative scene modelling; heuristic search techniques, as well as strategy game techniques, are currently used in scene understanding and in virtual world exploration; multi-agent techniques and evolutionary algorithms are used in behavioural animation; and so on. However, even if in most cases the intelligent techniques used come from Artificial Intelligence, sometimes simple human intelligence can find interesting solutions in cases where traditional Computer Graphics techniques, even combined with Artificial Intelligence ones, cannot propose any satisfactory solution. A good example of such a case is scene understanding, when several parts of the scene are impossible to access.
Nature-inspired algorithms such as cuckoo search and the firefly algorithm have become popular and widely used in many applications in recent years. These algorithms are flexible, efficient and easy to implement. New progress has been made in the last few years, and it is timely to summarize the latest developments in cuckoo search and the firefly algorithm and their diverse applications. This book reviews both theoretical studies and applications, with detailed algorithm analysis, implementation and case studies, so that readers can benefit most from it. Application topics are contributed by many leading experts in the field. Topics include cuckoo search, the firefly algorithm, algorithm analysis, feature selection, image processing, the travelling salesman problem, neural networks, GPU optimization, scheduling, queuing, multi-objective manufacturing optimization, semantic web services, shape optimization, and others. This book can serve as an ideal reference for both graduates and researchers in computer science, evolutionary computing, machine learning, computational intelligence, and optimization, as well as engineers in business intelligence, knowledge management and information technology.
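Since the blurb stresses that these algorithms are "flexible, efficient and easy to implement", a minimal sketch may help fix ideas. The following is an illustrative implementation of the core firefly update rule on a toy sphere objective; the parameter names and values are common defaults from the literature, not code from the book.

```python
import numpy as np

def sphere(x):
    """Toy objective to minimise; smaller values mean 'brighter' fireflies."""
    return float(np.sum(x ** 2))

def firefly(objective, dim=2, n=20, iters=100, alpha=0.2, beta0=1.0, gamma=1.0, seed=0):
    rng = np.random.default_rng(seed)
    X = rng.uniform(-5, 5, size=(n, dim))       # firefly positions
    I = np.array([objective(x) for x in X])     # brightness = objective value
    for _ in range(iters):
        for i in range(n):
            for j in range(n):
                if I[j] < I[i]:                 # firefly j is brighter
                    r2 = float(np.sum((X[i] - X[j]) ** 2))
                    beta = beta0 * np.exp(-gamma * r2)  # attractiveness decays with distance
                    X[i] += beta * (X[j] - X[i]) + alpha * (rng.random(dim) - 0.5)
                    I[i] = objective(X[i])
        alpha *= 0.97                           # damp the random walk over time
    best = int(np.argmin(I))
    return X[best], I[best]

x_best, f_best = firefly(sphere)
```

Each firefly drifts toward every brighter one, with an attractiveness that decays with squared distance, plus a damped random walk.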
Learn how to program JavaScript while creating interactive audio applications with JavaScript for Sound Artists: Learn to Code with the Web Audio API! William Turner and Steve Leonard showcase the basics of the JavaScript language so that readers can learn how to build browser-based audio applications, such as music synthesizers and drum machines. The companion website offers further opportunity for growth. Web Audio API instruction includes oscillators, audio file loading and playback, basic audio manipulation, panning and time. This book encompasses all of the basic features of JavaScript together with aspects of the Web Audio API to heighten the capability of any browser. Key features: * Uses the reader's existing knowledge of audio technology to facilitate learning how to program using JavaScript * Teaches through a series of annotated examples and explanations * Includes downloadable code examples and links to additional reference material on the book's companion website * Makes learning programming more approachable to nonprofessional programmers * Teaches JavaScript for the creative audio community through example-based instruction, in a manner that does not exist anywhere else in the market
Video Object Extraction and Representation: Theory and Applications is an essential reference for electrical engineers working in video; computer scientists researching or building multimedia databases; video system designers; students of video processing; video technicians; and designers working in the graphic arts. In the coming years, the explosion of computer technology will enable a new form of digital media. Along with broadband Internet access and MPEG standards, this new media requires a computational infrastructure to allow users to grab and manipulate content. The book reviews relevant technologies and standards for content-based processing and their interrelations. Within this overview, the book focuses upon two problems at the heart of the algorithmic/computational infrastructure: video object extraction, or how to automatically package raw visual information by content; and video object representation, or how to automatically index and catalogue extracted content for browsing and retrieval. The book analyzes the designs of two novel, working systems for content-based extraction and representation in the support of MPEG-4 and MPEG-7 video standards, respectively. Features of the book include: Overview of MPEG standards; A working system for automatic video object segmentation; A working system for video object query by shape; Novel technology for a wide range of recognition problems; Overview of neural network and vision technologies Video Object Extraction and Representation: Theory and Applications will be of interest to research scientists and practitioners working in fields related to the topic. It may also be used as an advanced-level graduate text.
"Blind Signal Processing: Theory and Practice" not only introduces related fundamental mathematics, but also reflects the numerous advances in the field, such as probability density estimation-based processing algorithms, underdetermined models, complex value methods, uncertainty of order in the separation of convolutive mixtures in frequency domains, and feature extraction using Independent Component Analysis (ICA). At the end of the book, results from a study conducted at Shanghai Jiao Tong University in the areas of speech signal processing, underwater signals, image feature extraction, data compression, and the like are discussed. This book will be of particular interest to advanced undergraduate students, graduate students, university instructors and research scientists in related disciplines. Xizhi Shi is a Professor at Shanghai Jiao Tong University.
The overall aim of the book is to introduce students to the typical course followed by a data analysis project in the earth sciences. A project usually involves searching the relevant literature, reviewing and ranking published books and journal articles, extracting relevant information from the literature in the form of text, data, or graphs, searching and processing the relevant original data using MATLAB, and compiling and presenting the results as posters, abstracts, and oral presentations using graphics design software. The text of this book includes numerous examples on the use of internet resources, on the visualization of data with MATLAB, and on preparing scientific presentations. As with its sister book, MATLAB Recipes for Earth Sciences, 3rd Edition (2010), which demonstrates the use of statistical and numerical methods on earth science data, this book uses state-of-the-art software packages, including MATLAB and the Adobe Creative Suite, to process and present geoscientific information collected during the course of an earth science project. The book's supplementary electronic material (available online through the publisher's website) includes color versions of all figures, recipes with all the MATLAB commands featured in the book, the example data, exported MATLAB graphics, and screenshots of the most important steps involved in processing the graphics.
Bayesian Approach to Image Interpretation will interest anyone working in image interpretation. It is complete in itself and includes background material, which makes it useful for the novice as well as the expert. It reviews some of the existing probabilistic methods for image interpretation and presents some new results, and it includes an extensive bibliography covering references in varied areas. For a researcher in this field, the material on the synergistic integration of segmentation and interpretation modules and on the Bayesian approach to image interpretation will be beneficial. For a practicing engineer, the procedure for generating the knowledge base, the method for selecting the initial temperature for the simulated annealing algorithm, and several implementation issues will be valuable. New ideas introduced in the book include: a new approach to image interpretation using synergism between the segmentation and interpretation modules; a new segmentation algorithm based on multiresolution analysis; a novel use of Bayesian networks (causal networks) for image interpretation; and an emphasis on making the interpretation approach less dependent on the knowledge base, and hence more reliable, by modeling the knowledge base in a probabilistic framework. Useful in both the academic and industrial research worlds, Bayesian Approach to Image Interpretation may also be used as a textbook for a semester course in computer vision or pattern recognition.
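The blurb mentions a procedure for selecting the initial temperature of the simulated annealing algorithm. One widely used recipe (shown here as a sketch; the book's own procedure may differ) samples random moves and sets T0 so that a target fraction chi0 of uphill moves would initially be accepted:

```python
import numpy as np

def initial_temperature(energy, random_state, random_move, n_samples=200, chi0=0.8, seed=0):
    """Sample uphill moves and solve exp(-mean_delta / T0) = chi0 for T0."""
    rng = np.random.default_rng(seed)
    uphill = []
    for _ in range(n_samples):
        s = random_state(rng)
        delta = energy(random_move(s, rng)) - energy(s)
        if delta > 0:
            uphill.append(delta)
    return -np.mean(uphill) / np.log(chi0)

# Toy usage with a hypothetical quadratic energy over 4-vectors.
energy = lambda s: float(np.sum(s ** 2))
rand_state = lambda rng: rng.normal(size=4)
rand_move = lambda s, rng: s + 0.1 * rng.normal(size=4)
T0 = initial_temperature(energy, rand_state, rand_move)
```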
Super-Resolution Imaging serves as an essential reference for both academicians and practicing engineers. It can be used both as a text for advanced courses in imaging and as a desk reference for those working in multimedia, electrical engineering, computer science, and mathematics. The first book to cover the new research area of super-resolution imaging, this text includes work on the following groundbreaking topics: Image zooming based on wavelets and generalized interpolation; Super-resolution from sub-pixel shifts; Use of blur as a cue; Use of warping in super-resolution; Resolution enhancement using multiple apertures; Super-resolution from motion data; Super-resolution from compressed video; Limits in super-resolution imaging. Written by the leading experts in the field, Super-Resolution Imaging presents a comprehensive analysis of current technology, along with new research findings and directions for future work.
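To make one of the listed topics concrete, here is a bare-bones "shift-and-add" sketch of super-resolution from known sub-pixel shifts: low-resolution samples are scattered onto a finer grid and averaged. Registration, deblurring and regularisation, all essential in practice and covered by the book's chapters, are omitted, and the function name is illustrative only.

```python
import numpy as np

def shift_and_add(low_res_images, shifts, factor):
    """low_res_images: list of (h, w) arrays; shifts: per-image (dy, dx)
    offsets in high-resolution pixels; factor: integer upsampling factor."""
    h, w = low_res_images[0].shape
    acc = np.zeros((h * factor, w * factor))
    cnt = np.zeros_like(acc)
    for img, (dy, dx) in zip(low_res_images, shifts):
        ys = (np.arange(h) * factor + int(dy)) % (h * factor)
        xs = (np.arange(w) * factor + int(dx)) % (w * factor)
        acc[np.ix_(ys, xs)] += img
        cnt[np.ix_(ys, xs)] += 1
    cnt[cnt == 0] = 1          # unobserved high-res pixels stay zero
    return acc / cnt
```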
Bringing together key researchers in disciplines ranging from visualization and image processing to applications in structural mechanics, fluid dynamics, elastography, and numerical mathematics, the workshop that generated this edited volume was the third in the successful Dagstuhl series. Its aim, reflected in the quality and relevance of the papers presented, was to foster collaboration and fresh lines of inquiry in the analysis and visualization of tensor fields, which offer a concise model for numerous physical phenomena. Despite their utility, there remains a dearth of methods for studying all but the simplest ones, a shortage the workshops aim to address. Documenting the latest progress and open research questions in tensor field analysis, the chapters reflect the excitement and inspiration generated by this latest Dagstuhl workshop, held in July 2009. The topics they address range from applications of the analysis of tensor fields to purer research into their mathematical and analytical properties. They show how cooperation and the sharing of ideas and data between those engaged in pure and applied research can open new vistas in the study of tensor fields.
'Subdivision' is a way of representing smooth shapes in a computer. A curve or surface (both of which contain an infinite number of points) is described in terms of two objects. One object is a sequence of vertices, which we visualise as a polygon, for curves, or a network of vertices, which we visualise by drawing the edges or faces of the network, for surfaces. The other object is a set of rules for making denser sequences or networks. When applied repeatedly, the denser and denser sequences are claimed to converge to a limit, which is the curve or surface that we want to represent. This book focusses on curves, because the theory for curves is complete enough that a book claiming our understanding is complete is exactly what is needed to stimulate research proving that claim wrong, and because there are already a number of good books on subdivision surfaces. The way in which the limit curve relates to the polygon, and many interesting properties of the limit curve, depend on the set of rules, and this book is about how one can deduce those properties from the set of rules, and how one can then use that understanding to construct rules which give the properties that one wants.
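As one classic, concrete instance of such a rule set, Chaikin's corner-cutting scheme replaces every edge of the polygon with two points at its quarter positions; repeated application converges to a quadratic B-spline curve. This sketch illustrates the general idea described above, not any specific scheme the book derives.

```python
import numpy as np

def chaikin(points, iterations=4):
    """One subdivision rule: cut each corner at the 1/4 and 3/4 points."""
    pts = np.asarray(points, dtype=float)
    for _ in range(iterations):
        p, q = pts[:-1], pts[1:]           # consecutive vertex pairs
        new = np.empty((2 * len(p), 2))
        new[0::2] = 0.75 * p + 0.25 * q    # point 1/4 along each edge
        new[1::2] = 0.25 * p + 0.75 * q    # point 3/4 along each edge
        pts = new
    return pts

dense = chaikin([[0, 0], [1, 2], [3, 2], [4, 0]])   # denser and denser polygon
```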
This book presents the main concepts in handling digital images of mixed content, traditionally referred to as mixed raster content (MRC), in two main parts. The first includes introductory chapters covering the scientific and technical background aspects, whereas the second presents a set of research and development approaches to tackle key issues in MRC segmentation, compression and transmission. The book starts with a review of color theory and the mechanism of color vision in humans. In turn, the second chapter reviews data coding and compression methods so as to set the background and demonstrate the complexity involved in dealing with MRC. Chapter three addresses the segmentation of images through an extensive literature review, which highlights the various approaches used to tackle MRC segmentation. The second part of the book focuses on the segmentation of color images for optimized compression, including multi-layered decomposition and representation of MRC and the processes that can be employed to optimize the coding rates of those different layers. Rounding out the coverage, the final chapter examines the segmentation of color images for optimized transmission.
Adobe Photoshop CC for Photographers by Photoshop hall-of-famer and acclaimed digital imaging professional Martin Evening has been revamped to include detailed instruction for all of the updates to Photoshop CC on Adobe's Creative Cloud, including significant new features, such as Select and Mask editing, Facial Liquify adjustments and Guided Upright corrections in Camera Raw. This guide covers all the tools and techniques photographers and professional image editors need to know when using Photoshop, from workflow guidance to core skills to advanced techniques for professional results. Using clear, succinct instruction and real world examples, this guide is the essential reference for Photoshop users. The accompanying website has been updated with new sample images, tutorial videos, bonus chapters, and a chapter on the changes in Photoshop 2017.
A key element of any modern video codec is the efficient exploitation of temporal redundancy via motion-compensated prediction. In this book, a novel paradigm of representing and employing motion information in a video compression system is described that has several advantages over existing approaches. Traditionally, motion is estimated, modelled, and coded as a vector field at the target frame it predicts. While this "prediction-centric" approach is convenient, the fact that the motion is "attached" to a specific target frame implies that it cannot easily be re-purposed to predict or synthesize other frames, which severely hampers temporal scalability. In light of this, the present book explores the possibility of anchoring motion at reference frames instead. Key to the success of the proposed "reference-based" anchoring schemes is high quality motion inference, which is enabled by the use of a more "physical" motion representation than the traditionally employed "block" motion fields. The resulting compression system can support computationally efficient, high-quality temporal motion inference, which requires half as many coded motion fields as conventional codecs. Furthermore, "features" beyond compressibility - including high scalability, accessibility, and "intrinsic" framerate upsampling - can be seamlessly supported. These features are becoming ever more relevant as the way video is consumed continues shifting from the traditional broadcast scenario to interactive browsing of video content over heterogeneous networks. This book is of interest to researchers and professionals working in multimedia signal processing, in particular those who are interested in next-generation video compression. Two comprehensive background chapters on scalable video compression and temporal frame interpolation make the book accessible for students and newcomers to the field.
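For orientation, the conventional "prediction-centric" approach the book argues beyond can be sketched as exhaustive block matching: for each block of the target frame, search a window in the reference frame for the displacement minimising the sum of absolute differences (SAD). This is the traditional baseline, not the book's reference-anchored scheme.

```python
import numpy as np

def block_match(ref, tgt, block=8, search=7):
    """Exhaustive-search block matching; returns one (dy, dx) per block."""
    h, w = tgt.shape
    mv = np.zeros((h // block, w // block, 2), dtype=int)
    for by in range(0, h - block + 1, block):
        for bx in range(0, w - block + 1, block):
            patch = tgt[by:by + block, bx:bx + block].astype(int)
            best, best_sad = (0, 0), np.inf
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    y, x = by + dy, bx + dx
                    if 0 <= y <= h - block and 0 <= x <= w - block:
                        cand = ref[y:y + block, x:x + block].astype(int)
                        sad = np.abs(cand - patch).sum()
                        if sad < best_sad:
                            best_sad, best = sad, (dy, dx)
            mv[by // block, bx // block] = best
    return mv
```

The motion field produced this way is "attached" to the target frame, which is exactly the property that hampers re-purposing it for other frames.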
This book presents advances in biomedical imaging analysis and processing techniques using time-dependent medical image datasets for computer-aided diagnosis. The analysis of time-series images is one of the most widely occurring problems in science, engineering, and business. In recent years this problem has gained importance due to the increasing availability of more sensitive sensors in science and engineering, and due to the widespread use of computers in corporations, which has increased the amount of time-series data collected by many orders of magnitude. An important feature of this book is its exploration of different approaches to handling and identifying time-dependent biomedical images. Biomedical imaging analysis and processing techniques deal with the interaction between all forms of radiation and biological molecules, cells or tissues, to visualize small particles and opaque objects, and to achieve the recognition of biomedical patterns. These are topics of great importance to biomedical science, biology, and medicine. Biomedical imaging analysis techniques can be applied in many different areas to solve existing problems. The various requirements arising from the process of resolving practical problems motivate and expedite the development of biomedical imaging analysis, which is a major reason for the fast growth of the discipline.
This book introduces new techniques for cellular image feature extraction, pattern recognition and classification. The authors use antinuclear antibodies (ANAs) in patient serum as the subjects and the Indirect Immunofluorescence (IIF) technique as the imaging protocol to illustrate the applications of the described methods. Throughout the book, the authors provide evaluations of the proposed methods on two publicly available human epithelial (HEp-2) cell datasets: the ICPR2012 dataset from the ICPR'12 HEp-2 cell classification contest and the ICIP2013 training dataset from the ICIP'13 Competition on cells classification by fluorescent image analysis. First, since the reading of imaging results is significantly influenced by the reader's qualifications and reading systems, causing high intra- and inter-laboratory variance, the authors present a low-order LP21 fiber mode for optical single-cell manipulation and imaging of staining patterns of HEp-2 cells. A focused four-lobed mode distribution is stable and effective in optical tweezer applications, including selective cell pick-up, pairing, grouping or separation, as well as rotation of cell dimers and clusters. Both the translational dragging force and the rotational torque observed in the experiments are in good accordance with the theoretical model. With a simple all-fiber configuration and low peak irradiation of targeted cells, instrumentation of this optical chuck technology will provide a powerful tool in ANA-IIF laboratories. Chapters focus on the optical, mechanical and computing systems for the clinical trials, and computer programs for the GUI and control of the optical tweezers are also discussed. Next, each local feature is transformed into a more discriminative local distance vector by searching for local neighbors of the local feature in the class-specific manifolds. Encoding and pooling the local distance vectors leads to a salient image representation, and combined with traditional coding methods this achieves higher classification accuracy. Then, a rotation-invariant textural feature, Pairwise Local Ternary Patterns with Spatial Rotation Invariance (PLTP-SRI), is examined. It is invariant to image rotations, while being robust to noise and weak illumination. By adding a spatial pyramid structure, this method captures spatial layout information. While the proposed PLTP-SRI feature extracts local features, the BoW framework builds a global image representation; it is reasonable to combine them, as the combined feature takes advantage of the two kinds of features in different respects, achieving impressive classification performance. Finally, the authors design a Co-occurrence Differential Texton (CoDT) feature to represent the local image patches of HEp-2 cells. The CoDT feature reduces information loss by forgoing quantization while utilizing the spatial relations among the differential micro-texton features, which increases its discriminative power. A generative model adaptively characterizes the CoDT feature space of the training data, and a discriminative representation of the HEp-2 cell images is built on the adaptively partitioned feature space, so the resulting representation is well suited to the classification task. By cooperating with a linear Support Vector Machine (SVM) classifier, this framework can exploit the advantages of both generative and discriminative approaches to cellular image classification.
The book is written for those researchers who would like to develop their own programs, and working MATLAB codes are included for all the important algorithms presented. It can also be used as a reference book for graduate students and senior undergraduates in the areas of biomedical imaging, image feature extraction, pattern recognition and classification. Academics, researchers, and professionals will find this to be an exceptional resource.
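For readers wanting a feel for the textural features involved, here is a simplified sketch of a plain local ternary pattern (LTP) code, the building block behind PLTP-SRI; the pairwise and spatial-rotation-invariant extensions described above are omitted, and the threshold value is an arbitrary choice.

```python
import numpy as np

def ltp_codes(img, t=5):
    """3x3 local ternary patterns: each neighbour codes +1/0/-1 against the
    centre; returned as the conventional upper and lower binary pattern maps."""
    c = img[1:-1, 1:-1].astype(int)
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    upper = np.zeros_like(c)
    lower = np.zeros_like(c)
    for k, (dy, dx) in enumerate(offsets):
        n = img[1 + dy:img.shape[0] - 1 + dy,
                1 + dx:img.shape[1] - 1 + dx].astype(int)
        upper += (n >= c + t).astype(int) << k   # ternary value +1
        lower += (n <= c - t).astype(int) << k   # ternary value -1
    return upper, lower
```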
Packed with more than 350 techniques, this book delivers what you need to know, on the spot. Its concise presentation of professional techniques is suited to experienced artists, whether you are: * Migrating from another visual effects application * Upgrading to Houdini 9 * Seeking a handy reference to raise your proficiency with Houdini. Houdini On the Spot presents immediate solutions in an accessible format. It clearly illustrates the essential methods that pros use to get the job done efficiently and creatively. Screenshots and step-by-step instructions show you how to: * Navigate and manipulate the version 9 interface * Create procedural models that can be modified quickly and efficiently with Surface Operators (SOPs) * Use Particle Operators (POPs) to build complex simulations with speed and precision * Minimize the number of operators in your simulations with Dynamics Operators (DOPs) * Extend Houdini with customized tools to include data or scripts with Houdini Digital Assets (HDAs) * Master the version 9 rendering options, including Physically Based Rendering (PBR), volume rendering and motion blur * Quickly modify the timing, geometry, space and rotational values of your animations with Channel Operators (CHOPs) * Create and manipulate elements with Composite Operators (COPs), Houdini's full-blown compositor toolset * Make your own SOPs, COPs, POPs, CHOPs, and shaders with the Vector Expressions (VEX) shading language * Configure the Houdini interface with customized environments and hotkeys * Mine the treasures of the dozens of standalone applications that are bundled with Houdini
Creative professionals seeking the fastest, easiest, most comprehensive way to learn Adobe Animate choose Adobe Animate CC Classroom in a Book from Adobe Press. The project-based lessons in this book show users step-by-step the key techniques for working in Animate. Adobe Animate CC provides more expressive tools, powerful controls for animation, and robust support for playback across a wide variety of platforms. The online companion files include all the necessary assets for readers to complete the projects featured in each chapter. All buyers of the book get full access to the Web Edition: a Web-based version of the complete eBook enhanced with video and interactive multiple-choice quizzes.
In recent years there has been increasing interest in second-generation image and video coding techniques. These techniques introduce new concepts from image analysis that greatly improve the performance of coding schemes at very high compression. This interest has been further emphasized by the future MPEG-4 standard. Second-generation image and video coding techniques are the ensemble of approaches proposing new and more efficient image representations than the conventional canonical form. As a consequence, the human visual system becomes a fundamental part of the encoding/decoding chain. More insight into the distinction between the first and second generations can be gained by noticing that image and video coding is basically carried out in two steps: first, image data are converted into a sequence of messages and, second, code words are assigned to the messages. Methods of the first generation put the emphasis on the second step, whereas methods of the second generation put it on the first step and use available results for the second step. As a result of including the human visual system, the second generation can also be seen as an approach that views the image as composed of different entities called objects. This implies that the image or sequence of images must first be analyzed and/or segmented in order to find the entities. It is in this context that this book selects three main approaches as second-generation video coding techniques: segmentation-based schemes, model-based schemes, and fractal-based schemes. Video Coding: The Second Generation Approach is an important introduction to the new coding techniques for video. As such, all researchers, students and practitioners working in image processing will find this book of interest.
Accurate Visual Metrology from Single and Multiple Uncalibrated Images presents novel techniques for constructing three-dimensional models from bi-dimensional images using virtual reality tools. Antonio Criminisi develops the mathematical theory of computing world measurements from single images, and builds up a hierarchy of novel, flexible techniques to make measurements and reconstruct three-dimensional scenes from uncalibrated images, paying particular attention to the accuracy of the reconstruction.
Nordic Animation examines the state of the animation industry within the Nordic countries. It looks at the success of popular brands such as the Moomins and Angry Birds, studios such as Anima Vitae and Qvisten, and individuals from the Nordics that have made their mark on the global animation industry. The book begins with some historical findings before moving to look at the stories of some of the most well-known Nordic animation brands. A section on Nordic animation studios looks at the international success and impact on the global animation industry that has been made by these companies. The book is forward thinking in scope and places these stories within the context of what the future holds for the Nordic animation industry. This book will be of great interest to those in the fields of animation and film studies, as well as those with a general interest in Nordic animation.
One of the challenges facing professionals working in computer animation is keeping abreast of the latest developments and future trends, some of which are determined by industry, where the state of the art is continuously being redefined by the latest computer-generated film special effects, while others arise from research projects whose results are quickly taken on board by programmers and animators working in industry. This handbook will be an invaluable toolkit for programmers, technical directors and professionals working in computer animation. A wide range of topics is covered, including: * Computer games * Evolutionary algorithms * Shooting and live action * Digital effects * Cubic curves and surfaces * Subdivision surfaces * Rendering and shading. Written by a team of experienced practitioners, each chapter provides a clear and precise overview of each area, reflecting the dynamic and fast-moving field of computer animation. This is a complete and up-to-date reference book on the state-of-the-art techniques used in computer animation.
Nowadays, highly detailed animations of live-actor performances are increasingly easy to acquire, and 3D video has attracted considerable attention in visual media production. In this book, we address the problem of extracting, or acquiring, and then reusing non-rigid parametrization for video-based animations. At first sight, a crucial challenge is to reproduce plausible boneless deformations while preserving the global and local captured properties of dynamic surfaces with a limited number of controllable, flexible and reusable parameters. To solve this challenge, we rely directly on a skin-detached dimension reduction using the well-known cage-based paradigm. First, we achieve Scalable Inverse Cage-based Modeling by transposing the inverse kinematics paradigm onto surfaces, introducing a cage inversion process with user-specified screen-space constraints. Second, we convert non-rigid animated surfaces into a sequence of optimal cage parameters via Cage-based Animation Conversion. Building upon this reskinning procedure, we also develop a well-formed Animation Cartoonization algorithm for multi-view data in terms of cage-based surface exaggeration and video-based appearance stylization. Third, motivated by the relaxation of prior knowledge of the data, we propose a promising unsupervised approach to Iterative Cage-based Geometric Registration. This novel registration scheme deals with reconstructed target point clouds obtained from multi-view video recording, in conjunction with a static and wrinkled template mesh. Above all, we demonstrate the strength of cage-based subspaces for reparametrizing highly non-rigid dynamic surfaces, without the need for secondary deformations. To the best of our knowledge, this book opens the field of Cage-based Performance Capture.
The problem of structure and motion recovery from image sequences is an important theme in computer vision. Considerable progress has been made in this field during the past two decades, resulting in successful applications in robot navigation, augmented reality, industrial inspection, medical image analysis, and digital entertainment, among other areas. However, many of these methods work only for rigid objects and static scenes. The study of non-rigid structure from motion is not only of academic significance, but also has important practical applications in real-world, non-rigid or dynamic scenarios, such as human facial expressions and moving vehicles. This practical guide/reference provides a comprehensive overview of Euclidean structure and motion recovery, with a specific focus on factorization-based algorithms. The book discusses the latest research in this field, including the extension of the factorization algorithm to recover the structure of non-rigid objects, and presents some new algorithms developed by the authors. Readers require no significant knowledge of computer vision, although some background in projective geometry and matrix computation would be beneficial. Topics and features: presents the first systematic study of structure and motion recovery of both rigid and non-rigid objects from image sequences; discusses in depth the theory, techniques, and applications of rigid and non-rigid factorization methods in three-dimensional computer vision; examines numerous factorization algorithms, covering affine, perspective and quasi-perspective projection models; provides appendices describing the mathematical principles behind projective geometry, matrix decomposition, least squares, and nonlinear estimation techniques; includes chapter-ending review questions and a glossary of terms used in the book. This unique text offers practical guidance on real applications and implementations of 3D modeling systems for practitioners in computer vision and pattern recognition, as well as serving as an invaluable source of new algorithms and methodologies for structure and motion recovery for graduate students and researchers.
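At the core of the factorization methods surveyed is the classic rank-3 decomposition of the measurement matrix (Tomasi-Kanade style, under an affine camera). The sketch below shows only this first step; the metric upgrade and the non-rigid extensions treated in the book are omitted.

```python
import numpy as np

def factorize(W):
    """W: (2F, P) matrix stacking the x- and y-coordinates of P points
    tracked over F frames; returns motion M (2F, 3) and structure S (3, P)."""
    W0 = W - W.mean(axis=1, keepdims=True)      # move the centroid to the origin
    U, s, Vt = np.linalg.svd(W0, full_matrices=False)
    M = U[:, :3] * np.sqrt(s[:3])               # affine camera matrices
    S = np.sqrt(s[:3])[:, None] * Vt[:3]        # 3D shape, up to an affine transform
    return M, S
```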
3D rotation analysis is widely encountered in everyday problems thanks to the development of computers. Sensing 3D using cameras and sensors, analyzing and modeling 3D for computer vision and computer graphics, and controlling and simulating robot motion all require 3D rotation computation. This book focuses on the computational analysis of 3D rotation, rather than classical motion analysis. It regards noise as random variables and models their probability distributions. It also pursues statistically optimal computation for maximizing the expected accuracy, as is typical of nonlinear optimization. All concepts are illustrated using computer vision applications as examples. Mathematically, the set of all 3D rotations forms a group denoted by SO(3). Exploiting this group property, we obtain an optimal solution analytically or numerically, depending on the problem. Our numerical scheme, which we call the "Lie algebra method," is based on the Lie group structure of SO(3). This book also proposes computing projects for readers who want to code the theories presented in this book, describing the necessary 3D simulation settings as well as providing real GPS 3D measurement data. To help readers not very familiar with abstract mathematics, a brief overview of quaternion algebra, matrix analysis, Lie groups, and Lie algebras is provided as an appendix at the end of the volume.
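The "Lie algebra method" rests on the exponential map from so(3) to SO(3), computed in closed form by Rodrigues' formula. The following sketch shows that basic operation only; the statistical optimisation built on top of it in the book is not reproduced.

```python
import numpy as np

def hat(w):
    """Map a 3-vector to its skew-symmetric matrix (an element of so(3))."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def exp_so3(w):
    """Rotation by angle |w| about the axis w/|w|, via Rodrigues' formula."""
    theta = np.linalg.norm(w)
    if theta < 1e-12:
        return np.eye(3)
    K = hat(w / theta)
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

R = exp_so3(np.array([0.0, 0.0, np.pi / 2]))   # 90-degree rotation about z
```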