Systems development is the process of creating and maintaining information systems, including hardware, software, data, procedures and people. It combines technical expertise with business knowledge and management skill. This practical book provides a comprehensive introduction to the topic and can also be used as a handy reference guide. It discusses key elements of systems development and is the only textbook that supports the BCS Certificate in Systems Development.
Despite its importance, the role of HdS is most often underestimated and the topic is not well represented in literature and education. To address this, Hardware-dependent Software brings together experts from different HdS areas. By providing a comprehensive overview of general HdS principles, tools, and applications, this book offers insight into current technology and upcoming developments in the domain of HdS. The reader will find an interesting textbook with self-contained introductions to the principles of Real-Time Operating Systems (RTOS), the emerging BIOS successor UEFI, and the Hardware Abstraction Layer (HAL). Other chapters cover industrial applications, verification, and tool environments. Tool introductions cover the application of tools in the ASIP software tool chain (i.e. Tensilica) and the generation of drivers and OS components from C-based languages. Applications focus on telecommunication and automotive systems.
As the complexity of today's networked computer systems grows, they become increasingly difficult to understand, predict, and control. Addressing these challenges requires new approaches to building these systems. Adaptive, Dynamic, and Resilient Systems supplies readers with various perspectives of the critical infrastructure that systems of networked computers rely on. It introduces the key issues, describes their interrelationships, and presents new research in support of these areas.
Until now, there has been no complete knowledge base for fully comprehending low power (LP) design and power aware (PA) verification techniques and methodologies and deploying them together in a real design verification and implementation project. This book is a first approach to establishing a comprehensive PA knowledge base. LP design, PA verification, and Unified Power Format (UPF) or IEEE 1801 power format standards are no longer special features. These technologies and methodologies are now part of industry-standard design, verification, and implementation flows (DVIF). Almost every chip design today incorporates some kind of low power technique, whether through power management on chip, by dividing the design into different voltage areas and controlling the voltages, through PA dynamic and PA static verification, or through a combination of these. The entire LP design and PA verification process involves thousands of techniques, tools, and methodologies, employed from the register transfer level (RTL) of design abstraction down to the synthesis or place-and-route levels of physical design. These techniques, tools, and methodologies are evolving every day through the progression of design-verification complexity and more intelligent ways of handling that complexity by engineers, researchers, and corporate engineering policy makers.
This book explains in detail how to define requirements modelling languages - formal languages used to solve requirement-related problems in requirements engineering. It moves from simple languages to more complicated ones and uses these languages to illustrate a discussion of major topics in requirements modelling language design. The book positions requirements problem solving within the framework of broader research on ill-structured problem solving in artificial intelligence and engineering in general. Further, it introduces the reader to many complicated issues in requirements modelling language design, starting from trivial questions and the definition of corresponding simple languages used to answer them, and progressing to increasingly complex issues and languages. In this way the reader is led step by step (and with the help of illustrations) to learn about the many challenges involved in designing modelling languages for requirements engineering. The book offers the first comprehensive treatment of a major challenge in requirements engineering and business analysis, namely, how to design and define requirements modelling languages. It is intended for researchers and graduate students interested in advanced topics of requirements engineering and formal language design.
A crucial step during the design and engineering of communication systems is the estimation of their performance and behavior; network simulation is particularly useful for mathematically complex or highly dynamic systems. This book focuses on tools, modeling principles, and state-of-the-art models for discrete-event-based network simulations, the standard method applied today in academia and industry for performance evaluation of new network designs and architectures. The focus of the tools part is on two distinct simulation engines: OMNeT++ and ns-3; it also deals with issues like parallelization, software integration, and hardware simulations. The parts dealing with modeling and models for network simulations are split into a wireless section and a section dealing with higher layers. The wireless section covers all essential modeling principles for dealing with physical layer, link layer, and wireless channel behavior. In addition, detailed models for prominent wireless systems like IEEE 802.11 and IEEE 802.16 are presented. In the part on higher layers, classical modeling approaches for the network layer, the transport layer, and the application layer are presented, in addition to modeling approaches for peer-to-peer networks and network topologies. The modeling parts are accompanied by catalogues of model implementations for a large set of different simulation engines. The book is aimed at master's students and PhD students of computer science and electrical engineering, as well as at researchers and practitioners from academia and industry who are dealing with network simulation at any layer of the protocol stack.
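The discrete-event principle these engines share can be sketched in a few lines; the following is a toy Python illustration of the core event-loop mechanism, not the OMNeT++ or ns-3 API, and the retransmission scenario is a made-up example.

```python
import heapq

def run(initial_events, until=float("inf")):
    """Toy discrete-event loop: repeatedly pop the earliest scheduled
    event and execute its handler; handlers may schedule further events
    via the schedule() callback they receive."""
    queue, counter = [], 0

    def schedule(time, handler):
        nonlocal counter
        # counter is a tie-breaker so handlers are never compared directly
        heapq.heappush(queue, (time, counter, handler))
        counter += 1

    for time, handler in initial_events:
        schedule(time, handler)

    now = 0.0
    while queue:
        now, _, handler = heapq.heappop(queue)
        if now > until:
            break
        handler(now, schedule)
    return now

# usage: a node that transmits at t=0 and retransmits every 1.0 time
# units until t=3 (hypothetical traffic for illustration)
transmissions = []
def transmit(t, schedule):
    transmissions.append(t)
    if t < 3:
        schedule(t + 1.0, transmit)

end_time = run([(0.0, transmit)])
```

Real simulation engines add event cancellation, modules, channels, and random-number management on top of exactly this priority-queue core.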
The Art of Computer Systems Performance Analysis "At last, a welcome and needed text for computer professionals who require practical, ready-to-apply techniques for performance analysis. Highly recommended!" —Dr. Leonard Kleinrock University of California, Los Angeles "An entirely refreshing text which has just the right mixture of theory and real world practice. The book is ideal for both classroom instruction and self-study." —Dr. Raymond L. Pickholtz President, IEEE Communications Society "An extraordinarily comprehensive treatment of both theoretical and practical issues." —Dr. Jeffrey P. Buzen Internationally recognized performance analysis expert "… it is the most thorough book available to date" —Dr. Erol Gelenbe Université René Descartes, Paris "… an extraordinary book.… A worthy addition to the bookshelf of any practicing computer or communications engineer" —Dr. Vinton G. Cerf Chairman, ACM SIGCOMM "This is an unusual object, a textbook that one wants to sit down and peruse. The prose is clear and fluent, but more important, it is witty." —Allison Mankin The Mitre Washington Networking Center Newsletter
This volume chronicles the 16th Annual Conference on Systems Engineering Research (CSER), held on May 8-9, 2018 at the University of Virginia, Charlottesville, Virginia, USA. The CSER offers researchers in academia, industry, and government a common forum to present, discuss, and influence systems engineering research. It provides access to forward-looking research from across the globe, by renowned academicians as well as perspectives from senior industry and government representatives. Co-founded by the University of Southern California and Stevens Institute of Technology in 2003, CSER has become the preeminent event for researchers in systems engineering across the globe. Topics include, though are not limited to, the following:
Systems in context: formative methods and requirements; integration, deployment, and assurance; human factors; safety and security.
Decisions, control, and design; systems modeling: optimization, multiple objectives, and synthesis; risk and resiliency; collaborative autonomy; coordination and distributed decision-making.
Prediction: prescriptive modeling and state estimation; stochastic approximation, stochastic optimization, and control.
Integrative data engineering: sensor management; design of experiments.
Conventional build-then-test practices are making today's embedded, software-reliant systems unaffordable to build. In response, more than thirty leading industrial organizations have joined SAE (formerly, the Society of Automotive Engineers) to define the SAE Architecture Analysis & Design Language (AADL) AS-5506 Standard, a rigorous and extensible foundation for model-based engineering analysis practices that encompass software system design, integration, and assurance. Using AADL, you can conduct lightweight and rigorous analyses of critical real-time factors such as performance, dependability, security, and data integrity. You can integrate additional established and custom analysis/specification techniques into your engineering environment, developing a fully unified architecture model that makes it easier to build reliable systems that meet customer expectations. Model-Based Engineering with AADL is the first guide to using this new international standard to optimize your development processes. Coauthored by Peter H. Feiler, the standard's author and technical lead, this introductory reference and tutorial is ideal for self-directed learning or classroom instruction, and is an excellent reference for practitioners, including architects, developers, integrators, validators, certifiers, first-level technical leaders, and project managers. Packed with real-world examples, it introduces all aspects of the AADL notation as part of an architecture-centric, model-based engineering approach to discovering embedded software systems problems earlier, when they cost less to solve. Throughout, the authors compare AADL to other modeling notations and approaches, while presenting the language via a complete case study: the development and analysis of a realistic example system through repeated refinement and analysis. 
Part One introduces both the AADL language and core Model-Based Engineering (MBE) practices, explaining basic software systems modeling and analysis in the context of an example system, and offering practical guidelines for effectively applying AADL. Part Two describes the characteristics of each AADL element, including their representations, applicability, and constraints. The Appendix includes comprehensive listings of AADL language elements, properties incorporated in the AADL standard, and a description of the book's example system.
Molecular recognition, also known as biorecognition, is at the heart of all biological interactions. Originating from protein stretching experiments, dynamic force spectroscopy (DFS) allows for the extraction of detailed information on the unbinding process of biomolecular complexes. It is becoming progressively more important in biochemical studies and is finding wider applications in areas such as biophysics and polymer science. In six chapters, Dynamic Force Spectroscopy and Biomolecular Recognition covers the most recent ideas and advances in the field of DFS applied to biorecognition.
Although DFS is a widespread, worldwide technique, no books focused on this subject have been available until now. Dynamic Force Spectroscopy and Biomolecular Recognition provides the state of the art of experimental data analysis and theoretical procedures, making it a useful tool for researchers applying DFS to study biorecognition processes.
Obtain better system performance, lower energy consumption, and avoid hand-coding arithmetic functions with this concise guide to automated optimization techniques for hardware and software design. High-level compiler optimizations and high-speed architectures for implementing FIR filters are covered, which can improve performance in communications, signal processing, computer graphics, and cryptography. Clearly explained algorithms and illustrative examples throughout make it easy to understand the techniques and write software for their implementation. Background information on the synthesis of arithmetic expressions and computer arithmetic is also included, making the book ideal for newcomers to the subject. This is an invaluable resource for researchers, professionals, and graduate students working in system level design and automation, compilers, and VLSI CAD.
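The FIR filters this blurb mentions are a natural target for such optimizations. As an illustrative sketch (a naive Python reference implementation, not the optimized hardware form the book derives), the direct-form FIR computation is a multiply-accumulate loop:

```python
def fir(x, h):
    """Direct-form FIR filter: y[n] = sum over k of h[k] * x[n-k].
    Each output sample is a chain of multiply-accumulates; techniques
    such as common-subexpression elimination or shift-add decomposition
    of constant coefficients reduce the cost of exactly this loop."""
    y = []
    for n in range(len(x)):
        acc = 0.0
        for k, coeff in enumerate(h):
            if n - k >= 0:  # ignore samples before the start of the signal
                acc += coeff * x[n - k]
        y.append(acc)
    return y

# usage: a two-tap moving average over a short signal
smoothed = fir([1.0, 2.0, 3.0, 4.0], [0.5, 0.5])
```

Because the coefficients h are constants fixed at design time, a hardware implementation can replace each multiplication with a tailored network of shifts and adds, which is where the automated techniques described above pay off.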
Stem Cell Labeling for Delivery and Tracking Using Noninvasive Imaging provides a comprehensive overview of cell therapy imaging, ranging from the basic biology of cell therapeutic choices to the preclinical and clinical applications of cell therapy. It emphasizes the use of medical imaging for therapeutic delivery/targeting, cell tracking, and determining therapeutic efficacy. The book first presents background information and insight on the major classes of stem and progenitor cells. It then describes the main imaging modalities and state-of-the-art techniques that are currently employed for stem cell tracking. In the final chapters, leading scholars offer clinical perspectives on existing and potential uses of stem cells as well as the impact of image-guided delivery and tracking in major organ systems. Through clear descriptions and color images, this volume illustrates how noninvasive imaging is used to track stem cells as they repair damaged tissue in the body. With contributions from some of the most prominent preclinical and clinical researchers in the field, the book helps readers to understand the evolving concepts of stem cell labeling and tracking as the field continues to move forward.
Cultural factors, in both the narrow sense of different national, racial, and ethnic groups, and in the broader sense of different groups of any type, play major roles in individual and group decisions. Written by an international, interdisciplinary group of experts, Cultural Factors in Systems Design: Decision Making and Action explores innovations in the understanding of how cultural differences influence decision making and action. Reflecting the diverse interests and viewpoints that characterize the current state of decision making and cultural research, the chapter authors represent a variety of disciplines and specialize in areas ranging from basic decision processes of individuals, to decisions made in teams and large organizations, to cultural influences on behavior. Balancing theoretical and practical perspectives, the book explores why the best-laid plans go awry, examining conditions that can yield unanticipated behaviors from complex, adaptive sociotechnical systems. It highlights the different ways in which East Asians and Westerners make decisions and explores how to model and investigate cultural influences in interpersonal interactions, social judgment, and decision making. The book also reviews decision field theory and examines its implications for cross-cultural decision making. With increasing globalization of organizations and interactions among people from various cultures, a better understanding of how cultural factors influence decision making and action is a necessity. Much is known about decision processes, culture and cognition, design of products and interfaces for human interaction with machines, and organizational processes; however, this knowledge is dispersed across several disciplines and research areas. Presenting a range of current research and new ideas, this volume brings together previously scattered research and explores how to apply it when designing systems that will be used by individuals of varied backgrounds.
IoT: Security and Privacy Paradigm covers the evolution of security and privacy issues in the Internet of Things (IoT). It focuses on bringing all security- and privacy-related technologies into one source, so that students, researchers, and practitioners can refer to this book for an easy understanding of IoT security and privacy issues. This edited book uses Security Engineering and Privacy-by-Design principles to design a secure IoT ecosystem and to implement cyber-security solutions. It takes readers on a journey that begins with understanding the security issues in IoT-enabled technologies and how these can be applied in various sectors. It walks readers through engaging with security challenges and building a safe infrastructure for IoT devices. The book helps readers gain an understanding of IoT security architecture and describes the state of the art of IoT countermeasures. It also differentiates security threats in IoT-enabled infrastructure from those in traditional ad hoc or infrastructural networks, and provides a comprehensive discussion of the security challenges and solutions in RFID, WSNs, and IoT. The book aims to convey the concepts of related technologies and the novel findings of researchers through its chapter organization. The primary audience includes specialists, researchers, graduate students, designers, experts, and engineers who are focused on research and security-related issues. Souvik Pal, PhD, has worked as Assistant Professor in Nalanda Institute of Technology, Bhubaneswar, and JIS College of Engineering, Kolkata (NAAC "A" Accredited College). He is the organizing chair and a plenary speaker of the RICE Conference in Vietnam, and organizing co-convener of ICICIT, Tunisia. He has served in many conferences as chair and keynote speaker, and has chaired international conference sessions and presented session talks internationally.
His research areas include Cloud Computing, Big Data, Wireless Sensor Network (WSN), Internet of Things, and Data Analytics. Vicente Garcia-Diaz, PhD, is an Associate Professor in the Department of Computer Science at the University of Oviedo (Languages and Computer Systems area). He is also the editor of several special issues in prestigious journals such as Scientific Programming and International Journal of Interactive Multimedia and Artificial Intelligence. His research interests include eLearning, machine learning, and the use of domain-specific languages in different areas. Dac-Nhuong Le, PhD, is Deputy Head of the Faculty of Information Technology, and Vice-Director of the Information Technology Apply and Foreign Language Training Center, Haiphong University, Vietnam. His research areas include evaluation computing and approximate algorithms, network communication, security and vulnerability, network performance analysis and simulation, cloud computing, IoT, and biomedical image processing. Presently, he is serving on the editorial board of several international journals and has authored nine computer science books published by Springer, Wiley, CRC Press, Lambert Publication, and Scholar Press.
Because of the continuous evolution of integrated circuit manufacturing (ICM) and design for manufacturability (DfM), most books on the subject are obsolete before they even go to press. That's why the field requires a reference that takes the focus off the numbers and concentrates more on larger economic concepts than on technical details. Semiconductors: Integrated Circuit Design for Manufacturability covers the gradual evolution of integrated circuit design (ICD) as a basis to propose strategies for improving return on investment (ROI) for ICD in manufacturing. Where most books put the spotlight on detailed engineering enhancements and their implications for device functionality, this one offers, among other things, crucial, valuable historical background and roadmapping, all illustrated with examples. It presents actual test cases that illustrate product challenges, examines possible solution strategies, and demonstrates how to select and implement the right one. This book shows that DfM is a powerful generic engineering concept with potential extending beyond its usual application in automated layout enhancements centered on proximity correction and pattern density. This material explores the concept of ICD for production by breaking down its major steps: product definition, design, layout, and manufacturing. Avoiding extended discussion of technology, techniques, or specific device dimensions, the author also avoids the clumsy chapter architecture that can hinder other books on this subject. The result is an extremely functional, systematic presentation that simplifies existing approaches to DfM, outlining a clear set of criteria to help readers assess reliability, functionality, and yield.
With careful consideration of the economic and technical trade-offs involved in ICD for manufacturing, this reference addresses techniques for physical, electrical, and logical design, keeping coverage fresh and concise for the designers, manufacturers, and researchers defining product architecture and research programs.
Haptics technology is being used more and more in different applications, such as in computer games for increased immersion, in surgical simulators to create a realistic environment for training of surgeons, in surgical robotics due to safety issues and in mobile phones to provide feedback from user action. The existence of these applications highlights a clear need to understand performance metrics for haptic interfaces and their implications on device design, use and application. Performance Metrics for Haptic Interfaces aims at meeting this need by establishing standard practices for the evaluation of haptic interfaces and by identifying significant performance metrics. Towards this end, a combined physical and psychophysical experimental methodology is presented. Firstly, existing physical performance measures and device characterization techniques are investigated and described in an illustrative way. Secondly, a wide range of human psychophysical experiments are reviewed and the appropriate ones are applied to haptic interactions. The psychophysical experiments are unified as a systematic and complete evaluation method for haptic interfaces. Finally, synthesis of both evaluation methods is discussed. The metrics provided in this state-of-the-art volume will guide readers in evaluating the performance of any haptic interface. The generic methodology will enable researchers to experimentally assess the suitability of a haptic interface for a specific purpose, to characterize and compare devices quantitatively and to identify possible improvement strategies in the design of a system.
Ubiquitous in today's consumer-driven society, embedded systems use microprocessors that are hidden in our everyday products and designed to perform specific tasks. Effective use of these embedded systems requires engineers to be proficient in all phases of this effort, from planning, design, and analysis to manufacturing and marketing. Taking a systems-level approach, Real-Time Embedded Systems: Optimization, Synthesis, and Networking describes the field from three distinct aspects that make up the three major trends in current embedded system design. The first section of the text examines optimization in real-time embedded systems. The authors present scheduling algorithms in multi-core embedded systems, instruct on a robust measurement against the inaccurate information that can exist in embedded systems, and discuss potential problems of heterogeneous optimization. The second section focuses on synthesis-level approaches for embedded systems, including a scheduling algorithm for phase change memory and scratch pad memory and a treatment of thermal-aware multiprocessor synthesis technology. The final section looks at networking with a focus on task scheduling in both a wireless sensor network and cloud computing. It examines the merging of networking and embedded systems and the resulting evolution of a new type of system known as the cyber physical system (CPS). Encouraging readers to discover how the computer interacts with its environment, Real-Time Embedded Systems provides a sound introduction to the design, manufacturing, marketing, and future directions of this important tool.
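The real-time scheduling analysis such texts build on has classic closed-form results; one example is the Liu and Layland utilization bound for rate-monotonic scheduling on a single core, sketched below in Python (a standard textbook result, not code from this book, and the task set is hypothetical).

```python
def rm_schedulable(tasks):
    """Sufficient (not necessary) schedulability test for rate-monotonic
    scheduling: n periodic tasks, each a (worst_case_exec_time, period)
    pair, are guaranteed schedulable on one core if their total
    utilization does not exceed n * (2**(1/n) - 1)."""
    n = len(tasks)
    utilization = sum(c / t for c, t in tasks)
    bound = n * (2 ** (1.0 / n) - 1)
    return utilization <= bound

# three hypothetical periodic tasks: (execution time, period)
ok = rm_schedulable([(1, 4), (1, 5), (2, 10)])  # U = 0.65, bound ~ 0.78
overload = rm_schedulable([(3, 4), (3, 5)])     # U = 1.35, clearly infeasible
```

Because the test is only sufficient, a task set that fails the bound may still be schedulable; exact response-time analysis is then needed, which is the kind of trade-off these optimization chapters explore.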
With the rapid advancement of information discovery techniques, machine learning and data mining continue to play a significant role in cybersecurity. Although several conferences, workshops, and journals focus on the fragmented research topics in this area, there has been no single interdisciplinary resource on past and current works and possible paths for future research in this area. This book fills this need. From basic concepts in machine learning and data mining to advanced problems in the machine learning domain, Data Mining and Machine Learning in Cybersecurity provides a unified reference for specific machine learning solutions to cybersecurity problems. It supplies a foundation in cybersecurity fundamentals and surveys contemporary challenges, detailing cutting-edge machine learning and data mining techniques. It also:
* Unveils cutting-edge techniques for detecting new attacks
* Contains in-depth discussions of machine learning solutions to detection problems
* Categorizes methods for detecting, scanning, and profiling intrusions and anomalies
* Surveys contemporary cybersecurity problems and unveils state-of-the-art machine learning and data mining solutions
* Details privacy-preserving data mining methods
This interdisciplinary resource includes technique review tables that allow for speedy access to common cybersecurity problems and associated data mining methods. Numerous illustrative figures help readers visualize the workflow of complex techniques, and more than forty case studies provide a clear understanding of the design and application of data mining and machine learning techniques in cybersecurity.
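The simplest baseline for the anomaly-detection methods such books survey is a statistical outlier filter; the sketch below is an illustrative z-score detector in Python (not one of the book's techniques, and the login-count data is made up).

```python
from statistics import mean, stdev

def zscore_anomalies(values, threshold=3.0):
    """Flag indices whose value lies more than `threshold` standard
    deviations from the sample mean -- e.g. a sudden spike in
    failed-login counts that merits investigation."""
    mu, sigma = mean(values), stdev(values)
    if sigma == 0:
        return []  # constant series: nothing stands out
    return [i for i, v in enumerate(values) if abs(v - mu) / sigma > threshold]

# hourly failed-login counts with one suspicious spike (made-up data)
alerts = zscore_anomalies([10, 11, 9, 10, 12, 10, 11, 100], threshold=2.0)
```

The machine learning approaches the book details (clustering, classification, profiling) generalize this idea to high-dimensional features where a single mean and standard deviation no longer suffice.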
A well-rounded, accessible exposition of honeypots in wired and wireless networks, this book addresses the topic from a variety of perspectives. Following a strong theoretical foundation, case studies enhance the practical understanding of the subject. The book covers the latest technology in information security and honeypots, including honeytokens, honeynets, and honeyfarms. Additional topics include denial of service, viruses, worms, phishing, and virtual honeypots and forensics. The book also discusses practical implementations and the current state of research.
For the last two decades, IS researchers have conducted empirical studies leading to a better understanding of the impact of systems analysis and design methods in business, managerial, and cultural contexts. SA&D research has established a balanced focus not only on technical issues, but also on organizational and social issues in the information society. This volume presents the latest, state-of-the-art research by well-known figures in the field. The chapters are grouped into three categories: techniques, methodologies, and approaches.
Your customers want rock-solid, bug-free software that does exactly what they expect it to do. Yet they can't always articulate their ideas clearly enough for you to turn them into code. You need Cucumber: a testing, communication, and requirements tool, all rolled into one. All the code in this book is updated for Cucumber 2.4, Rails 5, and RSpec 3.5. Express your customers' wild ideas as a set of clear, executable specifications that everyone on the team can read. Feed those examples into Cucumber and let it guide your development. Build just the right code to keep your customers happy. You can use Cucumber to test almost any system on any platform. Get started by using the core features of Cucumber and working with Cucumber's Gherkin DSL to describe, in plain language, the behavior your customers want from the system. Then write Ruby code that interprets those plain-language specifications and checks them against your application. Next, consolidate the knowledge you've gained with a worked example, where you'll learn more advanced Cucumber techniques, test asynchronous systems, and test systems that use a database. Recipes highlight some of the most difficult and commonly seen situations the authors have helped teams solve. With these patterns and techniques, test Ajax-heavy web applications with Capybara and Selenium, REST web services, Ruby on Rails applications, command-line applications, legacy applications, and more. Written by the creator of Cucumber and the co-founders of Cucumber Ltd., this authoritative guide will give you and your team all the knowledge you need to start using Cucumber with confidence. What You Need: Windows, Mac OS X (with Xcode) or Linux, Ruby 1.9.2 and upwards, Cucumber 2.4, Rails 5, and RSpec 3.5.
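Cucumber's own step definitions are written in Ruby; the core idea, matching plain-language Gherkin steps to executable code by pattern, can be sketched language-neutrally. The toy Python registry below illustrates that mechanism with hypothetical step names; it is not Cucumber's actual API.

```python
import re

# Registry mapping step patterns to step-definition functions.
STEPS = []

def step(pattern):
    """Decorator that registers a step definition under a regex pattern."""
    def register(fn):
        STEPS.append((re.compile(pattern), fn))
        return fn
    return register

@step(r"a cart with (\d+) items?")
def given_cart(ctx, n):
    ctx["cart"] = int(n)

@step(r"I add (\d+) items?")
def when_add(ctx, n):
    ctx["cart"] += int(n)

@step(r"the cart holds (\d+) items?")
def then_holds(ctx, n):
    assert ctx["cart"] == int(n)

def run_scenario(lines):
    """Execute a scenario: strip the Gherkin keyword from each line and
    dispatch the remaining text to the first matching step definition."""
    ctx = {}
    for line in lines:
        text = line.split(" ", 1)[1]
        for pattern, fn in STEPS:
            match = pattern.fullmatch(text)
            if match:
                fn(ctx, *match.groups())
                break
        else:
            raise ValueError("undefined step: " + line)
    return ctx

ctx = run_scenario([
    "Given a cart with 2 items",
    "When I add 3 items",
    "Then the cart holds 5 items",
])
```

In real Cucumber, the Gherkin lives in .feature files, step definitions live in Ruby, and the tool reports undefined or failing steps per scenario, but the match-and-dispatch loop above is the essence.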
Cancer is a complex disease process that spans multiple scales in space and time. Driven by cutting-edge mathematical and computational techniques, in silico biology provides powerful tools to investigate the mechanistic relationships of genes, cells, and tissues. It enables the creation of experimentally testable hypotheses, the integration of data across scales, and the prediction of tumor progression and treatment outcome (in silico oncology). Drawing on an interdisciplinary group of distinguished international experts, Multiscale Cancer Modeling discusses the scientific and technical expertise necessary to conduct innovative cancer modeling research across scales. It presents contributions from some of the top in silico modeling groups in the United States and Europe. The ultimate goal of multiscale modeling and simulation approaches is their use in clinical practice, such as supporting patient-specific treatment optimization. This volume covers state-of-the-art methods of multiscale cancer modeling and addresses the field's potential as well as future challenges. It encourages collaborations among researchers in various disciplines to achieve breakthroughs in cancer modeling.
Analysis and Synthesis of Computer Systems presents a broad overview of methods used to evaluate the performance of computer systems and networks, manufacturing systems, and interconnected service systems. Aside from a highly readable style that rigorously addresses all subjects, this second edition includes new chapters on numerical methods for queueing models and on G-networks, the latter being a new area of queueing theory that one of the authors has pioneered. This book will have a broad appeal to students, practitioners, and researchers in several different areas, including practicing computer engineers as well as computer science and engineering students.
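The closed-form results such queueing analysis yields can be illustrated with the elementary M/M/1 queue; the Python sketch below uses standard textbook formulas, not code from this book, and the arrival and service rates are hypothetical.

```python
def mm1_metrics(arrival_rate, service_rate):
    """Steady-state measures of an M/M/1 queue (Poisson arrivals at
    rate lambda, exponential service at rate mu, a single server).
    Requires utilization rho = lambda/mu < 1 for stability."""
    rho = arrival_rate / service_rate
    if rho >= 1:
        raise ValueError("queue is unstable: require lambda < mu")
    return {
        "utilization": rho,
        "mean_in_system": rho / (1 - rho),                        # L
        "mean_time_in_system": 1 / (service_rate - arrival_rate),  # W
        "mean_in_queue": rho ** 2 / (1 - rho),                    # Lq
    }

# packets arriving at 2 per second, served at 4 per second
m = mm1_metrics(2.0, 4.0)
```

Little's law ties the measures together: mean_in_system equals arrival_rate times mean_time_in_system (here 1.0 = 2.0 x 0.5), a sanity check that carries over to the far richer network-of-queues models the book analyzes.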
This absorbing book provides a broad introduction to the surprising nature of change, and explains how the Law of Unintended Consequences arises from the waves of change that follow one simple change. Change is a constant topic of discussion, whether it be climate, politics, technology, or any of the many other changes in our lives. However, does anyone truly understand what change is? Over time, mankind has deliberately built social and technology-based systems that are goal-directed: there are goals to achieve and requirements to be met. Building such systems is man's way of planning for the future, and these plans are based on predicting the behavior of the system and its environment at specified times in the future. Unfortunately, in a truly complex social or technical environment, this planned predictability can break down into a morass of surprising and unexpected consequences. Such unpredictability stems from the propagation of the effects of change through the influence of one event on another. The Nature of Change explains in detail the mechanism of change and will serve as an introduction to complex systems, or as complementary reading for systems engineering. This textbook will be especially useful to professionals in system building or business change management, and to students studying systems in a variety of fields such as information technology, business, law and society.
In considering ways that physics has helped advance biology and medicine, what typically comes to mind are the various tools used by researchers and clinicians. We think of the optics put to work in microscopes, endoscopes, and lasers; the advanced diagnostics permitted through magnetic, x-ray, and ultrasound imaging; and even the nanotools that allow us to tinker with molecules. We build these instruments in accordance with the closest thing to absolute truths we know, the laws of physics, but seldom do we apply those same constants of physics to the study of our own carbon-based beings, such as fluidics applied to the flow of blood, or the laws of motion and energy applied to working muscle. Instead of considering one aspect or the other, Handbook of Physics in Medicine and Biology explores the full gamut of physics' relationship to biology and medicine in more than 40 chapters, written by experts from the lab to the clinic. The book begins with a basic description of specific biological features and delves into the physics of explicit anatomical structures, starting with the cell. Later chapters look at the body's senses, organs, and systems, continuing to explain biological functions in the language of physics. The text then details various analytical modalities such as imaging and diagnostic methods. A final section turns to future perspectives related to tissue engineering, including the biophysics of prostheses and regenerative medicine. The editor's approach throughout is to address the major healthcare challenges, including tissue engineering and reproductive medicine, as well as development of artificial organs and prosthetic devices. The contents are organized by organ type and biological function, which is given a clear description in terms of electric, mechanical, thermodynamic, and hydrodynamic properties. In addition to the physical descriptions, each chapter discusses principles of related clinical diagnostic methods.