This book addresses extensible and adaptable computing, a broad range of methods and techniques used to systematically tackle the future growth of systems and respond proactively and seamlessly to change. The book is divided into five main sections: Agile Software Development, Data Management, Machine Learning, Web Intelligence and Computing in Education. These sub-domains of computing work together in mutually complementary ways to build systems and applications that scale well, and which can successfully meet the demands of changing times and contexts. The topics under each track have been carefully selected to highlight certain qualitative aspects of applications and systems, such as scalability, flexibility, integration, efficiency and context awareness. The first section (Agile Software Development) includes six contributions that address related issues, including risk management, test case prioritization and tools, open source software reliability and predicting the change proneness of software. The second section (Data Management) includes discussions on myriad issues, such as extending database caches using solid-state devices, efficient data transmission, healthcare applications and data security. In turn, the third section (Machine Learning) gathers papers that investigate ML algorithms and present their specific applications such as portfolio optimization, disruption classification and outlier detection. The fourth section (Web Intelligence) covers emerging applications such as metaphor detection, language identification and sentiment analysis, and brings to the fore web security issues such as fraud detection and trust/reputation systems. In closing, the fifth section (Computing in Education) focuses on various aspects of computer-aided pedagogical methods.
This book provides a snapshot of the current state of the art in the fields of mobile and wireless technology, security and applications. As the proceedings of the 2nd International Conference on Mobile and Wireless Technology (ICMWT2015), it represents the outcome of a unique platform for researchers and practitioners from academia and industry - including those working on data management and mobile security - to share cutting-edge developments in mobile and wireless science and technology. The contributions presented here describe the latest academic and industrial research from the international mobile and wireless community. The scope covers four major topical areas: mobile and wireless networks and applications; security in mobile and wireless technology; mobile data management and applications; and mobile software. The book will be a valuable reference for current researchers in academia and industry, and a useful resource for graduate-level students working on mobile and wireless technology.
This book presents a collection of research papers that address the challenge of how to develop software in a principled way that, in particular, enables reasoning. The individual papers approach this challenge from various perspectives including programming languages, program verification, and the systematic variation of software. Topics covered include programming abstractions for concurrent and distributed software, specification and verification techniques for imperative programs, and development techniques for software product lines. With this book the editors and authors wish to acknowledge - on the occasion of his 60th birthday - the work of Arnd Poetzsch-Heffter, who has made major contributions to software technology throughout his career. It features articles on Arnd's broad research interests including, among others, the implementation of programming languages, formal semantics, specification and verification of object-oriented and concurrent programs, programming language design, distributed systems, software modeling, and software product lines. All contributing authors are leading experts in programming languages and software engineering who have collaborated with Arnd in the course of his career. Overall, the book offers a collection of high-quality articles, presenting original research results, major case studies, and inspiring visions. Some of the work included here was presented at a symposium in honor of Arnd Poetzsch-Heffter, held in Kaiserslautern, Germany, in November 2018.
To deal with the flexible architectures and evolving functionalities of complex modern systems, the agent metaphor and agent-based computing are often the most appropriate software design approach. As a result, a broad range of special-purpose design processes has been developed in the last several years to tackle the challenges of these specific application domains. In this context, the IEEE-FIPA Design Process Documentation Template SC0097B was defined in early 2012; it facilitates the representation of design processes and method fragments through the use of standardized templates, thus supporting the creation of easily sharable repositories and facilitating the composition of new design processes. Following this standardization approach, this book gathers the documentation of some of the best-known agent-oriented design processes. After an introductory section describing the goal of the book and the existing IEEE FIPA standard for design process documentation, thirteen processes (including the widely known OpenUP, the de facto standard in object-oriented software engineering) are documented by their original creators or other well-known scientists working in the field. As a result, this is the first work to adopt a standard, unified descriptive approach for documenting different processes, making it much easier to study the individual processes, to rigorously compare them, and to apply them in industrial projects. While there are a few books on the market describing individual agent-oriented design processes, none of them presents all the processes, let alone in the same format. With this handbook, researchers and professional software developers looking for an overview, as well as for detailed and standardized descriptions of design processes, will for the first time find a comprehensive presentation of the most important agent-oriented design processes - an invaluable resource when developing solutions in various application areas.
This book addresses the open problem of engineering normative open systems using the multi-agent paradigm. Normative open systems are systems in which heterogeneous and autonomous entities and institutions coexist in a complex social and legal framework that can evolve to address the different and often conflicting objectives of the many stakeholders involved. Presenting a software engineering approach that covers both the analysis and design of these kinds of systems and deals with the open issues in the area, ROMAS (Regulated Open Multi-Agent Systems) defines a specific multi-agent architecture, meta-model, methodology and CASE tool. The CASE tool is based on Model-Driven technology and integrates graphical design with the formal verification of some properties of these systems by means of model checking techniques. The book uses tables to enhance the reader's insight into the most important requirements for designing normative open multi-agent systems, and provides a detailed, easy-to-understand description of the ROMAS approach and the advantages of using it. The method is illustrated with case studies, presented with illustrations of the developments, through which the reader can develop a comprehensive understanding of how to apply ROMAS to a given problem. Reading this book will help readers to understand the increasing demand for normative open systems and their development requirements; to understand how multi-agent approaches can be used to develop systems of this kind; to learn an easy-to-use and complete engineering method for large-scale and complex normative systems; and to recognize how Model-Driven technology can be used to integrate the analysis, design, verification and implementation of multi-agent systems.
This book highlights the current challenges for engineers involved in product development and the associated changes in procedure they make necessary. Methods for systematically analyzing the requirements for safety and security mechanisms are described using examples of how they are implemented in software and hardware, and the book discusses how their effectiveness can be demonstrated in terms of functional and design safety. Given today's new E-mobility and automated driving approaches, new challenges are arising and further issues concerning "Road Vehicle Safety" and "Road Traffic Safety" have to be resolved. To address the growing complexity of vehicle functions, as well as the increasing need to accommodate interdisciplinary project teams, previous development approaches now have to be reconsidered, and system engineering approaches and proven management systems need to be supplemented or wholly redefined. The book presents a continuous system development process, starting with the basic requirements of quality management and continuing until the release of a vehicle and its components for road use. Attention is paid to the necessary definition of the respective development item, the threat, hazard and risk analysis, and safety concepts and their relation to architecture development, while the book also addresses the aspects of product realization in mechanics, electronics and software as well as the subsequent testing, verification, integration and validation phases. In November 2011, requirements for the Functional Safety (FuSa) of road vehicles were first published in ISO 26262. The processes and methods described here are intended to show developers how vehicle systems can be implemented according to ISO 26262, so that their compliance with the relevant standards can be demonstrated as part of a safety case, including audits, reviews and assessments.
Validation and verification is an area of software engineering that has been around since the early stages of program development, especially its best-known activity: testing. Testing, the dynamic side of validation and verification (V&V), has been complemented with other, more formal software engineering techniques, and so static verification - traditional in formal methods - has been joined by model checking and other techniques. "Verification, Validation and Testing in Software Engineering" offers thorough coverage of many valuable formal and semiformal V&V techniques. It explores, depicts, and provides examples of different applications of V&V across many areas of software development - including real-time applications - where V&V techniques are required.
This unique text/reference reviews the key principles and techniques in conceptual modelling which are of relevance to specialists in the field of cultural heritage. Information modelling tasks are a vital aspect of work and study in such disciplines as archaeology, anthropology, history, and architecture. Yet the concepts and methods behind information modelling are rarely covered by the training in cultural heritage-related fields. With the increasing popularity of the digital humanities, and the rapidly growing need to manage large and complex datasets, the importance of information modelling in cultural heritage is greater than ever before. To address this need, this book serves in the place of a course on software engineering, assuming no previous knowledge of the field. Topics and features:

- Presents a general philosophical introduction to conceptual modelling
- Introduces the basics of conceptual modelling, using the ConML language as an infrastructure
- Reviews advanced modelling techniques relating to issues of vagueness, temporality and subjectivity, in addition to such topics as metainformation and feature redefinition
- Proposes an ontology for cultural heritage supported by the Cultural Heritage Abstract Reference Model (CHARM), to enable the easy construction of conceptual models
- Describes various usage scenarios and applications of cultural heritage modelling, offering practical tips on how to use different techniques to solve real-world problems

This interdisciplinary work is an essential primer for tutors and students (at both undergraduate and graduate level) in any area related to cultural heritage, including archaeology, anthropology, art, history, architecture, or literature. Cultural heritage managers, researchers, and professionals will also find this to be a valuable reference, as will anyone involved in database design, data management, or the conceptualization of cultural heritage in general. Dr. Cesar Gonzalez-Perez is a Staff Scientist at the Institute of Heritage Sciences (Incipit), within the Spanish National Research Council (CSIC), Santiago de Compostela, Spain.
The latest work by the world's leading authorities on the use of formal methods in computer science is presented in this volume, based on the 1995 International Summer School in Marktoberdorf, Germany. Logic is of special importance in computer science, since it provides the basis for giving correct semantics of programs, for specification and verification of software, and for program synthesis. The lectures presented here provide the basic knowledge a researcher in this area should have and give excellent starting points for exploring the literature. Topics covered include semantics and category theory, machine based theorem proving, logic programming, bounded arithmetic, proof theory, algebraic specifications and rewriting, algebraic algorithms, and type theory.
Multiple criteria decision aid (MCDA) methods are illustrated in this book through theoretical and computational techniques utilizing Python. Existing methods are presented in detail with a step-by-step learning approach. Theoretical background is given for TOPSIS, VIKOR, PROMETHEE, SIR, AHP, goal programming, and their variations. Comprehensive numerical examples are also discussed for each method in conjunction with easy-to-follow Python code. Extensions to multiple criteria decision making algorithms such as fuzzy number theory and group decision making are introduced and implemented through Python as well. Readers will learn how to implement and use each method based on the problem, the available data, the stakeholders involved, and the various requirements needed. Focusing on the practical aspects of the multiple criteria decision making methodologies, this book is designed for researchers, practitioners and advanced graduate students in the applied mathematics, information systems, operations research and business administration disciplines, as well as other engineers and scientists oriented towards interdisciplinary research.

"Readers will greatly benefit from this book by learning and applying various MCDM/A methods." (Adiel Teixeira de Almeida, CDSID - Center for Decision System and Information Development, Universidade Federal de Pernambuco, Recife, Brazil)

"Promoting the development and application of multicriteria decision aid is essential to ensure more ethical and sustainable decisions. This book is a great contribution to this objective. It is a perfect blend of theory and practice, providing potential users and researchers with the theoretical bases of some of the best-known methods as well as with the computing tools needed to practice, to compare and to put these methods to use." (Jean-Pierre Brans, Vrije Universiteit Brussel, Brussels, Belgium)

"This book is intended for researchers, practitioners and students alike in decision support who wish to familiarize themselves quickly and efficiently with multicriteria decision aiding algorithms. The proposed approach is original, as it presents a selection of methods from the theory to the practical implementation in Python, including a detailed example. This will certainly facilitate the learning of these techniques, and contribute to their effective dissemination in applications." (Patrick Meyer, IMT Atlantique, Lab-STICC, Univ. Bretagne Loire, Brest, France)
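To give a flavour of the kind of step-by-step Python implementation the blurb describes, here is a minimal TOPSIS ranking sketch. It is illustrative only and not the book's own code; the decision matrix, weights and criteria directions are made-up example data.

```python
# Minimal TOPSIS sketch (illustrative; example data is invented, not from the book).
import numpy as np

def topsis(matrix, weights, benefit):
    """Rank alternatives with TOPSIS.
    matrix : (m alternatives x n criteria) array of raw scores
    weights: length-n weights summing to 1
    benefit: length-n booleans, True = higher is better, False = cost criterion
    """
    X = np.asarray(matrix, dtype=float)
    w = np.asarray(weights, dtype=float)
    # 1) vector-normalize each criterion column and 2) apply the weights
    V = w * X / np.linalg.norm(X, axis=0)
    # 3) ideal best and ideal worst values per criterion
    best = np.where(benefit, V.max(axis=0), V.min(axis=0))
    worst = np.where(benefit, V.min(axis=0), V.max(axis=0))
    # 4) Euclidean distances of each alternative to the ideal points
    d_best = np.linalg.norm(V - best, axis=1)
    d_worst = np.linalg.norm(V - worst, axis=1)
    # 5) relative closeness: higher means closer to the ideal solution
    return d_worst / (d_best + d_worst)

scores = topsis(matrix=[[250, 16, 12], [200, 16, 8], [300, 32, 16]],
                weights=[0.4, 0.4, 0.2],
                benefit=[False, True, True])   # first criterion is a cost
print(scores.argsort()[::-1])  # alternatives ranked best to worst
```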
In a down-to-earth manner, the volume lucidly presents how the fundamental concepts, methodology, and algorithms of Computational Intelligence are efficiently exploited in Software Engineering and opens up a novel and promising avenue of comprehensive analysis and advanced design of software artifacts. It shows how the paradigm and the best practices of Computational Intelligence can be creatively explored to carry out comprehensive software requirement analysis, and to support design, testing, and maintenance. Software Engineering is an intensive knowledge-based endeavor of an inherently human-centric nature, which profoundly relies on acquiring semiformal knowledge and then processing it to produce a running system. The knowledge spans a wide variety of artifacts, from requirements, captured in the interaction with customers, to design practices, testing, and code management strategies, which rely on the knowledge of the running system. This volume consists of contributions written by widely acknowledged experts in the field who reveal how Software Engineering benefits from the key foundations and synergistic technologies of Computational Intelligence, with a focus on knowledge representation, learning mechanisms, and population-based global optimization strategies. This book can serve as highly useful reference material for researchers, software engineers, and graduate and senior undergraduate students in Software Engineering and its sub-disciplines, Internet engineering, Computational Intelligence, management, operations research, and knowledge-based systems.
Genetic programming (GP) is a popular heuristic methodology of program synthesis with origins in evolutionary computation. In this generate-and-test approach, candidate programs are iteratively produced and evaluated. The latter involves running programs on tests, where they exhibit complex behaviors reflected in changes of variables, registers, or memory. That behavior not only ultimately determines program output, but may also reveal a program's 'hidden qualities' and important characteristics of the considered synthesis problem. However, conventional GP is oblivious to most of that information and usually cares only about the number of tests passed by a program. This 'evaluation bottleneck' leaves the search algorithm underinformed about the actual and potential qualities of candidate programs. This book proposes behavioral program synthesis, a conceptual framework that opens GP to detailed information on program behavior in order to make program synthesis more efficient. Several existing and novel mechanisms subscribing to that perspective to varying extents are presented and discussed, including implicit fitness sharing, semantic GP, co-solvability, trace convergence analysis, pattern-guided program synthesis, and behavioral archives of subprograms. The framework involves several concepts that are new to GP, including the execution record, the combined trace, and the search driver, a generalization of the objective function. Empirical evidence gathered in several presented experiments clearly demonstrates the usefulness of the behavioral approach. The book also contains an extensive discussion of the implications of the behavioral perspective for program synthesis and beyond.
This is the first book organized around code clone analysis. To cover the broad studies of code clone analysis, this book selects past research results that are important to the progress of the field and updates them with new results and future directions. The first chapter provides an introduction for readers who are inexperienced in the foundation of code clone analysis, defines clones and related terms, and discusses the classification of clones. The chapters that follow are categorized into three main parts to present 1) major tools for code clone analysis, 2) fundamental topics such as evaluation benchmarks, clone visualization, code clone searches, and code similarities, and 3) applications to actual problems. Each chapter includes a valuable reference list that will help readers to achieve a comprehensive understanding of this diverse field and to catch up with the latest research results. Code clone analysis relies heavily on computer science theories such as pattern matching algorithms, computer language, and software metrics. Consequently, code clone analysis can be applied to a variety of real-world tasks in software development and maintenance such as bug finding and program refactoring. This book will also be useful in designing an effective curriculum that combines theory and application of code clone analysis in university software engineering courses.
"Distributed Programming: Theory and Practice" presents a practical and rigorous method to develop distributed programs that correctly implement their specifications. The method also covers how to write specifications and how to use them. Numerous examples such as bounded buffers, distributed locks, message-passing services, and distributed termination detection illustrate the method. Larger examples include data transfer protocols, distributed shared memory, and TCP network sockets. "Distributed Programming: Theory and Practice" bridges the gap between books that focus on specific concurrent programming languages and books that focus on distributed algorithms. Programs are written in a "real-life" programming notation, along the lines of Java and Python with explicit instantiation of threads and programs.Students and programmers will see these as programs and not "merely" algorithms in pseudo-code. The programs implement interesting algorithms and solve problems that are large enough to serve as projects in programming classes and software engineering classes. Exercises and examples are included at the end of each chapter with on-line access to the solutions. "Distributed Programming: Theory and Practice "is designed as an advanced-level text book for students in computer science and electrical engineering. Programmers, software engineers and researchers working in this field will also find this book useful."
The development of a software system with an acceptable level of reliability and quality within the available time frame and budget is a challenging objective. This objective can be achieved to some extent through early prediction of the number of faults present in the software, which reduces the cost of development as it provides an opportunity to make early corrections during the development process. The book presents an early software reliability prediction model that helps to improve the reliability of software systems by monitoring reliability in each development phase, i.e. from the requirements phase to the testing phase. Different approaches to tackling this challenging issue are discussed in this book. An important approach presented here is a model that classifies modules into two categories: (a) fault-prone and (b) not fault-prone. The methods presented in this book for assessing the expected number of faults present in the software, assessing the expected number of faults present at the end of each phase, and classifying software modules into the fault-prone or not fault-prone category are easy to understand, develop and use for any practitioner. Practitioners can expect to gain more information about their development process and product reliability, which can help to optimize the resources used.
Collaboration among individuals - from users to developers - is central to modern software engineering. It takes many forms: joint activity to solve common problems, negotiation to resolve conflicts, creation of shared definitions, and both social and technical perspectives impacting all software development activity. The difficulties of collaboration are also well documented. The grand challenge is not only to ensure that developers in a team deliver effectively as individuals, but that the whole team delivers more than just the sum of its parts. The editors of this book have assembled an impressive selection of authors, who have contributed to an authoritative body of work tackling a wide range of issues in the field of collaborative software engineering. The resulting volume is divided into four parts, preceded by a general editorial chapter providing a more detailed review of the domain of collaborative software engineering. Part 1 is on "Characterizing Collaborative Software Engineering," Part 2 examines various "Tools and Techniques," Part 3 addresses organizational issues, and finally Part 4 contains four examples of "Emerging Issues in Collaborative Software Engineering." As a result, this book delivers a comprehensive state-of-the-art overview and empirical results for researchers in academia and industry in areas like software process management, empirical software engineering, and global software development. Practitioners working in this area will also appreciate the detailed descriptions and reports which can often be used as guidelines to improve their daily work.
This book examines how and why collaborative quality assurance techniques, particularly pair programming and peer code review, affect group cognition and software quality in agile software development teams. Prior research on these extremely popular but also costly techniques has focused on isolated pairs of developers and ignored the fact that they are typically applied in larger, enduring teams. This book is one of the first studies to investigate how these techniques depend on and influence the joint cognitive accomplishments of entire development teams rather than individuals. It employs theories on transactive memory systems and functional affordances to provide answers based on empirical research. The mixed-methods research presented includes several in-depth case studies and survey results from more than 500 software developers, team leaders, and product managers in 81 software development teams. The book's findings will advance IS research and have explicit implications for developers of code review tools, information systems development teams, and software development managers.
Managing Complexity is the first book that clearly defines the concept of Complexity, explains how Complexity can be measured and tuned, and describes the seven key features of Complex Systems:

1. Connectivity
2. Autonomy
3. Emergence
4. Nonequilibrium
5. Non-linearity
6. Self-organisation
7. Co-evolution

The thesis of the book is that the complexity of the environment in which we work and live offers new opportunities and that the best strategy for surviving and prospering under conditions of complexity is to develop adaptability to perpetually changing conditions. An effective method for designing adaptability into business processes using multi-agent technology is presented and illustrated by several extensive examples, including adaptive, real-time scheduling of taxis, sea-going tankers, road transport, supply chains, railway trains, production processes and swarms of small space satellites. Additional case studies include adaptive servicing of the International Space Station; adaptive processing of design changes of large structures such as the wings of the largest airliner in the world; and dynamic data mining, knowledge discovery and distributed semantic processing. Finally, the book provides a foretaste of the next generation of complex issues, notably the Internet of Things, Smart Cities, Digital Enterprises and Smart Logistics.
Explores and identifies the main issues, concepts, principles and evolution of software testing, including software quality engineering and testing concepts, test data generation, test deployment analysis, and software test management. This book examines the principles, concepts, and processes that are fundamental to the software testing function. It is divided into five broad parts. Part I introduces software testing in the broader context of software engineering and explores the qualities that testing aims to achieve or ascertain, as well as the lifecycle of software testing. Part II covers the mathematical foundations of software testing, which include software specification, program correctness and verification, concepts of software dependability, and a software testing taxonomy. Part III discusses test data generation, specifically functional criteria and structural criteria. Test oracle design, test driver design, and test outcome analysis are covered in Part IV. Finally, Part V surveys managerial aspects of software testing, including software metrics, software testing tools, and software product line testing.

* Presents software testing not as an isolated technique, but as part of an integrated discipline of software verification and validation
* Proposes program testing and program correctness verification within the same mathematical model, making it possible to deploy the two techniques in concert, by virtue of the law of diminishing returns
* Defines the concept of a software fault and the related concept of relative correctness, and shows how relative correctness can be used to characterize monotonic fault removal
* Presents the activity of software testing as a goal-oriented activity, and explores how the conduct of the test depends on the selected goal
* Covers all phases of the software testing lifecycle, including test data generation, test oracle design, test driver design, and test outcome analysis

Software Testing: Concepts and Operations is a great resource for software quality and software engineering students because it presents them with the fundamentals that help them to prepare for their ever-evolving discipline.
Tourism is one of the most rapidly evolving industries of the 21st century. The integration of technological advancements plays a crucial role in the ability of many countries, all over the world, to attract visitors and maintain a distinct edge in a highly competitive market. The Handbook of Research on Technological Developments for Cultural Heritage and eTourism Applications is a pivotal reference source for the latest research findings on the utilization of information and communication technologies in tourism. Featuring extensive coverage of relevant areas such as smart tourism, user interfaces, and social media, this publication is an ideal resource for policy makers, academicians, researchers, advanced-level students, and technology developers seeking current research on new trends in ICT systems and applications in tourism.
This book provides a comprehensive overview of the field of software processes, covering in particular the following essential topics: software process modelling, software process and lifecycle models, software process management, deployment and governance, and software process improvement (including assessment and measurement). It does not propose any new processes or methods; rather, it introduces students and software engineers to software processes and life cycle models, covering the different types ranging from "classical", plan-driven via hybrid to agile approaches. The book is structured as follows: In chapter 1, the fundamentals of the topic are introduced: the basic concepts, a historical overview, and the terminology used. Next, chapter 2 covers the various approaches to modelling software processes and lifecycle models, before chapter 3 discusses the contents of these models, addressing plan-driven, agile and hybrid approaches. The following three chapters address various aspects of using software processes and lifecycle models within organisations, and consider the management of these processes, their assessment and improvement, and the measurement of both software and software processes. Working with software processes normally involves various tools, which are the focus of chapter 7, before a look at current trends in software processes in chapter 8 rounds out the book. This book is mainly intended for graduate students and practicing professionals. It can be used as a textbook for courses and lectures, for self-study, and as a reference guide. When used as a textbook, it may support courses and lectures on software processes, or be used as complementary literature for more basic courses, such as introductory courses on software engineering or project management. To this end, it includes a wealth of examples and case studies, and each chapter is complemented by exercises that help readers gain a better command of the concepts discussed.
This book focuses on defining the achievements of software engineering in the past decades and showcasing visions for the future. It features a collection of articles by some of the most prominent researchers and technologists who have shaped the field: Barry Boehm, Manfred Broy, Patrick Cousot, Erich Gamma, Yuri Gurevich, Tony Hoare, Michael A. Jackson, Rustan Leino, David L. Parnas, Dieter Rombach, Joseph Sifakis, Niklaus Wirth, Pamela Zave, and Andreas Zeller. The contributed articles reflect the authors' individual views on what constitutes the most important issues facing software development. Both research- and technology-oriented contributions are included. The book provides at the same time a record of a symposium held at ETH Zurich on the occasion of Bertrand Meyer's 60th birthday.
To solve performance problems in modern computing infrastructures, often comprising thousands of servers running hundreds of applications, spanning multiple tiers, you need tools that go beyond mere reporting. You need tools that enable performance analysis of application workflow across the entire enterprise. That's what PDQ (Pretty Damn Quick) provides. PDQ is an open-source performance analyzer based on the paradigm of queues. Queues are ubiquitous in every computing environment as buffers, and since any application architecture can be represented as a circuit of queueing delays, PDQ is a natural fit for analyzing system performance. Building on the success of the first edition, this considerably expanded second edition now comprises four parts. Part I contains the foundational concepts, as well as a new first chapter that explains the central role of queues in successful performance analysis. Part II provides the basics of queueing theory in a highly intelligible style for the non-mathematician; little more than high-school algebra being required. Part III presents many practical examples of how PDQ can be applied. The PDQ manual has been relegated to an appendix in Part IV, along with solutions to the exercises contained in each chapter. Throughout, the Perl code listings have been newly formatted to improve readability. The PDQ code and updates to the PDQ manual are available from the author's web site at www.perfdynamics.com
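The blurb's central idea is that an application architecture can be analyzed as a circuit of queueing delays. Purely as an illustration of that idea (this is plain M/M/1 queueing arithmetic in Python, not PDQ's actual API, and the arrival rate and service demands are made-up example numbers), a tiny open-network calculation might look like this:

```python
# Illustrative sketch of the "circuit of queueing delays" view: a request flows
# through three tiers, each modeled as an M/M/1 queueing center visited once.
# NOT PDQ's API; example numbers are invented.

def residence_time(arrival_rate, service_demand):
    """Mean residence time (service + waiting) at an M/M/1 queue, in seconds."""
    utilization = arrival_rate * service_demand
    if utilization >= 1.0:
        raise ValueError("queue is saturated: utilization >= 100%")
    return service_demand / (1.0 - utilization)

arrival_rate = 8.0                                   # requests per second
demands = {"web": 0.02, "app": 0.05, "db": 0.08}     # seconds of service per request

total = 0.0
for tier, demand in demands.items():
    r = residence_time(arrival_rate, demand)
    print(f"{tier}: utilization {arrival_rate * demand:.0%}, residence {r * 1000:.1f} ms")
    total += r
print(f"end-to-end response time: {total * 1000:.1f} ms")
```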
"Requirements Engineering and Management for Software Development Projects" presents a complete guide on requirements for software development including engineering, computer science and management activities. It is the first book to cover all aspects of requirements management in software development projects. This book introduces the understanding of the requirements, elicitation and gathering, requirements analysis, verification and validation of the requirements, establishment of requirements, different methodologies in brief, requirements traceability and change management among other topics. The best practices, pitfalls, and metrics used for efficient software requirements management are also covered. Intended for the professional market, including software engineers, programmers, designers and researchers, this book is also suitable for advanced-level students in computer science or engineering courses as a textbook or reference." |
You may like...

- Dark Silicon and Future On-chip Systems… - Suyel Namasudra, Hamid Sarbazi-Azad (Hardcover) - R4,186 (Discovery Miles 41 860)
- Research Anthology on Architectures… - Information Resources Management Association (Hardcover) - R13,716 (Discovery Miles 137 160)
- Essential Java for Scientists and… - Brian Hahn, Katherine Malan (Paperback) - R1,341 (Discovery Miles 13 410)