Broadband communications is widely recognized as one of the key technologies for building the next generation global network infrastructure to support ever-increasing multimedia applications. This book contains a collection of timely leading-edge research papers that address some of the important issues of providing such a broadband network infrastructure. Broadband Communications represents the selected proceedings of the Fifth International Conference on Broadband Communications, sponsored by the International Federation for Information Processing (IFIP) and held in Hong Kong in November 1999. The book is organized according to the eighteen technical sessions of the conference. The topics covered include internet services, traffic modeling, internet traffic control, performance evaluation, billing, pricing, admission policy, mobile network protocols, TCP/IP performance, mobile network performance, bandwidth allocation, switching systems, traffic flow control, routing, congestion and admission control, multicast protocols, network management, and quality of service. It will serve as an essential reference for computer scientists and practitioners.
Distributed Space-Time Coding (DSTC) is a cooperative relaying scheme that enables high reliability in wireless networks. This brief presents the basic concept of DSTC, its achievable performance, generalizations, code design, and differential use. Recent results on training design and channel estimation for DSTC and the performance of training-based DSTC are also discussed.
This book constitutes the proceedings of the 27th European Conference on Object-Oriented Programming, ECOOP 2013, held in Montpellier, France, in July 2013. The 29 papers presented in this volume were carefully reviewed and selected from 116 submissions. They are organized in topical sections on aspects, components, and modularity; types; language design; concurrency, parallelism, and distribution; analysis and verification; modelling and refactoring; testing, profiling, and empirical studies; and implementation.
The first Annual Working Conference of WG11.4 of the International Federation for Information Processing (IFIP) focuses on various state-of-the-art concepts in the field of Network and Distributed Systems Security. Our society is rapidly evolving and irreversibly set on a course governed by electronic interactions. We have seen the birth of e-mail in the early seventies, and are now facing new challenging applications such as e-commerce, e-government, ... The more our society relies on electronic forms of communication, the more the security of these communication networks is essential for its well-functioning. As a consequence, research on methods and techniques to improve network security is of paramount importance. This Working Conference brings together researchers and practitioners of various disciplines, organisations and countries, to discuss the latest developments in security protocols, secure software engineering, mobile agent security, e-commerce security and security for distributed computing. We are also pleased to have attracted two international speakers to present two case studies, one dealing with Belgium's intention to replace the identity card of its citizens by an electronic version, and the other discussing the implications of the security certification in a multinational corporation. This Working Conference should also be considered as the kick-off activity of WG11.4, the aims of which can be summarized as follows: to promote research on technical measures for securing computer networks, including both hardware- and software-based techniques; to promote dissemination of research results in the field of network security in real-life networks in industry, academia and administrative institutions; and to promote education in the application of security techniques, and to promote general awareness about security problems in the broad field of information technology. Researchers and practitioners who want to get involved in this Working Group are kindly requested to contact the chairman. More information on the workings of WG11.4 is available from the official IFIP website: http://www.ifip.at.org/. Finally, we wish to express our gratitude to all those who have contributed to this conference in one way or another. We are grateful to the international referee board who reviewed all the papers, and to the authors and invited speakers, whose contributions were essential to the success of the conference. We would also like to thank the participants whose presence and interest, together with the changing imperatives of society, will prove a driving force for future conferences to come.
Formal Methods for Protocol Engineering and Distributed Systems addresses formal description techniques (FDTs) applicable to distributed systems and communication protocols. It aims to present the state of the art in theory, application, tools and industrialization of FDTs. Among the important features presented are: FDT-based system and protocol engineering; FDT application to distributed systems; protocol engineering; and practical experience and case studies. Formal Methods for Protocol Engineering and Distributed Systems contains the proceedings of the Joint International Conference on Formal Description Techniques for Distributed Systems and Communication Protocols and Protocol Specification, Testing, and Verification, which was sponsored by the International Federation for Information Processing (IFIP) and was held in Beijing, China, in October 1999. This volume is suitable as a secondary text for a graduate-level course on distributed systems or communications, and as a reference for researchers and industry practitioners.
Error-correction coding is being used on an almost routine basis in most new communication systems. Not only is coding equipment being used to increase the energy efficiency of communication links, but coding ideas are also providing innovative solutions to many related communication problems. Among these are the elimination of intersymbol interference caused by filtering and multipath and the improved demodulation of certain frequency modulated signals by taking advantage of the "natural" coding provided by a continuous phase. Although several books and numerous articles have been written on coding theory, there are still noticeable deficiencies. First, the practical aspects of translating a specific decoding algorithm into actual hardware have been largely ignored. The information that is available is sketchy and is widely dispersed. Second, the information required to evaluate a particular technique under situations that are encountered in practice is available for the most part only in private company reports. This book is aimed at correcting both of these problems. It is written for the design engineer who must build the coding and decoding equipment and for the communication system engineer who must incorporate this equipment into a system. It is also suitable as a senior-level or first-year graduate text for an introductory one-semester course in coding theory. The book uses a minimum of mathematics and entirely avoids the classical theorem/proof approach that is often seen in coding texts.
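To make the flavor of such decoding algorithms concrete, here is a minimal sketch of a Hamming(7,4) encoder and single-error-correcting decoder in Python. It is purely illustrative and is not taken from the book; the classic Hamming(7,4) construction simply shows the kind of algorithm a design engineer would map to hardware.

```python
# Illustrative Hamming(7,4) encoder/decoder: a classic single-error-correcting
# code. Parity bits sit at codeword positions 1, 2 and 4 (1-indexed).

def hamming74_encode(d):
    """Encode 4 data bits d = [d1, d2, d3, d4] into a 7-bit codeword."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4                     # parity over positions 3, 5, 7
    p2 = d1 ^ d3 ^ d4                     # parity over positions 3, 6, 7
    p3 = d2 ^ d3 ^ d4                     # parity over positions 5, 6, 7
    return [p1, p2, d1, p3, d2, d3, d4]   # codeword positions 1..7

def hamming74_decode(c):
    """Correct at most one bit error and return the 4 data bits."""
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]        # checks positions 1, 3, 5, 7
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]        # checks positions 2, 3, 6, 7
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]        # checks positions 4, 5, 6, 7
    syndrome = s1 + 2 * s2 + 4 * s3       # 0 = no error, else error position
    if syndrome:
        c = c[:]
        c[syndrome - 1] ^= 1              # flip the erroneous bit
    return [c[2], c[4], c[5], c[6]]

codeword = hamming74_encode([1, 0, 1, 1])
codeword[4] ^= 1                          # inject a single bit error
assert hamming74_decode(codeword) == [1, 0, 1, 1]
```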
The main purpose of this paper is to contribute to the discussion about the design of computer and communication systems that can aid the management process.

1.1 Historical Overview

We propose that Decision Support Systems (DSS) can be considered as a design conception conceived within the computer industry to facilitate the use of computer technology in organisations (Keen, 1991). This framework, built during the late 1970s, offers computer and communication technology as support to the decision process which constitutes, in this view, the core of the management process. The DSS framework offers the following capabilities:
* Access: ease of use, wide variety of data, analysis and modelling capacity.
* Technological: software generation tools.
* Development modes: interactive and evolutionary.
Within this perspective, computer and communication technologies are seen as an amplification of the human data processing capabilities which limit the decision process. Thus, the human being is understood metaphorically as a data processing machine: mental processes are associated with the manipulation of symbols, and human communication with signal transmission.
We are happy to welcome you to the IFIP Protocols for High-Speed Networks '96 workshop hosted by INRIA Sophia Antipolis. This is the fifth event in a series initiated in Zurich in 1989 and followed by Palo Alto (1990), Stockholm (1993), and Vancouver (1994). This workshop provides an international forum for the exchange of information on protocols for high-speed networks. The workshop focuses on problems related to the efficient transmission of multimedia application data using high-speed networks and internetworks. Protocols for High-Speed Networks is a "working conference", which is why we have given preference to high-quality papers describing ongoing research and novel ideas. The number of selected papers was kept low in order to leave room for discussion on each paper. Together with the technical sessions, working sessions were organized on hot topics. We would like to thank all the authors for their interest. We also thank the Program Committee members for their level of effort in the reviewing process and in the workshop technical program organization. We finally thank INRIA and DRET for their financial support to the organization of the workshop.
Enterprises all over the world are experiencing a rapid development of networked computing for applications that are required for the daily survival of an organization. Client-server computing offers great potential for cost-effective networked computing. However, many organizations have now learned that the cost of maintenance and support of these networked distributed systems far exceeds the cost of buying them. Computer Supported Cooperative Work (CSCW) is the new, evolving area that promotes the understanding of business processes and relevant communication technologies. Cooperative Management of Enterprise Networks uses CSCW as the medium for conveying ideas on the integration of business processes with network and systems management. This book will be useful for systems management professionals wishing to know about business process integration; business managers wishing to integrate their tasks with network/systems management; software system developers wishing to adopt participatory design practices; and students and researchers.
This book presents a simple, yet complete, approach to the design and performance analysis of distributed processing algorithms and techniques suitable for IEEE 802.15.4 networks. In particular, the book focuses on the bottom two layers of the ISO/OSI stack (Physical and Medium Access Control), also discussing a few issues related to routing. The book is a synergistic combination of signal processing aspects on the one hand and MAC and connectivity issues on the other. The goal of the book is to clearly link physical layer aspects with medium access and topology aspects, in order to provide the reader with a clear understanding of how to approach the design of proper distributed signal processing and medium access algorithms in this context.
The communication of information is a crucial point in the development of our future way of life. We are living more and more in an information society. Perhaps the most obvious applications are those devoted to distributed cooperative multimedia systems. In both industry and academia, people are involved in such projects. HPN'95 is an international forum where both communities can find a place for dialogue and interchange. The conference is targeted at the new mechanisms, protocols, services and architectures derived from the needs of emerging applications, as well as from the requirements of new communication environments. This workshop belongs to the series started in 1987 in Aachen (Germany), followed by Liege (Belgium) in 1988, Berlin (Germany) in 1991, Liege (Belgium) again in 1992 and Grenoble (France) in 1994. HPN'95 is the sixth event of the series sponsored by IFIP WG 6.4 and will be held at the Arxiduc Lluis Salvador building on the campus of the University of the Balearic Islands in Palma de Mallorca (Spain) from September 13 to 15.
Security and privacy are paramount concerns in information processing systems, which are vital to business, government and military operations and, indeed, society itself. Meanwhile, the expansion of the Internet and its convergence with telecommunication networks are providing incredible connectivity, myriad applications and, of course, new threats. Data and Applications Security XVII: Status and Prospects describes original research results, practical experiences and innovative ideas, all focused on maintaining security and privacy in information processing systems and applications that pervade cyberspace. The areas of coverage include:
- Information Warfare
- Information Assurance
- Security and Privacy
- Authorization and Access Control in Distributed Systems
- Security Technologies for the Internet
- Access Control Models and Technologies
- Digital Forensics
This book is the seventeenth volume in the series produced by the International Federation for Information Processing (IFIP) Working Group 11.3 on Data and Applications Security. It presents a selection of twenty-six updated and edited papers from the Seventeenth Annual IFIP TC11 / WG11.3 Working Conference on Data and Applications Security held at Estes Park, Colorado, USA in August 2003, together with a report on the conference keynote speech and a summary of the conference panel. The contents demonstrate the richness and vitality of the discipline and indicate directions for future research in data and applications security. Data and Applications Security XVII: Status and Prospects is an invaluable resource for information assurance researchers, faculty members and graduate students, as well as for individuals engaged in research and development in the information technology sector.
Information Systems and Data Compression presents a uniform approach and methodology for designing intelligent information systems. A framework for information concepts is introduced for various types of information systems such as communication systems, information storage systems and systems for simplifying structured information. The book introduces several new concepts and presents a novel interpretation of a wide range of topics in communications, information storage, and information compression. Numerous illustrations for designing information systems for compression of digital data and images are used throughout the book.
This brief investigates distributed medium access control (MAC) with QoS provisioning for both single- and multi-hop wireless networks, including wireless local area networks (WLANs), wireless ad hoc networks, and wireless mesh networks. For WLANs, an efficient MAC scheme and a call admission control algorithm are presented to provide guaranteed QoS for voice traffic and, at the same time, increase the voice capacity significantly compared with the current WLAN standard. In addition, a novel token-based scheduling scheme is proposed to provide great flexibility and facility to the network service provider for service class management. Also proposed are a novel busy-tone based distributed MAC scheme for wireless ad hoc networks and a collision-free MAC scheme for wireless mesh networks, each taking the different network characteristics into consideration. The proposed schemes enhance the QoS provisioning capability for real-time traffic and, at the same time, significantly improve the system throughput and fairness performance for data traffic, as compared with the most popular IEEE 802.11 MAC scheme.
Information Highways are widely considered as the next generation of high-speed communication systems. These highways will be based on emerging Broadband Integrated Services Digital Networks (B-ISDN), which - at least in principle - are envisioned to support not only all the kinds of networking applications known today but also future applications which are not as yet fully understood or even anticipated. Thus, B-ISDNs release networking processes from the limitations which the communications medium has imposed historically. The operational generality stems from the versatility of Asynchronous Transfer Mode (ATM), the transfer mode adopted by ITU-T for broadband public ISDN as well as wide area private ISDN - a transfer mode which provides the transmission, multiplexing and switching core that lies at the foundations of a communication network. ATM is designed to integrate existing and future voice, audio, image and data services. Moreover, ATM aims to minimise the complexity of switching and buffer management, to optimise intermediate node processing and buffering and to bound transmission delays. These design objectives are met at high transmission speeds by keeping the basic unit of ATM transmission - the ATM cell - short and of fixed length.
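To make that last point concrete: the standard ATM cell is 53 bytes, a 5-byte header followed by a 48-byte payload. The sketch below packs a UNI-format header (GFC, VPI, VCI, PT, CLP, HEC fields); the packing code itself is only an illustration, and the HEC is zeroed here rather than computed as the usual CRC-8 over the first four header bytes.

```python
# Illustrative sketch of the fixed-length ATM cell: 5-byte header + 48-byte
# payload = 53 bytes. Field widths follow the standard UNI cell format:
# GFC(4) VPI(8) VCI(16) PT(3) CLP(1) HEC(8).

ATM_PAYLOAD_BYTES = 48

def pack_uni_header(gfc, vpi, vci, pt, clp, hec=0):
    """Pack the 5-byte ATM UNI header (HEC normally a CRC-8; zeroed here)."""
    word = ((gfc & 0xF) << 28 | (vpi & 0xFF) << 20 |
            (vci & 0xFFFF) << 4 | (pt & 0x7) << 1 | (clp & 0x1))
    return word.to_bytes(4, "big") + bytes([hec & 0xFF])

def make_cell(header, payload):
    assert len(payload) == ATM_PAYLOAD_BYTES, "payload must be exactly 48 bytes"
    return header + payload                    # the 53-byte cell

cell = make_cell(pack_uni_header(gfc=0, vpi=1, vci=42, pt=0, clp=0), bytes(48))
assert len(cell) == 53
```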
Most everything in our experience requires management in some form or other: our gardens, our automobiles, our minds, our bodies, our love lives, our businesses, our forests, our countries, etc. Sometimes we don't call it "management" per se. We seldom talk about managing our minds or automobiles. But if we think of management in terms of monitoring, maintaining, and cultivating with respect to some goal, then it makes sense. We certainly monitor an automobile, albeit unconsciously, to make sure that it doesn't exhibit signs of trouble. And we certainly try to cultivate our minds. This book is about managing networks. That itself is not a new concept. We've been managing the networks that support our telephones for about 100 years, and we've been managing the networks that support our computers for about 20 years. What is new (and what motivated me to write this book) is the following: (i) the enormous advancements in networking technology as we transition from the 20th century to the 21st century, (ii) the increasing dependence of human activities on networking technology, and (iii) the commercialization of services that depend on networking technology (e.g., email and electronic commerce).
Many real-time systems rely on static scheduling algorithms. This includes cyclic scheduling, rate monotonic scheduling and fixed schedules created by off-line scheduling techniques such as dynamic programming, heuristic search, and simulated annealing. However, for many real-time systems, static scheduling algorithms are quite restrictive and inflexible. For example, highly automated agile manufacturing, command, control and communications, and distributed real-time multimedia applications all operate over long lifetimes and in highly non-deterministic environments. Dynamic real-time scheduling algorithms are more appropriate for these systems and are used in such systems. Many of these algorithms are based on earliest deadline first (EDF) policies. There exists a wealth of literature on EDF-based scheduling with many extensions to deal with sophisticated issues such as precedence constraints, resource requirements, system overload, multi-processors, and distributed systems. Deadline Scheduling for Real-Time Systems: EDF and Related Algorithms aims at collecting a significant body of knowledge on EDF scheduling for real-time systems, but it does not try to be all-inclusive (the literature is too extensive). The book primarily presents the algorithms and associated analysis, but guidelines, rules, and implementation considerations are also discussed, especially for the more complicated situations where mathematical analysis is difficult. In general, it is very difficult to codify and taxonomize scheduling knowledge because there are many performance metrics, task characteristics, and system configurations. Also adding to the complexity is the fact that a variety of algorithms have been designed for different combinations of these considerations. In spite of the recent advances there are still gaps in the solution space and there is a need to integrate the available solutions. For example, a list of issues to consider includes:
* preemptive versus non-preemptive tasks,
* uni-processors versus multi-processors,
* using EDF at dispatch time versus EDF-based planning,
* precedence constraints among tasks,
* resource constraints,
* periodic versus aperiodic versus sporadic tasks,
* scheduling during overload,
* fault tolerance requirements, and
* providing guarantees and levels of guarantees (meeting quality of service requirements).
Deadline Scheduling for Real-Time Systems: EDF and Related Algorithms should be of interest to researchers, real-time system designers, and instructors and students, either as a focussed course on deadline-based scheduling for real-time systems, or, more likely, as part of a more general course on real-time computing. The book serves as an invaluable reference in this fast-moving field.
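As a minimal illustration of the core EDF rule - among the ready tasks, always dispatch the one whose absolute deadline is nearest - here is a short Python sketch. The task names and deadlines are hypothetical, and a real EDF scheduler must also handle preemption, overload, and the other issues listed above.

```python
# Minimal sketch of earliest-deadline-first (EDF) dispatching: a min-heap
# keyed on absolute deadline yields the most urgent ready task in O(log n).
import heapq

class EDFScheduler:
    def __init__(self):
        self._ready = []                        # min-heap of (deadline, task)

    def release(self, deadline, name):
        """Add a task to the ready queue with its absolute deadline."""
        heapq.heappush(self._ready, (deadline, name))

    def dispatch(self):
        """Return the ready task with the earliest deadline, or None if idle."""
        return heapq.heappop(self._ready)[1] if self._ready else None

sched = EDFScheduler()
sched.release(deadline=30, name="logging")
sched.release(deadline=10, name="sensor-read")
sched.release(deadline=20, name="actuation")
assert sched.dispatch() == "sensor-read"        # nearest deadline runs first
```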
An invited collection of peer-reviewed papers surveying key areas of Roger Needham's distinguished research career at Cambridge University and Microsoft Research, spanning operating systems, distributed computing and security. Many of the world's leading researchers survey the latest developments in their topics and acknowledge the theoretical foundations laid by Needham's pioneering work. The introduction to the book was written by Rick Rashid, Director of Microsoft Research Worldwide.
Communication protocols are rules whereby meaningful communication can be exchanged between different communicating entities. In general, they are complex and difficult to design and implement. Specifications of communication protocols written in a natural language (e.g. English) can be unclear or ambiguous, and may be subject to different interpretations. As a result, independent implementations of the same protocol may be incompatible. In addition, the complexity of protocols makes them very hard to analyze in an informal way. There is, therefore, a need for precise and unambiguous specification using formal languages. Many protocol implementations used in the field have suffered from failures, such as deadlocks. When the conditions in which the protocols work correctly have been changed, there has been no general method available for determining how they will work under the new conditions. It is necessary for protocol designers to have techniques and tools to detect errors in the early phase of design, because the later in the process that a fault is discovered, the greater the cost of rectifying it. Protocol verification is a process of checking whether the interactions of protocol entities, according to the protocol specification, do indeed satisfy certain properties or conditions, which may be either general (e.g., absence of deadlock) or specific to the particular protocol system directly derived from the specification. In the 80s, an ISO (International Organization for Standardization) working group began a programme of work to develop formal languages which were suitable for Open Systems Interconnection (OSI). This group called such languages Formal Description Techniques (FDTs). Some of the objectives of ISO in developing FDTs were: enabling unambiguous, clear and precise descriptions of OSI protocol standards to be written, and allowing such specifications to be verified for correctness. There are two FDTs standardized by ISO: LOTOS and Estelle. Communication Protocol Specification and Verification is written to address the two issues discussed above: the need to specify a protocol using an FDT and to verify its correctness in order to uncover specification errors in the early stage of a protocol development process. The readership primarily consists of advanced undergraduate students, postgraduate students, communication software developers, telecommunication engineers, EDP managers, researchers and software engineers. It is intended as an advanced undergraduate or postgraduate textbook, and a reference for communication protocol professionals.
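The reachability analysis underlying much protocol verification can be illustrated with a toy example: enumerate every global state the protocol can reach and flag any non-final state with no outgoing transition as a deadlock. The tiny two-party "protocol" below is hypothetical and merely sketches the idea; tools built around FDTs such as LOTOS and Estelle perform far more sophisticated versions of this search.

```python
# Toy reachability analysis: explore all reachable global states of a small
# hand-listed protocol model and report states that are stuck (deadlocked).
from collections import deque

# Global state = (sender state, receiver state); successors are hand-listed.
TRANSITIONS = {
    ("idle", "idle"): [("sent", "idle")],    # sender transmits a request
    ("sent", "idle"): [("sent", "busy")],    # receiver accepts the request
    ("sent", "busy"): [("wait", "busy")],    # sender awaits a reply
    ("wait", "busy"): [],                    # no reply modeled: deadlock!
}
FINAL_STATES = set()                         # no terminal states declared

def find_deadlocks(initial):
    seen, frontier, deadlocks = {initial}, deque([initial]), []
    while frontier:
        state = frontier.popleft()
        successors = TRANSITIONS.get(state, [])
        if not successors and state not in FINAL_STATES:
            deadlocks.append(state)          # reachable, stuck, and not final
        for nxt in successors:
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return deadlocks

print(find_deadlocks(("idle", "idle")))      # [('wait', 'busy')]
```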
Mobile communications have permeated the globe in both business and social cultures. In only a few short years, Japan alone has had more than ten million subscribers enter the mobile market. Such explosive popularity is an indication of a strong commercial demand for communications in both the tethered and tetherless environments. Accompanying the vibrant growth in mobile communications is the growth in multimedia communications, including the Internet. Mobile and multimedia communications technologies are merging, making mobile computing a key phrase in the coming advanced information communication era. The growth in these dynamic industries shows that a change in our chosen method of communications is already well advanced. Reading e-mail and connecting to various information feeds have already become a part of daily business activities. We are trying to grasp the overall picture of mobile computing. Its shape and form are just starting to appear as personal digital assistants (PDA), handheld personal computers (HPC), wireless data communication services, and commercial software designed for mobile environments. We are at the cusp of vast popularization of "computers on the go." "Anytime Anywhere Computing" provides the reader with an understandable explanation of the current developments and commercialization of mobile computing. The core technologies and applications needed to understand the industry are comprehensively addressed. The book emphasizes three infrastructures: (1) wireless communication network infrastructure, (2) terminal devices (or "computers on the go"), and (3) software middleware and architectures that support wireless and mobile computing.
Bayesian Approach to Image Interpretation will interest anyone working in image interpretation. It is complete in itself and includes background material, which makes it useful for a novice as well as for an expert. It reviews some of the existing probabilistic methods for image interpretation and presents some new results. Additionally, there is an extensive bibliography covering references in varied areas. For a researcher in this field, the material on synergistic integration of segmentation and interpretation modules and the Bayesian approach to image interpretation will be beneficial. For a practicing engineer, the procedure for generating a knowledge base, selecting the initial temperature for the simulated annealing algorithm, and some implementation issues will be valuable. New ideas introduced in the book include:
* A new approach to image interpretation using synergism between the segmentation and interpretation modules.
* A new segmentation algorithm based on multiresolution analysis.
* Novel use of Bayesian networks (causal networks) for image interpretation.
* Emphasis on making the interpretation approach less dependent on the knowledge base, and hence more reliable, by modeling the knowledge base in a probabilistic framework.
Useful in both the academic and industrial research worlds, Bayesian Approach to Image Interpretation may also be used as a textbook for a semester course in computer vision or pattern recognition.
Load Balancing in Parallel Computers: Theory and Practice is about the essential software technique of load balancing in distributed memory message-passing parallel computers, also called multicomputers. Each processor has its own address space and has to communicate with other processors by message passing. In general, a direct, point-to-point interconnection network is used for the communications. Many commercial parallel computers are of this class, including the Intel Paragon, the Thinking Machines CM-5, and the IBM SP2. Load Balancing in Parallel Computers: Theory and Practice presents a comprehensive treatment of the subject using rigorous mathematical analyses and practical implementations. The focus is on nearest-neighbor load balancing methods, in which every processor at every step is restricted to balancing its workload with its direct neighbors only. Nearest-neighbor methods are iterative in nature because a global balanced state can be reached through processors' successive local operations. Since nearest-neighbor methods have a relatively relaxed requirement for the spread of local load information across the system, they are flexible in terms of allowing one to control the balancing quality, effective for preserving communication locality, and can be easily scaled in parallel computers with a direct communication network. Load Balancing in Parallel Computers: Theory and Practice serves as an excellent reference source and may be used as a text for advanced courses on the subject.
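The nearest-neighbor idea can be sketched in a few lines with the classic diffusion scheme: at each synchronous step, every processor moves a fixed fraction of its load difference to each direct neighbor, so the workload equalizes through purely local exchanges. The ring topology and diffusion parameter below are illustrative assumptions, not details taken from the book.

```python
# Sketch of nearest-neighbor (diffusion) load balancing on a ring: each
# processor adjusts its load using only its two direct neighbors, and the
# iteration converges toward the global average without any global knowledge.

def diffusion_step(load, alpha=0.25):
    """One synchronous balancing step; alpha is the diffusion fraction."""
    n = len(load)
    new_load = load[:]
    for i in range(n):
        for j in ((i - 1) % n, (i + 1) % n):   # direct neighbors only
            new_load[i] += alpha * (load[j] - load[i])
    return new_load                            # total load is conserved

load = [100.0, 0.0, 0.0, 0.0]                  # all work starts on one node
for _ in range(50):
    load = diffusion_step(load)
print([round(x, 2) for x in load])             # converges toward [25, 25, 25, 25]
```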
This book is an expanded third edition of the book Performance Analysis of Digital Transmission Systems, originally published in 1990. The second edition, titled Digital Transmission Systems: Performance Analysis and Modeling, was published in 1998. The book is intended for those who design communication systems and networks. A computer network designer is interested in selecting communication channels, error protection schemes, and link control protocols. To do this efficiently, one needs a mathematical model that accurately predicts system behavior. Two basic problems arise in mathematical modeling: the problem of identifying a system and the problem of applying a model to the system analysis. System identification consists of selecting a class of mathematical objects to describe fundamental properties of the system behavior. We use a specific class of hidden Markov models (HMMs) to model communication systems. This model was introduced by C. E. Shannon more than 50 years ago as a Noisy Discrete Channel with a finite number of states. The model is described by a finite number of matrices whose elements are estimated on the basis of experimental data. We develop several methods of model identification and show their relationship to other methods of data analysis, such as spectral methods, autoregressive moving average (ARMA) approximations, and rational transfer function approximations.
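The simplest widely used instance of such a finite-state channel is the two-state Gilbert-Elliott model, in which a hidden "good"/"bad" state selects the bit-error probability and produces the bursty errors real channels exhibit. The Python sketch below simulates it; all parameter values are invented for illustration, and the book's identification methods address the harder inverse problem of estimating such matrices from observed data.

```python
# Toy Gilbert-Elliott channel: a hidden two-state Markov chain modulates the
# bit-error rate, so errors arrive in bursts rather than independently.
import random

P_GOOD_TO_BAD, P_BAD_TO_GOOD = 0.01, 0.10     # state transition probabilities
BER = {"good": 1e-4, "bad": 0.2}              # error rate in each hidden state

def transmit(bits, state="good"):
    received = []
    for bit in bits:
        received.append(bit ^ (random.random() < BER[state]))  # maybe flip bit
        if state == "good":
            state = "bad" if random.random() < P_GOOD_TO_BAD else "good"
        else:
            state = "good" if random.random() < P_BAD_TO_GOOD else "bad"
    return received

sent = [0] * 10_000
errors = sum(transmit(sent))                  # bursty, state-dependent errors
print(f"{errors} bit errors in {len(sent)} bits")
```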
This book constitutes the refereed proceedings of the 15th International Conference on Passive and Active Measurement, PAM 2014, held in Los Angeles, CA, USA, in 2014. The 24 revised full papers presented were carefully reviewed and selected from 76 submissions. The papers are organized in the following topical sections: internet wireless and mobility; measurement design, experience and analysis; performance measurement; protocol and application behavior; characterization of network behavior; and network security and privacy. In addition, 7 poster papers have been included.
Active networking is an exciting new paradigm in digital networking that has the potential to revolutionize the manner in which communication takes place. It is an emerging technology, one in which new ideas are constantly being formulated and new topics of research are springing up even as this book is being written. This technology is very likely to appeal to a broad spectrum of users from academia and industry. Therefore, this book was written in a way that enables all these groups to understand the impact of active networking in their sphere of interest.

Information services managers, network administrators, and e-commerce developers would like to know the potential benefits of the new technology to their businesses, networks, and applications. The book introduces the basic active networking paradigm and its potential impacts on the future of information handling in general and on communications in particular. This is useful for forward-looking businesses that wish to actively participate in the development of active networks and ensure a head start in the integration of the technology in their future products, be they applications or networks. Areas in which active networking is likely to make significant impact are identified, and the reader is pointed to any related ongoing research efforts in the area.

The book also provides a deeper insight into the active networking model for students and researchers, who seek challenging topics that define or extend frontiers of the technology. It describes basic components of the model, explains some of the terms used by the active networking community, and provides the reader with a taxonomy of the research being conducted at the time this book was written. Current efforts are classified based on typical research areas such as mobility, security, and management. The intent is to introduce the serious reader to the background regarding some of the models adopted by the community, to outline outstanding issues concerning active networking, and to provide a snapshot of the fast-changing landscape in active networking research.

Management is a very important issue in active networks because of their open nature. The latter half of the book explains the architectural concepts of a model for managing active networks and the motivation for a reference model that addresses limitations of the current network management framework by leveraging the powerful features of active networking to develop an integrated framework. It also describes a novel application enabled by active network technology called the Active Virtual Network Management Prediction (AVNMP) algorithm. AVNMP is a pro-active management system; in other words, it provides the ability to solve a potential problem before it impacts the system by modeling network devices within the network itself and running that model ahead of real time.
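The look-ahead idea behind such predictive management can be illustrated with a toy sketch - this is not the AVNMP algorithm itself, and every number below is invented: run a simple model of a device faster than real time and raise an alarm before a queue would overflow.

```python
# Toy illustration of predictive (look-ahead) management: project a fluid
# queue model `horizon_s` seconds ahead of real time and warn of overflow
# before it actually happens. Parameters are purely illustrative.

def project_queue(current_len, arrival_rate, service_rate, horizon_s, capacity):
    """Return seconds until predicted overflow, or None if none within horizon."""
    qlen = current_len
    for t in range(1, horizon_s + 1):
        qlen = max(0.0, qlen + arrival_rate - service_rate)  # one simulated second
        if qlen > capacity:
            return t
    return None

eta = project_queue(current_len=400, arrival_rate=120,
                    service_rate=100, horizon_s=60, capacity=1000)
if eta is not None:
    print(f"predicted overflow in ~{eta}s: act before the problem occurs")
```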