An Interdisciplinary Approach to Modern Network Security presents the latest methodologies and trends in detecting and preventing network threats. Investigating the potential of current and emerging security technologies, this publication is an all-inclusive reference source for academicians, researchers, students, professionals, practitioners, network analysts and technology specialists interested in the simulation and application of computer network protection. It presents theoretical frameworks and the latest research findings in network security technologies, while analyzing malicious threats that can compromise network integrity. It discusses the security and optimization of computer networks for use in a variety of disciplines and fields. Touching on such matters as mobile and VPN security, IP spoofing and intrusion detection, this edited collection supports the efforts of researchers, academics and network administrators working in both the public and private sectors. The compilation includes chapters covering topics such as attacks and countermeasures, mobile wireless networking, intrusion detection systems, next-generation firewalls, web security and much more. Information and communication systems are an essential component of our society, making us dependent on these infrastructures. At the same time, these systems are undergoing a convergence and interconnection process that has its benefits, but also raises specific threats to user interests. Citizens and organizations must feel safe when using cyberspace facilities in order to benefit from its advantages. The book is interdisciplinary in the sense that it covers a wide range of topics: network security threats and attacks, tools and procedures for mitigating the effects of malware and common network attacks, network security architecture, and deep-learning methods for intrusion detection.
Welcome to IM 2003, the eighth in the series of premier international technical conferences in this field. As IT management has become mission critical to the economies of the developed world, our technical program has grown in relevance, strength and quality. Over the next few years, leading IT organizations will gradually move from identifying infrastructure problems to providing business services via automated, intelligent management systems. To be successful, these future management systems must provide global scalability, for instance, to support Grid computing and large numbers of pervasive devices. In Grid environments, organizations can pool desktops and servers, dynamically creating a virtual environment with huge processing power, and new management challenges. As the number, type, and criticality of devices connected to the Internet grows, new innovative solutions are required to address this unprecedented scale and management complexity. The growing penetration of technologies, such as WLANs, introduces new management challenges, particularly for performance and security. Management systems must also support the management of business processes and their supporting technology infrastructure as integrated entities. They will need to significantly reduce the amount of extraneous, unhelpful data thrown at consoles, delivering instead a cogent view of the system state, while leaving the handling of lower-level events to self-managed systems and devices. There is a new emphasis on "autonomic" computing, building systems that can perform routine tasks without administrator intervention and take prescient actions to rapidly recover from potential software or hardware failures.
It has been widely recognized that artificial intelligence computations offer large potential for distributed and parallel processing. Unfortunately, not much is known about designing parallel AI algorithms and efficient, easy-to-use parallel computer architectures for AI applications. The field of parallel computation and computers for AI is in its infancy, but some significant ideas have appeared and initial practical experience has become available. The purpose of this book has been to collect in one volume contributions from several leading researchers and pioneers of AI that represent a sample of these ideas and experiences. This sample does not include all schools of thought nor contributions from all leading researchers, but it covers a relatively wide variety of views and topics and in this sense can be helpful in assessing the state of the art. We hope that the book will serve, at least, as a pointer to more specialized literature and that it will stimulate interest in the area of parallel AI processing. It has been a great pleasure and a privilege to cooperate with all contributors to this volume. They have my warmest thanks and gratitude. Mrs. Birgitta Knapp has assisted me in the editorial task and demonstrated a great deal of skill and patience. Janusz S. Kowalik. Introduction: Artificial intelligence (AI) computer programs can be very time-consuming.
- Discusses open principles, methods, and research problems
- Presents a vision for how the IIoT could change the world in the distant future
- Covers how industry automation is projected to be the largest and fastest-growing segment of the IIoT market
- Explores the collaborative development of high-performance telecommunications, military, industrial, and general-purpose embedded computing applications
- Offers a systematic overview of state-of-the-art research efforts and potential research directions for dealing with IIoT challenges
This book covers several aspects of the operational amplifier and includes theoretical explanations with simplified expressions and derivations. The book is designed to serve as a textbook for courses offered to undergraduate and postgraduate students enrolled in electronics and communication engineering. The topics included are the DC amplifier, AC/DC analysis of the DC amplifier, relevant derivations, a block diagram of the operational amplifier, positive and negative feedback, the amplitude modulator, current-to-voltage and voltage-to-current converters, DAC and ADC, the integrator, the differentiator, active filters, comparators, sinusoidal and non-sinusoidal waveform generators, the phase-locked loop (PLL), etc. The book contains two parts: Section A includes theory, methodology, circuit design and derivations; Section B explains the design and study of experiments for laboratory practice. Laboratory experiments enable students to perform a practical activity that demonstrates applications of the operational amplifier. A simplified description of the circuits, the working principle and a practical approach towards understanding the concepts are unique features of this book. Simple methods, easy derivation steps and lucid presentation are further traits of this book for readers who do not have any background in electronics. This book is student-centric towards the basics of the operational amplifier and its applications. The detailed coverage and pedagogical tools make this an ideal textbook for students and researchers enrolled in senior undergraduate and beginning postgraduate electronics and communication engineering courses.
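The core closed-loop results such a course derives for the ideal op-amp reduce to one-line formulas. Here is a minimal Python sketch of the standard textbook gains of the inverting and non-inverting configurations; the resistor values are illustrative only, not taken from the book:

```python
# Ideal op-amp closed-loop gains, as derived in any first course.
# Resistor values below are illustrative, not from the book.

def inverting_gain(rf: float, rin: float) -> float:
    """Closed-loop gain of an ideal inverting amplifier: -Rf/Rin."""
    return -rf / rin

def noninverting_gain(rf: float, rg: float) -> float:
    """Closed-loop gain of an ideal non-inverting amplifier: 1 + Rf/Rg."""
    return 1.0 + rf / rg

if __name__ == "__main__":
    print(inverting_gain(rf=100e3, rin=10e3))    # -10.0
    print(noninverting_gain(rf=100e3, rg=10e3))  # 11.0
```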
Based on the lectures given during the Eurocourse on 'Computing with Parallel Architectures', held at the Joint Research Centre Ispra, Italy, September 10-14, 1990.
Peer-to-peer (P2P) technology, or peer computing, is a paradigm that is viewed as a potential technology for redesigning distributed architectures and, consequently, distributed processing. Yet the scale and dynamism that characterize P2P systems demand that we reexamine traditional distributed technologies. A paradigm shift that includes self-reorganization, adaptation and resilience is called for. On the other hand, the increased computational power of such networks opens up completely new applications, such as digital content sharing, scientific computation, gaming, or collaborative work environments. In this book, Vu, Lupu and Ooi present the technical challenges posed by P2P systems, and the means that have been proposed to address them. They provide a thorough and comprehensive review of recent advances in routing and discovery methods; load balancing and replication techniques; security, accountability and anonymity, as well as trust and reputation schemes; programming models; and P2P systems and projects. Besides surveying existing methods and systems, they also compare and evaluate some of the more promising schemes. The need for such a book is evident. It provides a single source for practitioners, researchers and students on the state of the art. For practitioners, this book explains best practice, guiding selection of appropriate techniques for each application. For researchers, this book provides a foundation for the development of new and more effective methods. For students, it is an overview of the wide range of advanced techniques for realizing effective P2P systems, and it can easily be used as a text for an advanced course on Peer-to-Peer Computing and Technologies, or as a companion text for courses on various subjects, such as distributed systems, and grid and cluster computing.
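To make the routing-and-discovery theme concrete, the following is a minimal sketch of consistent hashing, the key-placement idea behind structured P2P overlays such as Chord: each key is owned by the first node clockwise from it on an identifier ring. The peer and key names are invented for illustration:

```python
import hashlib
from bisect import bisect_right

def ring_id(name: str, bits: int = 16) -> int:
    """Map a node or key name onto a 2^bits identifier ring."""
    digest = hashlib.sha1(name.encode()).digest()
    return int.from_bytes(digest, "big") % (1 << bits)

class Ring:
    """Consistent-hashing ring: a key is owned by its clockwise successor."""
    def __init__(self, nodes):
        self.points = sorted((ring_id(n), n) for n in nodes)

    def lookup(self, key: str) -> str:
        kid = ring_id(key)
        ids = [p[0] for p in self.points]
        i = bisect_right(ids, kid) % len(self.points)  # wrap around the ring
        return self.points[i][1]

ring = Ring(["peer-a", "peer-b", "peer-c", "peer-d"])
print(ring.lookup("song.mp3"))  # the peer responsible for this key
```

The attraction for P2P systems is that when a peer joins or leaves, only the keys in its immediate arc of the ring move, rather than the whole key space being reshuffled.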
With the end of Dennard scaling and Moore's law, IC chips, especially large-scale ones, now face more reliability challenges, and reliability has become one of the principal merits of VLSI designs. In this context, this book presents a built-in on-chip fault-tolerant computing paradigm that seeks to combine fault detection, fault diagnosis, and error recovery in large-scale VLSI design in a unified manner so as to minimize resource overhead and performance penalties. Following this computing paradigm, we propose a holistic solution based on three key components: self-test, self-diagnosis and self-repair, or "3S" for short. We then explore the use of 3S for general IC designs, general-purpose processors, network-on-chip (NoC) and deep learning accelerators, and present prototypes to demonstrate how 3S responds to in-field silicon degradation and recovery under various runtime faults caused by aging, process variations, or radiation particles. Moreover, we demonstrate that 3S not only offers a powerful backbone for various on-chip fault-tolerant designs and implementations, but also has farther-reaching implications such as maintaining graceful performance degradation, mitigating the impact of verification blind spots, and improving chip yield. This book is the outcome of extensive fault-tolerant computing research pursued at the State Key Lab of Processors, Institute of Computing Technology, Chinese Academy of Sciences over the past decade. The proposed built-in on-chip fault-tolerant computing paradigm has been verified in a broad range of scenarios, from small processors in satellite computers to large processors in HPCs. Hopefully, it will provide an alternative yet effective solution to the growing reliability challenges facing large-scale VLSI designs.
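As a purely schematic illustration of how the three 3S components could compose into one recovery loop (this is not the authors' hardware implementation, and every name below is hypothetical):

```python
# Schematic only: a failed self-test triggers diagnosis, then repair.
# All names are hypothetical illustrations, not the book's actual design.

def self_test(unit: dict) -> bool:
    """Run a built-in test; True means the unit passed."""
    return unit.get("healthy", True)

def self_diagnose(unit: dict) -> str:
    """Narrow the failure down to a repairable resource."""
    return unit.get("faulty_block", "unknown")

def self_repair(unit: dict, block: str) -> None:
    """Swap in a spare for the diagnosed block."""
    unit["remapped"] = block
    unit["healthy"] = True

def fault_tolerant_step(unit: dict) -> dict:
    if not self_test(unit):
        self_repair(unit, self_diagnose(unit))
    return unit

print(fault_tolerant_step({"healthy": False, "faulty_block": "alu0"}))
# {'healthy': True, 'faulty_block': 'alu0', 'remapped': 'alu0'}
```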
This book unites studies in different fields related to the development of the relations between logic, law and legal reasoning. It combines historical and philosophical studies of legal reasoning in the Civil and Common Law, and in the often neglected Arabic and Talmudic traditions of jurisprudence, with recent technical developments in computer science. This combination has resulted in renewed interest in deontic logic and the logic of norms, stemming from the interaction between artificial intelligence and law and their applications to these areas of logic. The book also aims to motivate and launch a more intense interaction between the historical and philosophical work of Arabic, Talmudic and European jurisprudence. It discusses new insights into the interaction between logic and law, and more precisely the study of different answers to the question: what role does logic play in legal reasoning? Perspectives range from foundational studies (such as logical principles and frameworks) to applications, and include historical perspectives.
This book puts in focus various techniques for checking the modeling fidelity of Cyber-Physical Systems (CPS) with respect to the physical world they represent. The authors present modeling and analysis techniques from different communities and very different angles, discuss their possible interactions, and examine the commonalities and differences between their practices. Coverage includes model-driven development, resource-driven development, statistical analysis, proofs of simulator implementation, compiler construction, power/temperature modeling of digital devices, high-level performance analysis, and code/device certification. Several industrial contexts are covered, including the modeling of computing and communication, proof architecture models, and statistics-based validation techniques.
Analog Integrated Circuits deals with the design and analysis of modern analog circuits using integrated bipolar and field-effect transistor technologies. This book is suitable as a text for a one-semester course for senior level or first-year graduate students as well as a reference work for practicing engineers. Advanced students will also find the text useful in that some of the material presented here is not covered in many first courses on analog circuits. Included in this is an extensive coverage of feedback amplifiers, current-mode circuits, and translinear circuits. Suitable background would be fundamental courses in electronic circuits and semiconductor devices. This book contains numerous examples, many of which include commercial analog circuits. End-of-chapter problems are given, many illustrating practical circuits. Chapter 1 discusses the models commonly used to represent devices used in modern analog integrated circuits. Presented are models for bipolar junction transistors, junction diodes, junction field-effect transistors, and metal-oxide semiconductor field-effect transistors. Both large-signal and small-signal models are developed as well as their implementation in the SPICE circuit simulation program. The basic building blocks used in a large variety of analog circuits are analyzed in Chapter 2; these consist of current sources, dc level-shift stages, single-transistor gain stages, two-transistor gain stages, and output stages. Both bipolar and field-effect transistor implementations are presented. Chapter 3 deals with operational amplifier circuits. The four basic op-amp circuits are analyzed: (1) voltage-feedback amplifiers, (2) current-feedback amplifiers, (3) current-differencing amplifiers, and (4) transconductance amplifiers. Selected applications are also presented.
This book describes the design and implementation of energy-efficient smart (digital output) temperature sensors in CMOS technology. To accomplish this, a new readout topology, namely the zoom-ADC, is presented. It combines a coarse SAR-ADC with a fine Sigma-Delta (SD) ADC. The digital result obtained from the coarse ADC is used to set the reference levels of the SD-ADC, thereby zooming its full-scale range into a small region around the input signal. This technique considerably reduces the SD-ADC's full-scale range, and notably relaxes the number of clock cycles needed for a given resolution, as well as the DC-gain and swing of the loop-filter. Both conversion time and power-efficiency can be improved, which results in a substantial improvement in energy-efficiency. Two BJT-based sensor prototypes based on 1st-order and 2nd-order zoom-ADCs are presented. They both achieve inaccuracies of less than ±0.2 °C over the military temperature range (-55 °C to 125 °C). A prototype capable of sensing temperatures up to 200 °C is also presented. As an alternative to BJTs, sensors based on dynamic threshold MOSTs (DTMOSTs) are also presented. It is shown that DTMOSTs are capable of achieving low inaccuracy (±0.4 °C over the military temperature range) as well as sub-1 V operation, making them well suited for use in modern CMOS processes.
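The zoom principle described above is easy to see numerically: the coarse conversion selects a segment of the full-scale range, and the fine converter only needs to resolve the residue within that segment. A minimal sketch, assuming ideal coarse and fine stages and purely illustrative bit counts (not the book's circuit parameters):

```python
# Minimal numeric sketch of a zoom ADC: coarse stage picks the segment,
# fine stage resolves only within it. Ideal stages assumed; the bit
# counts are illustrative, not taken from the book's prototypes.

def zoom_adc(x: float, full_scale: float = 1.0,
             coarse_bits: int = 4, fine_bits: int = 12) -> float:
    segments = 1 << coarse_bits
    seg_width = full_scale / segments
    k = min(int(x / seg_width), segments - 1)   # coarse (SAR-like) result
    lo = k * seg_width                          # zoomed-in reference level
    residue = x - lo                            # what the fine stage sees
    steps = 1 << fine_bits
    fine = round(residue / seg_width * (steps - 1))
    return lo + fine * seg_width / (steps - 1)  # reconstructed input

print(zoom_adc(0.62345))  # ~0.62344: fine stage covered only 1/16 of range
```

The point of the structure is visible in the arithmetic: the fine stage spans one segment (here 1/16 of full scale) instead of the whole range, which is what relaxes its clock-cycle, DC-gain and swing requirements.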
First of all, I would like to congratulate Gabriella Pasi and Gloria Bordogna for the work they accomplished in preparing this new book in the series "Studies in Fuzziness and Soft Computing." "Recent Issues on the Management of Fuzziness in Databases" is undoubtedly a token of their long-lasting and active involvement in the area of Fuzzy Information Retrieval and Fuzzy Database Systems. This book is really welcome in the area of fuzzy databases, where books are not numerous, although the first works at the crossroads of fuzzy sets and databases were initiated about twenty years ago by L. Zadeh. Only five books have been published since 1995, when the first volume dedicated to fuzzy databases in the series "Studies in Fuzziness and Soft Computing", edited by J. Kacprzyk and myself, appeared. Going beyond books strictly speaking, let us also mention the existence of review papers that are part of a couple of handbooks related to fuzzy sets published since 1998. The area known as fuzzy databases covers a number of topics, among which:
- flexible queries addressed to regular databases (a small sketch of such a query follows this list),
- the extension of the notion of a functional dependency,
- data mining and fuzzy summarization,
- querying databases containing imperfect attribute values represented by means of possibility distributions.
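As promised above, here is a minimal illustration of the first topic: a flexible query ranks the rows of an ordinary table by their degree of satisfaction of a fuzzy predicate, rather than filtering them with a crisp cut-off. The membership function for "young" and the table contents are invented for the example:

```python
# A flexible query over a regular table: rows are ranked by their degree
# of membership in the fuzzy predicate "young" (an invented function)
# instead of being filtered by a crisp condition like age < 30.

def young(age: float) -> float:
    """Membership in "young": 1 below 25, falling linearly to 0 at 40."""
    if age <= 25:
        return 1.0
    if age >= 40:
        return 0.0
    return (40 - age) / 15

employees = [("ana", 23), ("bo", 31), ("cy", 45)]
ranked = sorted(((young(a), name) for name, a in employees), reverse=True)
for degree, name in ranked:
    print(f"{name}: {degree:.2f}")  # ana: 1.00, bo: 0.60, cy: 0.00
```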
This book describes automated debugging approaches for the bugs and faults which appear at different abstraction levels of a hardware system. The authors employ a transaction-based debug approach to systems at the transaction level, asserting the correct relation of transactions. The automated debug approach for design bugs finds the potential fault candidates at the RTL and gate level of a circuit. Debug techniques for logic bugs and synchronization bugs are demonstrated, enabling readers to localize the most difficult bugs. Debug automation for electrical faults (delay faults) finds the potentially failing speedpaths in a circuit at gate level. The various debug approaches described achieve high diagnosis accuracy and reduce the debugging time, shortening the IC development cycle and increasing the productivity of designers.
- Describes a unified framework for debug automation used at both pre-silicon and post-silicon stages;
- Provides approaches for debug automation of a hardware system at different levels of abstraction, i.e., chip, gate level, RTL and transaction level;
- Includes techniques for debug automation of design bugs and electrical faults, as well as an infrastructure to debug NoC-based multiprocessor SoCs.
This textbook aims to help the reader develop an in-depth understanding of logical reasoning and gain knowledge of the theory of computation. The book combines theoretical teaching and practical exercises; the latter are realised in Isabelle/HOL, a modern theorem prover, and PAT, an industry-scale model checker. I also give entry-level tutorials on the two tools to help the reader get started. By the end of the book, the reader should be proficient with both tools. Content-wise, this book focuses on the syntax, semantics and proof theory of various logics, along with automata theory, formal languages, computability and complexity. The final chapter closes the gap with a discussion of the insight that links logic with computation. This book is written for a high-level undergraduate course or a Master's course. The hybrid skill set of practical theorem proving and model checking should serve readers well should they pursue a research or engineering career in formal methods.
In programming, "gotcha" is a well-known term. A gotcha is a language feature which, if misused, causes unexpected - and, in hardware design, potentially disastrous - behavior. The purpose of this book is to enable engineers to write better Verilog/SystemVerilog design and verification code, and to deliver digital designs to market more quickly. This book shows over 100 common coding mistakes that can be made with the Verilog and SystemVerilog languages. Each example explains in detail the symptoms of the error, the language rules that cover the error, and the correct coding style to avoid the error. The book helps digital design and verification engineers to recognize these common coding mistakes, and to know how to avoid them. Many of these errors are very subtle, and can potentially cost hours or days of lost engineering time trying to find and debug them. This book is unique: while there are many books that teach the language, and a few that try to teach coding style, no other book addresses how to recognize and avoid coding errors with these languages.
Edsger Wybe Dijkstra (1930-2002) was one of the most influential researchers in the history of computer science, making fundamental contributions to both the theory and practice of computing. Early in his career, he proposed the single-source shortest path algorithm, now commonly referred to as Dijkstra's algorithm. He wrote (with Jaap Zonneveld) the first ALGOL 60 compiler, and designed and implemented with his colleagues the influential THE operating system. Dijkstra invented the field of concurrent algorithms, with concepts such as mutual exclusion, deadlock detection, and synchronization. A prolific writer and forceful proponent of the concept of structured programming, he convincingly argued against the use of the Go To statement. In 1972 he was awarded the ACM Turing Award for "fundamental contributions to programming as a high, intellectual challenge; for eloquent insistence and practical demonstration that programs should be composed correctly, not just debugged into correctness; for illuminating perception of problems at the foundations of program design." Subsequently he invented the concept of self-stabilization relevant to fault-tolerant computing. He also devised an elegant language for nondeterministic programming and its weakest precondition semantics, featured in his influential 1976 book A Discipline of Programming in which he advocated the development of programs in concert with their correctness proofs. In the later stages of his life, he devoted much attention to the development and presentation of mathematical proofs, providing further support to his long-held view that the programming process should be viewed as a mathematical activity. In this unique new book, 31 computer scientists, including five recipients of the Turing Award, present and discuss Dijkstra's numerous contributions to computing science and assess their impact. Several authors knew Dijkstra as a friend, teacher, lecturer, or colleague. Their biographical essays and tributes provide a fascinating multi-author picture of Dijkstra, from the early days of his career up to the end of his life.
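For reference, the single-source shortest path algorithm mentioned above fits in a few lines. This is a standard textbook rendering with a binary heap, applied to an invented example graph; it is not drawn from the book itself:

```python
import heapq

def dijkstra(graph: dict, source: str) -> dict:
    """Shortest distances from source, assuming non-negative edge weights."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale entry: u was already settled with a shorter path
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

graph = {"a": [("b", 2), ("c", 5)], "b": [("c", 1)], "c": []}
print(dijkstra(graph, "a"))  # {'a': 0, 'b': 2, 'c': 3}
```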
This open access book summarizes knowledge about several file systems and file formats commonly used in mobile devices. In addition to the fundamental description of the formats, there are hints about the forensic value of possible artefacts, along with an outline of tools that can decode the relevant data. The book is organized into two distinct parts. Part I describes several different file systems that are commonly used in mobile devices.
* APFS is the file system used in all modern Apple devices, including iPhones, iPads, and even Apple computers such as the MacBook series.
* Ext4 is very common in Android devices and is the successor of the Ext2 and Ext3 file systems that were commonly used on Linux-based computers.
* The Flash-Friendly File System (F2FS), which Samsung Electronics developed in 2012, is a Linux file system designed explicitly for NAND flash memory, common in removable storage devices and mobile devices.
* The QNX6 file system is present in smartphones delivered by Blackberry (e.g. devices that are using Blackberry 10) and in modern vehicle infotainment systems that use QNX as their operating system.
Part II describes five different file formats that are commonly used on mobile devices.
* SQLite is nearly omnipresent in mobile devices, with an overwhelming majority of all mobile applications storing their data in such databases.
* The second leading file format in the mobile world is the Property List, predominantly found on Apple devices.
* Java Serialization is a popular technique for storing object states in the Java programming language. Mobile application (app) developers very often resort to this technique to make their application state persistent.
* The Realm database format has emerged over recent years as a possible successor to the now ageing SQLite format and has begun to appear as part of some modern applications on mobile devices.
* Protocol Buffers provide a format for taking compiled data and serializing it by turning it into bytes represented in decimal values, a technique commonly used in mobile devices.
The aim of this book is to act as a knowledge base and reference guide for digital forensic practitioners who need knowledge about a specific file system or file format. It is also hoped to provide useful insight and knowledge for students or other aspiring professionals who want to work within the field of digital forensics. The book is written with the assumption that the reader will have some existing knowledge and understanding of computers, mobile devices, file systems and file formats.
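As a small illustration of why SQLite matters so much to practitioners, the sketch below uses Python's standard sqlite3 module to enumerate the tables and schemas of a recovered app database. The file name is hypothetical; the database is opened read-only so the evidence file is never modified:

```python
import sqlite3

# Open the recovered database read-only so the evidence is not altered.
# "recovered_app.db" is a hypothetical path; substitute the file
# extracted from the device image.
con = sqlite3.connect("file:recovered_app.db?mode=ro", uri=True)

# sqlite_master is SQLite's built-in catalog: one row per table,
# including the CREATE TABLE statement that defines its schema.
for name, sql in con.execute(
        "SELECT name, sql FROM sqlite_master WHERE type = 'table'"):
    print(name)
    print(sql)

con.close()
```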
Coding is one of the most in-demand skills in the job market. Whether you're a recent graduate or a professional, Confident Coding offers the career insights and technical knowledge you need for success. A unique combination of technical insights and fascinating career guidance, this book highlights the importance of coding, whatever your professional profile. For entrepreneurs, being able to create your own website or app can grant you valuable freedom and revolutionize your business. For aspiring developers, this book will give you the building blocks to embark on your career path. This new and improved third edition of the award-winning book gives you a step-by-step learning guide to HTML, CSS, JavaScript, Python, building iPhone and Android apps and debugging. Confident Coding is the essential guide to mastering the fundamentals of coding. About the Confident series... From coding and data science to cloud and cyber security, the Confident books are perfect for building your technical knowledge and enhancing your professional career.
Neural network and artificial intelligence algorithms and computing have increased not only in complexity but also in the number of applications. This in turn has created a tremendous need for computational power that conventional scalar processors may not be able to deliver efficiently, as these processors are oriented towards numeric and data manipulations. Due to the neurocomputing requirements (such as non-programming and learning) and the artificial intelligence requirements (such as symbolic manipulation and knowledge representation), a different set of constraints and demands is imposed on the computer architectures/organizations for these applications. Research and development of new computer architectures and VLSI circuits for neural networks and artificial intelligence has increased in order to meet the new performance requirements. This book presents novel approaches and trends in VLSI implementations of machines for these applications. Papers have been drawn from a number of research communities; the subjects span analog and digital VLSI design, computer design, computer architectures, neurocomputing and artificial intelligence techniques. This book has been organized into four subject areas that cover its two major categories: analog circuits for neural networks, digital implementations of neural networks, neural networks on multiprocessor systems and applications, and VLSI machines for artificial intelligence. The topics that are covered in each area are briefly introduced below.
There is no doubt that the microprocessor (µP) revolution will continue into the future and many will be required to specify and integrate microprocessors into products or systems in their own disciplines. Therefore, well-designed flexible interfaces will be required to ensure compatibility with other equipment and to extend design options. Although there are several books on microcomputers and microprocessors, only a few of those devote but a small part to the important aspects of interfaces. It was with this in mind that the present book was written as a self-contained volume to be part of the more general series: Microprocessor-Based Systems Engineering. It fills an existing gap in technology, as interfaces are the last items to be seriously considered in the race of new technology, and it deals with the systematic study of microprocessor interfaces and their applications in many diversified fields. This book is aimed at engineers in industry and engineering students who need to learn how to interface microprocessors, and hence microcomputers and other related equipment, to external digital or analog devices. It is suitable for use as a textbook or for supplementary reading, either in an applied undergraduate course in electrical engineering or in the last year of three-year-curriculum technical colleges.
It has become clear in recent years from such major forums as the various international conferences on flexible manufacturing systems (FMSs) that the computer-controlled and -integrated "factory of the future" is now being considered as a commercially viable and technically achievable goal. To date, most attention has been given to the design, development, and evaluation of flexible machining systems. Now, with the essential support of increasing numbers of industrial examples, the general concepts, technical requirements, and cost-effectiveness of responsive, computer-integrated, flexible machining systems are fast becoming established knowledge. There is, of course, much still to be done in the development of modular computer hardware and software, and the scope for cost-effective developments in programming systems, workpiece handling, and quality control will ensure that continuing development will occur over the next decade. However, international attention is now increasingly turning toward the flexible computer control of the assembly process as the next logical step in progressive factory automation. It is here at this very early stage that Tony Owen has bravely set out to encompass the future field of flexible assembly systems (FASs) in his own distinctive, wide-ranging style.
Knowledge discovery is an area of computer science that attempts to uncover interesting and useful patterns in data that permit a computer to perform a task autonomously or assist a human in performing a task more efficiently. Soft Computing for Knowledge Discovery provides a self-contained and systematic exposition of the key theory and algorithms that form the core of knowledge discovery from a soft computing perspective. It focuses on knowledge representation, machine learning, and the key methodologies that make up the fabric of soft computing - fuzzy set theory, fuzzy logic, evolutionary computing, and various theories of probability (e.g. naive Bayes and Bayesian networks, Dempster-Shafer theory, mass assignment theory, and others). In addition to describing many state-of-the-art soft computing approaches to knowledge discovery, the author introduces Cartesian granule features and their corresponding learning algorithms as an intuitive approach to knowledge discovery. This new approach embraces the synergistic spirit of soft computing and exploits uncertainty in order to achieve tractability, transparency and generalization. Parallels are drawn between this approach and other well known approaches (such as naive Bayes and decision trees) leading to equivalences under certain conditions. The approaches presented are further illustrated in a battery of both artificial and real-world problems. Knowledge discovery in real-world problems, such as object recognition in outdoor scenes, medical diagnosis and control, is described in detail. These case studies provide further examples of how to apply the presented concepts and algorithms to practical problems. The author provides web page access to an online bibliography, datasets, source codes for several algorithms described in the book, and other information. Soft Computing for Knowledge Discovery is for advanced undergraduates, professionals and researchers in computer science, engineering and business information systems who work or have an interest in the dynamic fields of knowledge discovery and soft computing.
The growing complexity of projects today, as well as the uncertainty inherent in innovative projects, is making obsolete traditional project management practices and procedures, which are based on the notion that much about a project is known at its start. The current high level of change and complexity confronting organizational leaders and managers requires a new approach to projects so they can be managed flexibly to embrace and exploit change. What once used to be considered extreme uncertainty is now the norm, and managing planned projects is being replaced by managing projects as they evolve. Managing projects in extreme situations, such as polar and military expeditions, shows how to manage projects successfully in today's turbulent environment. Executed under the harshest and most unpredictable conditions, these projects are great sources for learning about how to manage unexpected and unforeseen situations as they occur. This book presents multiple case studies of managing extreme events as they happened during polar, mountain climbing, military, and rescue expeditions. A boat accident in the Arctic is a lesson on how an effective project manager must be ambidextrous: on one hand able to follow plans, and on the other able to abandon those plans when disaster strikes and improvise new ones in response. Polar expeditions also illustrate how a team can use "weak links" to go beyond its usual information network to acquire strategic information. Fire and rescue operations illustrate how one team member's knowledge can be transferred to the entire team. Military operations provide case material on how teams coordinate and make use of both individual and collective competencies. This groundbreaking work pushes the definitions of a project and project management to reveal new insight that benefits researchers, academics, and the practitioners managing projects in today's challenging and uncertain times.