This textbook provides an accessible introduction to the most important features of Fortran 2008. Features: presents a complete discussion of all the basic features needed to write complete Fortran programs; makes extensive use of examples and case studies to illustrate the practical use of features of Fortran 2008, and supplies simple problems for the reader; provides a detailed exploration of control constructs, modules, procedures, arrays, character strings, data structures and derived types, pointer variables, and object-oriented programming; includes coverage of such major new features in Fortran 2008 as coarrays, submodules, parameterized derived types, and derived-type input and output; highlights the topic of modules as the framework for organizing data and procedures for a Fortran program; investigates the excellent input/output facilities available in Fortran; contains appendices listing the many intrinsic procedures and providing a brief informal syntax specification for the language.
Nash equilibrium is the central solution concept in Game Theory. Since Nash's original paper in 1951, it has found countless applications in modeling strategic behavior of traders in markets, (human) drivers and (electronic) routers in congested networks, nations in nuclear disarmament negotiations, and more. A decade ago, the relevance of this solution concept was called into question by computer scientists, who proved (under appropriate complexity assumptions) that computing a Nash equilibrium is an intractable problem. And if centralized, specially designed algorithms cannot find Nash equilibria, why should we expect distributed, selfish agents to converge to one? The remaining hope was that at least approximate Nash equilibria can be efficiently computed. Understanding whether there is an efficient algorithm for approximate Nash equilibrium has been the central open problem in this field for the past decade. In this book, we provide strong evidence that even finding an approximate Nash equilibrium is intractable. We prove several intractability theorems for different settings (two-player games and many-player games) and models (computational complexity, query complexity, and communication complexity). In particular, our main result is that under a plausible and natural complexity assumption ("Exponential Time Hypothesis for PPAD"), there is no polynomial-time algorithm for finding an approximate Nash equilibrium in two-player games. The problem of approximate Nash equilibrium in a two-player game poses a unique technical challenge: it is a member of the class PPAD, which captures the complexity of several fundamental total problems, i.e., problems that always have a solution; and it also admits a quasipolynomial time algorithm. Either property alone is believed to place this problem far below NP-hard problems in the complexity hierarchy; having both simultaneously places it just above P, at what can be called the frontier of intractability. Indeed, the tools we develop in this book to advance on this frontier are useful for proving hardness of approximation of several other important problems whose complexity lies between P and NP: Brouwer's fixed point, market equilibrium, CourseMatch (A-CEEI), densest k-subgraph, community detection, VC dimension and Littlestone dimension, and signaling in zero-sum games.
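For orientation, the approximation in question is the standard additive one: in a two-player game with payoff matrices A, B in [0,1]^{n x n}, a pair of mixed strategies (x, y) is an epsilon-approximate Nash equilibrium if neither player can improve its expected payoff by more than epsilon through a unilateral deviation. In the usual notation (a standard formulation from the literature, not quoted from the book):

    \[
    \begin{aligned}
    x^{\top} A\, y &\ge (x')^{\top} A\, y - \epsilon && \text{for every mixed strategy } x', \\
    x^{\top} B\, y &\ge x^{\top} B\, y' - \epsilon && \text{for every mixed strategy } y'.
    \end{aligned}
    \]

With payoffs normalized to [0,1], the quasipolynomial-time algorithm mentioned above finds such a profile for any constant epsilon, and the book's hardness results target exactly this constant-epsilon regime.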
From the reviews of the 1st edition: "This book provides a comprehensive and detailed account of different topics in algorithmic 3-dimensional topology, culminating with the recognition procedure for Haken manifolds and including the up-to-date results in computer enumeration of 3-manifolds. Originating from lecture notes of various courses given by the author over a decade, the book is intended to combine the pedagogical approach of a graduate textbook (without exercises) with the completeness and reliability of a research monograph... All the material, with few exceptions, is presented from the peculiar point of view of special polyhedra and special spines of 3-manifolds. This choice contributes to keep the level of the exposition really elementary. In conclusion, the reviewer subscribes to the quotation from the back cover: 'the book fills a gap in the existing literature and will become a standard reference for algorithmic 3-dimensional topology both for graduate students and researchers.'" Zentralblatt für Mathematik, 2004. For this 2nd edition, new results, new proofs, and commentaries for a better orientation of the reader have been added. In particular, in Chapter 7 several new sections concerning applications of the computer program "3-Manifold Recognizer" have been included.
A formal method is not the main engine of a development process; its contribution is to improve system dependability by motivating formalisation where useful. This book summarizes the results of the DEPLOY research project on engineering methods for dependable systems through the industrial deployment of formal methods in software development. The applications considered were in automotive, aerospace, railway, and enterprise information systems, and microprocessor design. The project introduced a formal method, Event-B, into several industrial organisations and built on the lessons learned to provide an ecosystem of better tools, documentation and support to help others to select and introduce rigorous systems engineering methods. The contributing authors report on these projects and the lessons learned. For the academic and research partners and the tool vendors, the project identified improvements required in the methods and supporting tools, while the industrial partners learned about the value of formal methods in general. A particular feature of the book is the frank assessment of the managerial and organisational challenges, the weaknesses in some current methods and supporting tools, and the ways in which they can be successfully overcome. The book will be of value to academic researchers, systems and software engineers developing critical systems, industrial managers, policymakers, and regulators.
This book describes the benefits that emerge when the fields of constraint programming and concurrency meet. On the one hand, constraints can be used in concurrency theory to increase the conciseness and the expressive power of concurrent languages from a pragmatic point of view. On the other hand, problems modeled using constraints can be solved faster and more efficiently on a concurrent system. Both directions are explored, providing two separate lines of development. First, the expressive power of a concurrent language that supports constraints as a primitive construct, namely Constraint Handling Rules, is studied, and the features that make this language Turing powerful are identified. Then a framework for solving constraint problems, intended to be deployed on a concurrent system, is proposed; it is developed in the concurrent language Jolie, which follows the service-oriented paradigm. Based on this experience, an extension to service-oriented languages is also proposed in order to overcome some of their limitations and to improve the development of concurrent applications.
This volume explains how advances in computer technology will augment communication in person-to-person, organizational, and educational settings. It describes the convergence of virtual reality and group decision support, and how these will serve educational and organizational effectiveness. Contributors--experts from business and academia--examine what the computing/communications world will look like in the near future, what the specific needs of various industries will be, and how innovations will fit into organizations and society. These three topics are addressed with attention to the following questions: What will be the size of initial and future markets for advanced computer and communications technology? What will be the future computing environment in manufacturing operations, in the executive suite, in the office, in the field and on the road, at the point of service, for the computer-integrated enterprise, at home, in the school, and in the global marketplace?
Locally computable (NC0) functions are "simple" functions for which every bit of the output can be computed by reading a small number of bits of their input. The study of locally computable cryptography attempts to construct cryptographic functions that achieve this strong notion of simplicity and simultaneously provide a high level of security. Such constructions are highly parallelizable and they can be realized by Boolean circuits of constant depth. This book establishes, for the first time, the possibility of local implementations for many basic cryptographic primitives such as one-way functions, pseudorandom generators, encryption schemes and digital signatures. It also extends these results to other stronger notions of locality, and addresses a wide variety of fundamental questions about local cryptography. The author's related thesis was honorably mentioned (runner-up) for the ACM Dissertation Award in 2007, and this book includes some expanded sections and proofs, and notes on recent developments. The book assumes only a minimal background in computational complexity and cryptography and is therefore suitable for graduate students or researchers in related areas who are interested in parallel cryptography. It also introduces general techniques and tools which are likely to interest experts in the area.
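As a purely illustrative picture of what locality means, the sketch below evaluates a random local map in which every output bit is a fixed 5-ary predicate applied to 5 input bits selected by a fixed bipartite graph, loosely in the spirit of Goldreich-style local functions. The predicate, graph and parameters here are toy choices made up for this illustration; they carry no security guarantee and are not constructions taken from the book.

    import random

    def random_local_graph(n, m, d=5, seed=0):
        # Fix, once and for all, which d input positions each output bit may read.
        rng = random.Random(seed)
        return [rng.sample(range(n), d) for _ in range(m)]

    def predicate(b):
        # Illustrative 5-ary predicate: XOR of three bits plus one AND term,
        # so the map is not purely linear.
        return b[0] ^ b[1] ^ b[2] ^ (b[3] & b[4])

    def local_eval(x, graph):
        # Each output bit reads only d = 5 bits of x, independent of the input length n.
        return [predicate([x[i] for i in taps]) for taps in graph]

    n, m = 32, 48
    graph = random_local_graph(n, m)
    x = [random.Random(1).randrange(2) for _ in range(n)]
    print(local_eval(x, graph))

Because the per-output work is a constant independent of the input length, all output bits can be computed in parallel by a constant-depth circuit, which is the sense in which such functions are highly parallelizable.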
Integrating Security and Software Engineering: Advances and Future Vision provides the first step towards narrowing the gap between security and software engineering. This book introduces the field of secure software engineering, which is a branch of research investigating the integration of security concerns into software engineering practices. "Integrating Security and Software Engineering: Advances and Future Vision" discusses problems and challenges of considering security during the development of software systems, and also presents the predominant theoretical and practical approaches that integrate security and software engineering.
I3E 2010 marked the 10th anniversary of the IFIP Conference on e-Business, e-Services, and e-Society, continuing a tradition that began in 1998 with the International Conference on Trends in Electronic Commerce, TrEC 1998, in Hamburg (Germany). Three years later the inaugural I3E 2001 conference was held in Zurich (Switzerland). Since then I3E has made its journey through the world: 2002 Lisbon (Portugal), 2003 Sao Paulo (Brazil), 2004 Toulouse (France), 2005 Poznan (Poland), 2006 Turku (Finland), 2007 Wuhan (China), 2008 Tokyo (Japan), and 2009 Nancy (France). I3E 2010 took place in Buenos Aires (Argentina) November 3-5, 2010. Known as "The Pearl" of South America, Buenos Aires is a cosmopolitan, colorful, and vibrant city, surprising its visitors with a vast variety of cultural and artistic performances, European architecture, and the passion for tango, coffee places, and football discussions. A cultural reference in Latin America, the city hosts 140 museums, 300 theaters, and 27 public libraries, including the National Library. It is also the main educational center in Argentina and home of renowned universities, including the University of Buenos Aires, created in 1821. Besides location, the timing of I3E 2010 is also significant--it coincided with the 200th anniversary celebration of the first local government in Argentina.
This book offers a coherent and comprehensive approach to feature subset selection in the scope of classification problems, explaining the foundations, real application problems and the challenges of feature selection for high-dimensional data. The authors first focus on the analysis and synthesis of feature selection algorithms, presenting a comprehensive review of basic concepts and experimental results for the most well-known algorithms. They then address different real scenarios with high-dimensional data, showing the use of feature selection algorithms in contexts with differing requirements and information: microarray data, intrusion detection, tear film lipid layer classification and cost-based features. The book then delves into the big-dimensionality scenario, paying attention to important problems that arise in high-dimensional spaces, such as scalability, distributed processing and real-time processing, scenarios that open up new and interesting challenges for researchers. The book is useful for practitioners, researchers and graduate students in the areas of machine learning and data mining.
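As a minimal, hypothetical sketch of the kind of filter-style feature selection the book surveys, the snippet below keeps the k highest-scoring features of a synthetic high-dimensional dataset using scikit-learn; the dataset, the score function and the value of k are arbitrary illustrative choices, not examples from the book.

    # Filter-style feature selection on a synthetic high-dimensional dataset.
    from sklearn.datasets import make_classification
    from sklearn.feature_selection import SelectKBest, f_classif

    X, y = make_classification(n_samples=200, n_features=500,
                               n_informative=10, random_state=0)
    selector = SelectKBest(score_func=f_classif, k=20)  # keep the 20 top-ranked features
    X_reduced = selector.fit_transform(X, y)
    print(X_reduced.shape)  # (200, 20)

Wrapper and embedded methods, by contrast, score feature subsets through (or inside) the learning algorithm itself rather than with a model-independent statistic.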
The demand for large-scale dependable systems, such as Air Traffic Management, industrial plants and space systems, is attracting the efforts of many world-leading European companies and SMEs in the area, and is expected to increase in the near future. The adoption of Off-The-Shelf (OTS) items plays a key role in such a scenario. OTS items help master complexity and reduce costs and time-to-market; however, achieving these goals while ensuring dependability requirements at the same time is challenging. The CRITICAL STEP project establishes a strategic collaboration between academic and industrial partners, and proposes a framework to support the development of dependable, OTS-based, critical systems. The book introduces methods and tools adopted by the critical systems industry, and surveys key achievements of the CRITICAL STEP project along four directions: fault injection tools, V&V of critical systems, runtime monitoring and evaluation techniques, and security assessment.
In recent years Genetic Algorithms (GA) and Artificial Neural Networks (ANN) have progressively increased in importance amongst the techniques routinely used in chemometrics. This book, which contains contributions from experts in the field, is divided into two sections (GA and ANN). In each part, tutorial chapters are included in which the theoretical bases of each technique are expertly (but simply) described. These are followed by application chapters in which special emphasis is given to the advantages of applying GA or ANN to that specific problem, compared to classical techniques, and to the risks connected with their misuse.
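The following is a minimal, self-contained sketch of the genetic-algorithm loop itself (tournament selection, one-point crossover, bit-flip mutation), using a trivial stand-in fitness; in a chemometric variable-selection setting the chromosome would typically encode which variables (e.g. wavelengths) enter a calibration model, and the fitness would be a cross-validated figure of merit. All names and parameter values below are illustrative and not taken from the book.

    import random

    def evolve(n_bits=30, pop_size=40, generations=60, p_mut=0.02, seed=0):
        rng = random.Random(seed)
        fitness = lambda ind: sum(ind)  # placeholder objective (count of 1-bits)
        pop = [[rng.randrange(2) for _ in range(n_bits)] for _ in range(pop_size)]

        def pick():
            # Tournament selection: keep the fitter of two random individuals.
            a, b = rng.choice(pop), rng.choice(pop)
            return a if fitness(a) >= fitness(b) else b

        for _ in range(generations):
            children = []
            while len(children) < pop_size:
                p1, p2 = pick(), pick()
                cut = rng.randrange(1, n_bits)          # one-point crossover
                child = [bit ^ (rng.random() < p_mut)   # bit-flip mutation
                         for bit in p1[:cut] + p2[cut:]]
                children.append(child)
            pop = children
        return max(pop, key=fitness)

    best = evolve()
    print(sum(best), "of", len(best), "bits set in the best individual")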
Dynamically Reconfigurable Systems is the first book to focus on the emerging field of dynamically reconfigurable computing systems. While programmable logic and design-time configurability are well elaborated and covered by various texts, this book presents a unique overview of the state of the art and recent results for dynamic and run-time reconfigurable computing systems. Reconfigurable hardware is not only of utmost importance for large manufacturers and vendors of microelectronic devices and systems, but also a very attractive technology for smaller and medium-sized companies. Hence, Dynamically Reconfigurable Systems also addresses researchers and engineers actively working in the field and provides them with information on the newest developments and trends in dynamic and run-time reconfigurable systems.
The present book is the result of a three-year research project which investigated the creative act of composing by means of algorithmic composition. Central to the investigation are the compositional strategies of 12 composers, which were documented through a dialogic and cyclic process of modelling and evaluating musical materials. The aesthetic premises and compositional approaches span a rich spectrum of diverse positions, which is also reflected in the kinds of approaches and methods used. These approaches and methods include the generation and evaluation of chord sequences using genetic algorithms, the application of morphing strategies to research harmonic transformations, the automatic classification of personal preferences via machine learning, and the application of mathematical music theory to the analysis and resynthesis of musical material. The second part of the book features contributions by Sandeep Bhagwati, William Brooks, David Cope, Darla Crispin, Nicolas Donin, and Guerino Mazzola. These authors variously consider the project from different perspectives, offer independent approaches, or provide more general reflections from their respective research fields.
This book contains the collection of papers presented at the conference of the International Federation for Information Processing Working Group 8.2, "Information and Organizations." The conference took place during June 21-24, 2009 at the Universidade do Minho in Guimaraes, Portugal. The conference, entitled "CreativeSME - The Role of IS in Leveraging the Intelligence and Creativity of SME's," attracted high-quality submissions from across the world. Each paper was reviewed by at least two reviewers in a double-blind review process. In addition to the 19 papers presented at the conference, there were five panels and four workshops, which covered a range of issues relevant to SMEs, creativity and information systems. We would like to show our appreciation of the efforts of our two invited keynote speakers, Michael Dowling of the University of Regensburg, Germany, and Carlos Zorrinho, Portuguese coordinator of the Lisbon Strategy and the Technological Plan. The following organizations supported the conference through financial or other contributions, and we would like to thank them for their engagement.
In recent years, cloud computing has gained a significant amount of attention by providing more flexible ways to store applications remotely. With software testing continuing to be an important part of the software engineering life cycle, the emergence of software testing in the cloud has the potential to change the way software testing is performed. Software Testing in the Cloud: Perspectives on an Emerging Discipline is a comprehensive collection of research by leading experts in the field providing an overview of cloud computing and current issues in software testing and system migration. Deserving the attention of researchers, practitioners, and managers, this book aims to raise awareness about this new field of study.
Information security and copyright protection are more important today than ever before. Digital watermarking is one of the most widely used techniques in the area of information security. This book introduces a number of digital watermarking techniques and is divided into four parts. The first part introduces the importance of watermarking techniques and intelligent technology. The second part presents a number of watermarking techniques. The third part covers hybrid watermarking techniques, and the final part presents conclusions. This book is directed at students, professors, researchers and application engineers who are interested in the area of information security.
"Software Defined Networks" discusses the historical networking
environment that gave rise to SDN, as well as the latest advances
in SDN technology. The book gives you the state of the art
knowledge needed for successful deployment of an SDN, including:
How to explain to the non-technical business decision makers in
your organization the potential benefits, as well as the risks, in
shifting parts of a network to the SDN modelHow to make intelligent
decisions about when to integrate SDN technologies in a networkHow
to decide if your organization should be developing its own SDN
applications or looking to acquire these from an outside vendorHow
to accelerate the ability to develop your own SDN application, be
it entirely novel or a more efficient approach to a long-standing
problem
This book explores how agile development practices, in particular pair programming, code review and automated testing, help software development teams to perform better. Agile software engineering has become the standard software development paradigm over the last decade, and the insights provided here are taken from a large-scale survey of 80 professional software development teams working at SAP SE in Germany. In addition, the book introduces a novel measurement tool for assessing the performance of software development teams. No previous study has researched this topic with a similar data set comprising insights from more than 450 professional software engineers.
Fundamental Problems in Computing is in honor of Professor Daniel J. Rosenkrantz, a distinguished researcher in Computer Science. Professor Rosenkrantz has made seminal contributions to many subareas of Computer Science, including formal languages and compilers, automata theory, algorithms, database systems, very large scale integrated systems, fault-tolerant computing and discrete dynamical systems. For many years, Professor Rosenkrantz served as the Editor-in-Chief of the Journal of the Association for Computing Machinery (JACM), a very prestigious archival journal in Computer Science. His contributions to Computer Science have earned him many awards, including Fellowship of the ACM and the ACM SIGMOD Contributions Award.
Software Engineering Techniques Applied to Agricultural Systems presents cutting-edge software engineering techniques for designing and implementing better agricultural software systems based on the object-oriented paradigm and the Unified Modeling Language (UML). The focus is on the presentation of rigorous step-by-step approaches for modeling flexible agricultural and environmental systems, starting with a conceptual diagram representing elements of the system and their relationships. Furthermore, diagrams such as sequence and collaboration diagrams are used to explain the dynamic and static aspects of the software system. This second edition includes: a new chapter on the Object Constraint Language (OCL); a new section dedicated to the Model-View-Controller (MVC) design pattern; new chapters presenting details of two MDA-based tools, the Virtual Enterprise and Olivia Nova; and a new chapter with exercises on conceptual modeling. It may be highly useful to undergraduate and graduate students, as the first edition has proven to be a useful supplementary textbook for courses in mathematical programming in agriculture, ecology, information technology, agricultural operations research methods, agronomy and soil science, and applied mathematical modeling. The book has broad appeal for anyone involved in software development projects in agriculture and to researchers in general who are interested in modeling complex systems. From the reviews of the first edition: "The book will be useful for those interested in gaining a quick understanding of current software development techniques and how they are applied in practice... this is a good introductory text on the application of OOAD, UML and design patterns to the creation of agricultural systems. It is technically sound and well written." -Computing Reviews, September 2006