Scientists have long looked to nature to understand and model solutions for complex real-world problems. In particular, the study of self-organizing entities, such as social insect populations, presents new opportunities within the field of artificial intelligence. Emerging Research on Swarm Intelligence and Algorithm Optimization discusses current research analyzing how the collective behavior of decentralized systems in the natural world can be applied to intelligent system design. Discussing the application of swarm principles, optimization techniques, and key algorithms used in the field, this publication serves as an essential reference for academicians, upper-level students, IT developers, and IT theorists.
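As a flavor of the swarm principles such a book covers, here is a minimal particle swarm optimization sketch. It is my own toy illustration, not code from the book; the function names, parameter values, and the sphere test function are all assumptions:

```python
import random

random.seed(0)  # deterministic for this example

def pso(f, dim=2, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    """Minimize f over [-5, 5]^dim with a basic particle swarm."""
    pos = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]          # per-particle best position so far
    pbest_val = [f(p) for p in pos]
    g = min(pbest, key=f)[:]             # global best position so far
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # inertia + pull toward personal best + pull toward global best
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (g[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < f(g):
                    g = pos[i][:]
    return g

sphere = lambda x: sum(xi * xi for xi in x)  # global minimum 0 at the origin
best = pso(sphere)
```

Each particle balances momentum against attraction to its own best and the swarm's best position; that decentralized interplay is the "collective behavior" the blurb refers to.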
Convexity of sets in linear spaces, and concavity and convexity of functions, lie at the root of beautiful theoretical results that are at the same time extremely useful in the analysis and solution of optimization problems, including both single-objective and multi-objective problems. Not all of these results rely on convexity and concavity; some can guarantee that each local optimum is also a global optimum, giving these methods application to a wider class of problems. Hence, the first part of the book is concerned with several types of generalized convex sets and generalized concave functions. In addition to their applicability to nonconvex optimization, these generalized convex sets and generalized concave functions are used in the book's second part, where decision-making and optimization problems under uncertainty are investigated. Uncertainty in the problem data often cannot be avoided when dealing with practical problems, and errors occur in real-world data for a host of reasons. Over the last thirty years, the fuzzy set approach has proved useful in these situations, and it is this approach to optimization under uncertainty that is extensively used and studied in the second part of this book. Typically, the membership functions of fuzzy sets involved in such problems are neither concave nor convex; they are, however, often quasiconcave or concave in some generalized sense. This opens possibilities for applying results on generalized concavity to fuzzy optimization. Despite this obvious relation, work at the interface of these two areas has been limited to date. It is hoped that the combination of ideas and results from the field of generalized concavity on the one hand and fuzzy optimization on the other, outlined and discussed in Generalized Concavity in Fuzzy Optimization and Decision Analysis, will be of interest to both communities. Our aim is to broaden the classes of problems that the combination of these two areas can satisfactorily address and solve.
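The link between fuzzy sets and generalized concavity can be made concrete with a triangular membership function, which is quasiconcave but not concave. The following numerical sketch is my own illustration (the function `tri`, its parameters, and the grid check are assumptions, not material from the book):

```python
def tri(x, a=0.0, b=1.0, c=3.0):
    """Triangular fuzzy membership function: 0 outside [a, c], peak 1 at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def quasiconcave_on_grid(f, lo=-1.0, hi=4.0, n=50):
    """Check f(t*x + (1-t)*y) >= min(f(x), f(y)) on a finite grid of points."""
    xs = [lo + (hi - lo) * i / n for i in range(n + 1)]
    ts = [j / 10 for j in range(11)]
    return all(f(t * x + (1 - t) * y) >= min(f(x), f(y)) - 1e-12
               for x in xs for y in xs for t in ts)

ok = quasiconcave_on_grid(tri)         # the unimodal tri passes the check

# ...but tri is not concave: the chord from (-1, 0) to (1, 1) lies above the graph.
mid = tri(0.5 * (-1.0) + 0.5 * 1.0)    # tri(0.0) == 0.0
not_concave = mid < 0.5 * (tri(-1.0) + tri(1.0))   # 0.0 < 0.5
```

This is exactly the situation the blurb describes: the membership function fails concavity, yet results that only need quasiconcavity still apply.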
Evolutionary Algorithms, in particular Evolution Strategies, Genetic Algorithms, and Evolutionary Programming, have found wide acceptance as robust optimization algorithms over the last ten years. Compared with their broad adoption and resulting practical success in different scientific fields, the theory has not progressed as much. This monograph provides the framework and the first steps toward the theoretical analysis of Evolution Strategies (ES). The main emphasis is on understanding the functioning of these probabilistic optimization algorithms in real-valued search spaces by investigating the dynamical properties of some well-established ES algorithms. The book introduces the basic concepts of this analysis, such as progress rate, quality gain, and self-adaptation response, and describes how to calculate these quantities. Based on the analysis, functioning principles are derived, aiming at a qualitative understanding of why and how ES algorithms work.
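For readers unfamiliar with the algorithms under analysis, a minimal (1+1)-ES with the classic 1/5th success rule for step-size adaptation can be sketched as follows. This is my own illustration under assumed parameter choices; the book's analysis covers far more general ES variants:

```python
import random

random.seed(1)  # deterministic for this example

def one_plus_one_es(f, x, sigma=1.0, iters=200):
    """(1+1)-ES: one parent, one Gaussian-mutated offspring per generation,
    with the 1/5th success rule adapting the mutation strength sigma."""
    fx, successes = f(x), 0
    for t in range(1, iters + 1):
        y = [xi + sigma * random.gauss(0, 1) for xi in x]  # mutate
        fy = f(y)
        if fy < fx:               # offspring replaces parent on improvement
            x, fx = y, fy
            successes += 1
        if t % 20 == 0:           # adapt sigma from the recent success rate
            rate = successes / 20
            sigma *= 1.5 if rate > 0.2 else 0.6
            successes = 0
    return x, fx

sphere = lambda v: sum(c * c for c in v)   # classic test function
x_best, f_best = one_plus_one_es(sphere, [5.0, -3.0])
```

Quantities like the progress rate mentioned in the blurb measure how fast such a loop approaches the optimum per generation, as a function of sigma and the dimension.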
Arc Routing: Theory, Solutions and Applications is about arc traversal and the wide variety of arc routing problems, which have their foundations in the graph theory work of Leonhard Euler. Arc routing methods and computations have become a fundamental optimization topic in operations research, with numerous applications in transportation, telecommunications, manufacturing, the Internet, and many other areas of modern life. The book draws from a variety of sources, including the traveling salesman problem (TSP) and graph theory, which are used and studied by operations researchers, engineers, computer scientists, and mathematicians. In the last ten years or so, arc routing problems have received extensive coverage in the research literature, especially from a graph theory perspective; however, the field has not had the benefit of a uniform, systematic treatment. With this book, there is now a single volume that focuses on a state-of-the-art exposition of arc routing problems, explores their graph-theoretical foundations, and presents a number of solution methodologies in a variety of application settings. Moshe Dror has succeeded in working with an elite group of arc routing scholars to develop the highest-quality treatment of the current state of the art in arc routing.
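The Eulerian roots of arc routing can be illustrated with Hierholzer's classical algorithm, which finds a closed walk traversing every edge exactly once whenever the graph is connected and every vertex has even degree. This is my own sketch of the standard algorithm, not code from the book:

```python
from collections import defaultdict

def eulerian_circuit(edges):
    """Hierholzer's algorithm: return a circuit visiting every edge once.
    Assumes a connected multigraph in which every vertex has even degree."""
    adj = defaultdict(list)
    for i, (u, v) in enumerate(edges):   # undirected: record edge id both ways
        adj[u].append((v, i))
        adj[v].append((u, i))
    used = [False] * len(edges)
    stack, circuit = [edges[0][0]], []
    while stack:
        u = stack[-1]
        while adj[u] and used[adj[u][-1][1]]:
            adj[u].pop()                 # lazily discard already-used edges
        if adj[u]:
            v, i = adj[u].pop()
            used[i] = True
            stack.append(v)              # extend the current trail
        else:
            circuit.append(stack.pop())  # vertex exhausted: emit it
    return circuit[::-1]

# A 4-cycle: every vertex has degree 2, so an Eulerian circuit exists.
tour = eulerian_circuit([(0, 1), (1, 2), (2, 3), (3, 0)])
```

Practical arc routing problems such as the Chinese postman problem build on this idea, augmenting graphs with duplicated edges until an Eulerian circuit exists.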
There is a myriad of methodologies for transforming real-world scenarios into information system models. This transformation process is critical not only for developing a successful information system, but also for helping users optimize their work and make their organizations more efficient. Tabular Application Development for Information Systems describes the workings and utility of Tabular Application Development (TAD) as an object-oriented methodology that uses tables to model the real world. Essentially, TAD entails collecting information about a real-world situation into tables, identifying and implementing changes by analyzing the tabularized content, and then using the data gathered in the changed tables to develop the organization's information system. Given that tables can be easily surveyed and modified, analysts can locate almost immediately any information about business processes, work processes, activities, tasks, or events. In addition, the user can confidently proceed without misunderstandings and can quickly rectify any mistake or problem. Topics and features:
- describes TAD's advantages over the UML methodology in terms of simplicity, utility for both small and large information systems, and independence from the analyst
- presents business process reengineering and information systems development from a new perspective
- provides thorough descriptions of three case-study applications of TAD
- briefly introduces all key object-oriented concepts
- segments the TAD methodology into six clearly defined phases

This book offers an essential exposition on the TAD method for information systems development and design. Practitioners and professionals in information science, computer science, and business process reengineering will find the work a highly useful resource when using TAD for rapid, efficient software development.
The series Studies in Computational Intelligence (SCI) publishes new developments and advances in the various areas of computational intelligence quickly and with high quality. The intent is to cover the theory, applications, and design methods of computational intelligence, as embedded in the fields of engineering, computer science, physics and the life sciences, as well as the methodologies behind them. The series contains monographs, lecture notes and edited volumes in computational intelligence spanning the areas of neural networks, connectionist systems, genetic algorithms, evolutionary computation, artificial intelligence, cellular automata, self-organizing systems, soft computing, fuzzy systems and hybrid intelligent systems. Critical to both contributors and readers are the short publication time and worldwide distribution, which permit rapid and broad dissemination of research results.

The purpose of the 10th International Conference on Software Engineering Research, Management and Applications (SERA 2012), held on May 3 - June 1, 2012 in Shanghai, China, was to bring together scientists, engineers, computer users, and students to share their experiences and exchange new ideas and research results about all aspects (theory, applications and tools) of software engineering research, management and applications, and to discuss the practical challenges encountered along the way and the solutions adopted to solve them. The conference organizers selected 12 outstanding papers from those accepted for presentation at the conference for publication in this volume. The papers were chosen based on review scores submitted by members of the program committee and further rigorous rounds of review.
This book describes a novel methodology for studying algorithmic skills, intended as cognitive activities related to rule-based symbolic transformation, and argues that some human computational abilities may be interpreted and analyzed as genuine examples of extended cognition. It shows that the performance of these abilities relies not only on innate neurocognitive systems or language-related skills, but also on external tools and general agent-environment interactions. Further, it asserts that a low-level analysis, based on a set of core neurocognitive systems linking numbers and language, is not sufficient to explain some specific forms of high-level numerical skills, like those involved in algorithm execution. To this end, it reports on the design of a cognitive architecture for modeling all the relevant features involved in the execution of algorithmic strategies, including external tools, such as paper and pencils. The first part of the book discusses the philosophical premises for endorsing and justifying a position in philosophy of mind that links a modified form of computationalism with some recent theoretical and scientific developments, like those introduced by the so-called dynamical approach to cognition. The second part is dedicated to the description of a Turing-machine-inspired cognitive architecture, expressly designed to formalize all kinds of algorithmic strategies.
Java programmers: prepare for Microsoft's .NET initiative while enhancing your repertoire and marketability with C# for Java Programmers.
Contains revised, edited, cross-referenced, and thematically organized selected DumpAnalysis.org blog posts about memory dump and software trace analysis, software troubleshooting and debugging, written in November 2010 - October 2011. It is intended for software engineers developing and maintaining products on Windows platforms, quality assurance engineers testing software on Windows platforms, technical support and escalation engineers dealing with complex software issues, and security researchers, malware analysts and reverse engineers. The sixth volume features:
- 56 new crash dump analysis patterns, including 14 new .NET memory dump analysis patterns
- 4 new pattern interaction case studies
- 11 new trace analysis patterns
- a new Debugware pattern
- an introduction to UI problem analysis patterns
- an introduction to intelligence analysis patterns
- an introduction to a unified debugging pattern language
- an introduction to generative debugging, the metadefect template library and the DNA of software behavior
- the new school of debugging
- a .NET memory dump analysis checklist
- a software trace analysis checklist
- an introduction to close and deconstructive readings of a software trace
- the memory dump analysis compass
- Computical and Stack Trace Art
- the abductive reasoning of Philip Marlowe
- orbifold memory space and cloud computing
- the memory worldview
- an interpretation of cyberspace
- the relationship of memory dumps to religion

The volume is fully cross-referenced with Volumes 1 through 5.
This handbook offers comprehensive coverage of recent advancements in Big Data technologies and related paradigms. Chapters are authored by international leading experts in the field, and have been reviewed and revised for maximum reader value. The volume consists of twenty-five chapters organized into four main parts. Part One covers the fundamental concepts of Big Data technologies, including data curation mechanisms, data models, storage models, programming models and programming platforms. It also dives into the details of implementing Big SQL query engines and big stream processing systems. Part Two focuses on the semantic aspects of Big Data management, including data integration and exploratory ad hoc analysis in addition to structured querying and pattern matching techniques. Part Three presents a comprehensive overview of large-scale graph processing. It covers the most recent research in large-scale graph processing platforms, introducing several scalable graph querying and mining mechanisms in domains such as social networks. Part Four details novel applications that have been made possible by the rapid emergence of Big Data technologies, such as the Internet of Things (IoT), cognitive computing and SCADA systems. All parts of the book discuss open research problems, including potential opportunities, that have arisen from the rapid progress of Big Data technologies and the associated increasing requirements of application domains. Designed for researchers, IT professionals and graduate students, this book is a timely contribution to the growing Big Data field. Big Data has been recognized as one of the leading emerging technologies that will have a major impact on the various fields of science and aspects of human society over the coming decades, and the content of this book will be an essential tool in helping readers understand the development and future of the field.
This book presents works detailing the application of processing and visualization techniques for analyzing the Earth's subsurface. Its topic is interactive data processing and interactive 3D visualization techniques applied to subsurface data. Interactive processing of data together with interactive visualization is a powerful combination which has become possible in recent years due to advances in hardware and algorithms. The combination enables the user to perform interactive exploration and filtering of datasets while simultaneously visualizing the results, so that insights can be gained immediately. This makes it possible to quickly form hypotheses and draw conclusions. Case studies from the geosciences are not presented in the scientific visualization and computer graphics community as often as, e.g., studies on medical, biological or chemical data. This book will give researchers in the field of visualization and computer graphics valuable insight into the open visualization challenges in the geosciences, and into how certain problems are currently solved using domain-specific processing and visualization techniques. Conversely, readers from the geosciences will gain valuable insight into relevant visualization and interactive processing techniques. Subsurface data has interesting characteristics, such as its solid nature, large range of scales and high degree of uncertainty, which make it challenging to visualize with standard methods. It is also noteworthy that parallel lines of research have developed in the geosciences and in computer graphics, with different terminology when it comes to representing geometry, describing terrains, interpolating data and (example-based) synthesis of data. The domains covered in this book are geology, digital terrains, seismic data, reservoir visualization and CO2 storage.
The technologies covered are 3D visualization, visualization of large datasets, 3D modelling, machine learning, virtual reality, seismic interpretation and multidisciplinary collaboration. People within any of these domains and technologies are potential readers of the book.
This book compiles contributions from renowned researchers covering all aspects of conceptual modeling, on the occasion of Arne Solvberg's 67th birthday. Friends of this pioneer in information systems modeling contribute their latest research results from such fields as data modeling, goal-oriented modeling, agent-oriented modeling, and process-oriented modeling. The book reflects the most important recent developments and application areas of conceptual modeling, and highlights trends in conceptual modeling for the next decade.
The book focuses on analyses that extract the flow of data, which imperative programming hides through its use and reuse of memory in computer systems and compilers. It details program transformations that preserve this data flow and introduces a family of analyses, called reaching definition analyses, for this task. In addition, it shows that the correctness of program transformations is guaranteed by the conservation of data flow.
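A reaching definition analysis of the kind named above can be sketched as a standard iterative dataflow fixed point. This is my own minimal illustration; the CFG encoding and all names are assumptions, not the book's formulation:

```python
def reaching_definitions(blocks, edges):
    """Iterative reaching-definitions analysis over a control-flow graph.
    blocks: {name: [(var, def_id), ...]}  definitions listed per basic block
    edges:  [(pred, succ), ...]           control-flow edges
    Returns OUT[b]: the definition ids reaching the end of block b."""
    all_defs = {(v, d) for b in blocks.values() for (v, d) in b}
    gen, kill = {}, {}
    for name, defs in blocks.items():
        gen[name], killed_vars = set(), set()
        for v, d in reversed(defs):          # later defs shadow earlier ones
            if v not in killed_vars:
                gen[name].add(d)
            killed_vars.add(v)
        kill[name] = {d for (v, d) in all_defs if v in killed_vars} - gen[name]
    preds = {name: [] for name in blocks}
    for p, s in edges:
        preds[s].append(p)
    out = {name: set() for name in blocks}
    changed = True
    while changed:                           # iterate to a fixed point
        changed = False
        for name in blocks:
            in_b = set().union(*(out[p] for p in preds[name])) if preds[name] else set()
            new_out = gen[name] | (in_b - kill[name])
            if new_out != out[name]:
                out[name], changed = new_out, True
    return out

# Diamond CFG: x is defined in entry, then redefined on both branches.
cfg = {"entry": [("x", "d1")], "then": [("x", "d2")],
       "else": [("x", "d3")], "join": []}
out = reaching_definitions(cfg, [("entry", "then"), ("entry", "else"),
                                 ("then", "join"), ("else", "join")])
```

At the join point only the branch definitions d2 and d3 survive; d1 is killed on both paths, which is precisely the data-flow fact a transformation must preserve.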
JR is an extension of the Java programming language with additional concurrency mechanisms based on those in the SR (Synchronizing Resources) programming language. The JR implementation executes on UNIX-based systems (Linux, Mac OS X, and Solaris) and Windows-based systems. It is available free from the JR webpage. This book describes the JR programming language and illustrates how it can be used to write concurrent programs for a variety of applications. This text presents numerous small and large example programs. The source code for all programming examples and the given parts of all programming exercises are available on the JR webpage. Dr. Ronald A. Olsson and Dr. Aaron W. Keen, the authors of this text, are the designers and implementors of JR.
Current multimedia and telecom applications require complex, heterogeneous multiprocessor system-on-chip (MPSoC) architectures with specific communication infrastructure in order to achieve the required performance. Heterogeneous MPSoCs include different types of processing units (DSP, microcontroller, ASIP) and different communication schemes (fast links, non-standard memory organization and access). Programming an MPSoC requires the generation of efficient software running on the MPSoC from a high-level environment, exploiting the characteristics of the architecture. This task is known to be tedious and error-prone, because it requires a combination of high-level programming environments with low-level software design. This book gives an overview of concepts related to embedded software design for MPSoC. It details a full software design approach, allowing systematic, high-level mapping of software applications onto heterogeneous MPSoCs. This approach is based on gradual refinement of hardware/software interfaces and simulation models, allowing the software to be validated at different abstraction levels. The book combines Simulink for high-level programming and SystemC for low-level software development, and the approach is illustrated with multiple examples of application software and MPSoC architectures that can be used to gain a deep understanding of software design for MPSoC.
This book contains extended and revised versions of the best papers that were presented during the 16th edition of the IFIP/IEEE WG10.5 International Conference on Very Large Scale Integration, a global System-on-a-Chip Design & CAD conference. The 16th conference was held at the Grand Hotel of Rhodes Island, Greece (October 13-15, 2008). Previous conferences have taken place in Edinburgh, Trondheim, Vancouver, Munich, Grenoble, Tokyo, Gramado, Lisbon, Montpellier, Darmstadt, Perth, Nice and Atlanta. VLSI-SoC 2008 was the 16th in a series of international conferences sponsored by IFIP TC 10 Working Group 10.5 and IEEE CEDA that explores the state of the art and the new developments in the field of VLSI systems and their designs. The purpose of the conference was to provide a forum to exchange ideas and to present industrial and research results in the fields of VLSI/ULSI systems, embedded systems and microelectronic design and test.
The book presents contributions on statistical models and methods applied to both data science and the SDGs in one place. For measuring and monitoring progress on the SDGs, data-driven measurements need to be distributed to stakeholders; in this situation, the techniques of data science, especially big data analytics, play a more important role than traditional data gathering and manipulation techniques. This book fills this space through its twenty contributions, selected from those presented during the 7th International Conference on Data Science and Sustainable Development Goals organized by the Department of Statistics, University of Rajshahi, Bangladesh. They cover topics mainly on the SDGs, bioinformatics, public health, medical informatics, environmental statistics, data science and machine learning. The contents of the volume will be useful to policymakers, researchers, government entities, civil society, and nonprofit organizations for monitoring and accelerating the progress of the SDGs.
This book emphasizes methods, techniques and tools that can be used by typical software engineers in everyday projects. As the very popular UML language contains an assertion language (OCL), this language is presented and discussed in relation to other currently available assertion techniques. Currently these techniques are mostly used in the late design and implementation phases; here their role in analysis is emphasized. Assertion and scenario techniques are then combined into a single methodological framework. Finally, a prototyping-oriented model based on this framework is developed, which helps ensure that the software fulfills user requirements.
This volume, the sixth in the DRUMS Handbook series, is part of the aftermath of the successful ESPRIT project DRUMS (Defeasible Reasoning and Uncertainty Management Systems), which took place in two stages from 1989 to 1996. In the second stage (1993-1996) a work package was introduced devoted to the topics Reasoning and Dynamics, covering both 'Dynamics of Reasoning', where reasoning is viewed as a process, and 'Reasoning about Dynamics', which must be understood as pertaining to how both designers of and agents within dynamic systems may reason about these systems. The present volume presents work done in this context. This work has an emphasis on modelling and formal techniques in the investigation of the topic "Reasoning and Dynamics", but it was not mere theory that occupied us; rather, research was aimed at bridging the gap between theory and practice. Therefore real-life applications of the modelling techniques were also considered, and we hope this shows in this volume, which is focused on the dynamics of reasoning processes. In order to give the book a broader perspective, we have invited a number of well-known researchers outside the project but working on similar topics to contribute as well. We have very pleasant recollections of the project, with its lively workshops and other meetings, and with the many sites and researchers involved, both within and outside our own work package.
This book gives an overview of constraint satisfaction problems (CSPs), adapts related search algorithms and consistency algorithms for application to multi-agent systems, and consolidates recent research devoted to cooperation in such systems. The techniques introduced are applied to various problems in multi-agent systems. Among the new approaches is a hybrid algorithm for weak-commitment search combining backtracking and iterative improvement; also, an extension of the basic CSP formalization, called partial CSP, is introduced in order to handle over-constrained CSPs. The book is written for advanced students and professionals interested in multi-agent systems or, more generally, in distributed artificial intelligence and constraint satisfaction. Researchers active in the area will appreciate this book as a valuable source of reference.
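The backtracking half of the search techniques mentioned above can be sketched with a plain chronological backtracking solver for a binary CSP. This is my own illustration of standard CSP search (applied to a toy map-coloring instance), not the book's weak-commitment or partial-CSP algorithms:

```python
def consistent(var, value, assignment, constraints):
    """Check value for var against all constraints touching assigned variables."""
    for u, v, pred in constraints:
        if u == var and v in assignment and not pred(value, assignment[v]):
            return False
        if v == var and u in assignment and not pred(assignment[u], value):
            return False
    return True

def backtrack(variables, domains, constraints, assignment=None):
    """Chronological backtracking: assign variables in order, undo on failure."""
    if assignment is None:
        assignment = {}
    if len(assignment) == len(variables):
        return assignment                       # every variable assigned
    var = next(v for v in variables if v not in assignment)
    for value in domains[var]:
        if consistent(var, value, assignment, constraints):
            assignment[var] = value
            result = backtrack(variables, domains, constraints, assignment)
            if result is not None:
                return result
            del assignment[var]                 # undo and try the next value
    return None                                 # dead end: trigger backtracking

# 3-coloring a small map: adjacent regions must receive different colors.
regions = ["WA", "NT", "SA", "Q"]
colors = {r: ["red", "green", "blue"] for r in regions}
ne = lambda a, b: a != b
adj = [("WA", "NT", ne), ("WA", "SA", ne), ("NT", "SA", ne),
       ("NT", "Q", ne), ("SA", "Q", ne)]
solution = backtrack(regions, colors, adj)
```

In the distributed setting the book studies, variables and constraints are split among agents, and the interesting questions concern how agents coordinate exactly this kind of search.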
You may like...
- Principles of Big Graph: In-depth… by Ripon Patgiri, Ganesh Chandra Deka, … (Hardcover)
- C++ How to Program: Horizon Edition by Harvey Deitel, Paul Deitel (Paperback, R1,917)
- Dark Silicon and Future On-chip Systems… by Suyel Namasudra, Hamid Sarbazi-Azad (Hardcover, R4,186)
- Essential Java for Scientists and… by Brian Hahn, Katherine Malan (Paperback, R1,341)
- News Search, Blogs and Feeds - A Toolkit by Lars Vage, Lars Iselid (Paperback, R1,412)