This book presents the latest research findings and state-of-the-art solutions on optimization techniques and points to new research directions and developments. Both its theoretical and practical aspects will be of great benefit to experts and students in the optimization and operations research community. It collects high-quality papers selected from the International Conference on Optimization: Techniques and Applications (ICOTA 2013), an official conference series of POP (the Pacific Optimization Research Activity Group), which has over 500 active members. These state-of-the-art works, authored by recognized experts, contribute to the development of optimization and its applications.
Soft City Culture and Technology: The Betaville Project discusses the complete cycle of conception, development, and deployment of the Betaville platform. Betaville is a massively participatory online environment for distributed 3D design and development of proposals for changes to the built environment: an experimental integration of art, design, and software development for the public realm. Through a detailed account of Betaville from a Big Crazy Idea to a working "deep social medium," the author examines the current conditions of performance and accessibility of hardware, software, networks, and skills that can be brought together into a new form of open public design and deliberation space, for, spanning, and integrating the disparate spheres of art, architecture, social media, and engineering. Betaville is an ambitious enterprise of building compelling and constructive working relationships in situations where roles and disciplinary boundaries must be as agile as the development process of the software itself. Through a considered account and analysis of the interdependencies between Betaville's project design, development methods, and deployment, the reader can gain a deeper understanding of the potential socio-technical forms of New Soft Cities: blended virtual-physical worlds, whose "public works" must ultimately serve and succeed as massively collaborative works of art and infrastructure.
As the first book devoted to relational data mining, this coherently written multi-author monograph provides a thorough introduction and systematic overview of the area. The first part introduces the reader to the basics and principles of classical knowledge discovery in databases and inductive logic programming; subsequent chapters by leading experts assess the techniques in relational data mining in a principled and comprehensive way; finally, three chapters deal with advanced applications in various fields and refer the reader to resources for relational data mining. This book will become a valuable source of reference for R&D professionals active in relational data mining. Students as well as IT professionals and ambitious practitioners interested in learning about relational data mining will appreciate the book as a useful text and gentle introduction to this exciting new field.
The book presents a unified treatment of integer programming and network models, with topics ranging from exact and heuristic algorithms to network flows, traveling salesman tours, and traffic assignment problems. While the emphasis is on models and applications, the most important methods and algorithms are described in detail and illustrated by numerical examples. The formulations and the discussion of a large variety of models provide insight into their structure, allowing the user to better evaluate solutions to the problems.
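As a toy illustration of the heuristic side of such models (not an example taken from the book), a nearest-neighbour construction for a traveling salesman tour can be sketched in a few lines of Java; the distance matrix below is invented for the example.

```java
import java.util.Arrays;

// Nearest-neighbour construction heuristic for the symmetric TSP:
// start at city 0, repeatedly visit the closest unvisited city.
// Illustrative only; the distance matrix is invented for this sketch.
public class NearestNeighbourTsp {
    static int[] tour(double[][] dist) {
        int n = dist.length;
        boolean[] visited = new boolean[n];
        int[] tour = new int[n];
        tour[0] = 0;
        visited[0] = true;
        for (int step = 1; step < n; step++) {
            int prev = tour[step - 1], best = -1;
            for (int c = 0; c < n; c++) {
                if (!visited[c] && (best == -1 || dist[prev][c] < dist[prev][best])) {
                    best = c;
                }
            }
            tour[step] = best;
            visited[best] = true;
        }
        return tour;
    }

    public static void main(String[] args) {
        double[][] dist = {
            {0, 2, 9, 10},
            {2, 0, 6, 4},
            {9, 6, 0, 3},
            {10, 4, 3, 0}
        };
        System.out.println(Arrays.toString(tour(dist))); // prints [0, 1, 3, 2]
    }
}
```

A real treatment, as in the book's heuristics chapters, would pair such a construction with improvement steps (e.g., 2-opt exchanges).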
Cyberspace security is a critical subject of our times. On the one hand, the development of the Internet, mobile communications, distributed computing, and computer software and databases storing essential enterprise information has made business and personal communication far easier. On the other hand, it has created many opportunities for abuse, fraud, and expensive damage. This book is a selection of the best papers presented at the NATO Advanced Research Workshop dealing with the subject of cyberspace security and defense. The individual contributions are advanced and suitable for senior and graduate students, researchers, and technologists who wish to get a feeling for the state of the art in several sub-disciplines of cyberspace security. Several papers provide a broad-brush description of national security issues and brief summaries of technology states; these can be read and appreciated by technically enlightened managers and executives who want to understand security issues and approaches to technical solutions. An important question of our times is not "Should we do something to enhance the security of our digital assets?" but rather "How do we do it?"
Semantic Web services promise to automate tasks such as discovery, mediation, selection, composition, and invocation of services, enabling fully flexible automated e-business. Their usage, however, still requires a significant amount of human intervention due to the lack of support for a machine-processable description. In this book, Jos de Bruijn and his coauthors lay the foundations for understanding the requirements that shape the description of the various aspects related to Semantic Web services, such as the static background knowledge in the form of ontologies, the functional description of the service, and the behavioral description of the service. They introduce the Web Service Modeling Language (WSML), which provides means for describing the functionality and behavior of Web services, as well as the underlying business knowledge in the form of ontologies, with a conceptual grounding in the Web Service Modeling Ontology. Academic and industrial researchers as well as professionals will find a comprehensive overview of the concepts and challenges in the area of Semantic Web services, the Web Service Modeling Language and its relation to the Web Service Modeling Ontology, and an in-depth treatment of both enabling technologies and theoretical foundations.
The book provides a platform for dealing with the flaws and failings of the soft computing paradigm through its different manifestations. The chapters highlight the necessity of the hybrid soft computing methodology in general, with emphasis on several application perspectives in particular. Typical examples include (a) Study of Economic Load Dispatch by Various Hybrid Optimization Techniques, (b) An Application of Color Magnetic Resonance Brain Image Segmentation by Para Optimus LG Activation Function, (c) Hybrid Rough-PSO Approach in Remote Sensing Imagery Analysis, (d) A Study and Analysis of Hybrid Intelligent Techniques for Breast Cancer Detection using Breast Thermograms, and (e) Hybridization of 2D-3D Images for Human Face Recognition. The chapters' detailed findings showcase the hybrid soft computing paradigm in the field of intelligent computing.
This book bridges the widening gap between two crucial constituents of computational intelligence: the rapidly advancing technologies of machine learning in the digital information age, and the relatively slow-moving field of general-purpose search and optimization algorithms. With this in mind, the book offers a data-driven view of optimization through the framework of memetic computation (MC). The authors provide a summary of the complete timeline of research activities in MC, beginning with the initiation of memes as local search heuristics hybridized with evolutionary algorithms, to their modern interpretation as computationally encoded building blocks of problem-solving knowledge that can be learned from one task and adaptively transmitted to another. In light of recent research advances, the authors emphasize the further development of MC as a simultaneous problem learning and optimization paradigm with the potential to showcase human-like problem-solving prowess; that is, by equipping optimization engines to acquire increasing levels of intelligence over time through embedded memes learned independently or via interactions. In other words, the adaptive utilization of available knowledge memes makes it possible for optimization engines to tailor custom search behaviors on the fly, thereby paving the way to general-purpose problem-solving ability (or artificial general intelligence). In this regard, the book explores some of the latest concepts from the optimization literature, including the sequential transfer of knowledge across problems, multitasking, and large-scale (high-dimensional) search, systematically discussing associated algorithmic developments that align with the general theme of memetics. The presented ideas are intended to be accessible to a wide audience of scientific researchers, engineers, students, and optimization practitioners who are familiar with the commonly used terminologies of evolutionary computation. A full appreciation of the mathematical formalizations and algorithmic contributions requires an elementary background in probability, statistics, and the concepts of machine learning. A prior knowledge of surrogate-assisted/Bayesian optimization techniques is useful, but not essential.
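To make the notion of memes as local search heuristics hybridized with evolutionary algorithms concrete, here is a minimal, self-contained sketch (my illustration, not code from the book): an evolutionary loop whose offspring are refined by a hill-climbing "meme" on a toy one-dimensional objective. The population size, mutation step, and the meme itself are all invented for the example.

```java
import java.util.Random;

// Toy memetic algorithm: an evolutionary loop whose offspring are refined
// by a local-search "meme" (hill climbing). Minimizes f(x) = (x - 3)^2.
// All parameters are illustrative, not taken from the book.
public class ToyMemeticAlgorithm {
    static final Random RNG = new Random(42);

    static double f(double x) { return (x - 3) * (x - 3); }

    // The "meme": a few steps of stochastic hill climbing around x.
    static double localSearch(double x) {
        for (int i = 0; i < 20; i++) {
            double candidate = x + RNG.nextGaussian() * 0.1;
            if (f(candidate) < f(x)) x = candidate;
        }
        return x;
    }

    public static void main(String[] args) {
        double[] pop = new double[10];
        for (int i = 0; i < pop.length; i++) pop[i] = RNG.nextDouble() * 20 - 10;

        for (int gen = 0; gen < 50; gen++) {
            for (int i = 0; i < pop.length; i++) {
                double child = pop[i] + RNG.nextGaussian();   // mutation
                child = localSearch(child);                   // memetic refinement
                if (f(child) < f(pop[i])) pop[i] = child;     // greedy replacement
            }
        }
        double best = pop[0];
        for (double x : pop) if (f(x) < f(best)) best = x;
        System.out.printf("best x = %.4f, f = %.6f%n", best, f(best));
    }
}
```

The book's modern interpretation goes further, treating such memes as transferable knowledge blocks shared across tasks rather than a fixed hill climber.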
Over the past two decades, software engineering has come a long way from object-based to object-oriented to component-based design and development. Invasive software composition is a new technique that unifies and extends recent software engineering concepts such as generic programming, aspect-oriented development, architecture systems, and subject-oriented development. To improve reuse, this new method regards software components as grayboxes and integrates them during composition. Building on a minimal set of program transformations, composition operator libraries can be developed that parameterize, extend, connect, mediate, and aspect-weave components. The book is centered around the Java language and the freely available demonstrator library COMPOST. It provides a wealth of material for researchers, students, and professional software architects alike.
Recent growth in knowledge management concepts has played a vital role in the improvement of organizational performance, and knowledge management approaches have been influential in achieving the goal of efficient production in software development processes. Knowledge-Based Processes in Software Development focuses on the inherent issues, helping practitioners gain an understanding of software development processes. The best practices highlighted in this publication will be essential to software professionals working in the industry, as well as students and researchers in the domain of software engineering, in order to successfully employ knowledge management procedures.
The growing demand for high quality, safety, and security of software systems can only be met by rigorous application of formal methods during software design. Tools for formal methods in general, however, do not provide a sufficient level of automatic processing. This book methodically investigates the potential of first-order logic automated theorem provers for applications in software engineering. Illustrated by complete case studies on verification of communication and security protocols and logic-based component reuse, the book characterizes proof tasks to allow an assessment of the provers' capabilities. Necessary techniques and extensions, e.g., for handling inductive and modal proof tasks, or for controlling the prover, are covered in detail. The book demonstrates that state-of-the-art automated theorem provers are capable of automatically handling important tasks during the development of high-quality software, and it provides many helpful techniques for increasing the practical usability of automated theorem provers in successful applications.
This book provides a comprehensive and practically minded introduction to serious games for law enforcement agencies. Serious games offer wide-ranging benefits for law enforcement, with applications from professional training to command-level decision making to the preparation for crisis events. The book explains the conceptual foundations of virtual and augmented reality, gamification, and simulation. It further offers practical guidance on the process of serious games development, from user requirements elicitation to evaluation. The chapters are intended to provide principles as well as hands-on knowledge to plan, design, test, and apply serious games successfully in a law enforcement environment. A diverse set of case studies showcases the enormous variety that is possible in serious game designs and application areas and offers insights into concrete design decisions, design processes, benefits, and challenges. The book is meant for law enforcement professionals interested in commissioning their own serious games as well as game designers interested in collaborative pedagogy and serious games for the law enforcement and security sector.
The book offers a detailed guide to temporal ordering, exploring open problems in the field and providing solutions and extensive analysis. It addresses the challenge of automatically ordering events and times in text. Aided by TimeML, it describes and presents concepts relating to time in easy-to-compute terms. Working out the order in which events and times happen has proven difficult for computers, since the language used to discuss time can be vague and complex. Mapping out these concepts for a computational system, which does not have its own inherent idea of time, is, unsurprisingly, tough. Solving this problem enables powerful systems that can plan, reason about events, and construct stories of their own accord, as well as understand the complex narratives that humans express and comprehend so naturally. This book presents a theory and data-driven analysis of temporal ordering, leading to the identification of exactly what is difficult about the task. It then proposes and evaluates machine-learning solutions for the major difficulties. It is a valuable resource for those working in machine learning for natural language processing, as well as anyone studying time in language or involved in annotating the structure of time in documents.
Everything should be made as simple as possible, but not simpler. (Albert Einstein, Reader's Digest, 1977) The modern practice of creating technical systems and technological processes of high efficiency, besides employing new principles, new materials, new physical effects and other new solutions (which is very traditional and plays the key role in the selection of the general structure of the object to be designed), also includes choosing the best combination for the set of parameters (geometrical sizes, electrical and strength characteristics, etc.) that concretize this general structure, because the variation of these parameters (with the structure or linkage already defined) can essentially affect the objective performance indexes. The mathematical tools for choosing these best combinations are exactly what this book is about. With the advent of computers and computer-aided design, the testing of the selected variants is usually performed not on real examples (which may require very expensive construction of sample options and of special installations to test them), but by analysis of the corresponding mathematical models. The growing sophistication of the mathematical models for the objects to be designed, a natural consequence of the rising complexity of these objects, greatly complicates the objective performance analysis. Today, the main (and very often the only) available instrument for such analysis is computer-aided simulation of an object's behavior, based on numerical experiments with its mathematical model.
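The preface's central point, that the best parameter combination is sought through numerical experiments on a mathematical model rather than on physical prototypes, can be sketched as a simple random search; the two-parameter "model" function below is a made-up stand-in for a real simulation.

```java
import java.util.Random;

// Sketch of parameter selection by numerical experiment: evaluate a
// mathematical model (here an invented stand-in function) instead of
// building physical prototypes, keeping the best combination seen.
public class ParameterSearch {
    // Invented surrogate for an "objective performance index" of a design
    // with two parameters (say, a geometric size and an electrical one).
    static double model(double a, double b) {
        return Math.pow(a - 1.5, 2) + Math.pow(b + 0.5, 2) + Math.sin(a * b);
    }

    public static void main(String[] args) {
        Random rng = new Random(1);
        double bestA = 0, bestB = 0, bestVal = Double.POSITIVE_INFINITY;
        for (int trial = 0; trial < 100_000; trial++) {   // numerical experiments
            double a = rng.nextDouble() * 10 - 5;
            double b = rng.nextDouble() * 10 - 5;
            double val = model(a, b);
            if (val < bestVal) { bestVal = val; bestA = a; bestB = b; }
        }
        System.out.printf("best (a, b) = (%.3f, %.3f), index = %.4f%n",
                          bestA, bestB, bestVal);
    }
}
```

The book's subject is the far more refined mathematical machinery that replaces such blind sampling when each model evaluation is expensive.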
New Approaches to Circle Packing in a Square is devoted to the most recent results on the densest packing of equal circles in a square. In the last few decades many articles have considered this question, which has been an object of interest because it is a hard challenge both in discrete geometry and in mathematical programming. The authors have studied this geometrical optimization problem for a long time, and they have developed several new algorithms to solve it. The book completely covers the investigations on this topic.
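As a hedged illustration of the underlying optimization problem (this is not one of the authors' algorithms), the following sketch evaluates a candidate packing: given proposed circle centres in the unit square, it computes the largest common radius permitted by the walls and by pairwise separation. The example centres realize the known optimum for two circles.

```java
// Evaluates a candidate packing: for n proposed circle centres inside the
// unit square, the largest common radius is limited by each centre's
// distance to the walls and by half the minimum pairwise distance.
public class PackingRadius {
    static double maxRadius(double[][] c) {
        double r = Double.POSITIVE_INFINITY;
        for (double[] p : c) {
            r = Math.min(r, Math.min(Math.min(p[0], 1 - p[0]),
                                     Math.min(p[1], 1 - p[1])));
        }
        for (int i = 0; i < c.length; i++) {
            for (int j = i + 1; j < c.length; j++) {
                double d = Math.hypot(c[i][0] - c[j][0], c[i][1] - c[j][1]);
                r = Math.min(r, d / 2);
            }
        }
        return r;
    }

    public static void main(String[] args) {
        double r = (2 - Math.sqrt(2)) / 2;   // known optimum for two circles
        double[][] centres = { {r, r}, {1 - r, 1 - r} };
        System.out.printf("max radius = %.4f (expected %.4f)%n",
                          maxRadius(centres), r);
    }
}
```

Finding centres that maximize this radius for general n is the hard global optimization problem the book attacks.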
The implementation of object-oriented languages has been an active topic of research since the 1960s, when the first Simula compiler was written. The topic received renewed interest in the early 1980s with the growing popularity of object-oriented programming languages such as C++ and Smalltalk, and got another boost with the advent of Java. Polymorphic calls are at the heart of object-oriented languages, and even the first implementation of Simula-67 contained their classic implementation via virtual function tables. In fact, virtual function tables predate even Simula; for example, Ivan Sutherland's Sketchpad drawing editor employed very similar structures in 1960. Similarly, during the 1970s and 1980s the implementers of Smalltalk systems spent considerable effort on implementing polymorphic calls for this dynamically typed language, where virtual function tables could not be used. Given this long history of research into the implementation of polymorphic calls, and the relatively mature standing it achieved over time, why, one might ask, should there be a new book in this field? The answer is simple. Both software and hardware have changed considerably in recent years, to the point where many assumptions underlying the original work in this field are no longer true. In particular, virtual function tables are no longer sufficient to implement polymorphic calls even for statically typed languages; for example, Java's interface calls cannot be implemented this way. Furthermore, today's processors are deeply pipelined and can execute instructions out-of-order, making it difficult to predict the execution time of even simple code sequences.
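A minimal sketch of what a polymorphic call site looks like in Java may help here (my example, not the book's): one call site, several possible targets, resolved per receiver at run time. Since Shape is an interface, the call compiles to invokeinterface, exactly the kind of call the preface notes cannot be served by a plain virtual function table.

```java
// A polymorphic call site: the method executed depends on the receiver's
// runtime class. JVMs typically dispatch invokevirtual through a per-class
// method table, much like the Simula/C++ virtual function tables discussed
// above; invokeinterface needs extra machinery, as the preface explains.
public class PolymorphicCall {
    interface Shape { double area(); }

    record Circle(double r) implements Shape {
        public double area() { return Math.PI * r * r; }
    }

    record Square(double side) implements Shape {
        public double area() { return side * side; }
    }

    public static void main(String[] args) {
        Shape[] shapes = { new Circle(1.0), new Square(2.0) };
        for (Shape s : shapes) {
            // One call site, two targets: resolved per receiver at run time.
            System.out.println(s.area());
        }
    }
}
```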
Over the years, research in the life sciences has benefited greatly from the quantitative tools of mathematics and modeling. Many aspects of complex biological systems can be more deeply understood when mathematical techniques are incorporated into a scientific investigation. Modeling can be fruitfully applied in many types of biological research, from studies on the molecular, cellular, and organ level, to experiments in whole animals and in populations. Using the field of nutrition as an example, one can find many cases of recent advances in knowledge and understanding that were facilitated by the application of mathematical modeling to kinetic data. The availability of biologically important stable isotope-labeled compounds, developments in sensitive mass spectrometry and other analytical techniques, and advances in the powerful modeling software applied to data have each contributed to our ability to carry out ever more sophisticated kinetic studies that are relevant to nutrition and the health sciences at many levels of organization. Furthermore, we anticipate that modeling is on the brink of another major advance: the application of kinetic modeling to clinical practice. With advances in the ability of models to access large databases (e.g., a population of individual patient records) and the development of user interfaces that are "friendly" enough to be used by clinicians who are not modelers, we predict that health applications modeling will be an important new direction for modeling in the 21st century. This book contains manuscripts that are based on presentations at the seventh conference in a series focused on advancing nutrition and health research by fostering exchange among scientists from such disciplines as nutrition, biology, mathematics, statistics, kinetics, and computing. The themes of the six previous conferences included general nutrition modeling (Canolty and Cain, 1985; Hoover-Plow and Chandra, 1988), amino acids and carbohydrates (Abumrad, 1991), minerals (Siva Subramanian and Wastney, 1995), vitamins, proteins, and modeling theory (Coburn and Townsend, 1996), and physiological compartmental modeling (Clifford and Muller, 1998). The seventh conference in the series was held at The Pennsylvania State University from July 29 through August 1, 2000. The meeting began with an instructive and entertaining keynote address by Professor Britton Chance, Eldridge Reeves Johnson University Professor Emeritus of Biophysics, Physical Chemistry, and Radiologic Physics, University of Pennsylvania. Dr.
This 2nd edition has been completely revised and updated, with additional new chapters. It presents state-of-the-art research in this area and focuses on key topics such as: visualization of semantic and structural information and metadata in the context of the emerging Semantic Web; ontology-based information visualization and the use of graphically represented ontologies; semantic visualizations using Topic Maps and graph techniques; recommender systems for filtering and recommending on the Semantic Web; SVG and X3D as new XML-based languages for 2D and 3D visualizations; methods used to construct and visualize high-quality metadata and ontologies; and navigating and exploring XML documents using interactive multimedia interfaces. The design of visual interfaces for e-commerce and information retrieval remains a challenging area of practical web development.
Basics of Software Engineering Experimentation is a practical guide to experimentation in a field which has long been underpinned by suppositions, assumptions, speculations and beliefs. It demonstrates to software engineers how Experimental Design and Analysis can be used to validate their beliefs and ideas. The book does not assume its readers have an in-depth knowledge of mathematics, specifying the conceptual essence of the techniques to use in the design and analysis of experiments and keeping the mathematical calculations clear and simple. Basics of Software Engineering Experimentation is practically oriented and is specially written for software engineers, all the examples being based on real and fictitious software engineering experiments.
Researchers and developers of simulation models state that the Java programming language presents a unique and significant opportunity for important changes in the way we develop simulation models today. The most important characteristics of the Java language that are advantageous for simulation are its multi-threading capabilities, its facilities for executing programs across the Web, and its graphics facilities. With Java development environments it is feasible to develop compatible and reusable simulation components that will facilitate the construction of newer and more complex models. Another important trend that began very recently is web-based simulation, i.e., the execution of simulation models using Internet browser software. This book introduces the application of the Java programming language in discrete-event simulation. In addition, it introduces the fundamental concepts and practical simulation techniques for modeling different types of systems in order to study their general behavior and their performance. The approaches applied are the process-interaction approach to discrete-event simulation and object-oriented modeling. Java is used as the implementation language and UML as the modeling language. The first offers several advantages compared to C++, the most important being thread handling, graphical user interfaces (GUIs), and Web computing. The second, UML (Unified Modeling Language), is the standard notation used today for modeling systems as a collection of classes, class relationships, objects, and object behavior.
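The book follows the process-interaction approach with threads; shown here instead, as a minimal illustration of the underlying mechanics only (not the book's code), is the simpler event-scheduling flavour of discrete-event simulation: a clock advanced by a future-event list ordered on timestamps. The event names and times are invented.

```java
import java.util.PriorityQueue;

// Minimal discrete-event simulation kernel: a clock plus a future-event
// list ordered by timestamp. An event-scheduling sketch for illustration;
// the book itself develops the process-interaction style with threads.
public class MiniDes {
    record Event(double time, String name) {}

    public static void main(String[] args) {
        PriorityQueue<Event> fel =
            new PriorityQueue<>((a, b) -> Double.compare(a.time(), b.time()));
        fel.add(new Event(0.0, "arrival"));
        fel.add(new Event(1.5, "service-end"));
        fel.add(new Event(0.7, "arrival"));

        double clock = 0.0;
        while (!fel.isEmpty()) {
            Event e = fel.poll();   // next imminent event
            clock = e.time();       // advance the simulation clock
            System.out.printf("t=%.2f  %s%n", clock, e.name());
            // A real model would update state here and schedule new events.
        }
    }
}
```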
Actor-Network Theory (ANT) has existed as a topic of interest among social theorists for decades. Due to the prevalence of technology in modern society, discussions of the influence of actor-network theory on the changing scope of technology can assist in facilitating further research and scientific thought. Technological Advancements and the Impact of Actor-Network Theory focuses on cross-disciplinary research as well as examples of the use of actor-network theory in a variety of fields, including medicine, education, business, engineering, environmental science, computer science, and social science. This timely publication is well-suited for reference use by academicians, researchers, upper-level students, and theorists. The many academic areas covered include, but are not limited to: digital communication, e-health, human interaction, information and communication technologies, online education, online investing, public service innovation, and software development.
As the telecommunication industry introduces new sophisticated technologies, the nature of services and the volume of demands have changed. Indeed, a broad range of new services for users has appeared, combining voice, data, graphics, video, etc. This implies new planning issues. Fiber transmission systems that can carry large amounts of data on a few strands of wire were introduced. These systems have such a large bandwidth that the failure of even a single transmission link in the network can create a severe service loss for customers. Therefore, a very high level of service reliability is becoming imperative for both system users and service providers. Since equipment failures and accidents cannot be avoided entirely, networks have to be designed so as to "survive" failures. This is done by judiciously installing spare capacity over the network so that all traffic interrupted by a failure may be diverted around that failure by way of this spare or reserve capacity. This of course translates into huge investments for network operators. Designing such survivable networks while minimizing spare capacity costs is, not surprisingly, a major concern of operating companies, and it gives rise to very difficult combinatorial problems. In order to make telecommunication networks survivable, one can essentially use two different strategies: protection or restoration. The protection approach preassigns spare capacity to protect each element of the network independently, while the restoration approach spreads the redundant capacity over the whole network and uses it as required in order to restore the disrupted traffic.
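As a rough sketch of what "surviving" a failure means in the simplest connectivity sense (capacities and actual rerouting omitted, and not a method from the book), the following checks that a small invented ring topology stays connected after any single link failure.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.List;

// Single-link survivability check: a network "survives" if all nodes stay
// connected after any one link fails. Connectivity only; spare-capacity
// sizing, the book's real subject, is a much harder optimization problem.
public class SurvivabilityCheck {
    static boolean connectedWithout(List<int[]> edges, int n, int skip) {
        List<List<Integer>> adj = new ArrayList<>();
        for (int i = 0; i < n; i++) adj.add(new ArrayList<>());
        for (int i = 0; i < edges.size(); i++) {
            if (i == skip) continue;   // simulate the failed link
            int[] e = edges.get(i);
            adj.get(e[0]).add(e[1]);
            adj.get(e[1]).add(e[0]);
        }
        boolean[] seen = new boolean[n];
        ArrayDeque<Integer> queue = new ArrayDeque<>();
        queue.add(0);
        seen[0] = true;
        int count = 1;
        while (!queue.isEmpty()) {                     // breadth-first search
            for (int next : adj.get(queue.poll())) {
                if (!seen[next]) { seen[next] = true; count++; queue.add(next); }
            }
        }
        return count == n;
    }

    public static void main(String[] args) {
        List<int[]> ring = List.of(new int[]{0, 1}, new int[]{1, 2},
                                   new int[]{2, 3}, new int[]{3, 0});
        boolean survivable = true;
        for (int i = 0; i < ring.size(); i++) {
            survivable &= connectedWithout(ring, 4, i);
        }
        System.out.println("ring survives any single link failure: " + survivable);
    }
}
```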
Templates are used to generate all kinds of text, including computer code. Over the last decade, the use of templates has gained considerable popularity due to the rise of dynamic web applications. Templates are a tool for programmers, and implementations of template engines are most often based on practical experience rather than on a theoretical background. This book reveals the mathematical background of templates and presents findings for improving their practical use. First, a framework to determine the necessary computational power for the template metalanguage is presented. The template metalanguage does not need to be Turing-complete to be useful; a non-Turing-complete metalanguage enforces separation of concerns between the view and the model. Second, syntactical correctness of all languages of the templates and generated code is ensured. This includes the syntactical correctness of the template metalanguage and the output language. Third, case studies show that the achieved goals are applicable in practice. It is even shown that syntactical correctness helps to prevent cross-site scripting attacks in web applications. The target audience of this book is twofold. The first group consists of researchers interested in the mathematical background of templates. The second group consists of users of templates; this includes designers of template engines on the one side, and programmers and web designers using templates on the other.
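A hedged sketch of the book's two themes, a deliberately non-Turing-complete metalanguage and output-language correctness, might look as follows; the {{name}} variable syntax and the TinyTemplate class are invented for this example. The metalanguage offers only substitution (no loops or recursion), and every substituted value is HTML-escaped, which is the kind of output-language guarantee that blocks cross-site scripting.

```java
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Minimal non-Turing-complete template engine: the metalanguage supports
// only variable substitution, and every substituted value is HTML-escaped
// so the output stays syntactically well-formed HTML. Illustration only.
public class TinyTemplate {
    private static final Pattern VAR = Pattern.compile("\\{\\{(\\w+)}}");

    static String escape(String s) {
        return s.replace("&", "&amp;").replace("<", "&lt;")
                .replace(">", "&gt;").replace("\"", "&quot;");
    }

    static String render(String template, Map<String, String> model) {
        Matcher m = VAR.matcher(template);
        StringBuilder out = new StringBuilder();
        while (m.find()) {
            String value = model.getOrDefault(m.group(1), "");
            m.appendReplacement(out, Matcher.quoteReplacement(escape(value)));
        }
        m.appendTail(out);
        return out.toString();
    }

    public static void main(String[] args) {
        String page = "<p>Hello, {{user}}!</p>";
        // A malicious value is neutralized by escaping, not executed.
        System.out.println(render(page, Map.of("user", "<script>alert(1)</script>")));
        // -> <p>Hello, &lt;script&gt;alert(1)&lt;/script&gt;!</p>
    }
}
```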
You may like...
Dark Silicon and Future On-chip Systems… by Suyel Namasudra, Hamid Sarbazi-Azad (Hardcover): R3,940 (Discovery Miles 39 400)
Hardware Accelerator Systems for… by Shiho Kim, Ganesh Chandra Deka (Hardcover): R3,950 (Discovery Miles 39 500)
Object-Oriented Technology and Computing… by H.S.M. Zedan, A. Cau (Hardcover): R1,422 (Discovery Miles 14 220)
Creativity in Computing and DataFlow… by Suyel Namasudra, Veljko Milutinovic (Hardcover): R4,204 (Discovery Miles 42 040)