It was in the middle of the 1980s, when the seminal paper by Karmarkar opened a new epoch in nonlinear optimization. The importance of this paper, containing a new polynomial-time algorithm for linear optimization problems, was not only in its complexity bound. At that time, the most surprising feature of this algorithm was that the theoretical prediction of its high efficiency was supported by excellent computational results. This unusual fact dramatically changed the style and directions of the research in nonlinear optimization. Thereafter it became more and more common that the new methods were provided with a complexity analysis, which was considered a better justification of their efficiency than computational experiments. In a new rapidly developing field, which got the name "polynomial-time interior-point methods", such a justification was obligatory. After almost fifteen years of intensive research, the main results of this development started to appear in monographs [12, 14, 16, 17, 18, 19]. Approximately at that time the author was asked to prepare a new course on nonlinear optimization for graduate students. The idea was to create a course which would reflect the new developments in the field. Actually, this was a major challenge. At the time only the theory of interior-point methods for linear optimization was polished enough to be explained to students. The general theory of self-concordant functions had appeared in print only once in the form of research monograph [12].
Ontology Learning for the Semantic Web explores techniques for applying knowledge discovery to different web data sources (such as HTML documents, dictionaries, etc.) in order to support the task of engineering and maintaining ontologies. The approach to ontology learning proposed in Ontology Learning for the Semantic Web includes a number of complementary disciplines that feed in different types of unstructured and semi-structured data. This data is necessary in order to support a semi-automatic ontology engineering process.
In recent years, digital technologies have become more ubiquitous and integrated into everyday life. While once reserved mostly for personal uses, video games and similar innovations are now implemented across a variety of fields. Transforming Gaming and Computer Simulation Technologies across Industries is a pivotal reference source for the latest research on emerging simulation technologies and gaming innovations to enhance industry performance and dependency. Featuring extensive coverage across a range of relevant perspectives and topics, such as user research, player identification, and multi-user virtual environments, this book is ideally designed for engineers, professionals, practitioners, upper-level students, and academics seeking current research on gaming and computer simulation technologies across different industries. Topics covered: digital vs. non-digital platforms, ludic simulations, mathematical simulations, medical gaming, multi-user virtual environments, player experiences, player identification, and user research.
Created by the Joint Photographic Experts Group (JPEG), the JPEG standard is the first color still image data compression international standard. This new guide to JPEG and its technologies offers detailed information on the new JPEG signaling conventions and the structure of JPEG compressed data.
This book describes recent innovations in 3D media and technologies, with coverage of 3D media capturing, processing, encoding, and adaptation, networking aspects for 3D Media, and quality of user experience (QoE). The contributions are based on the results of the FP7 European Project ROMEO, which focuses on new methods for the compression and delivery of 3D multi-view video and spatial audio, as well as the optimization of networking and compression jointly across the future Internet. The delivery of 3D media to individual users remains a highly challenging problem due to the large amount of data involved, diverse network characteristics and user terminal requirements, as well as the user's context such as their preferences and location. As the number of visual views increases, current systems will struggle to meet the demanding requirements in terms of delivery of consistent video quality to fixed and mobile users. ROMEO will present hybrid networking solutions that combine the DVB-T2 and DVB-NGH broadcast access network technologies together with a QoE aware Peer-to-Peer (P2P) distribution system that operates over wired and wireless links. Live streaming 3D media needs to be received by collaborating users at the same time or with imperceptible delay to enable them to watch together while exchanging comments as if they were all in the same location. This book is the last of a series of three annual volumes devoted to the latest results of the FP7 European Project ROMEO. The present volume provides state-of-the-art information on 3D multi-view video, spatial audio networking protocols for 3D media, P2P 3D media streaming, and 3D Media delivery across heterogeneous wireless networks among other topics. Graduate students and professionals in electrical engineering and computer science with an interest in 3D Future Internet Media will find this volume to be essential reading.
Looking to become more efficient using Unity? How to Cheat in Unity 5 takes a no-nonsense approach to help you achieve fast and effective results with Unity 5. Geared towards the intermediate user, HTC in Unity 5 provides content beyond what an introductory book offers, and allows you to work more quickly and powerfully in Unity. Packed with easy-to-follow methods to get the most from Unity, this book explores time-saving features for interface customization and scene management, along with productivity-enhancing ways to work with rendering and optimization. In addition, this book features a companion website at www.alanthorn.net, where you can download the book's companion files and also watch bonus tutorial video content. Learn bite-sized tips and tricks for effective Unity workflows. Become a more powerful Unity user through interface customization. Enhance your productivity with rendering tricks, better scene organization and more. Better understand Unity asset and import workflows. Learn techniques to save you time and money during development.
Computer Science Project Work: Principles and Pragmatics is essential reading for lecturers and course designers who want to improve their handling of project work on specific courses, and deans and department heads who are interested in strategic issues and comparative practices. It explores working practices within the curriculum and provides a resource of guidelines and practical advice, including tried and tested "good ideas" and case studies of innovative practices. It looks at different approaches to key aspects of project work, such as allocation, supervision, assessment, and integration with the curriculum, and allows readers to "mix and match" approaches to create a system which suits their individual needs. "Computer Science Project Work: Principles and Pragmatics is passionate, well-researched, and well-written... I wish I had this book from the beginning of my teaching career, and you will too." Susan Fowler, Professor of Technical Communication and Usability, Polytechnic University, Brooklyn, New York. "Sally Fincher and her colleagues have assembled a cornucopia of practical advice and case studies, solidly referenced. This is the source book on using projects in computer science education." David Baume, Director of Teaching Development, Centre for Higher Education Practice, The Open University, UK. "...very well-researched, it covers all the aspects, from the allocation of projects and teams, to managing the project process, assessing projects, and so on... It will prove invaluable to all lecturers involved in teaching computing..." Professor Mike Holcombe, University of Sheffield, UK.
This volume contains the invited and regular papers presented at TCS 2010, the 6th IFIP International Conference on Theoretical Computer Science, organised by IFIP Technical Committee 1 (Foundations of Computer Science) and IFIP WG 2.2 (Formal Descriptions of Programming Concepts) in association with SIGACT and EATCS. TCS 2010 was part of the World Computer Congress held in Brisbane, Australia, during September 20-23, 2010. TCS 2010 is composed of two main areas: (A) Algorithms, Complexity and Models of Computation, and (B) Logic, Semantics, Specification and Verification. The selection process led to the acceptance of 23 papers out of 39 submissions, each of which was reviewed by three Programme Committee members. The Programme Committee discussion was held electronically using Easychair. The invited speakers at TCS 2010 are: Rob van Glabbeek (NICTA, Australia), Bart Jacobs (Nijmegen, The Netherlands), Catuscia Palamidessi (INRIA and LIX, Paris, France), and Sabina Rossi (Venice, Italy). James Harland (Australia) and Barry Jay (Australia) acted as TCS 2010 Chairs. We take this occasion to thank the members of the Programme Committees and the external reviewers for their professional and timely work; the conference Chairs for their support; the invited speakers for their scholarly contribution; and of course the authors for submitting their work to TCS 2010.
Graph theory is a specific concept that has numerous applications throughout many industries. Despite the advancement of this technique, graph theory can still yield ambiguous and imprecise results. In order to cut down on these indeterminate factors, neutrosophic logic has emerged as an applicable solution that is gaining significant attention in solving many real-life decision-making problems that involve uncertainty, impreciseness, vagueness, incompleteness, inconsistency, and indeterminacy. However, empirical research on this specific graph set is lacking. Neutrosophic Graph Theory and Algorithms is a collection of innovative research on the methods and applications of neutrosophic sets and logic within various fields including systems analysis, economics, and transportation. While highlighting topics including linear programming, decision-making methods, and homomorphism, this book is ideally designed for programmers, researchers, data scientists, mathematicians, designers, educators, academicians, and students seeking current research on the various methods and applications of graph theory.
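To make the neutrosophic idea above concrete: a single-valued neutrosophic graph attaches a truth, indeterminacy, and falsity degree (each in [0, 1]) to vertices and edges. The sketch below shows one plausible Python representation; the vertex names, membership values, and scoring weights are all invented for illustration and are not drawn from the book.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class NeutrosophicValue:
    """Truth, indeterminacy and falsity degrees, each assumed to lie in [0, 1]."""
    truth: float
    indeterminacy: float
    falsity: float

# Vertices and edges of a toy single-valued neutrosophic graph; the values are
# invented and would normally come from expert judgement or data.
vertices = {
    "depot":    NeutrosophicValue(0.9, 0.1, 0.1),
    "customer": NeutrosophicValue(0.7, 0.3, 0.2),
}
edges = {
    ("depot", "customer"): NeutrosophicValue(0.6, 0.4, 0.3),
}

def score(value, weights=(1.0, -0.5, -1.0)):
    """One simple (illustrative, not canonical) way to rank edges:
    reward truth, penalize indeterminacy and falsity."""
    return (weights[0] * value.truth
            + weights[1] * value.indeterminacy
            + weights[2] * value.falsity)

if __name__ == "__main__":
    for edge, value in edges.items():
        print(edge, round(score(value), 2))  # ('depot', 'customer') 0.1
```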
This book constitutes the refereed proceedings of the 27th IFIP TC 11 International Information Security Conference, SEC 2012, held in Heraklion, Crete, Greece, in June 2012. The 42 revised full papers presented together with 11 short papers were carefully reviewed and selected from 167 submissions. The papers are organized in topical sections on attacks and malicious code, security architectures, system security, access control, database security, privacy attitudes and properties, social networks and social engineering, applied cryptography, anonymity and trust, usable security, security and trust models, security economics, and authentication and delegation.
Knowledge sharing within an organization is essential to its continued success and growth, though remaining aware of new communication technologies is a difficult task. Web Engineered Applications for Evolving Organizations: Emerging Knowledge explores integrated approaches to IT and Web engineering, offering solutions and best practices for knowledge exchange within organizations. This publication focuses on research in a number of related disciplines, including data knowledge storage and retrieval, intelligent information systems, IT education and training, and IT readiness.
I am very pleased to play even a small part in the publication of this book on the SIGNAL language and its environment POLYCHRONY. I am sure it will be a significant milestone in the development of the SIGNAL language, of synchronous computing in general, and of the dataflow approach to computation. In dataflow, the computation takes place in a producer-consumer network of independent processing stations. Data travels in streams and is transformed as these streams pass through the processing stations (often called filters). Dataflow is an attractive model for many reasons, not least because it corresponds to the way production, transportation, and communication are typically organized in the real world (outside cyberspace). I myself stumbled into dataflow almost against my will. In the mid-1970s, Ed Ashcroft and I set out to design a "super" structured programming language that, we hoped, would radically simplify proving assertions about programs. In the end, we decided that it had to be declarative. However, we also were determined that iterative algorithms could be expressed directly, without circumlocutions such as the use of a tail-recursive function. The language that resulted, which we named LUCID, was much less traditional than we would have liked. LUCID statements are equations in a kind of executable temporal logic that specify the (time) sequences of variables involved in an iteration.
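The producer-consumer network of filters described above can be sketched with ordinary Python generators. This is only an analogue of the dataflow style, not SIGNAL or LUCID code; the stage names and the particular filters are invented for the example.

```python
# A minimal sketch of a dataflow-style pipeline: each stage is a "filter"
# that consumes one stream and produces another. Names are illustrative only.

def numbers(limit):
    """Producer: emit the stream 0, 1, 2, ..., limit-1."""
    for i in range(limit):
        yield i

def running_sum(stream):
    """Filter: transform a stream into the stream of its partial sums."""
    total = 0
    for x in stream:
        total += x
        yield total

def evens_only(stream):
    """Filter: pass through only the even values of the incoming stream."""
    for x in stream:
        if x % 2 == 0:
            yield x

if __name__ == "__main__":
    # Wire the stations into a producer-consumer network and drain the output.
    pipeline = evens_only(running_sum(numbers(10)))
    print(list(pipeline))  # [0, 6, 10, 28, 36]
```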
The book is a collection of invited papers on Computational Intelligence for Privacy and Security. The majority of the chapters are extended versions of works presented at the special session on Computational Intelligence for Privacy and Security of the International Joint Conference on Neural Networks (IJCNN-2010), held in July 2010 in Barcelona, Spain. The book provides an overview of the most recent advances in the Computational Intelligence techniques being developed for Privacy and Security, and will be of interest to researchers in industry and academia, as well as to post-graduate students interested in the latest advances and developments in the field.
"Practical Mono" offers you a detailed portrait of Mono and its many facets. You'll learn about building GUI-based applications with Gtk#, database interaction with ADO.NET, and powerful applications with XML and web services. By embracing this implementation, you can take advantage of the powerful development paradigm, building Internet-enabled cross-platform applications based on open source technologies. This book includes a primer on C#, so even if you're a novice .NET programmer, you will still gain plenty from this practical guide.
Data mining is a very active research area with many successful real-world applications. It consists of a set of concepts and methods used to extract interesting or useful knowledge (or patterns) from real-world datasets, providing valuable support for decision making in industry, business, government, and science. Although there are already many types of data mining algorithms available in the literature, it is still difficult for users to choose the best possible data mining algorithm for their particular data mining problem. In addition, data mining algorithms have been manually designed; therefore they incorporate human biases and preferences. This book proposes a new approach to the design of data mining algorithms. Instead of relying on the slow and ad hoc process of manual algorithm design, this book proposes systematically automating the design of data mining algorithms with an evolutionary computation approach. More precisely, we propose a genetic programming system (a type of evolutionary computation method that evolves computer programs) to automate the design of rule induction algorithms, a type of classification method that discovers a set of classification rules from data. We focus on genetic programming in this book because it is the paradigmatic type of machine learning method for automating the generation of programs and because it has the advantage of performing a global search in the space of candidate solutions (data mining algorithms in our case), but in principle other types of search methods for this task could be investigated in the future.
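As a rough illustration of what a rule induction algorithm produces (quite separate from the book's genetic-programming system, which evolves such algorithms rather than rules), the sketch below learns simple IF-THEN classification rules by sequential covering over a toy dataset. The attribute names, values, and greedy precision criterion are invented for the example.

```python
# Toy sequential-covering rule induction: repeatedly pick the single
# attribute=value test with the best precision for the target class, turn it
# into an IF-THEN rule, and remove the examples it covers. Illustrative only.

def best_test(examples, target):
    """Return the (attribute, value) test with the highest precision for `target`."""
    best, best_precision = None, 0.0
    attributes = [a for a in examples[0] if a != "class"]
    for attr in attributes:
        for value in {e[attr] for e in examples}:
            covered = [e for e in examples if e[attr] == value]
            hits = sum(1 for e in covered if e["class"] == target)
            precision = hits / len(covered)
            if precision > best_precision:
                best, best_precision = (attr, value), precision
    return best

def induce_rules(examples, target):
    """Sequential covering: learn rules until no positive examples remain."""
    rules, remaining = [], list(examples)
    while any(e["class"] == target for e in remaining):
        test = best_test(remaining, target)
        if test is None:
            break
        rules.append(test)
        remaining = [e for e in remaining if e[test[0]] != test[1]]
    return rules

if __name__ == "__main__":
    data = [
        {"outlook": "sunny", "windy": "no", "class": "play"},
        {"outlook": "sunny", "windy": "yes", "class": "play"},
        {"outlook": "rainy", "windy": "no", "class": "play"},
        {"outlook": "rainy", "windy": "yes", "class": "stay"},
        {"outlook": "overcast", "windy": "yes", "class": "stay"},
    ]
    for attr, value in induce_rules(data, "play"):
        print(f"IF {attr} = {value} THEN class = play")
```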
LANCELOT is a software package for solving large-scale nonlinear optimization problems. This book is our attempt to provide a coherent overview of the package and its use. This includes details of how one might present examples to the package, how the algorithm tries to solve these examples and various technical issues which may be useful to implementors of the software. We hope this book will be of use to both researchers and practitioners in nonlinear programming. Although the book is primarily concerned with a specific optimization package, the issues discussed have much wider implications for the design and implementation of large-scale optimization algorithms. In particular, the book contains a proposal for a standard input format for large-scale optimization problems. This proposal is at the heart of the interface between a user's problem and the LANCELOT optimization package. Furthermore, a large collection of over five hundred test examples has already been written in this format and will shortly be available to those who wish to use them. We would like to thank the many people and organizations who supported us in our enterprise. We first acknowledge the support provided by our employers, namely the Facultes Universitaires Notre-Dame de la Paix (Namur, Belgium), Harwell Laboratory (UK), IBM Corporation (USA), Rutherford Appleton Laboratory (UK) and the University of Waterloo (Canada). We are grateful for the support we obtained from NSERC (Canada), NATO and AMOCO (UK).
Software Engineering with OBJ: Algebraic Specification in Action is a comprehensive introduction to OBJ, the most widely used algebraic specification system. As a formal specification language, OBJ makes specifications and designs more precise and easier to read, as well as making maintenance easier and more accurate. OBJ differs from most other specification languages not just in having a formal semantics, but in being executable, either through symbolic execution with term rewriting, or more generally through theorem proving. One problem with specifications is that they are often wrong. OBJ can help validate specifications by executing test cases, and by proving properties. As well as providing a detailed introduction to the language and the OBJ system that implements it, Software Engineering with OBJ: Algebraic Specification in Action provides case studies by leading practitioners in the field, in areas such as computer graphics standards, hardware design, and parallel computation. The case studies demonstrate that OBJ can be used in a wide variety of ways to achieve a wide variety of practical aims in the system development process. The papers on various OBJ systems also demonstrate that the language is relatively easy to understand, implement, and use, and that it supports formal reasoning in a straightforward but powerful way. Software Engineering with OBJ: Algebraic Specification in Action will be of interest to students and teachers in the areas of data types, programming languages, semantics, theorem proving, and algebra, as well as to researchers and practitioners in software engineering.
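To give a flavour of the symbolic execution by term rewriting mentioned in the OBJ description above (without attempting OBJ syntax), the following sketch rewrites terms over a toy algebra of natural numbers built from 0 and a successor constructor. The term encoding and the two rewrite rules are invented for the example.

```python
# A minimal sketch of rewriting terms to normal form with a fixed rule set.
# Terms are tuples like ("add", ("s", "0"), "0"); the rules below are invented
# for illustration and are not OBJ syntax.

RULES = [
    # add(0, y)    -> y
    (lambda t: t[0] == "add" and t[1] == "0",
     lambda t: t[2]),
    # add(s(x), y) -> s(add(x, y))
    (lambda t: t[0] == "add" and isinstance(t[1], tuple) and t[1][0] == "s",
     lambda t: ("s", ("add", t[1][1], t[2]))),
]

def rewrite(term):
    """Rewrite `term` bottom-up until no rule applies (normal form)."""
    if not isinstance(term, tuple):
        return term
    head, *args = term
    term = (head, *[rewrite(a) for a in args])   # normalize the subterms first
    for matches, build in RULES:
        if matches(term):
            return rewrite(build(term))          # apply a rule, then keep rewriting
    return term

if __name__ == "__main__":
    two = ("s", ("s", "0"))
    one = ("s", "0")
    print(rewrite(("add", two, one)))  # ('s', ('s', ('s', '0'))), i.e. 3
```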
The 7th ACIS International Conference on Software Engineering Research, Management and Applications (SERA 2009) was held on Hainan Island, China from December 2 - 4. SERA '09 featured excellent theoretical and practical contributions in the areas of formal methods and tools, requirements engineering, software process models, communication systems and networks, software quality and evaluation, software engineering, networks and mobile computing, parallel/distributed computing, software testing, reuse and metrics, database retrieval, computer security, software architectures and modeling. Our conference officers selected the best 17 papers from those papers accepted for presentation at the conference in order to publish them in this volume. The papers were chosen based on review scores submitted by members of the program committee, and underwent further rigorous rounds of review.
In operations research and computer science it is common practice to evaluate the performance of optimization algorithms on the basis of computational results, and the experimental approach should follow accepted principles that guarantee the reliability and reproducibility of results. However, computational experiments differ from those in other sciences, and the last decade has seen considerable methodological research devoted to understanding the particular features of such experiments and assessing the related statistical methods. This book consists of methodological contributions on different scenarios of experimental analysis. The first part overviews the main issues in the experimental analysis of algorithms, and discusses the experimental cycle of algorithm development; the second part treats the characterization by means of statistical distributions of algorithm performance in terms of solution quality, runtime and other measures; and the third part collects advanced methods from experimental design for configuring and tuning algorithms on a specific class of instances with the goal of using the least amount of experimentation. The contributor list includes leading scientists in algorithm design, statistical design, optimization and heuristics, and most chapters provide theoretical background and are enriched with case studies. This book is written for researchers and practitioners in operations research and computer science who wish to improve the experimental assessment of optimization algorithms and, consequently, their design.
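As a trivial illustration of characterizing algorithm performance by a statistical distribution rather than a single number, the sketch below runs a randomized algorithm several times with independent seeds and summarizes the runtime samples. The placeholder algorithm, the number of runs, and the instance size are all invented for the example.

```python
import random
import statistics
import time

def randomized_algorithm(n, rng):
    """Placeholder randomized algorithm: shuffle and re-sort a list of size n."""
    data = list(range(n))
    rng.shuffle(data)
    data.sort()

def runtime_samples(runs, n, seed=0):
    """Collect one runtime measurement per independent run (fresh seed each time)."""
    samples = []
    for r in range(runs):
        rng = random.Random(seed + r)
        start = time.perf_counter()
        randomized_algorithm(n, rng)
        samples.append(time.perf_counter() - start)
    return samples

if __name__ == "__main__":
    times = runtime_samples(runs=30, n=100_000)
    print(f"mean = {statistics.mean(times):.4f}s, "
          f"stdev = {statistics.stdev(times):.4f}s, "
          f"median = {statistics.median(times):.4f}s")
```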
This book presents a guide to the core features of Java - and some more recent innovations - enabling the reader to build skills and confidence through tried-and-trusted stages, supported by exercises that reinforce key learning points. All of the most useful and commonly applied Java syntax and libraries are introduced, along with many example programs that can provide the basis for more substantial applications. Use of the Eclipse IDE and the JUnit testing framework is integral to the book, ensuring maximum productivity and code quality, although to ensure that skills are not confined to one environment the fundamentals of the Java compiler and run time are also explained. Additionally, coverage of the Ant tool will equip the reader with the skills to automatically build, test and deploy applications independent of an IDE. Features: presents information on Java 7; contains numerous code examples and exercises; provides source code, self-test questions and PowerPoint slides at an associated website.
The VLISP project showed how to produce a comprehensively verified implementation for a programming language, namely Scheme [4, 15]. Some of the major elements in this verification were: * The proof was based on the Clinger-Rees denotational semantics of Scheme given in [15]. Our goal was to produce a "warts-and-all" verification of a real language. With very few exceptions, we constrained ourselves to use the semantic specification as published. The verification was intended to be rigorous, but not completely formal, much in the style of ordinary mathematical discourse. Our goal was to verify the algorithms and data types used in the implementation, not their embodiment in code. See Section 2 for a more complete discussion of these issues. Our decision to be faithful to the published semantic specification led to the most difficult portions of the proofs; these are discussed in [13, Section 2.3-2.4]. * Our implementation was based on the Scheme48 implementation of Kelsey and Rees [17]. This implementation translates Scheme into an intermediate-level "byte code" language, which is interpreted by a virtual machine. The virtual machine is written in a subset of Scheme called PreScheme. The implementation is sufficiently complete and efficient to allow it to bootstrap itself. We believe that this is the first verified language implementation with these properties.
This unique text/reference describes an exciting and novel approach to supercomputing in the DataFlow paradigm. The major advantages and applications of this approach are clearly described, and a detailed explanation of the programming model is provided using simple yet effective examples. The work is developed from a series of lecture courses taught by the authors in more than 40 universities across more than 20 countries, and from research carried out by Maxeler Technologies, Inc. Topics and features: presents a thorough introduction to DataFlow supercomputing for big data problems; reviews the latest research on the DataFlow architecture and its applications; introduces a new method for the rapid handling of real-world challenges involving large datasets; provides a case study on the use of the new approach to accelerate the Cooley-Tukey algorithm on a DataFlow machine; includes a step-by-step guide to the web-based integrated development environment WebIDE.
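For reference, the Cooley-Tukey algorithm mentioned in the case study above is, in its classical radix-2 recursive form, short enough to sketch directly. The version below is plain Python for illustration only and is unrelated to the book's DataFlow/WebIDE implementation; the sample signal is invented.

```python
import cmath

def fft(x):
    """Radix-2 Cooley-Tukey FFT; len(x) must be a power of two."""
    n = len(x)
    if n == 1:
        return list(x)
    # Split into even- and odd-indexed samples and transform each half.
    even = fft(x[0::2])
    odd = fft(x[1::2])
    # Combine the halves using the twiddle factors exp(-2*pi*i*k/n).
    result = [0j] * n
    for k in range(n // 2):
        twiddle = cmath.exp(-2j * cmath.pi * k / n) * odd[k]
        result[k] = even[k] + twiddle
        result[k + n // 2] = even[k] - twiddle
    return result

if __name__ == "__main__":
    samples = [0, 1, 0, -1, 0, 1, 0, -1]  # two cycles of a simple oscillation
    spectrum = fft(samples)
    print([round(abs(v), 3) for v in spectrum])  # energy at bins 2 and 6
```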
1.1. What This Book is About This book is a study of * subrecursive programming systems, * efficiency/program-size trade-offs between such systems, and * how these systems can serve as tools in complexity theory. Section 1.1 states our basic themes, and Sections 1.2 and 1.3 give a general outline of the book. Our first task is to explain what subrecursive programming systems are and why they are of interest. 1.1.1. Subrecursive Programming Systems A subrecursive programming system is, roughly, a programming language for which the result of running any given program on any given input can be completely determined algorithmically. Typical examples are: 1. the Meyer-Ritchie LOOP language [MR67, DW83], a restricted assembly language with bounded loops as the only allowed deviation from straight-line programming; 2. multi-tape Turing Machines each explicitly clocked to halt within a time bound given by some polynomial in the length of the input (see [BH79, HB79]); 3. the set of seemingly unrestricted programs for which one can prove¹ termination on all inputs (see [Kre51, Kre58, Ros84]); and 4. finite state and pushdown automata from formal language theory (see [HU79]). ¹ Or, more precisely, the collection of programs, p, of some particular general-purpose programming language (e.g., Lisp or Modula-2) for which there is a proof in some particular formal system (e.g., Peano Arithmetic) that p halts on all inputs.
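To illustrate why bounded loops keep a language subrecursive (every program halts by construction, so the result of any run is algorithmically determined), here is a small interpreter-style sketch. The three-instruction language is an invented simplification inspired by LOOP, not the Meyer-Ritchie syntax itself.

```python
# A tiny LOOP-like language: registers hold naturals, and the only control
# structure is ("loop", r, body), which repeats body exactly as many times as
# register r holds when the loop starts. Because every loop bound is fixed
# before the loop runs, every program terminates. Instruction set is invented.

def run(program, registers):
    """Execute a list of instructions: ('inc', r), ('clear', r) or ('loop', r, body)."""
    for instr in program:
        if instr[0] == "inc":
            registers[instr[1]] += 1
        elif instr[0] == "clear":
            registers[instr[1]] = 0
        elif instr[0] == "loop":
            _, r, body = instr
            for _ in range(registers[r]):   # bound is read once, so it is fixed
                run(body, registers)
    return registers

if __name__ == "__main__":
    # Addition z := x + y, written with two bounded loops.
    add = [
        ("clear", "z"),
        ("loop", "x", [("inc", "z")]),
        ("loop", "y", [("inc", "z")]),
    ]
    print(run(add, {"x": 3, "y": 4, "z": 0}))  # {'x': 3, 'y': 4, 'z': 7}
```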
Computational Issues in High Performance Software for Nonlinear Research brings together in one place important contributions and up-to-date research results in this important area. Computational Issues in High Performance Software for Nonlinear Research serves as an excellent reference, providing insight into some of the most important research issues in the field.