This survey monograph deals with fundamental ideas and basic schemes of optimization methods that can be effectively used for solving strategic planning and operations management problems related, in particular, to transportation. It is an English translation of a considerable part of the author's book with a similar title, published in Russian in 1992. The material embraces methods of linear and nonlinear programming; nonsmooth and nonconvex optimization; integer programming, solving problems on graphs, and solving problems with mixed variables; routing, scheduling, solving network flow problems, and solving the transportation problem; stochastic programming, multicriteria optimization, game theory, and optimization on fuzzy sets and under fuzzy goals; optimal control of systems described by ordinary differential equations, partial differential equations, generalized differential equations (differential inclusions), and functional equations with a variable that can assume only discrete values; and some other methods that build on or adjoin those listed.
"Hands-On Database "uses a scenario-based approach that shows readers how to build a database by providing them with the context of a running case throughout each step of the process.
The first textbook ever to cover multi-relational data mining and inductive logic programming, this book fully explores logical and relational learning. Ideal for graduate students and researchers, it also looks at statistical relational learning.
The resilience of computing systems includes their dependability as well as their fault tolerance and security. It defines the ability of a computing system to perform properly in the presence of various kinds of disturbances and to recover from any service degradation. These properties are immensely important in a world where many aspects of our daily life depend on the correct, reliable and secure operation of often large-scale distributed computing systems. Wolter and her co-editors grouped the 20 chapters from leading researchers into seven parts: an introduction and motivating examples, modeling techniques, model-driven prediction, measurement and metrics, testing techniques, case studies, and conclusions. The core is formed by 12 technical papers, which are framed by motivating real-world examples and case studies, thus illustrating the necessity and the application of the presented methods. While the technical chapters are independent of each other and can be read in any order, the reader will benefit more from the case studies if he or she reads them together with the related techniques. The papers combine topics like modeling, benchmarking, testing, performance evaluation, and dependability, and aim at academic and industrial researchers in these areas as well as graduate students and lecturers in related fields. In this volume, they will find a comprehensive overview of the state of the art in a field of continuously growing practical importance.
Among the most important problems confronting computer science is that of developing a paradigm appropriate to the discipline. Proponents of formal methods - such as John McCarthy, C.A.R. Hoare, and Edsger Dijkstra - have advanced the position that computing is a mathematical activity and that computer science should model itself after mathematics. Opponents of formal methods, by contrast, suggest that programming is the activity which is fundamental to computer science and that there are important differences that distinguish it from mathematics, which therefore cannot provide a suitable paradigm. Disagreement over the place of formal methods in computer science has recently arisen in the form of renewed interest in the nature and capacity of program verification as a method for establishing the reliability of software systems. A paper that appeared in Communications of the ACM entitled 'Program Verification: The Very Idea', by James H. Fetzer, triggered an extended debate that has been discussed in several journals and that has endured for several years, engaging the interest of computer scientists (both theoretical and applied) and of other thinkers from a wide range of backgrounds who want to understand computer science as a domain of inquiry. The editors of this collection have brought together many of the most interesting and important studies that contribute to answering questions about the nature and the limits of computer science. These include early papers advocating the mathematical paradigm by McCarthy, Naur, R. Floyd, and Hoare (in Part I), others that elaborate the paradigm by Hoare, Meyer, Naur, and Scherlis and Scott (in Part II), challenges, limits and alternatives explored by C. Floyd, Smith, Blum, and Naur (in Part III), and recent work focusing on formal verification by DeMillo, Lipton, and Perlis, Fetzer, Cohn, and Colburn (in Part IV). It provides essential resources for further study. This volume will appeal to scientists, philosophers, and laypersons who want to understand the theoretical foundations of computer science and be appropriately positioned to evaluate the scope and limits of the discipline.
Digital Intermediation offers a new framework for understanding content creation and distribution across automated media platforms - a new mediatisation process. The book draws on empirical and theoretical research to carefully identify and describe a number of unseen digital infrastructures that contribute to a predictive media production process through technologies, institutions and automation. Field data is drawn from several international sites, including Los Angeles, San Francisco, Portland, London, Amsterdam, Munich, Berlin, Hamburg, Sydney and Cartagena. By highlighting an increasingly automated content production and distribution process, the book responds to a number of regulatory debates on the societal impact of social media platforms. It highlights emerging areas of key importance that shape the production and distribution of social media content, including micro-platformization and digital first personalities. The book explains how technologies, institutions and automation are used within agencies to increase exposure for the talent they manage, while providing inside access to the processes and requirements of producers who create content for platform algorithms. Finally, it outlines user agency as a strategy for those who seek diversity in the information they access on automated social media content distribution platforms. The findings in this book provide key recommendations for policymakers working within digital media platforms, and will be invaluable reading for students and academics interested in automated media environments.
This book investigates the susceptibility of intrinsic physically unclonable function (PUF) implementations on reconfigurable hardware to optical semi-invasive attacks from the chip backside. It explores different classes of optical attacks, particularly photonic emission analysis, laser fault injection, and optical contactless probing. By applying these techniques, the book demonstrates that the secrets generated by a PUF can be predicted, manipulated or directly probed without affecting the behavior of the PUF. It subsequently discusses the cost and feasibility of launching such attacks against the very latest hardware technologies in a real scenario. The author discusses why PUFs are not tamper-evident in their current configuration, and therefore, PUFs alone cannot raise the security level of key storage. The author then reviews the potential and already implemented countermeasures, which can remedy PUFs' security-related shortcomings and make them resistant to optical side-channel and optical fault attacks. Lastly, by making selected modifications to the functionality of an existing PUF architecture, the book presents a prototype tamper-evident sensor for detecting optical contactless probing attempts.
The book emphasizes the design of full-fledged, fully normalizing lambda calculus
This book opens the "black box" of software sourcing by explaining how dynamic software alignment is established and how it impacts business performance outcomes. By investigating how software-sourcing modes are related to value generation in the post-implementation phase, it shows researchers and managers the impact logic of on-demand, on-premises, and in-house software on dynamic fit and process-level performance outcomes in a client organization. It describes dynamic IT alignment as the key to success in a fast-moving digital world with software-as-a-service on the rise and highlights the fact that today companies can choose between developing software in-house (make) or sourcing packaged systems in an on-premises (buy) or an on-demand (lease) mode. This book is the first to explicitly compare these sourcing arrangements with each other in terms of alignment and business performance.
As is true of most technological fields, the software industry is constantly advancing and becoming more accessible to a wider range of people. The advancement and accessibility of these systems creates a need for understanding and research into their development. Optimizing Contemporary Application and Processes in Open Source Software is a critical scholarly resource that examines the prevalence of open source software systems as well as the advancement and development of these systems. Featuring coverage on a wide range of topics such as machine learning, empirical software engineering and management, and open source, this book is geared toward academicians, practitioners, and researchers seeking current and relevant research on the advancement and prevalence of open source software systems.
Mixed-signal embedded microcontrollers are commonly used to integrate the analog components needed to control non-digital electronic systems. They are used in automatically controlled devices and products, such as automobile engine control systems, wireless remote controllers, office machines, home appliances, power tools, and toys. Microcontrollers make it economical to digitally control even more devices and processes by reducing size and cost compared to a design that uses a separate microprocessor, memory, and input/output devices. In many undergraduate and postgraduate courses, teaching mixed-signal microcontrollers and their use in project work has become compulsory. Students face many difficulties when they have to interface a microcontroller with the electronics they deal with. This book addresses some issues of interfacing microcontrollers and describes some project implementations with the Silicon Labs C8051F020 mixed-signal microcontroller. The intended readers are college and university students specializing in electronics, computer systems engineering, and electrical and electronics engineering; researchers involved with electronics-based systems; practitioners; technicians; and, in general, anybody interested in microcontroller-based projects.
Term rewriting techniques are applicable to various fields of computer science, including software engineering, programming languages, computer algebra, program verification, automated theorem proving and Boolean algebra. These powerful techniques can be successfully applied in all areas that demand efficient methods for reasoning with equations. One of the major problems encountered is the characterization of classes of rewrite systems that have a desirable property, like confluence or termination. In a system that is both terminating and confluent, every computation leads to a result that is unique, regardless of the order in which the rewrite rules are applied. This volume provides a comprehensive and unified presentation of termination and confluence, as well as related properties. Topics and features:
* unified presentation and notation for important advanced topics
* comprehensive coverage of conditional term-rewriting systems
* state-of-the-art survey of modularity in term rewriting
* presentation of a unified framework for term and graph rewriting
* up-to-date discussion of transformational methods for proving termination of logic programs, including the TALP system
This unique book offers a comprehensive and unified view of the subject that is suitable for all computer scientists, program designers, and software engineers who study and use term rewriting techniques. Practitioners, researchers and professionals will find the book an essential and authoritative resource and guide for the latest developments and results in the field.
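To make the terminating-and-confluent property concrete, here is a minimal Python sketch (an illustration, not material from the book) of a two-rule system for addition on Peano numerals. Because the system terminates and is confluent, rewriting a term always ends in the same normal form, no matter where the rules are applied first.

```python
# Terms are nested tuples: ("0",) is zero, ("s", t) is successor,
# ("add", t1, t2) is the addition symbol.
# Rules:  add(0, y) -> y          add(s(x), y) -> s(add(x, y))

def rewrite_step(term):
    """Apply one rule at the first redex found (root first, then subterms)."""
    if term[0] == "add":
        x, y = term[1], term[2]
        if x == ("0",):                      # add(0, y) -> y
            return y
        if x[0] == "s":                      # add(s(x'), y) -> s(add(x', y))
            return ("s", ("add", x[1], y))
    for i, sub in enumerate(term[1:], start=1):
        reduced = rewrite_step(sub)
        if reduced is not None:
            return term[:i] + (reduced,) + term[i + 1:]
    return None                              # no rule applies: normal form

def normal_form(term):
    """Rewrite until no rule applies; termination guarantees the loop ends."""
    while (nxt := rewrite_step(term)) is not None:
        term = nxt
    return term

two, one = ("s", ("s", ("0",))), ("s", ("0",))
print(normal_form(("add", two, one)))        # s(s(s(0))), i.e. 3
```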
Many questions dealing with solvability, stability and solution methods for variational inequalities or equilibrium, optimization and complementarity problems lead to the analysis of certain (perturbed) equations. This often requires a reformulation of the initial model under consideration. Due to the specifics of the original problem, the resulting equation is usually either not differentiable (even if the data of the original model are smooth), or it does not satisfy the assumptions of the classical implicit function theorem. This phenomenon is the main reason why a considerable analytical instrument dealing with generalized equations (i.e., with finding zeros of multivalued mappings) and nonsmooth equations (i.e., the defining functions are not continuously differentiable) has been developed during the last 20 years, and that under very different viewpoints and assumptions. In this theory, the classical hypotheses of convex analysis, in particular, monotonicity and convexity, have been weakened or dropped, and the scope of possible applications seems to be quite large. Briefly, this discipline is often called nonsmooth analysis, sometimes also variational analysis. Our book fits into this discipline; however, our main intention is to develop the analytical theory in close connection with the needs of applications in optimization and related subjects. Main topics of the book:
1. Extended analysis of Lipschitz functions and their generalized derivatives, including "Newton maps" and regularity of multivalued mappings.
2. Principle of successive approximation under metric regularity and its application to implicit functions.
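For orientation, metric regularity, on which the successive-approximation principle rests, is commonly stated as follows (a standard textbook formulation in generic notation, not necessarily the book's own):

```latex
% A set-valued mapping F : X \rightrightarrows Y is metrically regular at
% \bar{x} for \bar{y} \in F(\bar{x}) if there exist \kappa > 0 and
% neighbourhoods U of \bar{x} and V of \bar{y} such that
\operatorname{dist}\!\bigl(x,\, F^{-1}(y)\bigr)
  \;\le\; \kappa\, \operatorname{dist}\!\bigl(y,\, F(x)\bigr)
  \qquad \text{for all } x \in U,\; y \in V .
```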
The IFIP working group 2.3 (Programming Methodology) is made up of internationally prominent computing academics and industrialists, and broadly its purpose is to invent, discuss and assess new and emerging techniques for improving the quality of software and systems. The group's membership has been influential in topics such as program correctness, object orientation, operating systems and distributed computing; indeed many thriving areas of research nowadays are based on ideas which were once scrutinized by the 2.3 working committee. This is a volume of chapters written by the membership which will form a reference and guide to the front line of research activity in programming methodology. The range of subjects reflects the current interests of the members, and will offer insightful and controversial opinions on modern programming methods and practice. The material is arranged in thematic sections, each one introduced by a problem which epitomizes the spirit of that topic. The exemplary problem will encourage vigorous discussion and will form the basis for an introduction/tutorial for its section.
Current IT developments like component-based development and Web services have emerged as new effective ways of building complex enterprise systems and providing enterprise application integration. To aid this process, platforms like .NET and WebSphere have become standards in web-based systems development. However, there is still much that needs to be researched before service-oriented software engineering (SOSE) becomes a prominent approach to enterprise system development. Service-Oriented Software System Engineering: Challenges and Practices provides a comprehensive view of SOSE through a number of different perspectives, including service concepts, modeling and documentation, service discovery and composition, model-driven development of service-oriented applications, and service-orientation in mobile settings. This book provides readers with an in-depth knowledge of the main challenges and practices in the exciting new world of service-oriented software engineering. Addressing both technical and organizational aspects of this new field, it offers a balance that makes it valuable to a variety of readers, including software developers, managers, and analysts.
PHP is rapidly becoming the language of choice for dynamic Web development, in particular for e-commerce and on-line database systems. It is open source software and easy to install, and can be used with a variety of operating systems, including Microsoft Windows and UNIX. This comprehensive manual covers the basic core of the language, with lots of practical examples of some of the more recent and useful features available in version 5.0. MySQL database creation and development is also covered, as it is the developer database most commonly used alongside PHP. It will be an invaluable book for professionals wanting to use PHP to develop their own dynamic web pages. Key Topics:
- Basic Language Constructs
- Manipulating Arrays and Strings
- Errors and Buffering
- Graphic Manipulation
- PDF Library Extension
- MySQL Database Management
- Classes and Objects Concepts
Features and Benefits:
- Explains how to use PHP to its full extent - covering the latest features and functions of PHP version 5.0, including the use of object-oriented programming
- Describes how to link a database to a web site, using the MySQL database management system
- Shows how to connect PHP to other systems and provides many examples, so that you can create powerful and dynamic web pages and applications
- Contains lots of illustrated, practical, real-world examples - including an e-commerce application created in PHP using many of the features described within the book
The scripts used in the examples are available for download from www.phpmysql-manual.com
The goal of this book is to present the most advanced research work in realistic computer-generated images. It is made up of the papers presented during a Eurographics workshop held in Rennes (France) in June 1990. Although realism in computer graphics has existed for many years, we consider that two research directions can now clearly be identified. One makes use of empirical methods to efficiently create images that look real. As opposed to this approach, the other orientation makes use of physics to produce images that are exact representations of the real world (at the expense of additional processing time), hence the term photosimulation, which indeed was the subject of this book. The objectives of this workshop were to assemble experts from physics and computer graphics in order to contribute to the introduction of physics-based approaches in the field of computer generated images. The fact that this workshop was the first entirely devoted to this topic was a bet, and fortunately it turned out to be a success. The contents of this book are organized in five chapters: Efficient Ray Tracing Methods, Theory of Global Illumination Models, Photometric Algorithms, Form-Factor Calculations and Physics-Based Methods.
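As a point of reference for the form-factor and photometric chapters, the form factor between two surface patches is commonly written as follows (the standard radiosity formulation, quoted here for orientation rather than from the book):

```latex
% Form factor from patch i to patch j: the fraction of energy leaving
% surface A_i that arrives at surface A_j, with V the visibility term.
F_{i \to j} \;=\; \frac{1}{A_i}
  \int_{A_i}\!\!\int_{A_j}
  \frac{\cos\theta_i \,\cos\theta_j}{\pi\, r^{2}}\;
  V(x_i, x_j)\; dA_j\, dA_i
```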
In part, the book creates and motivates the notion of metamodelling and shows how it can be used to standardise the creation of industry-strength designs. At its heart, the book presents an analysis of the main object-oriented design methodologies, including Booch, OMT, Coad, and Martin/Odell. Based on these descriptions, a proposal is made for a core metamodel framework into which the leading methodologies may be fitted. As a result, software engineers and software managers will find this a valuable "road map" for the future development of software standards.
Since I started working in the area of nonlinear programming and, later on, variational inequality problems, I have frequently been surprised to find that many algorithms, however scattered in numerous journals, monographs and books, and described rather differently, are closely related to each other. This book is meant to help the reader understand and relate algorithms to each other in some intuitive fashion, and represents, in this respect, a consolidation of the field. The framework of algorithms presented in this book is called Cost Approximation. (The preface of the Ph.D. thesis [Pat93d] explains the background to the work that led to the thesis, and ultimately to this book.) It describes, for a given formulation of a variational inequality or nonlinear programming problem, an algorithm by means of approximating mappings and problems, a principle for the update of the iteration points, and a merit function which guides and monitors the convergence of the algorithm. One purpose of this book is to offer this framework as an intuitively appealing tool for describing an algorithm. One of the advantages of the framework, or any reasonable framework for that matter, is that two algorithms may be easily related and compared through its use. This framework is particular in that it covers a vast number of methods, while still being fairly detailed; the level of abstraction is in fact the same as that of the original problem statement.
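To give a feel for the pattern the blurb describes (an approximating subproblem, an update principle, and a merit function), here is a minimal Python sketch using gradient projection on a box-constrained quadratic as one simple instance of such a scheme. The problem data, step rule and merit function are illustrative assumptions, not the book's own notation or algorithms.

```python
import numpy as np

def project_box(x, lo, hi):
    """Project x onto the box [lo, hi] componentwise."""
    return np.clip(x, lo, hi)

def cost_approx_descent(Q, c, lo, hi, x0, step=0.1, tol=1e-8, max_iter=500):
    """Minimize f(x) = 0.5 x'Qx + c'x over a box.

    Each iteration replaces f by a simple separable quadratic model around
    the current point, solves that approximate problem (a projection), and
    updates the iterate; the objective value acts as the merit function
    that guides and monitors convergence.
    """
    f = lambda z: 0.5 * z @ Q @ z + c @ z
    x = x0.astype(float)
    for _ in range(max_iter):
        grad = Q @ x + c
        # Solution of the approximate subproblem:
        #   argmin_y  grad'(y - x) + (1 / (2*step)) * ||y - x||^2   over the box
        y = project_box(x - step * grad, lo, hi)
        if np.linalg.norm(y - x) < tol:   # fixed point of the scheme: stop
            break
        if f(y) <= f(x):                  # merit function accepts the step
            x = y
        else:                             # otherwise refine the approximation
            step *= 0.5
    return x

Q = np.array([[2.0, 0.5], [0.5, 1.0]])
c = np.array([-1.0, -1.0])
print(cost_approx_descent(Q, c, lo=np.zeros(2), hi=2 * np.ones(2),
                          x0=np.array([2.0, 2.0])))
```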
The book offers an original view on channel coding, based on a unitary approach to block and convolutional codes for error correction. It presents both new concepts and new families of codes. For example, lengthened and modified lengthened cyclic codes are introduced as a bridge towards time-invariant convolutional codes and their extension to time-varying versions. The novel families of codes include turbo codes and low-density parity check (LDPC) codes, the features of which are justified from the structural properties of the component codes. Design procedures for regular LDPC codes are proposed, supported by the presented theory. Quasi-cyclic LDPC codes, in block or convolutional form, represent one of the most original contributions of the book. The use of more than 100 examples allows the reader gradually to gain an understanding of the theory, and the provision of a list of more than 150 definitions, indexed at the end of the book, permits rapid location of sought information.
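As a small, self-contained illustration of the parity-check view of block codes that the book builds on, the following Python snippet uses the classic (7,4) Hamming code (a stand-in example, not one of the book's own constructions) to show how a syndrome locates a single-bit error.

```python
import numpy as np

# Parity-check matrix of the (7,4) Hamming code: column j holds the binary
# expansion of j+1, so the syndrome of a single-bit error spells out the
# (1-indexed) position of the flipped bit.
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])

def syndrome(word):
    """H * word over GF(2); the all-zero syndrome means 'valid codeword'."""
    return (H @ word) % 2

received = np.zeros(7, dtype=int)   # the all-zero word is a codeword
received[4] = 1                     # simulate a channel error in bit 5

s = syndrome(received)                          # -> [1, 0, 1]
position = int("".join(map(str, s[::-1])), 2)   # read bits MSB-first -> 5
print(s, position)
```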
Software Visualization: From Theory to Practice was initially selected as a special volume for "The Annals of Software Engineering (ANSE) Journal," which has been discontinued. This special edited volume is the first to discuss software visualization from the perspective of software engineering. It is a collection of 14 chapters on software visualization, covering topics from theory to practical systems. The chapters are divided into four parts: Visual Formalisms, Human Factors, Architectural Visualization, and Visualization in Practice, covering a comprehensive range of software visualization topics. Software Visualization: From Theory to Practice is designed to meet the needs of both an academic and a professional audience composed of researchers and software developers. This book is also suitable for senior undergraduate and graduate students in software engineering and computer science, as a secondary text or a reference.
As the world becomes increasingly dependent on the use of computers, the need for quality software which can be produced at reasonable cost increases. This IFIP proceedings brings together the work of leading researchers and practitioners who are concerned with the efficient production of quality software.
How do you design personalized user experiences that delight and provide value to the customers of an eCommerce site? Personalization does not guarantee a high-quality user experience: a personalized user experience has the best chance of success if it is developed using a set of best practices in HCI. In this book, 35 experts from academia, industry and government focus on issues in the design of personalized web sites. The topics range from the design and evaluation of user interfaces and tools to information architecture and computer programming related to commercial web sites. The book covers four main areas.
Automatic Re-engineering of Software Using Genetic Programming describes the application of Genetic Programming to a real-world application area - software re-engineering in general and automatic parallelization specifically. Unlike most uses of Genetic Programming, this book evolves sequences of provable transformations rather than actual programs. It demonstrates that the benefits of this approach are twofold: first, the time required for evaluating a population is drastically reduced, and second, the transformations can subsequently be used to prove that the new program is functionally equivalent to the original. Automatic Re-engineering of Software Using Genetic Programming shows that there are applications where it is more practical to use GP to assist with software engineering rather than to entirely replace it. It also demonstrates how the author isolated aspects of a problem that were particularly suited to GP, and used traditional software engineering techniques in those areas for which they were adequate. Automatic Re-engineering of Software Using Genetic Programming is an excellent resource for researchers in this exciting new field.