This book is concerned with topological and differential properties of multivalued mappings and marginal functions. In addition, applications to the sensitivity analysis of optimization problems, in particular nonlinear programming problems with perturbations, are studied. The methods elaborated here derive primarily from the theories and concepts of two former Soviet Union researchers, Demyanov and Rubinov. Consequently, a significant part of the presented results has never been published in English before. Based on the use of directional derivatives as a key tool in studying nonsmooth functions and multifunctions, these results can be considered a further development of the quasidifferential calculus created by Demyanov and Rubinov. In contrast to other research in this field, especially the recent publication by Bonnans and Shapiro, this book analyses properties of marginal functions associated with optimization problems under quite general constraints defined by means of multivalued mappings. A unified approach to the directional differentiability of functions and multifunctions forms the basis of the volume.
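For orientation, the central object can be written down concretely. A marginal (optimal value) function of a constrained problem is commonly defined as below; this is the standard textbook formulation, offered as context rather than as a quotation from the book.

```latex
% Marginal (optimal value) function of a perturbed optimization problem,
% where F : X \rightrightarrows Y is a multivalued constraint mapping:
\[
  \varphi(x) \;=\; \inf_{y \in F(x)} f(x,y).
\]
% Directional derivative of \varphi at x in direction d (when the limit
% exists), the key tool referred to above:
\[
  \varphi'(x;d) \;=\; \lim_{t \downarrow 0} \frac{\varphi(x+td) - \varphi(x)}{t}.
\]
```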
In a model-based development of software systems, different views on a system are elaborated using appropriate modeling languages and techniques. Because of the unavoidable heterogeneity of the viewpoint models, a semantic integration is required to establish the correspondences between the models and to allow checking of their relative consistency. The integration approach introduced in this book is based on a common semantic domain of abstract systems, their composition, and their development. Its applicability is shown through semantic interpretations and compositional comparisons of different specification approaches. These range from formal specification techniques like process calculi, Petri nets and rule-based formalisms to semiformal software modeling languages like those in the UML family.
A definitive reference for resolving the dilemma of application testing and debugging--one of the biggest time commitments in a programmer's daily routine--this book rescues readers from substandard application testing practices. It commences with several chapters that provide an overview of the debugger's basic features, then covers common debugging scenarios.
SMIL 3.0: Multimedia for the Web, Mobile Devices and Daisy Talking Books is a revised introduction to - and resource guide for - the W3C SMIL language. It covers all aspects of the SMIL specification and all of SMIL's implementation profiles, from the desktop through the world of mobile SMIL devices. Based on the first version of the book, which covered SMIL 2.0, this edition has been updated with information from the past two releases of the SMIL language. We have benefitted from comments and suggestions from many readers of the first edition, and have produced what we feel is the most comprehensive guide to SMIL available anywhere. Motivation for this book: while we were working on various phases of the SMIL recommendations, it became clear to us that the richness of the SMIL language could easily overwhelm many Web authors and designers. In the 500+ pages that the SYMM working group needed to describe the 30+ SMIL elements and the 150+ SMIL attributes, there was not much room for background information or extensive examples. The focus of the specification was on implementation aspects of the SMIL language, not on the rationale or the potential uses of SMIL's declarative power.
IFIP's Working Group 2.7 (13.4) has, since its establishment in 1974, concentrated on the software problems of user interfaces. From its original interest in operating systems interfaces the group has gradually shifted emphasis towards the development of interactive systems. The group has organized a number of international working conferences on interactive software technology, the proceedings of which have contributed to the accumulated knowledge in the field. The current title of the Working Group is 'User Interface Engineering', with the aim of investigating the nature, concepts, and construction of user interfaces for software systems. The scope of work involved is: to increase understanding of the development of interactive systems; to provide a framework for reasoning about interactive systems; and to provide engineering models for their development. This report addresses all three aspects of the scope, as further described below. In 1986 the working group published a report (Beech, 1986) with an object-oriented reference model for describing the components of operating systems interfaces. The model was implementation oriented and built on an object concept and the notion of interaction as consisting of commands and responses. Through working with that model the group addressed a number of issues, such as multi-media and multi-modal interfaces, customizable interfaces, and history logging. However, a conclusion was reached that many software design considerations and principles are independent of implementation models, but do depend on the nature of the interaction process.
The emergence of Web 2.0 has triggered a trend towards global online social interactions and has brought sociology into the global interactive picture, creating educational issues relating to individual and social learning for the internalization and externalization of information and knowledge. "Educational Social Software for Context-Aware Learning: Collaborative Methods and Human Interaction" examines socio-cultural elements in educational computing, focused on design and theory where learning and setting are intertwined. This advanced publication addresses real-life case studies where evaluations have been applied and validated in computational systems.
The development of successful, usable Web-based systems and applications requires careful consideration of problems, needs, and unique circumstances within and among organizations. Uniting research from a number of different disciplines, Web engineering seeks to develop solutions and uncover new trends in the rapidly growing body of literature on Web system design, modeling, and methodology. Models for Capitalizing on Web Engineering Advancements: Trends and Discoveries contains research on new developments and existing applications made possible by the principles of Web engineering. With selections focused on a broad range of applications, from telemedicine to geographic information retrieval, this book provides a foundation for further study of the unique challenges faced by Web application designers.
This book proposes a purely classical first-order logical approach to the theory of programming. The authors, leading members of the famous "Hungarian school," use this approach to give a unified and systematic presentation of the theory. This approach provides formal methods and tools for reasoning about computer programs and programming languages by allowing the syntactic and semantic characterization of programs, the description of program properties, and ways to check whether a given program satisfies certain properties. The basic methods are logical extension, inductive definition and their combination, all of which admit an appropriate first-order representation of data and time. The framework proposed by the authors allows the investigation and development of different programming theories and logics from a unified point of view. Dynamic and temporal logics, for example, are investigated and compared with respect to their expressive and proof-theoretic powers. The book should appeal to both theoretical researchers and students. For researchers in computer science the book provides a coherent presentation of a new approach which permits the solution of various problems in programming theory in a unified manner by the use of first-order logical tools. The book may serve as a basis for graduate courses in programming theory and logic as it covers all important questions arising between the theory of computation and formal descriptive languages and presents an appropriate derivation system.
Spring Security in Action shows you how to use Spring Security to create applications you can be confident will withstand even the most dedicated attacks. Starting with essential "secure by design" principles, you'll learn common software vulnerabilities and how to avoid them right from the design stage. Through hands-on projects, you'll learn to manage system users, configure secure endpoints, and use OAuth2 and OpenID Connect for authentication and authorization. As you go, you'll learn how to adapt Spring Security to different architectures, such as configuring Spring Security for Reactive applications and container-based applications orchestrated with Kubernetes. When you're done, you'll have a complete understanding of how to use Spring Security to protect your Java enterprise applications from common threats and attacks.
Key features:
- The principles of secure by design
- The architecture of Spring Security
- Spring Security contracts for password encoding, cryptography, and authentication
- Applying Spring Security to different architecture styles
For experienced Java developers with knowledge of other Spring tools.
About the technology: your applications, along with the data they manage, are among your organization's most valuable assets. No company wants its applications easily cracked by malicious attackers or left vulnerable by avoidable errors. The specialized Spring Security framework reduces the time and manpower required to create reliable authorization, authentication, and other security features for your Java enterprise software. Thanks to Spring Security, you can easily bake security into your applications, from design right through to implementation.
Laurentiu Spilca is a dedicated development lead and trainer at Endava, where he leads the development of a project in the financial market of European Nordic countries. He has over ten years' experience as a Java developer and technology teacher.
At the heart of the topology of global optimization lies Morse Theory: the study of the behaviour of lower level sets of functions as the level varies. Roughly speaking, the topology of lower level sets may only change when passing a level which corresponds to a stationary point (or Karush-Kuhn-Tucker point). We study elements of Morse Theory, both in the unconstrained and constrained case. Special attention is paid to the degree of differentiability of the functions under consideration. The reader will become motivated to discuss the possible shapes and forms of functions that may arise within a given problem framework. In a separate chapter we show how certain ideas may be carried over to nonsmooth items, such as problems of Chebyshev approximation type. We made this choice in order to show that a good understanding of regular smooth problems may lead to a straightforward treatment of "just" continuous problems by means of suitable perturbation techniques, taking a priori nonsmoothness into account. Moreover, we make a focal point analysis in order to emphasize the difference between inner product norms and, for example, the maximum norm. Then, specific tools from algebraic topology, in particular homology theory, are treated in some detail. However, this development is carried out only as far as it is needed to understand the relation between critical points of a function on a manifold with structured boundary. Then, we pay attention to three important subjects in nonlinear optimization.
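To make the opening claim concrete: the objects Morse Theory tracks are the lower level sets, and the basic principle is that their topology is constant between critical values. A standard statement (offered as background, not quoted from the book):

```latex
% Lower level set of f : M \to \mathbb{R} at level a:
\[
  L_a \;=\; \{\, x \in M \;:\; f(x) \le a \,\}.
\]
% Basic Morse-theoretic principle: if f has no stationary
% (Karush-Kuhn-Tucker) point with value in [a,b], then L_a and L_b
% are homotopy equivalent; the topology of the lower level sets can
% change only when the level passes a stationary value.
```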
Web services and Service-Oriented Computing (SOC) have become thriving areas of academic research, joint university/industry research projects, and novel IT products on the market. SOC is the computing paradigm that uses Web services as building blocks for the engineering of composite, distributed applications out of the reusable application logic encapsulated by Web services. Web services could be considered the best-known and most standardized technology in use today for distributed computing over the Internet. This book is the second installment of a two-book collection covering the state-of-the-art of both theoretical and practical aspects of Web services and SOC research and deployments. Advanced Web Services specifically focuses on advanced topics of Web services and SOC and covers topics including Web services transactions, security and trust, Web service management, real-world case studies, and novel perspectives and future directions. The editors present foundational topics in the first book of the collection, Web Services Foundations (Springer, 2013). Together, both books comprise approximately 1400 pages and are the result of an enormous community effort that involved more than 100 authors, comprising the world's leading experts in this field.
The technique of randomization has been employed to solve numerous problems of computing, both sequentially and in parallel. Examples abound of randomized algorithms that are asymptotically better than their deterministic counterparts in solving various fundamental problems. Randomized algorithms have the advantages of simplicity and better performance, both in theory and in practice. This book is a collection of articles written by renowned experts in the area of randomized parallel computing. A brief introduction to randomized algorithms: in the analysis of algorithms, at least three different measures of performance can be used: the best case, the worst case, and the average case. Often, the average case run time of an algorithm is much smaller than the worst case. For instance, the worst case run time of Hoare's quicksort is O(n²), whereas its average case run time is only O(n log n). The average case analysis is conducted with an assumption on the input space. The assumption made to arrive at the O(n log n) average run time for quicksort is that each input permutation is equally likely. Clearly, any average case analysis is only as good as the validity of the assumption made on the input space. Randomized algorithms achieve superior performance without making any assumptions on the inputs, by making coin flips within the algorithm. Any analysis of a randomized algorithm is therefore valid for all possible inputs.
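As a minimal sketch of the coin-flipping idea described above: randomized quicksort chooses its pivot at random inside the algorithm, so the expected O(n log n) bound holds for every input, with no assumption on the input distribution. This is an illustration of the general technique, not code from the book.

```python
import random

def randomized_quicksort(a):
    """Sort a list; expected O(n log n) comparisons for *every* input,
    because the pivot is chosen by a coin flip rather than by position."""
    if len(a) <= 1:
        return a
    pivot = random.choice(a)          # the internal randomization
    less    = [x for x in a if x < pivot]
    equal   = [x for x in a if x == pivot]
    greater = [x for x in a if x > pivot]
    return randomized_quicksort(less) + equal + randomized_quicksort(greater)

print(randomized_quicksort([3, 1, 4, 1, 5, 9, 2, 6]))  # [1, 1, 2, 3, 4, 5, 6, 9]
```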
A resource like no other: the first comprehensive guide to phase unwrapping. Phase unwrapping is a mathematical problem-solving technique increasingly used in synthetic aperture radar (SAR) interferometry, optical interferometry, adaptive optics, and medical imaging. In Two-Dimensional Phase Unwrapping, two internationally recognized experts sort through the multitude of ideas and algorithms cluttering current research, explain clearly how to solve phase unwrapping problems, and provide practicable algorithms that can be applied to problems encountered in diverse disciplines. The book is complete with case studies and examples as well as hundreds of images and figures illustrating the concepts.
Two-Dimensional Phase Unwrapping skillfully integrates concepts, algorithms, software, and examples into a powerful benchmark against which new ideas and algorithms for phase unwrapping can be tested. This unique introduction to a dynamic, rapidly evolving field is essential for professionals and graduate students in SAR interferometry, optical interferometry, adaptive optics, and magnetic resonance imaging (MRI).
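The book treats the genuinely hard two-dimensional problem; purely as an illustration of the underlying idea, the classical one-dimensional unwrapping rule (undo a 2π wraparound wherever consecutive wrapped samples jump by more than π) can be sketched in a few lines and checked against NumPy's np.unwrap. This is a toy sketch, not an algorithm from the book.

```python
import numpy as np

# True phase ramp, then wrap it into (-pi, pi] as an interferometer would.
true_phase = np.linspace(0.0, 6.0 * np.pi, 50)
wrapped = np.angle(np.exp(1j * true_phase))

def unwrap_1d(p):
    """Classical 1D unwrapping: whenever consecutive wrapped samples
    jump by more than pi, assume a 2*pi wraparound and undo it."""
    out = np.array(p, dtype=float)
    correction = 0.0
    for i in range(1, len(out)):
        d = p[i] - p[i - 1]
        if d > np.pi:
            correction -= 2.0 * np.pi
        elif d < -np.pi:
            correction += 2.0 * np.pi
        out[i] = p[i] + correction
    return out

assert np.allclose(unwrap_1d(wrapped), np.unwrap(wrapped))
assert np.allclose(unwrap_1d(wrapped), true_phase)
```

In two dimensions the same rule becomes path dependent in the presence of noise and residues, which is precisely why the algorithms collected in this book are needed.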
This book shows C# developers how to use C# 2008 and ADO.NET 3.5 to develop database applications the way the best professionals do. After an introductory section, section 2 shows how to use data sources and datasets for Rapid Application Development and prototyping of Windows Forms applications. Section 3 shows how to build professional 3-layer applications that consist of presentation, business, and database classes. Section 4 shows how to use the new LINQ feature to work with data structures like datasets, SQL Server databases, and XML documents. And section 5 shows how to build database applications by using the new Entity Framework to map business objects to database objects. To ensure mastery, this book presents 23 complete database applications that demonstrate best programming practices. And it's all done in the distinctive Murach style that has been training professional developers for 35 years.
In January 1992, the Sixth Workshop on Optimization and Numerical Analysis was held in the heart of the Mixteco-Zapoteca region, in the city of Oaxaca, Mexico, a beautiful and culturally rich site in ancient, colonial and modern Mexican civilization. The Workshop was organized by the Numerical Analysis Department at the Institute of Research in Applied Mathematics of the National University of Mexico, in collaboration with the Mathematical Sciences Department at Rice University, as were the previous ones in 1978, 1979, 1981, 1984 and 1989. As were the third, fourth, and fifth workshops, this one was supported by a grant from the Mexican National Council for Science and Technology and the US National Science Foundation, as part of the joint Scientific and Technical Cooperation Program existing between these two countries. The participation of many of the leading figures in the field resulted in a good representation of the state of the art in Continuous Optimization, and in an overview of several topics including Numerical Methods for Diffusion-Advection PDE problems as well as some Numerical Linear Algebraic Methods to solve related problems. This book collects some of the papers given at this Workshop.
Three powerful technologies are combined in a single book: Remoting, Reflection, and Threading. When these technologies come together, readers are faced with a powerful range of tools that allows them to run code faster, more securely, and more flexibly, so they'll be able to code applications across the spectrum--from a single machine to an entire network.
Content-based multimedia retrieval is a challenging research field with many unsolved problems. This monograph details concepts and algorithms for robust and efficient information retrieval of two different types of multimedia data: waveform-based music data and human motion data. It first examines several approaches in music information retrieval, in particular general strategies as well as efficient algorithms. The book then introduces a general and unified framework for motion analysis, retrieval, and classification, highlighting the design of suitable features, the notion of similarity used to compare data streams, and data organization.
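One standard way to make the "notion of similarity used to compare data streams" concrete for motion data is dynamic time warping (DTW), which tolerates local speed differences between two performances of the same movement. The following sketch is illustrative only and is not claimed to be the specific framework developed in the monograph.

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic time warping distance between two 1D feature streams:
    the minimum total cost of aligning them when one stream may be
    locally faster or slower than the other."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                 cost[i, j - 1],      # deletion
                                 cost[i - 1, j - 1])  # match
    return cost[n, m]

# The same gesture performed at two speeds still scores as similar.
slow = np.sin(np.linspace(0, 2 * np.pi, 80))
fast = np.sin(np.linspace(0, 2 * np.pi, 40))
print(dtw_distance(slow, fast))   # small, despite the different lengths
```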
It was in the middle of the 1980s when the seminal paper by Karmarkar opened a new epoch in nonlinear optimization. The importance of this paper, containing a new polynomial-time algorithm for linear optimization problems, was not only in its complexity bound. At that time, the most surprising feature of this algorithm was that the theoretical prediction of its high efficiency was supported by excellent computational results. This unusual fact dramatically changed the style and directions of research in nonlinear optimization. Thereafter it became more and more common that new methods were provided with a complexity analysis, which was considered a better justification of their efficiency than computational experiments. In a new rapidly developing field, which got the name "polynomial-time interior-point methods", such a justification was obligatory. After almost fifteen years of intensive research, the main results of this development started to appear in monographs [12, 14, 16, 17, 18, 19]. At approximately that time the author was asked to prepare a new course on nonlinear optimization for graduate students. The idea was to create a course which would reflect the new developments in the field. Actually, this was a major challenge. At the time, only the theory of interior-point methods for linear optimization was polished enough to be explained to students. The general theory of self-concordant functions had appeared in print only once, in the form of the research monograph [12].
Learn AngularJS, JavaScript and jQuery Web Application Development. In just a short time, you can learn the basics of the JavaScript language, jQuery library, and AngularJS framework, and find out how to use them to build well-designed, reusable components for web applications. Sams Teach Yourself AngularJS, JavaScript, and jQuery All in One assumes absolutely no previous knowledge of JavaScript or jQuery. The authors begin by helping students gain the relevant JavaScript skills they need, introducing JavaScript in a way specifically designed for modern AngularJS web development. Each short, easy lesson builds on all that's come before, teaching new concepts and techniques from the ground up, through practical examples and hands-on problem solving. As you complete the lessons in this book, you'll gain a practical understanding of how to provide rich user interactions in your web pages, adding dynamic code that allows web pages to instantly react to mouse clicks and finger swipes, and interact with back-end services to store and retrieve data from the web server.
Learn how to:
- Create powerful, highly interactive single-page web applications
- Leverage AngularJS's innovative MVC approach to web development
- Use JavaScript in modern frameworks
- Implement JavaScript, jQuery, and AngularJS together in web pages
- Dynamically modify page elements in the browser
- Use browser events to interact with the user directly
- Implement client-side services that interact with web servers
- Integrate rich user interface components, including zoomable images and expandable lists
- Enhance user experience by creating AngularJS templates with built-in directives
- Bind user interface elements and events to the data model to add flexibility and support more robust interactivity
- Define custom AngularJS directives to extend HTML's capabilities
- Build dynamic browser views to provide richer user interaction
- Create custom services you can integrate into many AngularJS applications
- Develop a well-structured code base that's easy to reuse and maintain
Contents at a Glance
Part I: An Introduction to AngularJS, jQuery, and JavaScript Development
1 Introduction to Dynamic Web Programming
2 Debugging JavaScript in Web Pages
3 Understanding Dynamic Web Page Anatomy
4 Adding CSS/CSS3 Styles to Allow Dynamic Design and Layout
5 Jumping into jQuery and JavaScript Syntax
6 Understanding and Using JavaScript Objects
Part II: Implementing jQuery and JavaScript in Web Pages
7 Accessing DOM Elements Using JavaScript and jQuery Objects
8 Navigating and Manipulating jQuery Objects an
Looking to become more efficient using Unity? How to Cheat in Unity 5 takes a no-nonsense approach to help you achieve fast and effective results with Unity 5. Geared towards the intermediate user, HTC in Unity 5 provides content beyond what an introductory book offers, and allows you to work more quickly and powerfully in Unity. Packed full with easy-to-follow methods to get the most from Unity, this book explores time-saving features for interface customization and scene management, along with productivity-enhancing ways to work with rendering and optimization. In addition, this book features a companion website at www.alanthorn.net, where you can download the book's companion files and also watch bonus tutorial video content.
- Learn bite-sized tips and tricks for effective Unity workflows
- Become a more powerful Unity user through interface customization
- Enhance your productivity with rendering tricks, better scene organization and more
- Better understand Unity asset and import workflows
- Learn techniques to save you time and money during development
Ontology Learning for the Semantic Web explores techniques for applying knowledge discovery to different web data sources (such as HTML documents, dictionaries, etc.) in order to support the task of engineering and maintaining ontologies. The approach of ontology learning proposed in Ontology Learning for the Semantic Web includes a number of complementary disciplines that feed in different types of unstructured and semi-structured data. This data is necessary in order to support a semi-automatic ontology engineering process.
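As a toy illustration of how unstructured web data can feed a semi-automatic ontology engineering process, the sketch below harvests frequent terms from an HTML page as candidate concepts for a human curator. Everything here is hypothetical and stands in for what would, in practice, be a full NLP pipeline; it is not the book's toolchain.

```python
import re
from collections import Counter
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Strip tags from an HTML document, keeping only visible text."""
    def __init__(self):
        super().__init__()
        self.chunks = []
    def handle_data(self, data):
        self.chunks.append(data)

def candidate_terms(html, top_n=10):
    """Very rough concept-candidate extraction: frequent lowercase words.
    A real system would add phrase chunking, lemmatization, etc."""
    parser = TextExtractor()
    parser.feed(html)
    words = re.findall(r"[a-z]{4,}", " ".join(parser.chunks).lower())
    return Counter(words).most_common(top_n)

html = "<html><body><p>Hotels offer rooms; a hotel room has a price.</p></body></html>"
print(candidate_terms(html))  # frequent terms as raw ontology candidates
```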
In recent years, digital technologies have become more ubiquitous and integrated into everyday life. While once reserved mostly for personal uses, video games and similar innovations are now implemented across a variety of fields. Transforming Gaming and Computer Simulation Technologies across Industries is a pivotal reference source for the latest research on emerging simulation technologies and gaming innovations to enhance industry performance and dependability. Featuring extensive coverage across a range of relevant perspectives and topics, such as user research, player identification, and multi-user virtual environments, this book is ideally designed for engineers, professionals, practitioners, upper-level students, and academics seeking current research on gaming and computer simulation technologies across different industries. Topics covered include: digital vs. non-digital platforms, ludic simulations, mathematical simulations, medical gaming, multi-user virtual environments, player experiences, player identification, and user research.
Created by the Joint Photographic Experts Group (JPEG), the JPEG standard is the first color still image data compression international standard. This new guide to JPEG and its technologies offers detailed information on the new JPEG signaling conventions and the structure of JPEG compressed data.
This book describes recent innovations in 3D media and technologies, with coverage of 3D media capturing, processing, encoding, and adaptation, networking aspects for 3D Media, and quality of user experience (QoE). The contributions are based on the results of the FP7 European Project ROMEO, which focuses on new methods for the compression and delivery of 3D multi-view video and spatial audio, as well as the optimization of networking and compression jointly across the future Internet. The delivery of 3D media to individual users remains a highly challenging problem due to the large amount of data involved, diverse network characteristics and user terminal requirements, as well as the user's context such as their preferences and location. As the number of visual views increases, current systems will struggle to meet the demanding requirements in terms of delivery of consistent video quality to fixed and mobile users. ROMEO will present hybrid networking solutions that combine the DVB-T2 and DVB-NGH broadcast access network technologies together with a QoE aware Peer-to-Peer (P2P) distribution system that operates over wired and wireless links. Live streaming 3D media needs to be received by collaborating users at the same time or with imperceptible delay to enable them to watch together while exchanging comments as if they were all in the same location. This book is the last of a series of three annual volumes devoted to the latest results of the FP7 European Project ROMEO. The present volume provides state-of-the-art information on 3D multi-view video, spatial audio networking protocols for 3D media, P2P 3D media streaming, and 3D Media delivery across heterogeneous wireless networks among other topics. Graduate students and professionals in electrical engineering and computer science with an interest in 3D Future Internet Media will find this volume to be essential reading.