Get more out of your legacy systems: more performance, functionality, reliability, and manageability. Is your code easy to change? Can you get nearly instantaneous feedback when you do change it? Do you understand it? If the answer to any of these questions is no, you have legacy code, and it is draining time and money away from your development efforts. In this book, Michael Feathers offers start-to-finish strategies for working more effectively with large, untested legacy code bases. This book draws on material Michael created for his renowned Object Mentor seminars: techniques Michael has used in mentoring to help hundreds of developers, technical managers, and testers bring their legacy systems under control. The topics covered include:
- Understanding the mechanics of software change: adding features, fixing bugs, improving design, optimizing performance
- Getting legacy code into a test harness
- Writing tests that protect you against introducing new problems
- Techniques that can be used with any language or platform, with examples in Java, C++, C, and C#
- Accurately identifying where code changes need to be made
- Coping with legacy systems that aren't object-oriented
- Handling applications that don't seem to have any structure
This book also includes a catalog of twenty-four dependency-breaking techniques that help you work with program elements in isolation and make safer changes.
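As a rough illustration of what such a dependency-breaking move looks like, here is a minimal sketch in Python (not taken from the book, whose examples are in Java, C++, C, and C#; the class names are invented for illustration): a hard-wired collaborator becomes an injected one, so a test harness can substitute a fake.

```python
# Hypothetical example: a report sender hard-wired to a real mail server is
# hard to test. Turning the collaborator into a constructor parameter (a
# generic dependency-breaking move, shown here in Python) creates a seam
# that a test can exploit.

class SmtpMailer:
    def send(self, to, body):
        raise RuntimeError("would talk to a real SMTP server")

class ReportSender:
    def __init__(self, mailer=None):
        # Seam: the collaborator is now injectable instead of being
        # constructed internally, so tests can pass in a stand-in.
        self.mailer = mailer or SmtpMailer()

    def send_report(self, to, figures):
        body = "Total: %d" % sum(figures)
        self.mailer.send(to, body)

class FakeMailer:
    def __init__(self):
        self.sent = []
    def send(self, to, body):
        self.sent.append((to, body))

# Characterization-style test: pin down behaviour without touching SMTP.
fake = FakeMailer()
ReportSender(mailer=fake).send_report("ops@example.com", [1, 2, 3])
assert fake.sent == [("ops@example.com", "Total: 6")]
```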
Content-based multimedia retrieval is a challenging research field with many unsolved problems. This monograph details concepts and algorithms for robust and efficient information retrieval of two different types of multimedia data: waveform-based music data and human motion data. It first examines several approaches in music information retrieval, in particular general strategies as well as efficient algorithms. The book then introduces a general and unified framework for motion analysis, retrieval, and classification, highlighting the design of suitable features, the notion of similarity used to compare data streams, and data organization.
Rule-Based Programming is a broad presentation of the rule-based programming method with many example programs showing the strengths of the rule-based approach. The rule-based approach has been used extensively in the development of artificial intelligence systems, such as expert systems and machine learning. This rule-based programming technique has been applied in such diverse fields as medical diagnostic systems, insurance and banking systems, as well as automated design and configuration systems. Rule-based programming is also helpful in bridging the semantic gap between an application and a program, allowing domain specialists to understand programs and participate more closely in their development. Over sixty programs are presented and all programs are available from an FTP site. Many of these programs are presented in several versions, allowing the reader to see how realistic programs are elaborated from 'back of envelope' models. Metaprogramming is also presented as a technique for bridging the 'semantic gap'. Rule-Based Programming will be of interest to programmers, systems analysts and other developers of expert systems as well as to researchers and practitioners in artificial intelligence, computer science professionals and educators.
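To give a flavour of the style, here is a minimal forward-chaining sketch in Python (an illustration over invented facts and rules, not one of the book's sixty-plus programs): rules are condition/action pairs over a working memory of facts, and the interpreter keeps firing applicable rules until nothing new can be derived.

```python
# Working memory: a set of (attribute, value) facts. (Assumed toy domain.)
facts = {("temperature", "high"), ("pressure", "rising")}

# Each rule: (name, set of facts that must hold, fact to assert).
rules = [
    ("overheat", {("temperature", "high"), ("pressure", "rising")}, ("alarm", "on")),
    ("shutdown", {("alarm", "on")},                                 ("valve", "closed")),
]

def forward_chain(facts, rules):
    changed = True
    while changed:
        changed = False
        for name, conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)          # fire the rule
                print("fired:", name, "->", conclusion)
                changed = True
    return facts

forward_chain(facts, rules)
# fired: overheat -> ('alarm', 'on')
# fired: shutdown -> ('valve', 'closed')
```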
This book contains selected papers from the International Conference on Extreme Learning Machine (ELM) 2017, held in Yantai, China, October 4-7, 2017. The book covers theories, algorithms and applications of ELM. Extreme Learning Machines (ELM) aim to enable pervasive learning and pervasive intelligence. As advocated by ELM theories, it is exciting to see the convergence of machine learning and biological learning from the long-term point of view. ELM may be one of the fundamental 'learning particles' filling the gaps between machine learning and biological learning (for which even the activation functions are unknown). ELM represents a suite of (machine and biological) learning techniques in which hidden neurons need not be tuned: they are inherited from their ancestors or randomly generated. ELM learning theories show that effective learning algorithms can be derived from randomly generated hidden neurons (biological neurons, artificial neurons, wavelets, Fourier series, etc.) as long as they are nonlinear piecewise continuous, independent of training data and application environments. Increasingly, evidence from neuroscience suggests that similar principles apply in biological learning systems. ELM theories and algorithms argue that "random hidden neurons" capture an essential aspect of biological learning mechanisms, as well as the intuitive sense that the efficiency of biological learning need not rely on the computing power of neurons. ELM theories thus hint at possible reasons why the brain is more intelligent and effective than current computers. The conference provided a forum for academics, researchers and engineers to share and exchange R&D experience on both theoretical studies and practical applications of the ELM technique and brain learning, and this volume gives readers a glimpse of the most recent advances in ELM.
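The core ELM recipe can be shown in a few lines. The sketch below is a generic illustration (not code from the proceedings): hidden-layer weights and biases are drawn at random and left untuned, and only the linear output weights are fitted by a least-squares solve.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data (assumed for illustration): y = sin(x) plus noise.
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(200)

n_hidden = 50
W = rng.standard_normal((X.shape[1], n_hidden))   # random input weights, never tuned
b = rng.standard_normal(n_hidden)                 # random biases, never tuned

H = np.tanh(X @ W + b)                            # nonlinear random feature map
beta, *_ = np.linalg.lstsq(H, y, rcond=None)      # only these output weights are learned

# Predict on new inputs using the same frozen random hidden layer.
X_new = np.linspace(-3, 3, 5).reshape(-1, 1)
y_hat = np.tanh(X_new @ W + b) @ beta
print(y_hat)
```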
The technique of randomization has been employed to solve numerous problems of computing, both sequentially and in parallel. Examples of randomized algorithms that are asymptotically better than their deterministic counterparts in solving various fundamental problems abound. Randomized algorithms have the advantages of simplicity and better performance, both in theory and often in practice. This book is a collection of articles written by renowned experts in the area of randomized parallel computing. A brief introduction to randomized algorithms: in the analysis of algorithms, at least three different measures of performance can be used: the best case, the worst case, and the average case. Often, the average case run time of an algorithm is much smaller than the worst case. For instance, the worst case run time of Hoare's quicksort is O(n^2), whereas its average case run time is only O(n log n). The average case analysis is conducted with an assumption on the input space. The assumption made to arrive at the O(n log n) average run time for quicksort is that each input permutation is equally likely. Clearly, any average case analysis is only as good as the validity of the assumption made on the input space. Randomized algorithms achieve superior performance without making any assumptions on the inputs, by making coin flips within the algorithm. Any analysis of a randomized algorithm is then valid for all possible inputs.
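A randomized quicksort makes the point concrete: the coin flips live inside the algorithm, so the expected O(n log n) bound holds for every input rather than under an assumption about the input distribution. The sketch below is illustrative and not taken from the book.

```python
import random

def randomized_quicksort(a):
    """Quicksort with a uniformly random pivot: the algorithm's own randomness."""
    if len(a) <= 1:
        return a
    pivot = random.choice(a)
    less    = [x for x in a if x < pivot]
    equal   = [x for x in a if x == pivot]
    greater = [x for x in a if x > pivot]
    return randomized_quicksort(less) + equal + randomized_quicksort(greater)

print(randomized_quicksort([5, 3, 9, 1, 9, 2]))   # [1, 2, 3, 5, 9, 9]
```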
With this book, Onn Shehory and Arnon Sturm, together with further contributors, introduce the reader to various facets of agent-oriented software engineering (AOSE). They provide a selected collection of state-of-the-art findings, which combines research from information systems, artificial intelligence, distributed systems and software engineering and covers essential development aspects of agent-based systems. The book chapters are organized into five parts. The first part introduces the AOSE domain in general, including an introduction to agents and the peculiarities of software engineering for developing multi-agent systems (MAS). The second part describes general aspects of AOSE, like architectural models, design patterns and communication. Next, part three discusses AOSE methodologies and associated research directions and elaborates on Prometheus, O-MaSE and INGENIAS. Part four then addresses agent-oriented programming languages. Finally, the fifth part presents studies related to the implementation of agents and multi-agent systems. The book not only provides a comprehensive review of design approaches for specifying agent-based systems, but also covers implementation aspects such as communication, standards, and tools and environments for developing agent-based systems. It is thus of interest to researchers, practitioners and students who are interested in exploring the agent paradigm for developing software systems.
In January 1992, the Sixth Workshop on Optimization and Numerical Analysis was held in the heart of the Mixteco-Zapoteca region, in the city of Oaxaca, Mexico, a beautiful and culturally rich site in ancient, colonial and modern Mexican civilization. The Workshop was organized by the Numerical Analysis Department at the Institute of Research in Applied Mathematics of the National University of Mexico, in collaboration with the Mathematical Sciences Department at Rice University, as were the previous ones in 1978, 1979, 1981, 1984 and 1989. Like the third, fourth, and fifth workshops, this one was supported by a grant from the Mexican National Council for Science and Technology and the US National Science Foundation, as part of the joint Scientific and Technical Cooperation Program existing between these two countries. The participation of many of the leading figures in the field resulted in a good representation of the state of the art in Continuous Optimization, and in an overview of several topics including Numerical Methods for Diffusion-Advection PDE problems as well as some Numerical Linear Algebraic Methods to solve related problems. This book collects some of the papers given at this Workshop.
by Maq Mannan, President and CEO, DSM Technologies; Chairman of the IEEE 1364 Verilog Standards Group; Past Chairman of Open Verilog International. One of the major strengths of the Verilog language is the Programming Language Interface (PLI), which allows users and Verilog application developers to infinitely extend the capabilities of the Verilog language and the Verilog simulator. In fact, the overwhelming success of the Verilog language can be partly attributed to the existence of its PLI. Using the PLI, add-on products, such as graphical waveform displays or pre- and post-simulation analysis tools, can be easily developed. These products can then be used with any Verilog simulator that supports the Verilog PLI. This ability to create third-party add-on products for Verilog simulators has created new markets and provided the Verilog user base with multiple sources of software tools. Hardware design engineers can, and should, use the Verilog PLI to customize their Verilog simulation environment. A company that designs graphics chips, for example, may wish to see the simulation results of a new design in some custom graphical display. The Verilog PLI makes it possible, and even trivial, to integrate custom software, such as a graphical display program, into a Verilog simulator. The simulation results can then be displayed dynamically in the custom format during simulation. And, if the company uses Verilog simulators from multiple simulator vendors, this integrated graphical display will work with all the simulators.
Despite its increasing importance, the verification and validation of the human-machine interface is perhaps the most overlooked aspect of system development. Although much has been written about the design and development process, very little organized information is available on how to verify and validate highly complex and highly coupled dynamic systems. Inability to evaluate such systems adequately may become the limiting factor in our ability to employ systems that our technology and knowledge allow us to design. This volume, based on a NATO Advanced Science Institute held in 1992, is designed to provide guidance for the verification and validation of all highly complex and coupled systems. Air traffic control is used as an example to ensure that the theory is described in terms that will allow its implementation, but the results can be applied to all complex and coupled systems. The volume presents the knowledge and theory in a format that will allow readers from a wide variety of backgrounds to apply it to the systems for which they are responsible. The emphasis is on domains where significant advances have been made in the methods of identifying potential problems and in new testing methods and tools. Also emphasized are techniques to identify the assumptions on which a system is built and to spot their weaknesses.
This book presents new concepts, techniques and promising programming models for designing software for chips with "many" (hundreds to thousands of) processor cores. Given the scale of parallelism inherent to these chips, software designers face new challenges in terms of operating systems, middleware and applications. The book will serve as an invaluable, single-source reference to the state of the art in programming many-core chips. Coverage includes many-core architectures, operating systems, middleware, and programming models.
This book focuses on flight vehicles and their navigational systems, discussing different forms of flight structures and their control systems, from fixed wings to rotary crafts. Software simulation enables testing of the hardware without actual implementation, and the flight simulators, mechanics, glider development and navigation systems presented here are suitable for lab-based experimentation studies. It explores laboratory testing of flight navigational sensors, such as the magnetic, acceleration and Global Positioning System (GPS) units, and illustrates the six-axis inertial measurement unit (IMU) instrumentation as well as its data acquisition methodology. The book offers an introduction to the various unmanned aerial vehicle (UAV) systems and their accessories, including the linear quadratic regulator (LQR) method for controlling the rotorcraft. It also describes a Matrix Laboratory (MATLAB) control algorithm that simulates and runs the lab-based 3 degrees of freedom (DOF) helicopter, as well as LabVIEW software used to validate controller design and data acquisition. Lastly, the book explores future developments in aviation techniques.
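The LQR design step mentioned above amounts to solving an algebraic Riccati equation for a state-feedback gain. The sketch below is a generic illustration in Python with an assumed toy model (the book itself works with MATLAB and LabVIEW), using SciPy's continuous-time Riccati solver.

```python
# Minimal LQR sketch: given linearized dynamics x' = A x + B u, solve the
# continuous-time algebraic Riccati equation and form the gain K for u = -K x.
import numpy as np
from scipy.linalg import solve_continuous_are

# Toy double-integrator standing in for one axis of a rotorcraft (assumed values):
# state = [angle, angular rate], input = motor torque.
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])
Q = np.diag([10.0, 1.0])    # penalize angle error more than rate
R = np.array([[0.1]])       # penalize control effort

P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)   # K = R^{-1} B^T P
print("LQR gain K:", K)
```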
Rather than deciding whether or not to get involved in global sourcing, many companies are facing decisions about whether or not to apply agile methods in their distributed projects. These companies are often motivated by the opportunities to solve the coordination and communication difficulties associated with global software development. Yet while agile principles prescribe close interaction and co-location, the very nature of distributed software development does not support these prerequisites. Smite, Moe, and Agerfalk structured the book into five parts. In "Motivation" the editors introduce the fundamentals of agile distributed software development and explain the rationale behind the application of agile practices in globally distributed software projects. "Transition" describes implementation strategies, adoption of particular agile practices for distributed projects, and general concepts of agility. "Management" details practical implications for project planning, time management, and customer and subcontractor interaction. "Teams" discusses agile distributed team configuration, effective communication and knowledge transfer, and allocation of roles and responsibilities. Finally, in the "Epilogue" the editors summarize all contributions and present future trends for research and practice in agile distributed development. This book is primarily targeted at researchers, lecturers, and students in empirical software engineering, and at practitioners involved in globally distributed software projects. The contributions are based on sound empirical research and identify gaps and commonalities in both the existing state of the art and state of the practice. In addition, they also offer practical advice through many hints, checklists, and experience reports. Questions answered in this book include: What should companies expect from merging agile and distributed strategies? What are the stumbling blocks that prevent companies from realizing the benefits of the agile approach in distributed environments, and how can we recognize infeasible strategies and unfavorable circumstances? What helps managers cope with the challenges of implementing agile approaches in distributed software development projects? How can distributed teams survive the decisions taken by management and become efficient through the application of agile approaches?
Algorithmic Learning in a Random World describes recent theoretical and experimental developments in building computable approximations to Kolmogorov's algorithmic notion of randomness. Based on these approximations, a new set of machine learning algorithms has been developed that can be used to make predictions and to estimate their confidence and credibility in high-dimensional spaces under the usual assumption that the data are independent and identically distributed (the assumption of randomness). Another aim of this unique monograph is to outline some limits of prediction: the approach based on the algorithmic theory of randomness allows impossibility of prediction to be proved in certain situations. The book describes how several important machine learning problems, such as density estimation in high-dimensional spaces, cannot be solved if the only assumption is randomness.
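The book's central construction, a conformal predictor, can be sketched briefly: for each candidate label, a p-value is computed from nonconformity scores, and the prediction set keeps every label whose p-value exceeds the significance level, so under the i.i.d. (exchangeability) assumption the true label is excluded with probability at most epsilon. The code below is a generic illustration with an assumed nearest-neighbour nonconformity measure and synthetic data, not the authors' implementation.

```python
import numpy as np

def nonconformity(x, y, X, Y):
    """Distance from x to the nearest other example carrying label y."""
    same = X[Y == y]
    d = np.linalg.norm(same - x, axis=1)
    d = d[d > 0]                      # ignore the point's zero distance to itself
    return d.min() if d.size else np.inf

def conformal_set(X, Y, x_new, labels, epsilon=0.1):
    prediction = []
    for y in labels:
        Xa = np.vstack([X, x_new])    # tentatively add the new example with label y
        Ya = np.append(Y, y)
        scores = np.array([nonconformity(Xa[i], Ya[i], Xa, Ya) for i in range(len(Ya))])
        p = np.mean(scores >= scores[-1])   # fraction at least as "strange" as the new one
        if p > epsilon:
            prediction.append(y)
    return prediction

# Tiny synthetic two-class example (assumed data, for illustration only).
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (20, 2)), rng.normal(4, 1, (20, 2))])
Y = np.array([0] * 20 + [1] * 20)
print(conformal_set(X, Y, np.array([3.8, 4.2]), labels=[0, 1]))
```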
A collection of surveys and research papers on mathematical software and algorithms. The common thread is that the field of mathematical applications lies on the border between algebra and geometry. Topics include polyhedral geometry, elimination theory, algebraic surfaces, Gröbner bases, triangulations of point sets, and their mutual relationships. This diversity is accompanied by an abundance of available software systems, which often handle only special mathematical aspects. This is why the volume also focuses on solutions for the integration of mathematical software systems. These include low-level and XML-based high-level communication channels as well as general frameworks for modular systems.
Multi-Threaded Object-Oriented MPI-Based Message Passing Interface: The ARCH Library presents ARCH, a library built as an extension to MPI. ARCH relies on a small set of programming abstractions that allow the writing of well-structured multi-threaded parallel codes according to the object-oriented programming style. ARCH has been written in C++. The book describes the built-in classes, and illustrates their use through several template application cases in several fields of interest: Distributed Algorithms (global completion detection, distributed process serialization), Parallel Combinatorial Optimization (A* procedure), Parallel Image-Processing (segmentation by region growing). It shows how new application-level distributed data types - such as a distributed tree and a distributed graph - can be derived from the built-in classes. A feature of interest to readers is that both the library and the application codes used for illustration purposes are available via the Internet. The material can be downloaded for installation and personal parallel code development on the reader's computer system. ARCH can be run on Unix/Linux as well as Windows NT-based platforms. Current installations include the IBM-SP2, the CRAY-T3E, the Intel Paragon, and PC networks under Linux or Windows NT. Multi-Threaded Object-Oriented MPI-Based Message Passing Interface: The ARCH Library is aimed at scientists who need to implement parallel/distributed algorithms requiring complicated local and/or distributed control structures. It can also benefit parallel/distributed program developers who wish to write codes in the object-oriented style. The author has been using ARCH for several years as a medium to teach parallel and network programming. Teachers can employ the library for the same purpose while students can use it for training. Although ARCH has been used so far in an academic environment, it will be an effective tool for professionals as well. Multi-Threaded Object-Oriented MPI-Based Message Passing Interface: The ARCH Library is suitable as a secondary text for a graduate-level course on Data Communications and Networks, Programming Languages, Algorithms and Computational Theory, and Distributed Computing, and as a reference for researchers and practitioners in industry.
This book provides an overview of the theory and application of linear and nonlinear mixed-effects models in the analysis of grouped data, such as longitudinal data, repeated measures, and multilevel data. Over 170 figures are included in the book.
Advances in Design and Specification Languages for Embedded Systems is the latest contribution to the Chip Design Languages series and consists of selected papers presented at the Forum on Specifications and Design Languages (FDL'06), which took place in September 2006 at Technische Universität Darmstadt, Germany. FDL, an ECSI conference, is the premier European forum to present research results, to exchange experiences, and to learn about new trends in the application of specification and design languages, as well as of associated design and modelling methods and tools, for integrated circuits, embedded systems, and heterogeneous systems. Modelling and specification concepts push the development of new methodologies for design and verification to the system level; they thus provide the means for a model-driven design of complex information processing systems in a variety of application domains.
This unique text/reference describes an exciting and novel approach to supercomputing in the DataFlow paradigm. The major advantages and applications of this approach are clearly described, and a detailed explanation of the programming model is provided using simple yet effective examples. The work is developed from a series of lecture courses taught by the authors in more than 40 universities across more than 20 countries, and from research carried out by Maxeler Technologies, Inc. Topics and features: presents a thorough introduction to DataFlow supercomputing for big data problems; reviews the latest research on the DataFlow architecture and its applications; introduces a new method for the rapid handling of real-world challenges involving large datasets; provides a case study on the use of the new approach to accelerate the Cooley-Tukey algorithm on a DataFlow machine; includes a step-by-step guide to the web-based integrated development environment WebIDE.
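For reference, the Cooley-Tukey algorithm named in the case study is the standard recursive radix-2 FFT sketched below in plain Python; this is illustrative only and says nothing about the DataFlow implementation itself, which maps the computation onto streaming dataflow hardware.

```python
import cmath

def fft(x):
    """Recursive radix-2 Cooley-Tukey FFT; len(x) must be a power of two."""
    n = len(x)
    if n == 1:
        return x
    even = fft(x[0::2])                           # FFT of even-indexed samples
    odd = fft(x[1::2])                            # FFT of odd-indexed samples
    twiddled = [cmath.exp(-2j * cmath.pi * k / n) * odd[k] for k in range(n // 2)]
    return [even[k] + twiddled[k] for k in range(n // 2)] + \
           [even[k] - twiddled[k] for k in range(n // 2)]

print(fft([1, 2, 3, 4, 0, 0, 0, 0]))
```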
Proceedings of the Centre for Software Reliability Conference entitled Software Certification, held at the Penta Hotel, Gatwick, UK, 13-16 September 1988
"Software Project Secrets: Why Software Projects Fail" offers a new path to success in the software industry. This book reaches out to managers, developers, and customers who use industry-standard methodologies, but whose projects still struggle to succeed. Author -->George Stepanek--> analyzes the project management methodology itself, a critical factor that has thus far been overlooked. He explains why it creates problems for software development projects and begins by describing 12 ways in which software projects are different from other kinds of projects. He also analyzes the project management body of knowledge to discover 10 hidden assumptions that are invalid in the context of software projects.-->Table of Contents-->IntroductionWhy Software Is DifferentProject Management AssumptionsCase Study: The Billing System ProjectThe New Agile MethodologiesBudgeting Agile ProjectsCase Study: The Billing System RevisitedAfterword
Videogames and Agency explores the trend in videogames and their marketing to offer a player higher volumes, or even more distinct kinds, of player freedom. The book offers a new conceptual framework that helps us understand how this freedom to act is discussed by designers, and how that in turn reflects in their design principles. What can we learn from existing theories around agency? How do paratextual materials reflect design intention with regards to what the player can and cannot do in a videogame? How does game design shape the possibility space for player action? Through these questions and selected case studies that include AAA and independent games alike, the book presents a unique approach to studying agency that combines game design, game studies, and game developer discourse. By doing so, the book examines what discourses around player action, as well as a game's design can reveal about the nature of agency and videogame aesthetics. This book will appeal to readers specifically interested in videogames, such as game studies scholars or game designers, but also to media studies students and media and screen studies scholars less familiar with digital games.
Learn how applying risk management to each stage of the software engineering model can help the entire development process run on time and on budget. This practical guide identifies the potential threats associated with software development, explains how to establish an effective risk management program, and details the six critical steps involved in applying the process. It also explores the pros and cons of software and organizational maturity, discusses various software metrics approaches you can use to measure software quality, and highlights procedures for implementing a successful metrics program.
It was in the middle of the 1980s, when the seminal paper by Karmarkar opened a new epoch in nonlinear optimization. The importance of this paper, containing a new polynomial-time algorithm for linear optimization problems, was not only in its complexity bound. At that time, the most surprising feature of this algorithm was that the theoretical prediction of its high efficiency was supported by excellent computational results. This unusual fact dramatically changed the style and directions of the research in nonlinear optimization. Thereafter it became more and more common that the new methods were provided with a complexity analysis, which was considered a better justification of their efficiency than computational experiments. In a new rapidly developing field, which got the name "polynomial-time interior-point methods", such a justification was obligatory. After almost fifteen years of intensive research, the main results of this development started to appear in monographs [12, 14, 16, 17, 18, 19]. Approximately at that time the author was asked to prepare a new course on nonlinear optimization for graduate students. The idea was to create a course which would reflect the new developments in the field. Actually, this was a major challenge. At the time only the theory of interior-point methods for linear optimization was polished enough to be explained to students. The general theory of self-concordant functions had appeared in print only once in the form of research monograph [12].
The art, craft, discipline, logic, practice, and science of developing large-scale software products needs a believable, professional base. The textbooks in this three-volume set combine informal, engineeringly sound practice with the rigour of formal, mathematics-based approaches. Volume 1 covers the basic principles and techniques of formal methods: abstraction and modelling. First this book provides a sound, but simple basis of insight into discrete mathematics: numbers, sets, Cartesians, types, functions, the Lambda Calculus, algebras, and mathematical logic. Then it trains its readers in basic property- and model-oriented specification principles and techniques. The model-oriented concepts that are common to such specification languages as B, VDM-SL, and Z are explained here using the RAISE specification language (RSL). This book then covers the basic principles of applicative (functional), imperative, and concurrent (parallel) specification programming. Finally, the volume contains a comprehensive glossary of software engineering, and extensive indexes and references. These volumes are suitable for self-study by practicing software engineers and for use in university undergraduate and graduate courses on software engineering. Lecturers will be supported with a comprehensive guide to designing modules based on the textbooks, with solutions to many of the exercises presented, and with a complete set of lecture slides.