Agile methods are a collection of different techniques and practices that share the same values and basic principles. "Agile Software Development Quality Assurance" provides in-depth coverage of the most important concepts, issues, trends, and technologies in agile software development. This Premier Reference Source presents the research and instruction used to develop and implement software quickly, in small iteration cycles, and in close cooperation with the customer in an adaptive way. It is a comprehensive guide that helps researchers and practitioners in the agile software development process avoid risks and project failures that are frequently encountered in traditional software projects.
Over 12 years ago, Logica started the development of TestFrame(r), a test method that enables organizations to develop and execute their tests in a structured way. Since then many new techniques have emerged, most recently "Service Oriented Architectures (SOAs)" and "Software as a Service (SaaS)," requiring updates to test procedures and processes that seemed well established. These trends prompted Logica to update and renew the TestFrame(r) method. Chris Schotanus's new book takes these recent developments into account, and his presentation focuses on supporting daily test practice. Every step within this structured test method is dealt with exhaustively, providing the reader with the necessary details for successful software testing. The book will not only help test personnel improve their effectiveness; its strong focus on reuse will also improve efficiency. This makes TestFrame the practical guide to testing information systems for everyone involved in software testing: test developers, test managers, and staff charged with quality assurance.
Aimed at 2nd and 3rd year/MSc courses, Model Driven Software Development using UML and Java introduces MDD, MDA and UML, and shows how UML can be used to specify, design, verify and implement software systems using an MDA approach. Structured to follow two lecture courses, one intermediate (UML, MDA, specification, design, model transformations) and one advanced (software engineering of web applications and enterprise information systems), difficult concepts are illustrated with numerous examples, and exercises with worked solutions are provided throughout.
Refinement is one of the cornerstones of a formal approach to software engineering. Refinement is all about turning an abstract description (of a software or hardware system) into something closer to implementation. It provides that essential bridge between higher-level requirements and an implementation of those requirements. This book provides a comprehensive introduction to refinement for the researcher or graduate student. It introduces refinement in different semantic models, and shows how refinement is defined and used within some of the major formal methods and languages in use today. It (1) introduces the reader to different ways of looking at refinement, relating refinement to observations; (2) shows how these are realised in different semantic models; (3) shows how different formal methods use different models of refinement; and (4) shows how these models of refinement are related.
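The core idea of refinement described above can be made concrete with a small sketch (illustrative only, not taken from the book): an abstract specification permits a set of acceptable observations, and an implementation refines it when every observation it can produce is one the specification allows. All names here are hypothetical.

```python
# Illustrative sketch of refinement: an implementation refines a
# specification when its observable outcomes are a subset of those
# the specification permits.

def spec_choose(n):
    """Abstract spec: any divisor of n greater than 1 is acceptable."""
    return {d for d in range(2, n + 1) if n % d == 0}

def impl_choose(n):
    """Concrete implementation: deterministically pick the smallest such divisor."""
    for d in range(2, n + 1):
        if n % d == 0:
            return d
    raise ValueError("no divisor > 1")

def refines(n):
    """Refinement check: every observation of the implementation is allowed by the spec."""
    return impl_choose(n) in spec_choose(n)

# The deterministic implementation resolves the spec's nondeterminism
# without ever producing an outcome the spec forbids.
assert all(refines(n) for n in range(2, 100))
```

The implementation is "more concrete" precisely because it removes the specification's nondeterminism while staying within the allowed behaviours.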
The constantly evolving technological infrastructure of the modern world presents a great challenge of developing software systems with increasing size, complexity, and functionality. The software engineering field has seen changes and innovations to meet these and other continuously growing challenges by developing and implementing useful software engineering methodologies. Among the more recent advances are those made in the context of software portability, formal verification techniques, software measurement, and software reuse. However, despite the introduction of some important and useful paradigms in the software engineering discipline, their technological transfer on a larger scale has been extremely gradual and limited. For example, many software development organizations may not have a well-defined software assurance team, which can be considered a key ingredient in the development of a high-quality and dependable software product. Recently, the software engineering field has observed an increased integration or fusion with the computational intelligence (CI) field, which comprises primarily the mature technologies of fuzzy logic, neural networks, genetic algorithms, genetic programming, and rough sets. Hybrid systems that combine two or more of these individual technologies are also categorized under the CI umbrella. Software engineering is unlike the other well-founded engineering disciplines, primarily due to its human factor (designers, developers, testers, etc.). The highly non-mechanical and intuitive nature of the human factor characterizes many of the problems associated with software engineering, including those observed in development effort estimation, software quality and reliability prediction, software design, and software testing.
Sophisticated development organizations worldwide are discovering the advantages of software architectures in building systems that deliver higher quality, lower development and maintenance costs, and shorter time to market. In this book, one of the field's leading experts addresses the two most important factors in making software architectures work: effective design, and leveraging architectures across product lines. KEY TOPICS: Jan Bosch begins by outlining the rationale for software architectures and reviewing the limits of traditional approaches to software reuse. Next, Bosch introduces a comprehensive approach to software architecture design that includes explicit quality goals, is carefully optimized up front, and still accounts for the inevitability of change. In Part II, Bosch presents today's best practices for defining architectures that can be reused across entire "lines" or "families" of software. Bosch covers each phase of the software product line lifecycle, including development, usage, and evolution of software assets, showing how to manage interdependencies and cope with new requirements that were not part of the original design. The book includes several running case studies from real companies that have achieved competitive advantage through software architecture. MARKET: For all software architects; IT managers responsible for development projects; designers; and developers.
Requirements engineering is a field of knowledge concerned with the systematic process of eliciting, analyzing and modeling requirements. Though it is usually understood in relation to software system requirements, most of its principles and some of its techniques can be adapted to other problems dealing with complex sets of requirements. The engineering vision indicates that this should be a practical and well-defined process where trade-offs have to be considered to obtain the best results. Mature software development needs mature requirements engineering. This was true ten years ago when requirements engineering became an important component of the software development process. It remains true today, when the pressure to deliver code on time and on budget is increasing and the demand for higher-quality software also increases. Each chapter addresses a specific problem where the authors summarize their experiences and results to produce well-fit and traceable requirements. Chapters highlight familiar issues with recent results and experiences, accompanied by chapters describing well-tuned new methods for specific domains. The book is designed for a professional audience, composed of researchers and practitioners in industry. It is also suitable as a secondary text for graduate-level students in computer science and engineering.
This is the only book that demonstrates how to develop a business rules engine. Covers user requirements, data modeling, metadata, and more.
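To illustrate the blurb's topic, here is a minimal forward-chaining rules-engine sketch in Python. It is a generic illustration, not code from the book; the `Rule` and `run_rules` names and the discount rules are invented for this example.

```python
# A minimal forward-chaining business rules engine: rules are
# (condition, action) pairs evaluated against a dictionary of facts,
# applied repeatedly until no further rule fires.

from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    name: str
    condition: Callable[[dict], bool]   # when this returns True, the rule fires
    action: Callable[[dict], None]      # mutates the fact base

def run_rules(rules, facts, max_passes=10):
    """Apply each rule at most once, looping until a fixed point is reached."""
    fired = set()
    for _ in range(max_passes):
        progress = False
        for rule in rules:
            if rule.name not in fired and rule.condition(facts):
                rule.action(facts)
                fired.add(rule.name)
                progress = True
        if not progress:
            break
    return facts

# Hypothetical rules: large orders get a discount, discounted orders ship free.
rules = [
    Rule("vip_discount", lambda f: f["total"] > 1000, lambda f: f.update(discount=0.10)),
    Rule("free_shipping", lambda f: f.get("discount", 0) > 0, lambda f: f.update(shipping=0)),
]

order = run_rules(rules, {"total": 1500})
assert order == {"total": 1500, "discount": 0.10, "shipping": 0}
```

Note how the second rule fires only because the first one added a fact, which is the forward-chaining behaviour a rules engine provides over a plain `if` ladder.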
This state-of-the-art book aims to address problems and solutions in implementing complex and high quality systems past the year 2000. In particular, it focuses on the development of languages, methods and tools and their further evaluation. Among the issues discussed are the following: evolution of software systems; specific application domains; supporting portability and reusability of software components; the development of networking software; and software architectures for various application domains. This book comprises the proceedings of the International Conference on Systems Implementation 2000: Languages, Methods and Tools, sponsored by the International Federation for Information Processing (IFIP) and was held in Germany, in February 1998. It will be particularly relevant to researchers in the field of software engineering and to software developers working in larger companies.
The object oriented paradigm has become one of the dominant forces in the computing world. According to a recent survey, by the year 2000, more than 80% of development organizations are expected to use object technology as the basis for their distributed development strategies.
This book focuses on metamodelling as a discipline, exploring its foundations, techniques and results. It presents a comprehensive metamodel that covers process, product and quality issues under a common framework. Issues covered include: an explanation of what metamodelling is and why it is necessary in the context of software engineering; basic concepts and principles of traditional metamodelling, and some existing results of this approach; problems associated with traditional approaches to metamodelling, alongside an exploration of possible solutions and alternative approaches; and advanced topics such as the extension of the object-oriented paradigm for metamodelling purposes or the foundations of powertype-based tool development. Finally, a comprehensive case study is introduced and developed, showing how to use many of the concepts explained in the previous chapters. This book provides a comprehensive conceptual framework for metamodelling and includes case studies and exercises which demonstrate practical uses of metamodelling. For lecturers and educators, the book provides a layered repository of contents, starting from the basics of metamodelling in the first chapters, through specific issues such as trans-layer control or non-strict approaches, up to advanced topics such as universal powertyping or extensions to the object-oriented paradigm. The book also serves as an in-depth reference guide to features and technologies to consider when developing in-house software development methods or customising and adopting off-the-shelf ones. Software tool developers and vendors can benefit from the book by finding in it a comprehensive guide to the implementation of frameworks and toolsets for computer-aided software modelling and development.
"Handbook of Open Source Tools" introduces a comprehensive collection of advanced open source tools useful in developing software applications. The book contains information on more than 200 open-source tools, including software construction utilities for compilers, virtual machines, databases, graphics, high-performance computing, OpenGL, geometry, algebra, graph theory, GUIs and more. Special highlights for software construction utilities and application libraries are included. Each tool is covered in the context of a real-life application development setting. This unique handbook presents a comprehensive discussion of advanced tools, a valuable asset used by most application developers and programmers; includes a special focus on mathematical open source software not available in most Open Source Software books; and introduces several tools (e.g., ACL2, CLIPS, CUDA, and COIN) which are not known outside of select groups, but are very powerful. "Handbook of Open Source Tools" is designed for application developers and programmers working with Open Source Tools. Advanced-level students concentrating on Engineering, Mathematics and Computer Science will find this reference a valuable asset as well.
A Paradigm for Decentralized Process Modeling presents a novel approach to decentralized process modeling that combines both trends and suggests a paradigm for decentralized PCEs, supporting concerted efforts among geographically-dispersed teams - each local individual or team with its own autonomous process - with emphasis on flexible control over the degree of collaboration versus autonomy provided. A key guideline in this approach is to supply abstraction mechanisms whereby pre-existing processes (or workflows) can be encapsulated and retain security of their internal artifacts and status data, while agreeing with other processes on formal interfaces through which all their interactions are conducted on intentionally shared information. This book is primarily intended to provide an in-depth discussion of decentralized process modeling and enactment technology, covering both high-level concepts and a full-blown realization of these concepts in a concrete system. Either the whole book or selected chapters could be used in a graduate course on software engineering, software process, or software development environments, or even for a course on workflow systems outside computer science (e.g., in a classical engineering department for engineering design, or in a business school for business practices or enterprise-wide management, or in the medical informatics department of a health science institution concerned with computer-assistance for managed care). Selected portions of the book, such as section 2.2 on Marvel, could also be employed as a case study in advanced undergraduate software engineering courses. 
A Paradigm for Decentralized Process Modeling is a valuable resource for both researchers and practitioners, particularly in software engineering, software development environments, and software process and workflow management, but also in electrical, mechanical, civil and other areas of engineering which have analogous needs for design processes, environmental support and concurrent engineering, and beyond to private and public sector workflow management and control, groupware support, and heterogeneous distributed systems in general.
A survey of the state of the art of deterministic resource-constrained project scheduling with time windows. General temporal constraints and several different types of limited resources are considered. A large variety of time-based, financial, and resource-based objectives - important in practice - are studied. A thorough structural analysis of the feasible region of project scheduling problems and a classification and detailed investigation of objective functions are performed, which can be exploited for developing efficient exact and heuristic solution methods. New interesting applications of project scheduling to production and operations management as well as investment projects are discussed in the second edition.
At first glance the concepts of time and of Petri nets are quite contrary: while time determines the occurrences of events in a system, classic Petri nets consider their causal relationships and represent events as concurrent systems. But if we take a closer look at how time and causality are intertwined, we realize that there are many possible ways in which time and Petri nets interact. This book takes a closer look at three time-dependent Petri nets: Time Petri nets, Timed Petri nets, and Petri nets with time windows. The author first explains classic Petri nets and their fundamental properties. The pivotal contribution of the book is then the introduction of different algorithms that allow us to analyze time-dependent Petri nets. For Time Petri nets, the author presents an algorithm that proves the behavioral equivalence of a net where time is modeled once with real and once with natural numbers, so we can reduce the state space and consider the integer states exclusively. For Timed Petri nets, the author introduces two time-dependent state equations, providing a sufficient condition for the non-reachability of states, and she also defines a local transformation for converting these nets into Time Petri nets. Finally, she shows that Petri nets with time windows can realize every transition sequence fired in the net when time restrictions are omitted. These classes of time-dependent Petri nets show that time alone does not change the power of a Petri net; rather, time may or may not be used to force firing. Time Petri nets and Timed Petri nets are Turing-powerful, and thus more powerful than classic Petri nets, because there is a compulsion to fire at some point in time. By contrast, Petri nets with time windows have no compulsion to fire, so their expressive power is less than that of Turing machines. This book derives from advanced lectures, and the text is supported throughout with examples and exercises. It is suitable for graduate courses in computer science, mathematics, engineering, and related disciplines, and as a reference for researchers.
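The classic (untimed) firing rule that these time-dependent variants build on can be sketched as follows. This is a generic illustration, not code from the book; transitions are represented simply as (consumed, produced) multisets of places.

```python
# A minimal classic Petri net: a marking is a multiset of tokens on
# places, and a transition fires by consuming input tokens and
# producing output tokens.

from collections import Counter

def enabled(marking, transition):
    """A transition is enabled when every input place holds enough tokens."""
    consumed, _ = transition
    return all(marking[p] >= n for p, n in consumed.items())

def fire(marking, transition):
    """Firing consumes the input tokens and produces the output tokens."""
    consumed, produced = transition
    new = Counter(marking)
    new.subtract(consumed)
    new.update(produced)
    return new

# Producer/consumer net: t1 moves a token from "free" to "buffer",
# t2 moves it from "buffer" to "done".
t1 = (Counter(free=1), Counter(buffer=1))
t2 = (Counter(buffer=1), Counter(done=1))
m0 = Counter(free=2)

m1 = fire(m0, t1)
assert enabled(m0, t1) and not enabled(m0, t2)
assert m1["buffer"] == 1 and enabled(m1, t2)
```

In a classic net, any enabled transition *may* fire at any moment; the Time and Timed variants discussed above attach clocks or intervals to this rule and thereby introduce a compulsion to fire.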
This book presents, for researchers, the challenges arising from the introduction of the third dimension in chips, and covers the whole architectural design approach for 3D-SoCs. Nowadays 3D integration technologies, 3D design techniques, and 3D architectures are emerging as interesting, truly hot, broad topics. The present book gathers recent advances across the whole domain, contributed by renowned experts in the field, to build a comprehensive and consistent treatment of the hot topics of three-dimensional architectures and micro-architectures. It includes contributions from leading international teams working in this field.
With the rapid growth of networking and high computing power, the demand for large-scale and complex software systems has increased dramatically. Many software systems support or supplant human control of safety-critical systems such as flight control systems, space shuttle control systems, aircraft avionics control systems, robotics, patient monitoring systems, nuclear power plant control systems, and so on. Failure of safety-critical systems could result in great disasters and loss of human life. Therefore, software used for safety-critical systems should preserve high assurance properties. In order to comply with high assurance properties, a safety-critical system often shares resources between multiple concurrently active computing agents and must meet rigid real-time constraints. However, concurrency and timing constraints make the development of a safety-critical system much more error-prone and arduous. The correctness of software systems nowadays depends mainly on the work of testing and debugging. Testing and debugging involve the process of detecting, locating, analyzing, isolating, and correcting suspected faults using the runtime information of a system. However, testing and debugging are not sufficient to prove the correctness of a safety-critical system. In contrast, static analysis is supported by formalisms to specify the system precisely. Formal verification methods are then applied to prove the logical correctness of the system with respect to the specification. Formal verification gives us greater confidence that safety-critical systems meet the desired assurance properties in order to avoid disastrous consequences.
Business practices are rapidly changing due to technological advances in the workplace. Organizations are challenged to implement new programs for more efficient business while maintaining their standards of excellence and achievement. Achieving Enterprise Agility through Innovative Software Development brings together the necessary methodologies and resources for organizations to understand the challenges and discover the solutions that will enhance their businesses. Including chapters on recent advances in software engineering, this publication will be an essential reference source for researchers, practitioners, students, and professionals in the areas of agile software methodologies, lean development, knowledge engineering, artificial intelligence, cloud computing, software project management, and component-based software engineering.
Software compiles, executes and runs, but often fails or gives inaccurate results because it is not tested thoroughly prior to its release. This overview of software testing and quality assurance provides key concepts, case studies, and numerous techniques to ensure software is reliable and secure. Using a "self-teaching" format, the book covers important topics such as black-, white-, and gray-box testing, video game testing, test management, automation, levels of testing, and quality assurance standards and procedures. It includes end-of-chapter multiple-choice questions and answers to reinforce mastery of the topics.
Provides systematic solutions ranging from formal test theory to automated test description methods and the automated construction of simulation test environments, and verifies the effectiveness of these theories, technologies and methods.
Extensive research conducted by the Hasso Plattner Design Thinking Research Program at Stanford University in Palo Alto, California, USA, and the Hasso Plattner Institute in Potsdam, Germany, has yielded valuable insights on why and how design thinking works. The participating researchers have identified metrics, developed models, and conducted studies, which are featured in this book, and in the previous volumes of this series. This volume provides readers with tools to bridge the gap between research and practice in design thinking with varied real world examples. Several different approaches to design thinking are presented in this volume. Acquired frameworks are leveraged to understand design thinking team dynamics. The contributing authors lead the reader through new approaches and application fields and show that design thinking can tap the potential of digital technologies in a human-centered way. It also presents new ideas in neurodesign from Stanford University and the Hasso Plattner Institute in Potsdam, inviting the reader to consider newly developed methods and how these insights can be applied to different domains. Design thinking can be learned. It has a methodology that can be observed across multiple settings and accordingly, the reader can adopt new frameworks to modify and update existing practice. The research outcomes compiled in this book are intended to inform and provide inspiration for all those seeking to drive innovation - be they experienced design thinkers or newcomers.
This is a how-to book for solving geometric problems robustly, or error free, in actual practice. The contents and accompanying source code are based on feature requests and feedback received from industry professionals and academics who want both descriptions and source code for implementations of geometric algorithms. The book provides a framework for geometric computing using several arithmetic systems and describes how to select the appropriate system for the problem at hand. Key features: a framework of arithmetic systems that can be applied to many geometric algorithms to obtain robust or error-free implementations; detailed derivations for algorithms that lead to implementable code; guidance on using the book's concepts to derive algorithms in the reader's field of application; and the Geometric Tools Library, a repository of well-tested code at the Geometric Tools website, https://www.geometrictools.com, that implements the book's concepts.
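As a small illustration of choosing an arithmetic system for robustness (a generic sketch, not the book's own library code): the 2D orientation predicate can be evaluated in floating point, which may misclassify nearly collinear inputs, or lifted to exact rational arithmetic with Python's `fractions`.

```python
# The 2D orientation predicate, evaluated once in the input's native
# arithmetic and once exactly via rational numbers.

from fractions import Fraction

def orient2d(a, b, c):
    """Sign of the cross product (b-a) x (c-a): >0 left turn, <0 right turn, 0 collinear."""
    return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

def orient2d_exact(a, b, c):
    """Same predicate, but with coordinates lifted to exact rationals first.

    Floating-point evaluation can return the wrong sign for nearly
    collinear points due to rounding; exact rational arithmetic cannot.
    """
    fa, fb, fc = ([Fraction(x) for x in p] for p in (a, b, c))
    return orient2d(fa, fb, fc)

a, b = (0.0, 0.0), (1.0, 1.0)
c = (0.5, 0.5)  # exactly representable, and exactly on the segment
assert orient2d_exact(a, b, c) == 0
```

Selecting between such arithmetic systems per problem, trading speed for guaranteed correctness, is the kind of decision the book's framework addresses.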
Due to increasing practical needs, software support of environmental protection and research tasks is growing in importance and scope. Software systems help to monitor basic data, to maintain and process relevant environmental information, to analyze gathered information and to carry out decision processes, which often have to take into account complex alternatives with various side effects. Therefore software is an important tool for the environmental domain. When the first software systems in the environmental domain grew, 10 to 15 years ago, users and developers were not really aware of the complexity these systems carry with them: complexity with respect to entities, tasks and procedures. I guess nobody may have figured out at that time that the environmental domain would ask for solutions which information science would not be able to provide and, in several cases, cannot provide until today. Therefore environmental informatics, as we call it today, is also an important domain of computer science itself, because practical solutions need to deal with very complex, interdisciplinary, distributed, integrated, sometimes badly defined, user-centered decision processes. I doubt somebody will state that we are already capable of building such integrated systems for end users at reasonable cost on a broad range. The development of the first scientific community for environmental informatics started around 1985 in Germany, becoming a technical committee and working group of the German Computer Society in 1987.
The 6th meeting sponsored by IFIP Working Group 7.5, on reliability and optimization of structural systems, took place in September 1994 in Assisi, Italy. This book contains the papers presented at the working conference, including topics such as reliability of special structures, fatigue, failure modes and time-variant systems reliability.