This book provides a coherent methodology for Model-Driven Requirements Engineering which stresses the systematic treatment of requirements within the realm of modelling and model transformations. The underlying basic assumption is that detailed requirements models are used as first-class artefacts playing a direct role in constructing software. To this end, the book presents the Requirements Specification Language (RSL) that allows precision and formality, which eventually permits automation of the process of turning requirements into a working system by applying model transformations and code generation to RSL. The book is structured in eight chapters. The first two chapters present the main concepts and give an introduction to requirements modelling in RSL. The next two chapters concentrate on presenting RSL in a formal way, suitable for automated processing. Subsequently, chapters 5 and 6 concentrate on model transformations with the emphasis on those involving RSL and UML. Finally, chapters 7 and 8 provide a summary in the form of a systematic methodology with a comprehensive case study. Presenting technical details of requirements modelling and model transformations for requirements, this book is of interest to researchers, graduate students and advanced practitioners from industry. While researchers will benefit from the latest results and possible research directions in MDRE, students and practitioners can exploit the presented information and practical techniques in several areas, including requirements engineering, architectural design, software language construction and model transformation. Together with a tool suite available online, the book supplies the reader with what it promises: the means to get from requirements to code "in a snap".
This book provides a comprehensive overview of digital signal processing for a multi-disciplinary audience. It posits that although the theory of digital signal processing stems from electrical, electronics, communication, and control engineering, the topic is also useful in other disciplines such as chemical, mechanical, and civil engineering, computer science, and management. The book is written so that it suits this wide-ranging audience: readers should be able to get a grasp of the field, understand the concepts easily, and apply them as needed in their own areas. It covers sampling and reconstruction of signals; infinite impulse response filters; finite impulse response filters; multirate signal processing; statistical signal processing; and applications in multidisciplinary domains. The book takes a functional approach, and all techniques are illustrated using Matlab.
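To give a rough flavour of the finite impulse response material listed above, the sketch below applies a simple moving-average FIR filter to a noisy sine wave. It is a minimal illustration in Python/NumPy rather than the Matlab used in the book, and the sampling rate, tone frequency, and tap count are arbitrary choices for the example, not taken from the text.

# Illustrative only: a simple finite impulse response (FIR) moving-average
# filter applied to a noisy sine wave (Python/NumPy; parameters arbitrary).
import numpy as np

fs = 1000                      # assumed sampling rate in Hz
t = np.arange(0, 1, 1 / fs)    # one second of samples
x = np.sin(2 * np.pi * 5 * t) + 0.3 * np.random.randn(t.size)  # 5 Hz tone plus noise

taps = np.ones(21) / 21        # 21-tap moving-average FIR filter (equal coefficients)
y = np.convolve(x, taps, mode="same")  # convolve the input with the impulse response

print("input std: %.3f, filtered std: %.3f" % (x.std(), y.std()))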
Computer interfaces and documentation are notoriously difficult for any user, regardless of his or her level of experience. Advances in technology are not making applications more friendly. Introducing concepts from linguistics and language teaching, Language and Communication proposes a new approach to computer interface design. The book explains for the first time why the much-hyped user-friendly interface is treated with such derision by the user community. The author argues that software and hardware designers should consider such fundamental language concepts as meaning, context, function, variety, and equivalence. She goes on to show how imagining an interface as a new language can be an invaluable design exercise, calling into question deeply held beliefs and assumptions about what users will or will not understand. Written for a wide range of computer scientists and professionals, and presuming no prior knowledge of language-related terminology, this volume is a key step in the ongoing information revolution.
Component-based software development regards software construction in terms of conventional engineering disciplines, where the assembly of systems from readily available prefabricated parts is the norm. Because both component-based systems themselves and the stakeholders in component-based development projects differ from those of traditional software systems, component-based testing also needs to deviate from traditional software testing approaches. Gross first describes the specific challenges of component-based testing, such as the lack of internal knowledge of a component or the use of a component in diverse contexts. He argues that only built-in contract testing, a test organization for component-based applications founded on building test artifacts directly into components, can prevent catastrophic failures like the one that caused the now-famous Ariane 5 crash in 1996. Since building testing into components has implications for component development, built-in contract testing is integrated with, and made to complement, a model-driven development method. Here UML models are used to derive the testing architecture for an application, the testing interfaces and the component testers. The method also provides a process and guidelines for modeling and developing these artifacts. This book is the first comprehensive treatment of the intricacies of testing component-based software systems. With its strong modeling background, it appeals to researchers and graduate students specializing in component-based software engineering. Professionals architecting and developing component-based systems will profit from the UML-based methodology and the implementation hints based on the XUnit and JUnit frameworks.
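The built-in contract testing idea described above can be pictured with a small sketch: a client component carries its own tester, which it runs against whatever server component it is wired to at configuration time. The following is a hypothetical Python analogue of the XUnit/JUnit-style approach the blurb mentions; the component and method names are invented for illustration and are not taken from the book.

# Hypothetical sketch of built-in contract testing: the client component ships
# with a tester that it can run against any server it is configured with.
# Names (OrderClient, PaymentServer, ...) are invented for illustration.

class PaymentServer:
    """A server component offering the contract the client relies on."""
    def authorize(self, amount):
        if amount <= 0:
            raise ValueError("amount must be positive")
        return {"status": "authorized", "amount": amount}

class OrderClient:
    """A client component with a built-in contract tester for its server."""
    def __init__(self, server):
        self.server = server

    def built_in_contract_test(self):
        """Check, at configuration time, that the server honours the expected contract."""
        ok = self.server.authorize(10.0)["status"] == "authorized"
        try:
            self.server.authorize(-1)        # contract: negative amounts are rejected
            rejects_negative = False
        except ValueError:
            rejects_negative = True
        return ok and rejects_negative

    def place_order(self, amount):
        return self.server.authorize(amount)

if __name__ == "__main__":
    client = OrderClient(PaymentServer())
    assert client.built_in_contract_test(), "server violates the expected contract"
    print(client.place_order(25.0))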
Broadly-scoped requirements such as security, privacy, and response time are a major source of complexity in modern software systems. This is due to their tangled inter-relationships with, and effects on, other requirements. Aspect-Oriented Requirements Engineering (AORE) aims to facilitate modularisation of such broadly-scoped requirements, so that software developers are able to reason about them in isolation - one at a time. AORE also captures these inter-relationships and effects in well-defined composition specifications, and, in so doing, exposes the causes of potential conflicts and trade-offs, as well as the roots of key early architectural decisions. Over the last decade, significant work has been carried out in the field of AORE. With this book, the editors aim to provide a consolidated overview of these efforts and results. The individual contributions discuss how aspects can be identified, represented, composed and reasoned about, as well as how they are used in specific domains and in industry. Thus, the book does not present one particular AORE approach, but conveys a broad understanding of the aspect-oriented perspective on requirements engineering. The chapters are organized into five sections: concern identification in requirements, concern modelling and composition, domain-specific use of AORE, aspect interactions, and AORE in industry. This book provides readers with the most comprehensive coverage of AORE and the capabilities it offers to those grappling with the complexity arising from broadly-scoped requirements - a phenomenon that is, without doubt, universal across software systems. Software engineers and related professionals in industry, as well as advanced undergraduate and post-graduate students and researchers, will benefit from these comprehensive descriptions and the industrial case studies.
The author's aim in this textbook is to provide students with a clear understanding of the relationship between the principles of object-oriented programming and software engineering. Professor Zeigler takes an approach based on state representation to formal specification. Consequently, this book is unique through its:
- emphasis on formulating primitives from which all other functionality can be built;
- integral use of a semi-formal behaviour specification language based on state transition concepts;
- differentiation between behaviour and implementation;
- a reusable heterogeneous container class library;
- ability to show the elegance and power of ensemble methods with non-trivial examples.
As a result, students studying software engineering will find this a distinctive and valuable approach to programming and systems engineering.
It is widely accepted today that nowhere is it more important to focus on the improvement of software quality than in the case of systems with requirements in the areas of safety and reliability - especially for distributed, real-time and embedded systems. Much research is therefore in progress in these fields, since software process improvement impinges directly on the achieved levels of quality, and many application experiments aim to show quantitative results demonstrating the efficacy of particular approaches. Requirements for safety and reliability - like other so-called non-functional requirements for computer-based systems - are often stated in imprecise and ambiguous terms, or not at all. Specifications focus on functional and technical aspects, with issues like safety covered only implicitly, or not addressed directly because they are felt to be obvious; unfortunately, what is obvious to an end user or system user is progressively less so to others, to the extent that a software developer may not even be aware that safety is an issue. There is therefore a growing case for encouraging a greater understanding of safety and reliability requirements issues, right across the spectrum from end user to software developer - not just in traditional safety-critical areas (e.g. nuclear, aerospace) but also in acknowledging the need for such things as heart pacemakers and other medical and robotic systems to be highly dependable.
In any software design project, the analysis stage - documenting and designing technical requirements for the needs of users - is vital to the success of the project. This book provides a thorough introduction to, and survey of, all aspects of analysis. This new edition adds several features, including: additional chapters on the System Development Life Cycle and on Data Element Naming Conventions and Standards; more coverage of converting logical models to physical models, generating DDL, and testing database functionality; an expanded database section covering concepts such as denormalization, security and change control; material on new design approaches and technologies, particularly in the area of web analysis and design; a revised Web/Commerce chapter, which addresses component middleware for complex systems design; and new case studies. It is a valuable resource and guide for all information systems students, practitioners and professionals who need an in-depth understanding of the principles of the analysis and design process.
In recent years, fractional-order systems have been studied by many researchers in the engineering field. It was found that many systems can be described more accurately by fractional differential equations than by integer-order models. Advanced Synchronization Control and Bifurcation of Chaotic Fractional-Order Systems is a scholarly publication that explores new developments related to novel chaotic fractional-order systems, control schemes, and their applications. Featuring coverage on a wide range of topics including chaos synchronization, nonlinear control, and cryptography, this publication is geared toward engineers, IT professionals, researchers, and upper-level graduate students seeking current research on chaotic fractional-order systems and their applications in engineering and computer science.
This thesis deals with the evaluation of surface water and groundwater quality changes during periods of water scarcity in river catchment areas. The work can be divided into six parts. Existing methods of drought assessment are discussed in the first part, followed by a brief description of the HydroOffice software package designed by the author. The software is dedicated to the analysis of hydrological data (separation of baseflow, estimation of hydrological drought parameters, recession curve analysis, time series analysis), and its capabilities are currently used by scientists from more than 30 countries around the world. The third section is devoted to a comprehensive regional assessment of hydrological drought on Slovak rivers, followed by an evaluation of the occurrence, course and character of drought in precipitation, discharges, baseflow, groundwater head and spring yields in the pilot area of the Nitra River basin. The fifth part focuses on the assessment of changes in surface water and groundwater quality during drought periods within the pilot area. Finally, the results are summarized and interpreted, and rounded off with an outlook on future research.
Modern methods of filter design and controller design often yield systems of very high order, posing a problem for their implementation. Over the past two decades or so, sophisticated methods have been developed to simplify such filters and controllers. These methods often come with easy-to-use error bounds, and in the case of controller simplification the error bounds are usually related to closed-loop properties. This book is the first comprehensive treatment of approximation methods for filters and controllers. It is fully up to date, and it is authored by two leading researchers who have personally contributed to the development of some of the methods. Balanced truncation, Hankel norm reduction, multiplicative reduction, weighted methods and coprime factorization methods are all discussed. The book is amply illustrated with examples, and will equip practising control engineers and graduates for intelligent use of commercial software modules for model and controller reduction.
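To give a flavour of what reduction methods such as balanced truncation compute, the sketch below derives the Hankel singular values of a small state-space model from its controllability and observability Gramians; states with small values are candidates for truncation, and twice the sum of the discarded values bounds the resulting approximation error. This is a minimal Python/SciPy illustration with an arbitrarily chosen example system, not an account of the book's algorithms.

# Minimal sketch: Hankel singular values of a stable state-space model,
# the quantities balanced truncation uses to decide which states to drop.
# The example system (A, B, C) is chosen arbitrarily for illustration.
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, eigvals

A = np.array([[-1.0, 0.5, 0.0],
              [ 0.0, -2.0, 1.0],
              [ 0.0, 0.0, -5.0]])
B = np.array([[1.0], [0.0], [1.0]])
C = np.array([[1.0, 1.0, 0.0]])

# Controllability Gramian Wc: A Wc + Wc A^T + B B^T = 0
Wc = solve_continuous_lyapunov(A, -B @ B.T)
# Observability Gramian Wo: A^T Wo + Wo A + C^T C = 0
Wo = solve_continuous_lyapunov(A.T, -C.T @ C)

# Hankel singular values: square roots of the eigenvalues of Wc @ Wo
hsv = np.sort(np.sqrt(np.real(eigvals(Wc @ Wo))))[::-1]
print("Hankel singular values:", hsv)
# Small trailing values indicate states a reduced-order model can discard.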
Genetic and evolutionary algorithms (GEAs) have often achieved enviable success in solving optimization problems in a wide range of disciplines. This book provides effective optimization algorithms for solving a broad class of problems quickly, accurately, and reliably by employing evolutionary mechanisms.
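As a flavour of the evolutionary mechanisms referred to above, the sketch below is a deliberately minimal genetic algorithm in Python that maximizes a toy fitness function using selection, crossover, and mutation. The population size, rates, and fitness function are arbitrary choices for illustration and are not taken from the book.

# Minimal genetic algorithm sketch: maximize f(x) = -(x - 3)^2 over real x,
# with each individual encoded directly as a float gene. Parameters arbitrary.
import random

def fitness(x):
    return -(x - 3.0) ** 2          # best possible value is 0, at x = 3

def evolve(pop_size=30, generations=50, mutation_rate=0.3):
    population = [random.uniform(-10, 10) for _ in range(pop_size)]
    for _ in range(generations):
        # selection: keep the fitter half of the population
        population.sort(key=fitness, reverse=True)
        parents = population[: pop_size // 2]
        # crossover + mutation: children average two parents, with occasional noise
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = random.sample(parents, 2)
            child = (a + b) / 2.0
            if random.random() < mutation_rate:
                child += random.gauss(0, 0.5)
            children.append(child)
        population = parents + children
    return max(population, key=fitness)

if __name__ == "__main__":
    best = evolve()
    print("best x ~= %.3f, fitness = %.4f" % (best, fitness(best)))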
"CASE Technology" presents a collection of papers pertaining to the automation of the software development process. The expectations for computer-aided software engineering (CASE) have been great, but the potential of CASE has not yet been fully realized. Now, with the availability of CASE tools and technologies, software automation is beginning to achieve success. This collection focuses on the integration of tools within a CASE environment.
Object-Oriented Behavioral Specifications encourages builders of complex information systems to accelerate their move from the approach of a craft to that of a scientific discipline in analysis. The focus is on understanding customers' needs and on precise specification of the understanding gained through analysis. Specifications must bridge any gaps in understanding about business rules among customers, Subject Matter Experts, and 'computer people'; must inform decisions about reuse of software and systems; and must enable review of semantics over time. Specifications need to describe semantics rather than syntax, and to do so in an abstract and precise manner, in order to create software systems that satisfy business rules. The papers in this book show various ways of designing elegant and clear specifications which are reusable, lead to savings of intellectual effort, time, and money, and contribute to the reliability of software and systems. Object-Oriented Behavioral Specifications offers a fresh treatment of the object-oriented paradigm by examining the limitations of traditional OO methodologies and by describing the significance of competing trends in OO modeling. The book builds on four years of successful OOPSLA workshops (1991-1995) on behavior semantics. It deals with precise specifications of 'what' is accomplished by the business and 'what' is to be done by a system, and it includes descriptions of the successful use of abstract and precise specification in industry. It draws on the experience of experts from industrial and academic settings and benefits from international participation. Collective behavior, neglected in some treatments of the OO paradigm, is addressed explicitly in this book. The book does not take 'reuse' of specifications or software for granted, but furnishes a foundation for taking as rigorous an approach to reuse decisions as to precise specifications in original developments.
In Part I, the impact of an integro-differential operator on parity logic engines (PLEs) as a tool for scientific modeling from scratch is presented. Part II outlines the fuzzy structural modeling approach for building new linear and nonlinear dynamical causal forecasting systems in terms of fuzzy cognitive maps (FCMs). Part III introduces the new type of autogenetic algorithms (AGAs) to the field of evolutionary computing. Altogether, these PLEs, FCMs, and AGAs may serve as conceptual and computational power tools.
There are increasing opportunities to consider the application of semantic technologies for business information systems. Semantic technologies are expected to improve business processes and information systems, and lead to savings in cost and time as well as improved efficiency. Semantic Technologies for Business and Information Systems Engineering: Concepts and Applications investigates the application of semantic technologies to business and information systems engineering. This reference work assists researchers in academia and industry, students, business process analysts, information management professionals, software engineers, and other practitioners in gaining knowledge on applying semantic technologies for advanced business information systems, in annotating semantics to business processes, and in semantically integrating advanced business information systems.
2012 Jolt Award finalist!
Pioneering the Future of Software Test
Do you need to get it right, too? Then, learn from Google. Legendary testing expert James Whittaker, until recently a Google testing leader, and two top Google experts reveal exactly how Google tests software, offering brand-new best practices you can use even if you're not quite Google's size...yet!
Breakthrough Techniques You Can Actually Use
Discover 100% practical, amazingly scalable techniques for analyzing risk and planning tests...thinking like real users...implementing exploratory, black box, white box, and acceptance testing...getting usable feedback...tracking issues...choosing and creating tools...testing "Docs & Mocks," interfaces, classes, modules, libraries, binaries, services, and infrastructure...reviewing code and refactoring...using test hooks, presubmit scripts, queues, continuous builds, and more. With these techniques, you can transform testing from a bottleneck into an accelerator, and make your whole organization more productive!
This book reviews the present understanding of the history of software and establishes an agenda for further research. By exploring this current understanding, the authors identify the fundamental elements of software. The problems and questions addressed in the book range from purely technical to societal issues. Thus, the articles presented offer a fresh view of this history with new categories and interrelated themes, comparing and contrasting software with artefacts in other disciplines, so as to ascertain in what ways software is similar to, and different from, other technologies. This volume is based on the international conference "Mapping the History of Computing: Software Issues", held in April 2000 at the Heinz Nixdorf Museums Forum in Paderborn, Germany.
As miniaturisation deepens, and nanotechnology and its machines become more prevalent in the real world, the need to consider using quantum mechanical concepts to perform various tasks in computation increases. Such tasks include: the teleporting of information, breaking heretofore "unbreakable" codes, communicating with messages that betray eavesdropping, and the generation of random numbers. This is the first book to apply quantum physics to the basic operations of a computer, representing the ideal vehicle for explaining the complexities of quantum mechanics to students, researchers and computer engineers alike, as they prepare to design and create the computing and information delivery systems for the future. Both authors have solid backgrounds in the subject matter at the theoretical and more practical level. While serving as a text for senior/grad level students in computer science/physics/engineering, this book has its primary use as an up-to-date reference work in the emerging interdisciplinary field of quantum computing - the only prerequisite being knowledge of calculus and familiarity with the concept of the Turing machine.
Looking back at the years that have passed since the realization of the very first electronic, multi-purpose computers, one observes a tremendous growth in hardware and software performance. Today, researchers and engineers have access to computing power and software that can solve numerical problems which are not fully understood in terms of existing mathematical theory. Thus, computational sciences must in many respects be viewed as experimental disciplines. As a consequence, there is a demand for high-quality, flexible software that allows, and even encourages, experimentation with alternative numerical strategies and mathematical models. Extensibility is then a key issue; the software must provide an efficient environment for incorporation of new methods and models that will be required in future problem scenarios. The development of this kind of flexible software is a challenging and expensive task. One way to achieve these goals is to invest much work in the design and implementation of generic software tools which can be used in a wide range of application fields. In order to provide a forum where researchers could present and discuss their contributions to the described development, an International Workshop on Modern Software Tools for Scientific Computing was arranged in Oslo, Norway, September 16-18, 1996. This workshop, informally referred to as SciTools '96, was a collaboration between SINTEF Applied Mathematics and the Departments of Informatics and Mathematics at the University of Oslo.
Business applications are designed using profound knowledge about the business domain, such as domain objects, fundamental domain-related principles, and domain patterns. Nonetheless, the pattern community's ideas have not yet made an impact at the application level of software engineering; they are still mostly used for technical problems. This book takes exactly this step: it shows you how to apply the pattern ideas in business applications and presents more than 20 structural and behavioral business patterns that use the REA (resources, events, agents) pattern as a common backbone. If you are a developer working on business frameworks, you can use the patterns presented to derive the right abstractions (e.g., business objects) and to design and ensure that the meta-rules (e.g., process patterns) are followed by the developers of the actual applications. And if you are an application developer, you can use these patterns to design your business application, to ensure that it does not violate the domain rules, and to adapt the application to changing requirements without the need to change the overall architecture. As with patterns in general, this approach allows for both more flexible and more solid software architectures and hence better software quality. "It's a great book, marvelous in breadth and depth. An impressive achievement. I particularly liked the modeling handbook examples." Bob Haugen, Business Technology Consultant and Contributor to REA standardization in ISO, UN/CEFACT and ebXML, UK. "I enjoyed reading it very much, it gave many new insights into REA and its applications." Paul Johannesson, Stockholm University and Royal Institute of Technology, Sweden. "This book by Pavel Hruby is destined to become a landmark in business modeling. Pavel heralds the replacement of traditional workflow-oriented modeling with a new breed of approaches that focus on delivering change-resilient and highly reusable business models. I highly recommend this book to you." Krzysztof Czarnecki, University of Waterloo, Canada.
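The REA backbone mentioned above can be pictured with a small, hypothetical Python sketch: economic events transfer resources between agents, and an exchange pairs a decrement event with a compensating increment event (duality). The class names and the sale example are invented for illustration and are not taken from the book.

# Hypothetical sketch of the REA (resources, events, agents) backbone.
from dataclasses import dataclass

@dataclass
class Agent:
    name: str

@dataclass
class Resource:
    name: str
    quantity: float

@dataclass
class EconomicEvent:
    resource: Resource
    provider: Agent           # agent giving up the resource
    receiver: Agent           # agent receiving the resource

@dataclass
class Exchange:
    decrement: EconomicEvent  # what the enterprise gives (e.g. goods)
    increment: EconomicEvent  # what it receives in return (e.g. cash)

if __name__ == "__main__":
    shop, customer = Agent("Shop"), Agent("Customer")
    sale = Exchange(
        decrement=EconomicEvent(Resource("Widget", 1), provider=shop, receiver=customer),
        increment=EconomicEvent(Resource("Cash", 25.0), provider=customer, receiver=shop),
    )
    print(sale)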
The underlying technologies enabling the realization of recent advances in areas like mobile and enterprise computing are artificial intelligence (AI), modeling and simulation, and software engineering. A disciplined, multifaceted, and unified approach to modeling and simulation is now essential in new frontiers, such as Simulation Based Acquisition. This volume is an edited survey of international scientists, academicians, and professionals who present their latest research findings in the various fields of AI; collaborative/distributed computing; and modeling, simulation, and their integration. Whereas some of these areas continue to seek answers to basic fundamental scientific inquiries, new questions have emerged only recently due to advances in computing infrastructures, technologies, and tools. The book's principal goal is to provide a unifying forum for developing postmodern, AI-based modeling and simulation environments and their utilization in both traditional and modern application domains. Features and topics:
* Blends comprehensive, advanced modeling and simulation theories and methodologies in a presentation founded on formal, system-theoretic and AI-based approaches
* Uses detailed, real-world examples to illustrate key concepts in systems theory, modeling, simulation, object orientation, and intelligent systems
* Addresses a broad range of critical topics in the areas of modeling frameworks, distributed and high-performance object-oriented simulation approaches, as well as robotics, learning, multi-scale and multi-resolution models, and multi-agent systems
* Includes new results pertaining to intelligent and agent-based modeling, the relationship between AI-based reasoning and Discrete-Event System Specification, and large-scale distributed modeling and simulation frameworks
* Provides cross-disciplinary insight into how computer science, computer engineering, and systems engineering can collectively provide a rich set of theories and methods enabling contemporary modeling and simulation
This state-of-the-art survey on collaborative/distributed modeling and simulation computing environments is an essential resource for the latest developments and tools in the field for all computer scientists, systems engineers, and software engineers. Professionals, practitioners, and graduate students will find this reference invaluable to their work involving computer simulation, distributed modeling, discrete-event systems, AI, and software engineering.
Developing correct and efficient software is far more complex for parallel and distributed systems than it is for sequential processors. Some of the reasons for this added complexity are: the lack of a universally acceptable parallel and distributed programming paradigm, the criticality of achieving high performance, and the difficulty of writing correct parallel and distributed programs. These factors collectively influence the current status of efforts to develop tools for parallel and distributed software. Tools and Environments for Parallel and Distributed Systems addresses the above issues by describing working tools and environments, and gives a solid overview of some of the fundamental research being done worldwide. Topics covered in this collection are: mainstream program development tools; performance prediction tools and studies; debugging tools and research; and nontraditional tools. Audience: Suitable as a secondary text for graduate-level courses in software engineering and parallel and distributed systems, and as a reference for researchers and practitioners in industry.
This edited book presents scientific results of the 14th ACIS/IEEE International Conference on Software Engineering, Artificial Intelligence, Networking and Parallel/Distributed Computing (SNPD 2013), held in Honolulu, Hawaii, USA, on July 1-3, 2013. The aim of this conference was to bring together scientists, engineers, computer users, and students to share their experiences and exchange new ideas and research results about all aspects (theory, applications and tools) of computer and information science, and to discuss the practical challenges encountered along the way and the solutions adopted to solve them. The conference organizers selected 17 outstanding papers from those accepted for presentation at the conference.
Computer Aided Software Engineering brings together in one place important contributions and up-to-date research results in this area. It serves as an excellent reference, providing insight into some of the most important research issues in the field.