Aimed at improving a programmer's ability to alter code to fit changing requirements and to detect and correct errors, this book argues for a new way of thinking about maintaining software. It proposes the use of a set of human factors principles that govern the programmer-software-event world interactions and form the core of the maintenance process. The book is thus highly valuable for systems analysts and programmers, managers seeking to reduce costs, researchers looking at solutions to the maintenance problem, and students learning to write clear, unambiguous programs.
This book contains extended and revised versions of the best papers that were presented during the fifteenth edition of the IFIP/IEEE WG10.5 International Conference on Very Large Scale Integration, a global System-on-a-Chip Design & CAD conference. The 15th conference was held at the Georgia Institute of Technology, Atlanta, USA (October 15-17, 2007). Previous conferences have taken place in Edinburgh, Trondheim, Vancouver, Munich, Grenoble, Tokyo, Gramado, Lisbon, Montpellier, Darmstadt, Perth and Nice. The purpose of this conference, sponsored by IFIP TC 10 Working Group 10.5 and by the IEEE Council on Electronic Design Automation (CEDA), is to provide a forum to exchange ideas and show industrial and academic research results in the field of microelectronics design. The current trend toward increasing chip integration and technology process advancements brings about stimulating new challenges both at the physical and system-design levels, as well as in the testing of these systems. VLSI-SoC conferences aim to address these exciting new issues.
Software architectures have gained wide popularity in the last decade. They generally play a fundamental role in coping with the inherent difficulties of the development of large-scale and complex software systems. Component-oriented and aspect-oriented programming enables software engineers to implement complex applications from a set of pre-defined components. Software Architectures and Component Technology collects excellent chapters on software architectures and component technologies from well-known authors, who not only explain the advantages, but also present the shortcomings of the current approaches while introducing novel solutions to overcome the shortcomings. The unique features of this book are: it evaluates the current architecture design methods and component composition techniques and explains their shortcomings; presents three practical architecture design methods in detail; gives four industrial architecture design examples; presents conceptual models for distributed message-based architectures; explains techniques for refining architectures into components; presents recent developments in component and aspect-oriented techniques; and explains the status of research on Piccola, Hyper/J, Pluggable Composite Adapters and Composition Filters. Software Architectures and Component Technology is a suitable text for graduate-level students in computer science and engineering, and a reference for researchers and practitioners in industry.
This edited book invites the reader to explore how the latest technologies developed in computational intelligence can be extended and applied to software engineering. Leading experts demonstrate how this recent confluence of software engineering and computational intelligence provides a powerful tool to address the increasing demand for complex applications in diversified areas, the ever-increasing complexity and size of software systems, and the inherently imperfect nature of the information. The treatments of software modeling and formal analysis presented here permit the extension of computational intelligence to various phases of the software life cycle, such as managing the fuzziness resident in requirements, coping with fuzzy objects and imprecise knowledge, and handling uncertainty encountered in quality prediction.
This book provides a comprehensive overview of digital signal processing for a multi-disciplinary audience. It posits that though the theory involved in digital signal processing stems from electrical, electronics, communication, and control engineering, the topic has uses in other disciplinary areas such as chemical, mechanical, and civil engineering, computer science, and management. The book is written in such a way that readers from a wide range of backgrounds should be able to get a grasp of the field, understand the concepts easily, and apply them as needed in their own fields. It covers sampling and reconstruction of signals; infinite impulse response filters; finite impulse response filters; multirate signal processing; statistical signal processing; and applications in multidisciplinary domains. The book takes a functional approach and all techniques are illustrated using Matlab.
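As a taste of the filter material, here is a minimal illustrative sketch of a windowed-sinc low-pass FIR filter. It is written in Python with NumPy rather than the Matlab the book uses, and the cutoff, sampling rate, and tap count are arbitrary choices for the example, not values from the book:

```python
import numpy as np

def lowpass_fir(cutoff_hz, fs_hz, num_taps=51):
    """Windowed-sinc low-pass FIR design (Hamming window)."""
    n = np.arange(num_taps) - (num_taps - 1) / 2
    fc = cutoff_hz / fs_hz                 # cutoff as a fraction of fs
    h = 2 * fc * np.sinc(2 * fc * n)       # ideal (truncated) impulse response
    h *= np.hamming(num_taps)              # window to reduce sidelobe ripple
    return h / h.sum()                     # normalize for unity gain at DC

# Filter a noisy 5 Hz tone sampled at 500 Hz with a 20 Hz cutoff.
fs = 500.0
t = np.arange(0, 1, 1 / fs)
x = np.sin(2 * np.pi * 5 * t) + 0.5 * np.random.randn(t.size)
y = np.convolve(x, lowpass_fir(20.0, fs), mode="same")
```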
In recent years, fractional-order systems have been studied by many researchers in the engineering field. It has been found that many systems can be described more accurately by fractional differential equations than by integer-order models. Advanced Synchronization Control and Bifurcation of Chaotic Fractional-Order Systems is a scholarly publication that explores new developments related to novel chaotic fractional-order systems, control schemes, and their applications. Featuring coverage on a wide range of topics including chaos synchronization, nonlinear control, and cryptography, this publication is geared toward engineers, IT professionals, researchers, and upper-level graduate students seeking current research on chaotic fractional-order systems and their applications in engineering and computer science.
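To make the idea of a non-integer-order derivative concrete, here is a small illustrative sketch (not taken from the book) of the Grünwald-Letnikov scheme, one standard discretization of a fractional derivative; it compares the numerically computed order-1/2 derivative of f(t) = t with the known closed form sqrt(t)/Gamma(3/2):

```python
import math

def gl_fractional_derivative(f_vals, alpha, h):
    """Grünwald-Letnikov approximation of the order-alpha derivative of a
    uniformly sampled function (samples f_vals, step h, lower limit t = 0)."""
    w = [1.0]                                   # w_j = (-1)^j * binom(alpha, j)
    for j in range(1, len(f_vals)):
        w.append(w[-1] * (1 - (alpha + 1) / j))
    return [
        sum(w[j] * f_vals[k - j] for j in range(k + 1)) / h ** alpha
        for k in range(len(f_vals))
    ]

# Order-1/2 derivative of f(t) = t on [0, 2], against the closed form.
h = 0.01
t = [k * h for k in range(201)]
numeric = gl_fractional_derivative(t, alpha=0.5, h=h)
print(numeric[-1], math.sqrt(t[-1]) / math.gamma(1.5))  # values should agree
```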
The ubiquitous nature of the Internet of Things allows for enhanced connectivity between people in modern society. When applied to various industries, these current networking capabilities create opportunities for new applications. Internet of Things and Advanced Application in Healthcare is a critical reference source for emerging research on the implementation of the latest networking and technological trends within the healthcare industry. Featuring in-depth coverage across the broad scope of the Internet of Things in specialized settings, such as context-aware computing, reliability, and healthcare support systems, this publication is an ideal resource for professionals, researchers, upper-level students, practitioners, and technology developers seeking innovative material on the Internet of Things and its distinct applications. Topics covered include assistive technologies, context-aware computing systems, health risk management, healthcare support systems, reliability concerns, smart healthcare, and wearable sensors.
Problem solving is an essential part of every scientific discipline. It has two components: (1) problem identification and formulation, and (2) solution of the formulated problem. One can solve a problem from scratch using ad hoc techniques, or follow techniques that have produced efficient solutions to similar problems. The latter requires an understanding of the various algorithm design techniques: how and when to use them to formulate solutions, and the context appropriate to each of them. This book advocates the study of algorithm design techniques by presenting most of the useful ones and illustrating them through numerous examples.
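For a flavour of what one such design technique looks like in practice, here is a minimal example (a generic illustration, not taken from the book) of divide and conquer: binary search discards half of a sorted search space at each step.

```python
def binary_search(items, target):
    """Divide and conquer: halve the sorted search space at every step."""
    lo, hi = 0, len(items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if items[mid] == target:
            return mid                 # found: return its index
        if items[mid] < target:
            lo = mid + 1               # discard the left half
        else:
            hi = mid - 1               # discard the right half
    return -1                          # target not present

assert binary_search([2, 3, 5, 7, 11, 13], 7) == 3
```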
Software development is a complex problem-solving activity with a high level of uncertainty. There are many technical challenges concerning scheduling, cost estimation, reliability, performance, etc., which are further aggravated by weaknesses such as changing requirements, team dynamics, and high staff turnover. The management of knowledge and experience is thus a key means of systematic software development and process improvement. "Managing Software Engineering Knowledge" illustrates several theoretical examples of this vision and solutions applied to industrial practice. It is structured in four parts addressing the motives for knowledge management, the concepts and models used in knowledge management for software engineering, their application to software engineering, and practical guidelines for managing software engineering knowledge. This book provides a comprehensive overview of the state of the art and best practice in knowledge management applied to software engineering. While researchers and graduate students will benefit from the interdisciplinary approach leading to basic frameworks and methodologies, professional software developers and project managers will also profit from the industrial experience reports and practical guidelines.
Introduction, or why I wrote this book. In the fall of 1997 a dedicated troff user e-mailed me the macros he used to typeset his books. I took one look inside his file and thought, "I can do this; it's just code." As an experiment I spent a week and wrote a C program and troff macros which formatted and typeset a membership directory for a scholarly society with approximately 2,000 members. When I was done, I could enter two commands, and my program and troff would convert raw membership data into 200 pages of PostScript in 35 seconds. Previously, it had taken me several days to prepare camera-ready copy for the directory using a word processor. For completeness I sat down and tried to write TeX macros for the typesetting. I failed. Although ninety-five percent of my macros worked, I was unable to prepare the columns the project required. As my frustration grew, I began this book (mentally, in my head) as an answer to the question, "Why is TeX so hard to learn?" Why use TeX? Lest you accuse me of the old horse and cart problem, I should address the question, "Why use TeX at all?" before I explain why TeX is hard. I use TeX for the following reasons: it is stable, fast, free, and it uses ASCII. Of course, the most important reason is: TeX does a fantastic job. By stable, I mean it is not likely to change in the next 10 years (much less the next one or two), and it is free of bugs. Both of these are important.
Communication protocols form the operational basis of computer networks and telecommunication systems. They are behavior conventions that describe how communication systems interact with each other, defining the temporal order of the interactions and the formats of the data units exchanged - essentially they determine the efficiency and reliability of computer networks. Protocol Engineering is an important discipline covering the design, validation, and implementation of communication protocols. Part I of this book is devoted to the fundamentals of communication protocols, describing their working principles and implicitly also those of computer networks. The author introduces the concepts of service, protocol, layer, and layered architecture, and introduces the main elements required in the description of protocols using a model language. He then presents the most important protocol functions. Part II deals with the description of communication protocols, offering an overview of the various formal methods, the essence of Protocol Engineering. The author introduces the fundamental description methods, such as finite state machines, Petri nets, process calculi, and temporal logics, that are in part used as semantic models for formal description techniques. He then introduces one representative technique for each of the main description approaches, among others SDL and LOTOS, and surveys the use of UML for describing protocols. Part III covers the protocol life cycle and the most important development stages, presenting the reader with approaches for systematic protocol design, with various verification methods, with the main implementation techniques, and with strategies for their testing, in particular with conformance and interoperability tests, and the test description language TTCN. The author uses the simple data transfer example protocol XDT (eXample Data Transfer) throughout the book as a reference protocol to exemplify the various description techniques and to demonstrate important validation and implementation approaches. The book is an introduction to communication protocols and their development for undergraduate and graduate students of computer science and communication technology, and it is also a suitable reference for engineers and programmers. Most chapters contain exercises, and the author's accompanying website provides further online material including a complete formal description of the XDT protocol and an animated simulation visualizing its behavior.
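To give a feel for the finite-state-machine style of protocol description mentioned above, here is a minimal, hypothetical sketch in Python (deliberately not the book's XDT protocol) of a stop-and-wait sender modelled as a transition table:

```python
# Hypothetical stop-and-wait sender: states and transitions as plain data.
TRANSITIONS = {
    ("IDLE",     "send"):    "WAIT_ACK",   # emit DATA, start timer
    ("WAIT_ACK", "ack"):     "IDLE",       # delivery confirmed
    ("WAIT_ACK", "timeout"): "WAIT_ACK",   # retransmit DATA
}

class StopAndWaitSender:
    def __init__(self):
        self.state = "IDLE"

    def handle(self, event):
        key = (self.state, event)
        if key not in TRANSITIONS:
            raise ValueError(f"event {event!r} invalid in state {self.state}")
        self.state = TRANSITIONS[key]
        return self.state

sender = StopAndWaitSender()
assert sender.handle("send") == "WAIT_ACK"
assert sender.handle("timeout") == "WAIT_ACK"   # retransmission, same state
assert sender.handle("ack") == "IDLE"
```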
Since the introduction of personal computers, software has emerged as a driving force in the global economy and a major industry in its own right. During this time, the U.S. government has reversed its prior policy against software patents and is now issuing thousands of such patents each year, provoking heated controversy among programmers, lawyers, scholars, and software companies. This book is the first to step outside of the highly-polarized debate and examine the current state of the law, its suitability to the realities of software development, and its implications for day-to-day software development. Written by a former lawyer and working software developer, "Inventing Software" provides a comprehensive overview of software patents, from the lofty perspectives of legal history and computing theory to the technical details and issues of actual patents. People interested in the legal aspect of software patents will find detailed technical analysis of actual patented software, the legal strategies behind the wording of the patents, and an analysis of the ease or difficulty of detecting infringements. Software developers will find ways to integrate patent planning into their standard software engineering practices, and a practical guide for studying and appraising their competitors' patents and safeguarding the value of their own. Intended primarily for programmers and software industry executives and managers, "Inventing Software" will also be useful, illuminating reading for attorneys and software company investors.
Since the early seventies concepts of specification have become central in the whole area of computer science. Especially algebraic specification techniques for abstract data types and software systems have gained considerable importance in recent years. They have not only played a central role in the theory of data type specification, but meanwhile have had a remarkable influence on programming language design, system architectures, and software tools and environments. The fundamentals of algebraic specification lay a basis for teaching, research, and development in all those fields of computer science where algebraic techniques are the subject or are used with advantage on a conceptual level. We do not, however, regard such a basis as a synopsis of all the different approaches and achievements, but rather as a consistently developed theory. Such a theory should mainly emphasize elaboration of basic concepts from one point of view and, in a rigorous way, reach the state of the art in the field. We understand fundamentals in this context as: 1. Fundamentals in the sense of a carefully motivated introduction to algebraic specification, which is understandable for computer scientists and mathematicians. 2. Fundamentals in the sense of mathematical theories which are the basis for precise definitions, constructions, results, and correctness proofs. 3. Fundamentals in the sense of concepts from computer science, which are introduced on a conceptual level and formalized in mathematical terms.
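To illustrate the flavour of algebraic specification: an abstract data type is characterized by operations and equational axioms rather than by any particular implementation. The sketch below is a hypothetical illustration (not drawn from the book); it states two classic stack axioms, pop(push(s, x)) = s and top(push(s, x)) = x, as executable checks against a candidate implementation:

```python
# Candidate implementation of the STACK signature (stacks as tuples).
empty = ()
def push(s, x): return s + (x,)
def pop(s):     return s[:-1]
def top(s):     return s[-1]

# Equational axioms of the abstract data type, checked on sample data.
def check_axioms(s, x):
    assert pop(push(s, x)) == s    # pop undoes push
    assert top(push(s, x)) == x    # top sees the most recent push

check_axioms(empty, 1)
check_axioms(push(empty, 1), 2)
```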
This innovative resource provides the most comprehensive coverage of software fault tolerance techniques to guide professionals through design, operation and performance. It features an in-depth discussion on the advantages and disadvantages of specific techniques, so practitioners can decide which ones are best suited for their work. The book examines key programming techniques such as assertions, checkpointing, and atomic actions, and provides design tips and models to assist in the development of critical, fault-tolerant software systems that help ensure dependable performance. From software reliability, recovery and redundancy to design- and data-diverse software fault tolerance techniques, this practical reference provides detailed insight into techniques that will improve the overall quality of software.
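The programming techniques named above are straightforward to sketch. The following is a minimal, hypothetical illustration (not code from the book) of a recovery block combining checkpointing with an acceptance-test assertion; all names are illustrative:

```python
import copy

def with_recovery(state, primary, alternate, acceptable):
    """Recovery block: checkpoint, try the primary routine, and fall back
    to an alternate if the acceptance test fails or an exception is raised."""
    checkpoint = copy.deepcopy(state)      # restorable snapshot of the state
    try:
        result = primary(state)
        assert acceptable(result)          # acceptance test
        return result
    except Exception:
        return alternate(checkpoint)       # retry from the checkpoint

# Toy usage: the primary routine yields an unacceptable (negative) value.
result = with_recovery(
    state={"x": 4},
    primary=lambda s: s["x"] - 10,         # buggy variant
    alternate=lambda s: s["x"],            # simpler, trusted variant
    acceptable=lambda r: r >= 0,
)
assert result == 4
```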
This book describes how to apply ICONIX Process (a minimal, use case-driven modeling process) in an agile software project. It's full of practical advice for avoiding common agile pitfalls. Further, the book defines a core agile subset so those of you who want to get agile need not spend years learning to do it. Instead, you can simply read this book and apply the core subset of techniques. The book follows a real-life .NET/C# project from inception and UML modeling, to working code through several iterations. You can then go online to compare the finished product with the initial set of use cases. The book also introduces several extensions to the core ICONIX Process, including combining Test-Driven Development (TDD) with up-front design to get the most from both approaches (with examples using Java and JUnit). And the book incorporates persona analysis to drive the project's goals and reduce requirements churn.
Object-Process Methodology (OPM) is a comprehensive novel approach to systems engineering. Integrating function, structure and behavior in a single, unifying model, OPM significantly extends the system modeling capabilities of current object-oriented methods. Founded on a precise generic ontology and combining graphics with natural language, OPM is applicable to virtually any domain of business, engineering and science. Relieved from technical issues, system architects can use OPM to engage in the creative design of complex systems. The book presents the theory and practice of OPM with examples from various industry segments and engineering disciplines, as well as daily life. It includes a CD-ROM demo version of the award-winning OPM-supporting Object-Process CASE Tool (OPCAT). Using the numerous examples and exercises (with answers) in the book, this software enables the reader to gain hands-on experience in developing complex systems.
Today's distributed systems are characterized by interactions, often complex, between many different hardware and software components cooperating and exchanging information. To simplify development of interactive systems and facilitate communication and documentation, experts of varying disciplines employ descriptions, or specifications, of a given system's behavior and/or structure. Specification and Development of Interactive Systems offers a unique approach to program and software development suitable for large distributed systems, with an emphasis on modular system development and systems engineering. The authors build a basic method, called FOCUS, that enables interactive systems to be described by characterizing their histories of message interaction. The method covers functional requirements, timing, structure, and implementation issues of systems. In addition, the book describes how to connect the models and techniques to tables and diagram-based methods popular in practical systems engineering. Topics and features: * Specification of interface behavior and modular top-down system development * Specification of time and the modeling of hardware/software systems * Interface refinement and the modeling of development steps leading from one level of abstraction to the next * State transition diagrams and tables and the usage of common description techniques, such as found in UML This book provides a mathematical and logical foundation for the specification and development of interactive systems based on a model that describes systems in terms of their input/output behavior. The reader gains a comprehensive understanding of all fundamental models, techniques, and methods for interactive system design. The book is an essential resource for all researchers and professionals in computer science, software systems engineering and computer engineering.
S is a high-level language for manipulating, analysing and displaying data. It forms the basis of two highly acclaimed and widely used data analysis software systems, the commercial S-PLUS and the Open Source R. This book provides an in-depth guide to writing software in the S language under either or both of those systems. It is intended for readers who have some acquaintance with the S language and want to know how to use it more effectively, for example to build re-usable tools for streamlining routine data analysis or to implement new statistical methods. One of the most outstanding strengths of the S language is the ease with which it can be extended by users. S is a functional language, and functions written by users are first-class objects treated in the same way as functions provided by the system. S code is eminently readable and so a good way to document precisely what algorithms were used, and as many of the implementations are themselves written in S, they can be studied as models and used to understand their subtleties. The current implementations also provide easy ways for S functions to call compiled code written in C, Fortran and similar languages; this is documented here in depth. Increasingly S is being used for statistical or graphical analysis within larger software systems or for whole vertical-market applications. The interface facilities are most developed on Windows and these are covered with worked examples. The authors have written the widely adopted 'Modern Applied Statistics with S-PLUS', now in its third edition, and several software libraries that enhance S-PLUS and R; these and the examples used in both books are available on the Internet. Dr. W.N. Venables is a senior statistician with the CSIRO/CMIS Environmetrics Project in Australia, having been at the Department of Statistics, University of Adelaide for many years previously. Professor B.D. Ripley holds the Chair of Applied Statistics at the University of Oxford, and is the author of four other books on spatial statistics, simulation, pattern recognition and neural networks. Both authors are known and respected throughout the international S and R communities, for their books, workshops, short courses, freely available software and through their extensive contributions to the S-news and R mailing lists.
This book presents a coherent, novel vision of Smart Cities, built around a value-driven architecture. It describes the limitations of the contemporary notion of the Smart City and argues that the next developmental step must actively include not only the physical infrastructure, but information technology and human infrastructure as well, requiring the intensive integration of technical solutions from the Internet of Things (IoT) and social computing. The book is divided into five major parts, the first of which provides both a general introduction and a coherent vision that ties together all the components that are required to realize the vision for Smart Cities. Part II then discusses the provisioning and governance of Smart City systems and infrastructures. In turn, Part III addresses the core technologies and technological enablers for managing the social component of the Smart City platform. Both parts combine state-of-the-art research with cutting-edge industrial efforts in the respective fields. Lastly, Part IV details a road map to achieving Cyber-Human Smart Cities. Rounding out the coverage, it discusses the concrete technological advances needed to move beyond contemporary Smart Cities and toward the Smart Cities of the future. Overall, the book provides an essential overview of the latest developments in the areas of IoT and social computing research, and outlines a research roadmap for a closer integration of the two areas in the context of the Smart City. As such, it offers a valuable resource for researchers and graduate students alike.
The author's aim in this textbook is to provide students with a clear understanding of the relationship between the principles of object-oriented programming and software engineering. Professor Zeigler takes an approach to formal specification based on state representation. Consequently, this book is unique in its: emphasis on formulating primitives from which all other functionality can be built; integral use of a semi-formal behaviour specification language based on state transition concepts; differentiation between behaviour and implementation; reusable heterogeneous container class library; and ability to show the elegance and power of ensemble methods with non-trivial examples. As a result, students studying software engineering will find this a distinctive and valuable approach to programming and systems engineering.
This book provides a coherent methodology for Model-Driven Requirements Engineering which stresses the systematic treatment of requirements within the realm of modelling and model transformations. The underlying basic assumption is that detailed requirements models are used as first-class artefacts playing a direct role in constructing software. To this end, the book presents the Requirements Specification Language (RSL) that allows precision and formality, which eventually permits automation of the process of turning requirements into a working system by applying model transformations and code generation to RSL. The book is structured in eight chapters. The first two chapters present the main concepts and give an introduction to requirements modelling in RSL. The next two chapters concentrate on presenting RSL in a formal way, suitable for automated processing. Subsequently, chapters 5 and 6 concentrate on model transformations with the emphasis on those involving RSL and UML. Finally, chapters 7 and 8 provide a summary in the form of a systematic methodology with a comprehensive case study. Presenting technical details of requirements modelling and model transformations for requirements, this book is of interest to researchers, graduate students and advanced practitioners from industry. While researchers will benefit from the latest results and possible research directions in MDRE, students and practitioners can exploit the presented information and practical techniques in several areas, including requirements engineering, architectural design, software language construction and model transformation. Together with a tool suite available online, the book supplies the reader with what it promises: the means to get from requirements to code "in a snap".
This thesis deals with the evaluation of surface and groundwater quality changes in periods of water scarcity in river catchment areas. The work can be divided into six parts. Existing methods of drought assessment are discussed in the first part, followed by a brief description of the software package HydroOffice, designed by the author. The software is dedicated to the analysis of hydrological data (separation of baseflow, estimation of hydrological drought parameters, recession curve analysis, time series analysis) and is currently used by scientists from more than 30 countries around the world. The third section is devoted to a comprehensive regional assessment of hydrological drought on Slovak rivers, followed by an evaluation of the occurrence, course and character of drought in precipitation, discharges, baseflow, groundwater head and spring yields in the pilot area of the Nitra River basin. The fifth part is focused on the assessment of changes in surface and groundwater quality during drought periods within the pilot area. Finally, the results are summarized and interpreted, and rounded off with an outlook to future research.
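For illustration, baseflow separation, one of the HydroOffice functions mentioned above, is commonly done with a recursive digital filter. The sketch below uses the classic Lyne-Hollick single-parameter filter; this is a generic textbook method, not necessarily the exact algorithm implemented in HydroOffice:

```python
def lyne_hollick_baseflow(q, alpha=0.925):
    """One-pass Lyne-Hollick digital filter: split a streamflow series q
    into baseflow; alpha is the filter parameter (typically 0.9-0.95)."""
    quick = 0.0
    baseflow = [q[0]]                        # assume flow starts as baseflow
    for k in range(1, len(q)):
        quick = alpha * quick + 0.5 * (1 + alpha) * (q[k] - q[k - 1])
        quick = min(max(quick, 0.0), q[k])   # constrain 0 <= baseflow <= q
        baseflow.append(q[k] - quick)
    return baseflow

# Toy daily flows around a storm peak.
flows = [5.0, 5.2, 9.0, 14.0, 10.0, 7.0, 6.0, 5.5]
print(lyne_hollick_baseflow(flows))
```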
Computer interfaces and documentation are notoriously difficult for any user, regardless of his or her level of experience. Advances in technology are not making applications more friendly. Introducing concepts from linguistics and language teaching, Language and Communication proposes a new approach to computer interface design. The book explains for the first time why the much hyped user-friendly interface is treated with such derision by the user community. The author argues that software and hardware designers should consider such fundamental language concepts as meaning, context, function, variety, and equivalence. She goes on to show how imagining an interface as a new language can be an invaluable design exercise, calling into question deeply held beliefs and assumptions about what users will or will not understand. Written for a wide range of computer scientists and professionals, and presuming no prior knowledge of language-related terminology, this volume is a key step in the on-going information revolution.
This book constitutes the refereed proceedings of the 21st International TRIZ Future Conference on Automated Invention for Smart Industries, TFC 2021, held virtually in September 2021 and sponsored by IFIP WG 5.4. The 28 full papers and 8 short papers presented were carefully reviewed and selected from 48 submissions. They are organized in the following thematic sections: inventiveness and TRIZ for sustainable development; TRIZ, intellectual property and smart technologies; TRIZ: expansion in breadth and depth; TRIZ, data processing and artificial intelligence; and TRIZ use and divulgation for engineering design and beyond. The chapter "Domain Analysis with TRIZ to Define an Effective 'Design for Excellence'" is available open access under a Creative Commons Attribution 4.0 International License via link.springer.com.
It is, indeed, widely accepted today that nowhere is it more important to focus on the improvement of software quality than in the case of systems with requirements in the areas of safety and reliability - especially for distributed, real-time and embedded systems. Thus, much research work is in progress in these fields, since software process improvement impinges directly on achieved levels of quality, and many application experiments aim to show quantitative results demonstrating the efficacy of particular approaches. Requirements for safety and reliability - like other so-called non-functional requirements for computer-based systems - are often stated in imprecise and ambiguous terms, or not at all. Specifications focus on functional and technical aspects, with issues like safety covered only implicitly, or not addressed directly because they are felt to be obvious; unfortunately, what is obvious to an end user or system user is progressively less so to others, to the extent that a software developer may not even be aware that safety is an issue. Therefore, there is a growing case for encouraging greater understanding of safety and reliability requirements issues, right across the spectrum from end user to software developer; not just in traditional safety-critical areas (e.g. nuclear, aerospace) but also acknowledging the need for such things as heart pacemakers and other medical and robotic systems to be highly dependable.