This monograph discusses software reuse and how it can be applied at different stages of the software development process, on different types of data and at different levels of granularity. Several challenging hypotheses are analyzed and confronted using novel data-driven methodologies, in order to solve problems in requirements elicitation and specification extraction, software design and implementation, as well as software quality assurance. The book is accompanied by a number of tools, libraries and working prototypes in order to practically illustrate how the phases of the software engineering life cycle can benefit from unlocking the potential of data. Software engineering researchers, experts, and practitioners can benefit from the various methodologies presented and can better understand how knowledge extracted from software data residing in various repositories can be combined and used to enable effective decision making and save considerable time and effort through software reuse. Mining Software Engineering Data for Software Reuse can also prove handy for graduate-level students in software engineering.
The advanced state of computer networking and telecommunications technology makes it possible to view computers as parts of a global computation platform, sharing their resources in terms of hardware, software and data. The possibility of exploiting resources on a global scale has given rise to a new paradigm - the mobile computation paradigm - for computation in large-scale distributed networks. The key characteristic of this paradigm is to give programmers control over the mobility of code or active computations across the network by providing appropriate language features. The dynamism and flexibility offered by mobile computation, however, bring about a set of problems, the most challenging of which concern safety and security. Several recent experiences show that identifying the causes of these problems usually requires a rigorous investigation using formal methods. Functional languages are known for their well-understood computational models and their amenability to formal reasoning. They also have strong expressive power due to higher-order features: functions can flow from one program point to another like other first-class values. These facts suggest that functional languages can provide the core of a mobile computation language. Functions can represent mobile agents, and formal systems for reasoning about functional programs can then be exploited to reason about the behavior of those agents. Mobile Computation with Functions explores distributed computation with languages that adopt functions as the main programming abstraction and support code mobility through the mobility of functions between remote sites. It aims to highlight the benefits of using languages of this family in dealing with the challenges of mobile computation. The possibility of exploiting existing static analysis techniques suggests that having functions at the core of a mobile code language is a particularly apt choice.
A range of problems affecting safety, security and performance is discussed. It is shown that types extended with effects and other annotations can capture a significant amount of information about the dynamic behavior of mobile functions, and offer solutions to the problems under investigation. This book includes a survey of the languages Concurrent ML, Facile and PLAN, which inherit the strengths of the functional paradigm in the context of concurrent and distributed computation. The languages defined in the subsequent chapters have their roots in these languages. Mobile Computation with Functions is designed to meet the needs of a professional audience composed of researchers and practitioners in industry and graduate-level students in computer science.
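The core idea the blurb describes - functions as first-class values that can act as mobile agents - can be sketched in a few lines. The example below is an in-process illustration of that idea only (not code from the book), with made-up site names and data:

```python
# A minimal sketch of functions-as-mobile-agents: a function is built at
# one "site" and evaluated at another, where it uses that site's local
# resources. Sites are simulated here as plain dictionaries.

def make_agent(query):
    # The agent is a closure: it carries its query with it wherever it goes.
    def agent(site_resources):
        # This body runs "at" the remote site, reading local data there.
        return [item for item in site_resources["catalog"] if query in item]
    return agent

# Two simulated sites, each with its own local catalog.
site_a = {"catalog": ["haskell book", "ocaml manual"]}
site_b = {"catalog": ["java guide", "haskell tutorial"]}

agent = make_agent("haskell")            # constructed at the home site
results = agent(site_a) + agent(site_b)  # "migrates" and runs at each site
print(results)                           # ['haskell book', 'haskell tutorial']
```

In a real mobile-code language the function itself would be serialized and shipped across the network; here the migration is only simulated by calling the closure against each site's data.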
A large international conference on Intelligent Automation and Computer Engineering was held in Hong Kong, March 18-20, 2009, under the auspices of the International MultiConference of Engineers and Computer Scientists (IMECS 2009). The IMECS is organized by the International Association of Engineers (IAENG). Intelligent Automation and Computer Engineering contains 37 revised and extended research articles written by prominent researchers participating in the conference. Topics covered include artificial intelligence, decision support systems, automated planning, automation systems, control engineering, systems identification, modelling and simulation, communication systems, signal processing, and industrial applications. Intelligent Automation and Computer Engineering presents the state of the art of advances in intelligent automation and computer engineering and also serves as an excellent reference text for researchers and graduate students working on intelligent automation and computer engineering.
This book offers a practical introduction to the use of artificial intelligence (AI) techniques to improve and optimise the various phases of the software development process, from initial project planning to final deployment. All chapters were written by leading experts in the field and include practical and reproducible examples. Following the introductory chapter, Chapters 2-9 respectively apply AI techniques to the classic phases of the software development process: project management, requirements engineering, analysis and design, coding, cloud deployment, unit and system testing, and maintenance. Subsequently, Chapters 10 and 11 provide foundational tutorials on the AI techniques used in the preceding chapters: metaheuristics and machine learning. Given its scope and focus, the book represents a valuable resource for researchers, practitioners and students with a basic grasp of software engineering.
This book celebrates the 10-year anniversary of Software Center (a collaboration between 18 European companies and five Swedish universities) by presenting some of the most impactful and relevant journal and conference papers that researchers in the center have published over the last decade. The book is organized around Software Center's five research themes: Continuous Delivery, Continuous Architecture, Metrics, Customer Data and Ecosystems Driven Development, and AI Engineering. The focus of the Continuous Delivery theme is to help companies continuously build high-quality products with the right degree of automation. The Continuous Architecture theme addresses challenges that arise when balancing the need for architectural quality against more agile ways of working with shorter development cycles. The Metrics theme studies and provides insights to understand, monitor and improve software processes, products and organizations. The fourth theme, Customer Data and Ecosystem Driven Development, helps companies make sense of the vast amounts of data that are continuously collected from products in the field. Finally, the AI Engineering theme addresses the challenge many companies face in deploying machine- and deep-learning models in industrial contexts with production quality. Each theme has its own part in the book, and each part has an introduction chapter followed by carefully selected reprints of the most important papers from that theme. This book is mainly aimed at researchers and advanced professionals in software engineering who would like an overview of the achievements made in various topics relevant to industrial large-scale software development and management - and to see how research benefits from a close cooperation between industry and academia.
This book explores the possibility of integrating design thinking into today's technical contexts. Despite the popularity of design thinking in research and practice, this area is still too often treated in isolation without a clear, consistent connection to the world of software development. The book presents design thinking approaches and experiences that can facilitate the development of software-intensive products and services. It argues that design thinking and related software engineering practices, including requirements engineering and user-centric design (UX) approaches, are not mutually exclusive. Rather, they provide complementary methods and tools for designing software-intensive systems with a human-centric approach. Bringing together prominent experts and practitioners to share their insights, approaches and experiences, the book sheds new light on the specific interpretations and meanings of design thinking in various fields such as engineering, management, and information technology. As such, it provides a framework for professionals to demonstrate the potential of design thinking for software development, while offering academic researchers a roadmap for further research.
With the widespread use of VRML browsers, e.g., as part of the Netscape and Internet Explorer standard distributions, everyone connected to the Internet can directly enter a virtual world without installing new software. The VRML technology offers the basis for new forms of customer service such as interactive three-dimensional product configuration, spare part ordering, or customer training. This technology can also be used for CSCW in intranets. The reader should be familiar with programming languages and computers and, in particular, should know Java or at least an object-oriented programming language. The book not only provides and explains source code, which can be used as a starting point for the reader's own implementations, but also describes the fundamental problems and how currently known solutions work. It discusses a variety of different techniques and trade-offs. Many illustrations help the reader to understand and memorize the underlying principles.
Elucidating the spatial and temporal dynamics of how things connect has become one of the most important areas of research in the 21st century. Network science now pervades nearly every science domain, resulting in new discoveries in a host of dynamic social and natural systems, including: how neurons connect and communicate in the brain, how information percolates within and among social networks, the evolution of science research through co-authorship networks, the spread of epidemics and many other complex phenomena. Over the past decade, advances in computational power have put the tools of network analysis in the hands of increasing numbers of scientists, enabling more explorations of our world than ever before possible. Information science, social sciences, systems biology, ecosystems ecology, neuroscience and physics all benefit from this movement, which combines graph theory with data sciences to develop and validate theories about the world around us. This book brings together cutting-edge research from the network science field and includes diverse and interdisciplinary topics such as: modeling the structure of urban systems, behavior in social networks, education and learning, data network architecture, structure and dynamics of organizations, crime and terrorism, as well as network topology, modularity and community detection.
Embedded systems have long become essential in application areas in which human control is impossible or infeasible. The development of modern embedded systems is becoming increasingly difficult and challenging because of their overall system complexity, their tighter and cross-functional integration, the increasing requirements concerning safety and real-time behavior, and the need to reduce development and operation costs. This book provides a comprehensive overview of the Software Platform Embedded Systems (SPES) modeling framework and demonstrates its applicability in embedded system development in various industry domains such as automation, automotive, avionics, energy, and healthcare. In SPES 2020, twenty-one partners from academia and industry have joined forces in order to develop and evaluate in different industrial domains a modeling framework that reflects the current state of the art in embedded systems engineering. The content of this book is structured in four parts. Part I "Starting Point" discusses the status quo of embedded systems development and model-based engineering, and summarizes the key requirements faced when developing embedded systems in different application domains. Part II "The SPES Modeling Framework" describes the SPES modeling framework. Part III "Application and Evaluation of the SPES Modeling Framework" reports on the validation steps taken to ensure that the framework met the requirements discussed in Part I. Finally, Part IV "Impact of the SPES Modeling Framework" summarizes the results achieved and provides an outlook on future work. The book is mainly aimed at professionals and practitioners who deal with the development of embedded systems on a daily basis. Researchers in academia and industry may use it as a compendium for the requirements and state-of-the-art solution concepts for embedded systems development.
1 Introduction to the Topic.- 1.1 Definitions of Terms.- 1.2 Challenge No. 1: The Price War in the IT Industry.- 1.3 On Revenue Targets and Commission Models.- 1.3.1 Introduction and Definitions.- 1.3.2 Revenue Targets.- 1.3.3 The Linear Commission Model.- 1.3.4 The Progressive Commission Model.- 1.3.5 Summary.- 1.4 The Forecast.- 1.4.1 Definition.- 1.4.2 Possible Influencing Factors in the Forecast.- 1.4.3 Information the Forecast Should Contain.- 1.4.4 Forecast Meetings.- 1.4.5 Conclusion.- 1.5 Sales Approaches.- 1.5.1 General Remarks on Sales Approaches.- 1.5.2 Different Models.- 1.5.3 Effort Considerations in Sales.- 1.5.4 The Sales Cycle.- 1.5.5 Different Sales Approaches with Regard to the Target Group at the Customer.- 1.5.6 The "Take-the-Money-and-Go" Approach.- 1.5.7 Return on Investment as an Argumentation Aid.- 1.5.8 Conclusion.- 1.6 Quarterly Thinking.- 1.6.1 Introduction.- 1.6.2 Revenue Targets of Listed Companies.- 1.6.3 The Sword of Damocles: Quarter End.- 1.6.4 Consequences of Quarterly Thinking.- 1.6.5 Conclusion.- 1.7 Selling over the Internet.- 1.7.1 Retrospective.- 1.7.2 Approach to Selling over the Internet.- 1.7.3 Prerequisites to Be Established.- 1.7.4 Conclusion.- 1.8 Call Centers.- 1.8.1 Introduction.- 1.8.2 How Call Centers Work.- 1.8.3 The Three Steps of a Pilot.- 1.8.4 Selecting a Call Center.- 1.8.5 Advantages of Using Call Centers.- 1.8.6 Remuneration of Call Centers.- 1.8.7 Further Uses of Call Centers.- 1.8.8 Conclusion.- 1.9 Sales Territories.- 1.9.1 Introduction.- 1.9.2 Sales Territories Organized by Postal Code.- 1.9.3 Sales Territories Divided by Industry.- 1.9.4 Mixing Industry- and Postal-Code-Oriented Sales Territories.- 1.9.5 Problem Cases in Assignment.- 1.9.6 Conclusion.- 1.10 Outlook on the Further Contents of This Book.- 2 Roles in Sales.- 2.1 Introduction to Roles.- 2.2 The Individual Roles within a Large Sales Organization.- 2.2.1 The Sales Manager.- 2.2.2 The Sales Director.- 2.2.3 The Area Manager.- 2.2.4 The Sales Representative.- 2.2.5 The Telesales Agent.- 2.2.6 The Telequalifier.- 2.2.7 Summary.- 2.3 The Spheres of Activity of the Different Roles.- 2.3.1 Introduction.- 2.3.2 Sphere of Activity of the Sales Manager.- 2.3.3 Sphere of Activity of the Sales Director.- 2.3.4 Sphere of Activity of the Area Sales Manager.- 2.3.5 Sphere of Activity of the Sales Representative.- 2.3.6 Sphere of Activity of the Telesales Agent.- 2.3.7 Sphere of Activity of the Telequalifier.- 2.4 The Presales Engineer as a Link to the Sales Representative.- 2.4.1 Preliminary Remark.- 2.4.2 Task Description of the Presales Engineer.- 2.4.3 The Presales Pool.- 2.4.4 Sphere of Activity of the Presales Engineer.- 2.4.5 The Difference from the Consulting Employee.- 2.4.6 Summary.- 2.5 Conclusion.- 3 Product Sales versus Services Sales.- 3.1 Introduction to the Topic.- 3.2 The Essential Differences between Product and Services Sales.- 3.2.1 General Remarks.- 3.2.2 Differences in Cash Flow.- 3.2.3 Differences in Risk.- 3.2.4 Differences in the Sales Approach.- 3.2.5 Differences in Compensating for Revenue Shortfalls.- 3.2.6 Differences in Motivation.- 3.2.7 Conclusion.- 3.3 Commonalities between Product and Services Sales.- 3.3.1 General Remarks.- 3.3.2 Meeting Preparation.- 3.3.3 Competitive Considerations.- 3.3.4 Conclusion.- 3.4 A Shared Sales Tool: The Workshop.- 3.4.1 Lead-in to the Topic.- 3.4.2 Objectives of the Workshop.- 3.4.3 Pricing the Workshop.- 3.4.4 Composition of the Group of Participants.- 3.4.5 Content Design of a Workshop at the Customer's Site.- 3.4.6 Effects of a Successful Workshop.- 3.4.7 External Support in Designing a Workshop.- 3.5 Integrating Risk Management.- 3.5.1 Introduction.- 3.5.2 Creating a Risk List.- 3.5.3 Defining Risk Classes and Risk Probability Classes.- 3.5.4 Deriving a Risk Matrix.- 3.5.5 Results.- 3.
In the past, rendering systems used a range of different approaches, each capable of handling only certain kinds of images. However, the last few years have seen the development of practical techniques which bring together many areas of research into stable, production-ready rendering tools. Written by experienced graphics software developers, Production Rendering: Design and Implementation not only provides a complete framework of topics, including shading engines and compilers, but also discusses the techniques used to implement feature-film-quality rendering engines. Key topics: * A rendering framework for managing a micropolygon-oriented graphics pipeline * Problems presented by different types of geometry, showing how different surface types can be made ready for shading * Shading and how it fits into a rendering pipeline * How to write a good shader compiler * Ray tracing in a production renderer * Incorporating global illumination into a renderer * Gathering surface samples into a final image * Tips and tricks in rendering. About the authors: Mark Elendt, Senior Mathematician, has been with Side Effects Software Inc, Canada for 11 years and has written at least 5 renderers over these years. He was chief architect for the Houdini renderers Mantra and VMantra. In 1997 he received a Technical Achievement Award from the Academy of Motion Picture Arts and Sciences. Rick LaMont, co-founder and CTO of Dot C Software, USA, currently acts as lead programmer of RenderDotC and Mai-Tai. He received the Computerworld Smithsonian Award for Technology Benefiting Mankind for his work on the Weyerhaeuser Design Center (Foley and van Dam, Second Edition, color plate I.8). Jacopo Pantaleoni is currently a developer for LightFlow Technologies, Italy, which he founded in 1999. His interests in mathematics, computer programming and realistic rendering led to the publication of the Lightflow Rendering Tools.
In 2000, he also began working with a team of beta testers on a connection between his rendering software and Maya(TM). Scott Iverson is the chief developer of the AIR renderer and founder of Sitex Graphics Inc, USA. Paul Gregory works for the Aqsis Team, UK; he is the originator and lead developer of the open source renderer "Aqsis". Matthew Bentham is currently at ART VPS Ltd, UK, where he is the software developer responsible for compiler technology; ART VPS are the creators of the RenderDrive rendering appliance. Ian Stephenson is a Senior Lecturer at the National Centre for Computer Animation (NCCA), Bournemouth University, UK. Developer of the Angel rendering system, he is also the author of Essential RenderMan Fast.
Model checking is a powerful approach for the formal verification of software. When applicable, it automatically provides complete proofs of correctness, or explains, via counter-examples, why a system is not correct. This book provides a basic introduction to the technique. The first part describes in simple terms the theoretical basis of model checking: transition systems as a formal model of systems, temporal logic as a formal language for behavioral properties, and model-checking algorithms. The second part explains how to write rich and structured temporal logic specifications in practice, while the third part surveys some of the major model checkers available.
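The essence of the approach the blurb describes - exhaustively exploring a transition system and either proving a property or returning a counter-example - can be illustrated with a toy safety check. The example below is mine, not the book's; the protocol states and transitions are made up:

```python
from collections import deque

# Hypothetical transition system: "broken" is the bad state that a
# safety property forbids ever reaching.
transitions = {
    "idle": ["trying"],
    "trying": ["critical", "idle"],
    "critical": ["broken", "idle"],
}

def check_safety(start, bad):
    """Breadth-first search over all reachable states. Returns
    (True, None) if 'bad' is unreachable, otherwise (False, path)
    where path is a counter-example trace from start to bad."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        state = path[-1]
        if state == bad:
            return False, path          # counter-example found
        for nxt in transitions.get(state, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return True, None                   # bad state is unreachable

ok, trace = check_safety("idle", "broken")
print(ok, trace)   # False ['idle', 'trying', 'critical', 'broken']
```

Real model checkers verify temporal-logic formulas far richer than plain reachability and use clever state-space reductions, but the explore-or-refute pattern is the same.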
This edited book presents scientific results of the 17th IEEE/ACIS International Conference on Software Engineering, Artificial Intelligence, Networking and Parallel/Distributed Computing (SNPD 2016), which was held on May 30 - June 1, 2016 in Shanghai, China. The aim of this conference was to bring together researchers and scientists, businessmen and entrepreneurs, teachers, engineers, computer users, and students to discuss the many fields of computer science, to share their experiences and exchange new ideas and information in a meaningful way, to present research results on all aspects (theory, applications and tools) of computer and information science, and to discuss the practical challenges encountered along the way and the solutions adopted to solve them.
This book discusses various open issues in software engineering, such as the efficiency of automated testing techniques, predictions for cost estimation, data processing, and automatic code generation. Many traditional techniques are available for addressing these problems, but with the rapid changes in software development they often prove outdated or incapable of handling the software's complexity. Hence, many previously used methods are proving insufficient to solve the problems now arising in software development. The book highlights a number of unique problems and effective solutions that reflect the state of the art in software engineering. Deep learning, a recent computing technique, is now gaining popularity in various fields of software engineering. This book explores new trends and experiments that have yielded promising solutions to current challenges in software engineering. As such, it offers a valuable reference guide for a broad audience including systems analysts, software engineers, researchers, graduate students and professors engaged in teaching software engineering.
Explains how software reliability can be applied to software programs of various sizes, functions, languages, and businesses. This work provides real-life examples from industries such as defence engineering and finance. It is suitable for software and quality assurance engineers and graduate students.
This book constitutes the refereed post-conference proceedings of the Second IFIP International Cross-Domain Conference on Internet of Things, IFIPIoT 2019, held in Tampa, USA, in October/ November 2019. The 11 full papers presented were carefully reviewed and selected from 22 submissions. Also included in this volume are 8 invited papers. The papers are organized in the following topical sections: IoT applications; context reasoning and situational awareness; IoT security; smart and low power IoT; smart network architectures; and smart system design and IoT education.
Computer software and technologies are advancing at an amazing rate. The accessibility of software source code gives common users greater capabilities and enables rapid advancement in program development and operating information. Free and Open Source Software in Modern Data Science and Business Intelligence: Emerging Research and Opportunities is a critical scholarly resource that examines the differences between the two types of software, integral in the FOSS movement, and their effect on the distribution and use of software. Featuring coverage on a wide range of topics, such as FOSS ecology, graph mining, and project tasks, this book is geared towards academicians, researchers, and students interested in current research on the growing importance of FOSS and its expanding reach in IT infrastructure.
This book reports on recent advances in software engineering research and practice. Divided into 15 chapters, it addresses: languages and tools; development processes; modelling, simulation and verification; and education. In the first category, the book includes chapters on domain-specific languages, software complexity, testing and tools. In the second, it reports on test-driven development, processing of business rules, and software management. In turn, subsequent chapters address modelling, simulation and verification of real-time systems, mobile systems and computer networks, and a scrum-based framework. The book was written by researchers and practitioners, the goal being to achieve a synergistic combination of research results achieved in academia and best practices used in the industry, and to provide a valuable reference guide for both groups.
In establishing a framework for dealing with uncertainties in software engineering, and for using quantitative measures in related decision-making, this text puts into perspective the large body of work having statistical content that is relevant to software engineering. Aimed at computer scientists, software engineers, and reliability analysts who have some exposure to probability and statistics, the content is pitched at a level appropriate for research workers in software reliability, and for graduate-level courses in applied statistics, computer science, operations research, and software engineering.
This textbook is about systematic problem solving and systematic reasoning using type-driven design. Two problem-solving techniques are emphasized throughout the book: divide and conquer and iterative refinement. Divide and conquer is the process by which a large problem is broken into two or more smaller problems that are easier to solve, and the solutions for the smaller pieces are then combined to create an answer to the whole problem. Iterative refinement is the process by which a solution to a problem is gradually made better, like the drafts of an essay. Mastering these techniques is essential to becoming a good problem solver and programmer. The book is divided into five parts. Part I focuses on the basics. It starts with how to write expressions and subsequently leads to decision making and functions as the basis for problem solving. Part II then introduces compound data of finite size, while Part III covers compound data of arbitrary size, such as lists, intervals, natural numbers, and binary trees. It also introduces structural recursion, a powerful data-processing strategy that uses divide and conquer to process data whose size is not fixed. Next, Part IV delves into abstraction and shows how to eliminate repetitions in solutions to problems. It also introduces generic programming, which is abstraction over the type of data processed. This leads to the realization that functions are data and, perhaps more surprisingly, that data are functions, which in turn naturally leads to object-oriented programming. Part V introduces distributed programming, i.e., using multiple computers to solve a problem. This book promises that by the end of it readers will have designed and implemented a multiplayer video game that they can play with their friends over the internet. To achieve this, however, there is a lot about problem solving and programming that must be learned first. The game is developed using iterative refinement.
The reader learns step-by-step about programming and how to apply new knowledge to develop increasingly better versions of the video game. This way, readers practice modern trends that are likely to be common throughout a professional career and beyond.
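The two techniques this blurb names can be shown compactly. The sketch below is my own illustration (the textbook itself uses a different language and its own examples): structural recursion follows the shape of the data, one case per constructor, while divide and conquer splits a problem, solves the halves, and combines the results.

```python
def tree_sum(tree):
    """Structural recursion on a binary tree represented as
    None | (value, left, right): one case for each way the data
    can be built."""
    if tree is None:                   # base case: the empty tree
        return 0
    value, left, right = tree          # recursive case: mirror the structure
    return value + tree_sum(left) + tree_sum(right)

def merge_sort(xs):
    """Divide and conquer: split the list, sort each half, merge."""
    if len(xs) <= 1:                   # trivially solved subproblem
        return xs
    mid = len(xs) // 2
    left, right = merge_sort(xs[:mid]), merge_sort(xs[mid:])
    merged = []                        # combine step: merge two sorted lists
    while left and right:
        merged.append(left.pop(0) if left[0] <= right[0] else right.pop(0))
    return merged + left + right

print(tree_sum((1, (2, None, None), (3, None, None))))  # 6
print(merge_sort([3, 1, 2]))                            # [1, 2, 3]
```

Note how `tree_sum` is driven entirely by the type of its input: the empty and non-empty constructors each get a case, which is exactly the discipline type-driven design encourages.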
This book offers a new modular Petri net approach as a solution to the problem of vast Petri net models. It presents approaches centered on modules (known as "Petri modules"). The goal of this book is to introduce a methodology in which Petri nets are moved to a new level: large Petri net models are made of Petri modules, which are independent and run on different computers. This book also contains a literature study on modular Petri nets and definitions for the new Petri modules, along with algorithms for extracting Petri modules, algorithms for connecting Petri modules, and applications. Moreover, the ideas and algorithms given in this book are implemented in the software General-purpose Petri Net Simulator (GPenSIM). Hence, with this book, readers will see how real-life discrete event systems can be modeled, analyzed, and performance-optimized with GPenSIM.
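For readers unfamiliar with Petri nets, the token-game semantics that all of this builds on fits in a few lines. The sketch below is a plain-Python illustration of an ordinary Petri net's firing rule only (GPenSIM itself is a MATLAB toolbox, and the place and transition names here are made up):

```python
# A Petri net marking assigns a token count to each place.
marking = {"wait": 2, "served": 0}

# A transition is a pair (inputs, outputs) of arc weights per place.
serve = ({"wait": 1}, {"served": 1})

def enabled(marking, transition):
    """A transition is enabled when every input place holds
    at least as many tokens as its arc weight requires."""
    inputs, _ = transition
    return all(marking[p] >= w for p, w in inputs.items())

def fire(marking, transition):
    """Firing consumes input tokens and produces output tokens,
    yielding a new marking (the old one is left untouched)."""
    inputs, outputs = transition
    m = dict(marking)
    for p, w in inputs.items():
        m[p] -= w
    for p, w in outputs.items():
        m[p] = m.get(p, 0) + w
    return m

# Play the token game until 'serve' is no longer enabled.
while enabled(marking, serve):
    marking = fire(marking, serve)
print(marking)   # {'wait': 0, 'served': 2}
```

The book's Petri modules partition a large net of this kind into independent pieces with defined interconnection points, so each piece can run on its own computer.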
Customers in the new millennium increasingly expect on-time delivery of high-quality software products for their needs. This focus on quality requires industries and organizations to define a reliable software development infrastructure conducive to consistently producing quality software. Only through a pragmatic software-quality strategy will companies be able to remain competitive and focused. "A Practical Approach to Software Quality" offers a comprehensive introduction to software quality and useful guidance on implementing a dependable quality system within an industry or organization. Written from a practitioner's viewpoint, the book explains the principles of software quality management and software process improvement. It reconciles theory with practice, supporting the fundamentals with descriptions of the current approaches software engineers use to build quality into software. Chapters address software inspections and testing, the ISO 9000 standard and the SPICE standard, the Capability Maturity Model, metrics and problem solving, and formal methods and design. Topics and features: * Inclusive presentation of central issues in software quality management * Provides in-depth material on using assessments to assist with organizational improvements; includes CMM, SPICE, and ISO 9000:2000 * Detailed coverage of software process improvement * Broad discussion of software inspections and testing, including testing in an e-commerce environment * Presents software usability and usability standards (ISO 9241 and ISO 13407), as well as the SUMI methodology for assessing usability * Describes adaptable organization metrics and how the Balanced Scorecard and GQM can assist organizations in identifying the right metrics. With its accessible and concise style, and emphasis on the practical aspects of software-quality enhancement, this new book is an excellent resource for learning about the subject and its impact on organizations.
Software engineering practitioners and professionals will find the book an essential tool, as will researchers and students seeking an introduction to the field.
How to Write Code You're Proud of . . . Every Single Day ". . . [A] timely and humble reminder of the ever-increasing complexity of our programmatic world and how we owe it to the legacy of humankind--and to ourselves--to practice ethical development. Take your time reading Clean Craftsmanship. . . . Keep this book on your go-to bookshelf. Let this book be your old friend--your Uncle Bob, your guide--as you make your way through this world with curiosity and courage." --From the Foreword by Stacia Heimgartner Viscardi, CST & Agile Mentor In Clean Craftsmanship, the legendary Robert C. Martin ("Uncle Bob") has written the principles that define the profession--and the craft--of software development. Uncle Bob brings together the disciplines, standards, and ethics you need to deliver robust, effective code and to be proud of all the software you write. Robert Martin, the best-selling author of Clean Code, provides a pragmatic, technical, and prescriptive guide to the foundational disciplines of software craftsmanship. He discusses standards, showing how the world's expectations of developers often differ from their own and helping you bring the two in sync. Bob concludes with the ethics of the programming profession, describing the fundamental promises all developers should make to their colleagues, their users, and, above all, themselves. With Uncle Bob's insights, all programmers and their managers can consistently deliver code that builds trust instead of undermining it--trust among users and throughout societies that depend on software for their survival. 
* Moving towards the "north star" of true software craftsmanship: the state of knowing how to program well * Practical, specific guidance for applying five core disciplines: test-driven development, refactoring, simple design, collaborative programming, and acceptance tests * How developers and teams can promote productivity, quality, and courage * The true meaning of integrity and teamwork among programmers, and ten specific commitments every software professional should make. Register your book for convenient access to the book's companion videos, updates, and/or corrections as they become available. See inside book for details.