This comprehensive reference text discusses nature-inspired algorithms and their applications. It presents a methodology for writing new algorithms, with MATLAB programs and instructions included for a better understanding of the concepts. It covers well-known algorithms, including evolutionary algorithms, the genetic algorithm, particle swarm optimization, and differential evolution, as well as recent approaches such as grey wolf optimization. A separate chapter discusses test case generation using techniques such as particle swarm optimization, the genetic algorithm, and the differential evolution algorithm. The book discusses in detail various nature-inspired algorithms and their applications; provides MATLAB programs for the corresponding algorithms; presents a methodology for writing new algorithms; examines well-known algorithms like the genetic algorithm, particle swarm optimization, and differential evolution, along with recent approaches like grey wolf optimization; and provides conceptual links between the algorithms and the underlying theory. Important algorithms covered include deterministic algorithms, randomized algorithms, evolutionary algorithms, particle swarm optimization, the big bang-big crunch (BB-BC) algorithm, the genetic algorithm, and the grey wolf optimization algorithm. Discussing nature-inspired algorithms and their applications in a single volume, the text will be useful as a reference for graduate students in electrical engineering, electronics engineering, and computer science and engineering.
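To give a flavor of this algorithm family, here is a minimal particle swarm optimization sketch. It is written in Python rather than the book's MATLAB, and the swarm size, inertia weight, acceleration coefficients, and sphere test function are illustrative assumptions, not taken from the book.

```python
# Minimal particle swarm optimization (PSO) sketch; hyperparameters are illustrative.
import random

def pso(objective, dim, bounds, n_particles=30, n_iter=100, w=0.7, c1=1.5, c2=1.5):
    lo, hi = bounds
    # Initialize particle positions and velocities within the bounds.
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                      # each particle's best position
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]     # swarm-wide best

    for _ in range(n_iter):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # Velocity update: inertia + cognitive pull + social pull.
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(max(pos[i][d] + vel[i][d], lo), hi)
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Example: minimize the sphere function f(x) = sum(x_i^2) in 5 dimensions.
best_x, best_f = pso(lambda x: sum(v * v for v in x), dim=5, bounds=(-5.0, 5.0))
print(best_x, best_f)
```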
Without correct timing, there is no safe and reliable embedded software. This book shows how to consider timing early in the development process for embedded systems, how to solve acute timing problems, how to perform timing optimization, and how to address timing verification. The book is organized in twelve chapters. The first three cover the basics of microprocessor technologies and the operating systems used with them. The next four chapters cover timing problems in both theory and practice, also covering various timing analysis techniques as well as special issues such as multi- and many-core timing. Chapter 8 deals with aspects of timing optimization, followed by chapter 9, which highlights various methodological issues of the actual development process. Chapter 10 presents timing analysis in AUTOSAR in detail, while chapter 11 focuses on safety aspects and timing verification. Finally, chapter 12 provides an outlook on upcoming and future developments in software timing. The number of embedded systems we encounter in everyday life is growing steadily, and at the same time the complexity of their software is constantly increasing. This book is mainly written for software developers and project leaders in industry. It is enriched by many practical examples, mostly from the automotive domain, yet the vast majority of the book is relevant for any embedded software project. This also makes it well suited as a textbook for academic courses with a strong practical emphasis, e.g. at universities of applied sciences. Features and Benefits * Shows how to consider timing in the development process for embedded systems, how to solve timing problems, and how to address timing verification * Enriched by many practical examples, mostly from the automotive domain * Mainly written for software developers and project leaders in industry
Distributed across servers, difficult to test, and resistant to modification: modern software is complex. Grokking Simplicity is a friendly, practical guide that will change the way you approach software design and development. It introduces a unique approach to functional programming that explains why certain features of software are prone to complexity, and teaches you the functional techniques you can use to simplify these systems so that they're easier to test and debug. Available in PDF (ePub, Kindle, and liveBook formats coming soon). About the technology: Even experienced developers struggle with software systems that sprawl across distributed servers and APIs, are filled with redundant code, and are difficult to reliably test and modify. Adopting ways of thinking derived from functional programming can help you design and refactor your codebase in ways that reduce complexity rather than encouraging it. Grokking Simplicity lays out how to use functional programming in a professional environment to write a codebase that's easier to test and reuse, has fewer bugs, and is better at handling the asynchronous nature of distributed systems. About the book: In Grokking Simplicity, you'll learn techniques and, more importantly, a mindset that will help you tackle common problems that arise when software gets complex. Veteran functional programmer Eric Normand guides you to a crystal-clear understanding of why certain features of modern software are so prone to complexity and introduces you to the functional techniques you can use to simplify these systems so that they're easier to read, test, and debug. Through hands-on examples, exercises, and numerous self-assessments, you'll learn to organize your code for maximum reusability and internalize methods to keep unwanted complexity out of your codebase. Regardless of the language you're using, the ways of thinking in this book will help you recognize problematic code and tame even the most complex software. What's inside: applying functional programming principles to reduce codebase complexity; working with data transformation pipelines for code that's easier to test and reuse; tools for modeling time to simplify asynchrony; 60 exercises and 100 questions to test your knowledge. About the reader: For experienced programmers. Examples are in JavaScript. About the author: Eric Normand has been a functional programmer since 2001 and has been teaching functional programming online and in person since 2007. Visit LispCast.com to see more of his credentials.
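As a taste of the pipeline style described above, here is a minimal sketch separating pure calculations from actions. It is in Python for consistency with the other sketches in this listing (the book's own examples are in JavaScript), and the order data, field names, and tax rate are made-up illustrations.

```python
# Calculations: pure functions with no side effects, easy to test and reuse.
def only_paid(orders):
    return [o for o in orders if o["paid"]]

def add_tax(orders, rate=0.15):  # tax rate is an illustrative assumption
    return [{**o, "total": round(o["amount"] * (1 + rate), 2)} for o in orders]

def revenue(orders):
    return sum(o["total"] for o in orders)

# Action: the only place where I/O (a side effect) happens.
def report(orders):
    print(f"Revenue from paid orders: {revenue(add_tax(only_paid(orders)))}")

orders = [
    {"id": 1, "amount": 100.0, "paid": True},
    {"id": 2, "amount": 250.0, "paid": False},
    {"id": 3, "amount": 80.0, "paid": True},
]
report(orders)  # prints the revenue computed by the pure pipeline (207.0 here)
```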
This book presents the state of the art, challenges and future trends in automotive software engineering. The amount of automotive software has grown from just a few lines of code in the 1970s to millions of lines in today's cars. And this trend seems destined to continue in the years to come, considering all the innovations in electric/hybrid, autonomous, and connected cars. Yet there are also concerns related to onboard software, such as security, robustness, and trust. This book covers all essential aspects of the field. After a general introduction to the topic, it addresses automotive software development, automotive software reuse, E/E architectures and safety, C-ITS and security, and future trends. The specific topics discussed include requirements engineering for embedded software systems, tools and methods used in the automotive industry, software product lines, architectural frameworks, various related ISO standards, functional safety and safety cases, cooperative intelligent transportation systems, autonomous vehicles, and security and privacy issues. The intended audience includes researchers from academia who want to learn what the fundamental challenges are and how they are being tackled in the industry, and practitioners looking for cutting-edge academic findings. Although the book is not written as lecture notes, it can also be used in advanced master's-level courses on software and system engineering. The book also includes a number of case studies that can be used for student projects.
Environment Modeling-Based Requirements Engineering for Software Intensive Systems provides a new and promising approach for engineering the requirements of software-intensive systems: a systematic approach to identifying, clarifying, modeling, deriving, and validating those requirements from well-modeled environment simulations. In addition, the book presents a new view of software capability, i.e. effect-based software capability in terms of environment modeling.
This volume presents a collection of methods for dealing with software reliability. Ideally, formal methods need to be intuitive to use, require a relatively brief learning period, and incur only a small overhead to the development process. This book compares the various methods and reveals their respective advantages and disadvantages, while staying close to the dual themes of automata theory and logic. Topics and features: * Collects and compares the key software reliability methods currently in use: deductive verification, automatic verification, testing, and process algebra * Provides useful information to support the selection of methods for a given project * Offers numerous exercises, projects, and running examples to facilitate learning formal methods and to allow "hands-on" experience with these critical tools * Describes the mathematical principles supporting formal methods * Gives insights into new research directions in the field, as well as ways of developing new methods and/or adjusting existing ones. This volume can be used as an introduction to software reliability methods, a source for learning about various ways to enhance software reliability, and a guide to formal methods techniques. It is an essential resource for professionals and software engineers in industrial R&D departments who work with software reliability, program-modeling systems, and verification methods.
This book summarizes the results of Design Thinking Research carried out at Stanford University in Palo Alto, California, USA, and the Hasso Plattner Institute in Potsdam, Germany. The authors offer readers a closer look at Design Thinking, its processes of innovation, and its methods. The contents of the articles range from how to design ideas, methods, and technologies via creativity experiments and wicked problem solutions, to creative collaboration in the real world and the connectivity of designers and engineers. But the topics go beyond this in their detailed exploration of design thinking and its use in IT systems engineering fields, and even from a management perspective. The authors show how these methods and strategies work in companies, introduce new technologies and their functions, and demonstrate how Design Thinking can influence topic areas as diverse as marriage. Furthermore, we see how particular uses of design thinking function in solving wicked problems in complex fields. Thinking and creating innovations are basically and inherently human, and so is Design Thinking. Because of this, Design Thinking is not only a factual matter, a result of special courses, or a matter of being gifted or trained: it's a way of dealing with our environment and improving techniques, technologies, and life.
Unique selling point: Focuses solely on entity-relationship model diagramming and design. Core audience: Undergraduate CS students and professionals. Place in the market: Undergraduate textbook.
Edsger Wybe Dijkstra (1930-2002) was one of the most influential researchers in the history of computer science, making fundamental contributions to both the theory and practice of computing. Early in his career, he proposed the single-source shortest path algorithm, now commonly referred to as Dijkstra's algorithm. He wrote (with Jaap Zonneveld) the first ALGOL 60 compiler, and designed and implemented with his colleagues the influential THE operating system. Dijkstra invented the field of concurrent algorithms, with concepts such as mutual exclusion, deadlock detection, and synchronization. A prolific writer and forceful proponent of the concept of structured programming, he convincingly argued against the use of the Go To statement. In 1972 he was awarded the ACM Turing Award for "fundamental contributions to programming as a high, intellectual challenge; for eloquent insistence and practical demonstration that programs should be composed correctly, not just debugged into correctness; for illuminating perception of problems at the foundations of program design." Subsequently he invented the concept of self-stabilization relevant to fault-tolerant computing. He also devised an elegant language for nondeterministic programming and its weakest precondition semantics, featured in his influential 1976 book A Discipline of Programming in which he advocated the development of programs in concert with their correctness proofs. In the later stages of his life, he devoted much attention to the development and presentation of mathematical proofs, providing further support to his long-held view that the programming process should be viewed as a mathematical activity. In this unique new book, 31 computer scientists, including five recipients of the Turing Award, present and discuss Dijkstra's numerous contributions to computing science and assess their impact. Several authors knew Dijkstra as a friend, teacher, lecturer, or colleague. Their biographical essays and tributes provide a fascinating multi-author picture of Dijkstra, from the early days of his career up to the end of his life.
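As a small illustration of the shortest-path algorithm mentioned above, here is a minimal sketch of Dijkstra's algorithm; the example graph and its edge weights are made up and not connected to the book.

```python
# Minimal sketch of Dijkstra's single-source shortest path algorithm.
import heapq

def dijkstra(graph, source):
    """graph: dict mapping node -> list of (neighbor, non-negative edge weight)."""
    dist = {source: 0}
    heap = [(0, source)]                      # priority queue of (distance, node)
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                          # stale queue entry, skip it
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd                  # found a shorter path to v
                heapq.heappush(heap, (nd, v))
    return dist

graph = {"a": [("b", 2), ("c", 5)], "b": [("c", 1), ("d", 4)], "c": [("d", 1)], "d": []}
print(dijkstra(graph, "a"))  # {'a': 0, 'b': 2, 'c': 3, 'd': 4}
```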
This book serves as a guide to understanding workflow scheduling techniques on computing systems such as clusters, supercomputers, grid computing, cloud computing, edge computing, and fog computing, and to the practical realization of such methods. It offers a new perspective and a holistic approach to understanding workflow scheduling on computing systems. By expressing and exposing approaches for various process-centric cloud-based applications, it gives full coverage of most systems' energy consumption, reliability, resource utilization, cost, and application stochastic computation. By combining theory with application and connecting mathematical concepts and models with their resource management targets, this book will be equally accessible to readers with both Computer Science and Engineering backgrounds. It will be of great interest to students and professionals alike in the field of computing system design, management, and application. This book will also be beneficial to general readers and technology enthusiasts who want to expand their knowledge of computer structure.
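To make the idea of workflow scheduling concrete, here is a minimal greedy heuristic sketch that assigns tasks, in dependency order, to whichever machine lets them finish earliest. It is a generic illustration only; the task graph, runtimes, and two-machine setup are assumptions and not taken from the book.

```python
# Greedy workflow-scheduling sketch: earliest-finish-time machine selection.
def schedule(tasks, deps, runtime, n_machines=2):
    """tasks: task ids in a valid topological (dependency-respecting) order.
    deps: dict task -> list of predecessor tasks.
    runtime: dict task -> execution time (same on every machine, for simplicity)."""
    machine_free = [0.0] * n_machines        # time each machine becomes available
    finish = {}                              # task -> finish time
    placement = {}                           # task -> machine index
    for t in tasks:
        # A task is ready once all of its predecessors have finished.
        ready = max((finish[p] for p in deps.get(t, [])), default=0.0)
        # Pick the machine that lets this task finish earliest.
        m = min(range(n_machines), key=lambda i: max(machine_free[i], ready))
        start = max(machine_free[m], ready)
        finish[t] = start + runtime[t]
        machine_free[m] = finish[t]
        placement[t] = m
    return placement, finish

tasks = ["a", "b", "c", "d"]
deps = {"c": ["a", "b"], "d": ["c"]}
runtime = {"a": 3.0, "b": 2.0, "c": 4.0, "d": 1.0}
placement, finish = schedule(tasks, deps, runtime)
print(placement, "makespan:", max(finish.values()))
```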
In the 5G era, edge computing and new ecosystems of mobile microservices enable new business models, strategies, and competitive advantage. Focusing on microservices, this book introduces the essential concepts, technologies, and trade-offs in the edge computing architectural stack, providing for widespread adoption and dissemination. The book elucidates the concepts, architectures, well-defined building blocks, and prototypes for mobile microservice platforms and pervasive application development, as well as the implementation and configuration of service middleware and AI-based microservices. A goal-oriented service composition model is then proposed by the author, allowing for an economic assessment of connected, smart mobile services. Based on this model, costs can be minimized through statistical workload aggregation effects or backhaul data transport reduction, and customer experience and safety can be enhanced through reduced response times. This title will be a useful guide for students and IT professionals getting started with microservices or studying the use of microservices in pervasive applications. It will also appeal to researchers and students studying software architecture and service-oriented computing, and especially those interested in edge computing, pervasive computing, the Internet of Things, and mobile microservices.
How to Find and Fix the Killer Software Bugs that Evade Conventional Testing. In Exploratory Software Testing, renowned software testing expert James Whittaker reveals the real causes of today's most serious, well-hidden software bugs--and introduces powerful new "exploratory" techniques for finding and correcting them. Drawing on nearly two decades of experience working at the cutting edge of testing with Google, Microsoft, and other top software organizations, Whittaker introduces innovative new processes for manual testing that are repeatable, prescriptive, teachable, and extremely effective. Whittaker defines both in-the-small techniques for individual testers and in-the-large techniques to supercharge test teams. He also introduces a hybrid strategy for injecting exploratory concepts into traditional scripted testing. You'll learn when to use each, and how to use them all successfully. Concise, entertaining, and actionable, this book introduces robust techniques that have been used extensively by real testers on shipping software, illuminating their actual experiences with these techniques, and the results they've achieved. Writing for testers, QA specialists, developers, program managers, and architects alike, Whittaker answers crucial questions such as: * Why do some bugs remain invisible to automated testing--and how can I uncover them? * What techniques will help me consistently discover and eliminate "show stopper" bugs? * How do I make manual testing more effective--and less boring and unpleasant? * What's the most effective high-level test strategy for each project? * Which inputs should I test when I can't test them all? * Which test cases will provide the best feature coverage? * How can I get better results by combining exploratory testing with traditional script or scenario-based testing? * How do I reflect feedback from the development process, such as code changes?
This book describes a cross-domain architecture and design tools for networked complex systems where application subsystems of different criticality coexist and interact on networked multi-core chips. The architecture leverages multi-core platforms for a hierarchical system perspective of mixed-criticality applications. This system perspective is realized by virtualization to establish security, safety, and real-time performance. The impact further includes a reduction of time-to-market, decreased development, deployment, and maintenance cost, and the exploitation of economies of scale through cross-domain components and tools. Describes an end-to-end architecture at the hypervisor level, chip level, and cluster level. Offers a solution for different types of resources including processors, on-chip communication, off-chip communication, and I/O. Provides a cross-domain approach with examples for wind power, health care, and avionics. Introduces hierarchical adaptation strategies for mixed-criticality systems. Provides modular verification and certification methods for the seamless integration of mixed-criticality systems. Covers platform technologies, along with a methodology for the development process. Presents an experimental evaluation of technological results in cooperation with industrial partners. The information in this book will be extremely useful to industry leaders who design and manufacture products with distributed embedded systems in mixed-criticality use cases. It will also benefit suppliers of embedded components or development tools used in this area. As an educational tool, this material can be used to teach students and working professionals in areas including embedded systems, computer networks, system architecture, dependability, real-time systems, and avionics, wind-power, and health-care systems.
As interactive systems are quickly becoming integral to our everyday lives, this book investigates how we can make these systems, from desktop and mobile apps to wearable and immersive applications, more usable and maintainable by using HCI design patterns. It also examines how we can facilitate the reuse of design practices in the development lifecycle of multi-device, multi-platform, and multi-context user interfaces. Effective design tools are provided for combining HCI design patterns with User Interface (UI) driven engineering to enhance design whilst differentiating between the UI and the underlying system features. Several examples are used to demonstrate how HCI design patterns can support this decoupling by providing an architectural framework for pattern-oriented and model-driven engineering of multi-platform and multi-device user interfaces. Patterns of HCI Design and HCI Design of Patterns is for students, academics, and industry specialists who are concerned with user interfaces and usability within the software development community.
This book is a collection of chapters from the IFIP working groups 13.8 and 9.4. The 10 papers included present experiences and research on the topic of digital transformation and innovation practices in the global south. The topics span from digital transformation initiatives to novel innovative technological developments, practices and applications of marginalised people in the global south.
As long as humans write software, the key to successful software security is making the software development process more efficient and effective. Although this textbook covers people, process, and technology approaches to software security, Practical Core Software Security: A Reference Framework stresses the people element of software security, which is still the most important element to manage, as software is developed, controlled, and exploited by humans. The text outlines a step-by-step process for software security that is relevant to today's technical, operational, business, and development environments. It focuses on what humans can do to control and manage a secure software development process using best practices and metrics. Although security issues will always exist, students learn how to maximize an organization's ability to minimize vulnerabilities in software products before they are released or deployed by building security into the development process. The authors have worked with Fortune 500 companies and have often seen examples of the breakdown of security development lifecycle (SDL) practices. The text takes an experience-based approach, applying components of the best available SDL models to the problems described above. Software security best practices, an SDL model, and a framework are presented in this book. Starting with an overview of the SDL, the text outlines a model for mapping SDL best practices to the software development life cycle (SDLC). It explains how to use this model to build and manage a mature SDL program. Exercises and an in-depth case study aid students in mastering the SDL model. Professionals skilled in secure software development and related tasks are in tremendous demand today, and the industry continues to experience exponential demand that should continue for the foreseeable future. This book can benefit professionals as much as students: as they integrate the book's ideas into their software security practices, their value to their organizations, management teams, community, and industry increases.
The safety case (SC) is one of the railway industry's most important deliverables for creating confidence in its systems. This is the first book on how to write an SC, based on the standard EN 50129:2003. Experience has shown that preparing and understanding an SC is difficult and time consuming, and as such the book provides insights that enhance the training for writing an SC. The book discusses both "regular" safety cases and agile safety cases, which avoid too much documentation, improve communication between the stakeholders, allow quicker approval of the system, and are important in the light of rapidly changing technology. In addition, it discusses the necessity of frequently updating software due to market requirements, changes in requirements, and increased cyber-security threats. After a general introduction to SCs and agile thinking in chapter 1, chapter 2 describes the majority of the roles that are relevant when developing railway-signaling systems. Next, chapter 3 provides information related to the assessment of signaling systems, to certifications based on IEC 61508, and to the authorization of signaling systems. Chapter 4 then explains how an agile safety plan satisfying the requirements given in EN 50126-1:1999 can be developed, while chapter 5 provides a brief introduction to safety case patterns and notations. Lastly, chapter 6 combines all this and describes how an (agile) SC can be developed and what it should include. To ensure that infrastructure managers, suppliers, consultants, and others can take full advantage of the agile mind-set, the book includes concrete examples and presents relevant agile practices. Although the scope of the book is limited to signaling systems, the basic foundations for (agile) SCs are clearly described so that they can also be applied in other cases.
This book offers an overview of the key ideas of Petri nets, how they were developed, and how they were applied in diverse applications. The chapters in the first part offer individual perspectives on the impact of Petri's work. The second part of the book contains personal memories from researchers who collaborated with him closely; in particular, they recount his unique personality. The chapters in the third part offer more conventional treatments of various aspects of current Petri net research, and the fourth part examines the wide applications of Petri nets and their relationships with other domains. The editors and authors are leading researchers in this domain, and this book will provide valuable insights for researchers in computer science, particularly those engaged with concurrency and distributed systems.
The Internet of Things (IoT) is an emerging network superstructure that will connect physical resources and actual users. It will support an ecosystem of smart applications and services bringing hyper-connectivity to our society by using augmented and rich interfaces. Whereas in the beginning IoT referred to the advent of barcodes and Radio Frequency Identification (RFID), which helped to automate inventory, tracking and basic identification, today IoT is characterized by a dynamic trend toward connecting smart sensors, objects, devices, data and applications. The next step will be cognitive IoT, facilitating object and data re-use across application domains and leveraging hyper-connectivity, interoperability solutions and semantically enriched information distribution. The Architectural Reference Model (ARM), presented in this book by the members of the IoT-A project team driving this harmonization effort, makes it possible to connect vertically closed systems, architectures and application areas so as to create open interoperable systems and integrated environments and platforms. It constitutes a foundation from which software companies can capitalize on the benefits of developing consumer-oriented platforms including hardware, software and services. The material is structured in two parts. Part A introduces the general concepts developed for and applied in the ARM. It is aimed at end users who want to use IoT technologies, managers interested in understanding the opportunities generated by these novel technologies, and system architects who are interested in an overview of the underlying basic models. It also includes several case studies to illustrate how the ARM has been used in real-life scenarios. Part B then addresses the topic at a more detailed technical level and is targeted at readers with a more scientific or technical background. It provides in-depth guidance on the ARM, including a detailed description of a process for generating concrete architectures, as well as reference manuals with guidelines on how to use the various models and perspectives presented to create a concrete architecture. Furthermore, best practices and tips on how system engineers can use the ARM to develop specific IoT architectures for dedicated IoT solutions are illustrated and exemplified in reverse mapping exercises of existing standards and platforms.
How to Succeed in the Enterprise Software Market describes enterprise-level information systems that businesses use to support their processes. This book provides a clear and simple framework to help software companies understand this experience, and help them build software products compatible with organizations, humans, and complex customer environments. How to Succeed in the Enterprise Software Market combines leading research on how technology affects humans and organizations with industry experience and case studies on enterprise software companies. It includes the inside story from case studies on emerging companies including OpenMarket, Inc, E-Docs, ObjectStore, NewRiver, Inc. and BBN Communications and major buyers of IT services in the financial services industry. This book is a practical guide to results that bridge gaps between hard and soft science views of systems development, academic research, and actual practice.
This book presents a set of software engineering techniques and tools to improve the productivity and assure the quality in quantum software development. Through the collaboration of the software engineering community with the quantum computing community new architectural paradigms for quantum-enabled computing systems will be anticipated and developed. The book starts with a chapter that introduces the main concepts and general foundations related to quantum computing. This is followed by a number of chapters dealing with the quantum software engineering methods and techniques. Topics like the Talavera Manifesto for quantum software engineering, frameworks for hybrid systems, formal methods for quantum software engineering, quantum software modelling languages, and reengineering for quantum software are covered in this part. A second set of chapters then deals with quantum software environments and tools, detailing platforms like QuantumPath (R), Classiq as well as quantum software frameworks for deep learning. Overall, the book aims at academic researchers and practitioners involved in the creation of quantum information systems and software platforms. It is assumed that readers have a background in traditional software engineering and information systems.
This volume presents a programming model, similar to object-oriented programming, that imposes a strict discipline on the form of the constituent objects and interactions among them. Concurrency considerations have been eliminated from the model itself and are introduced only during implementation, thereby freeing programmers from dealing with concurrency explicitly. Moreover, the resulting software designs are typically more modular and easier to analyze than the more traditional ones. Numerous examples illustrate various aspects of the model and reveal that a few simple, integrated features are adequate for designing complex applications. Topics and features: * Presents a simple, easy-to-understand multiprogramming model * Provides extensive development of the underlying theory * Emphasizes program composition, thereby making possible programming of large systems through modular designs * Eliminates explicit concurrency considerations during program design * Supplies efficient implementation schemes for distributed platforms. This book addresses the problem of developing complex distributed applications on wide-area networks, such as the Internet and World Wide Web, by using effective program design principles. Computer scientists, computer engineers, and software engineers will find the book an authoritative guide to large-scale multiprogramming.
Maintaining the advanced technical focus found in Developing Essbase Applications, this second volume is another collaborative effort by some of the best and most experienced Essbase practitioners from around the world. Developing Essbase Applications: Hybrid Techniques and Practices reviews technology areas that are much-discussed but still very new, including Exalytics and Hybrid Essbase. Covering recent improvements to the Essbase engine, the book illustrates the impact of new reporting and analysis tools and also introduces advanced Essbase best practices across a variety of features, functions, and theories. Some of this book's chapters are in the same vein as the previous volume: hardware, engines, and languages. Others cover new ground with Oracle Business Intelligence Enterprise Edition, design philosophy, benchmarking concepts, and multiple client tools. As before, these subjects are covered from both the technical and best practice perspectives. This updated volume continues in the tradition of its bestselling predecessor by defining, investigating, and explaining Essbase concepts like no other resource. It also includes use cases that transform abstract theory into practical examples you can easily relate to your own Essbase environment. Illustrating the recent expansion of Essbase functionality, this book provides the up-to-date understanding you need to explore the full depth of the Essbase technology stack. Although the book presents detailed tutorial chapters that can be read on their own, reading the entire book will give you an understanding similar to that of some of the most experienced Essbase practitioners from around the world.
Software Reliability Assessment with OR Applications is a comprehensive guide to software reliability measurement, prediction, and control. It provides a thorough understanding of the field and gives solutions to the decision-making problems that concern software developers, engineers, practitioners, scientists, and researchers. Using operations research techniques, readers will learn how to solve problems under constraints such as cost, budget, and schedules to achieve the highest possible quality level. Software Reliability Assessment with OR Applications is a comprehensive text on software engineering and applied statistics, state-of-the-art software reliability modeling, techniques and methods for reliability assessment, and related optimization problems. It addresses various topics, including: unification methodologies in software reliability assessment; application of neural networks to software reliability assessment; software reliability growth modeling using stochastic differential equations; software release time and resource allocation problems; and optimum component selection and reliability analysis for fault tolerant systems. Software Reliability Assessment with OR Applications is designed to cater to the needs of software engineering practitioners, developers, security or risk managers, and statisticians. It can also be used as a textbook for advanced undergraduate or postgraduate courses in software reliability, industrial engineering, and operations research and management.
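As one concrete illustration of software reliability growth modeling, the sketch below evaluates the mean value function of the classic Goel-Okumoto NHPP model, m(t) = a(1 - e^(-bt)); the model choice and parameter values are illustrative assumptions rather than material from the book.

```python
# Goel-Okumoto NHPP software reliability growth model: expected faults detected by time t.
import math

def go_mean_value(t, a, b):
    """m(t) = a * (1 - exp(-b * t)), where a is the total expected fault content
    and b is the per-fault detection rate."""
    return a * (1 - math.exp(-b * t))

a, b = 120.0, 0.05            # assumed total fault content and detection rate
for t in (10, 40, 100):       # test time in arbitrary units
    detected = go_mean_value(t, a, b)
    print(f"t={t:4d}  detected~{detected:6.1f}  remaining~{a - detected:6.1f}")
```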
The basic concepts and building blocks for the design of fine-grain (or FPGA) and coarse-grain reconfigurable architectures are discussed in this book. Recently developed integrated architecture design and software-supported design flows for FPGA and coarse-grain reconfigurable architectures are also described. The book is accompanied by an interactive CD which includes case studies and lab projects for the design of FPGA and coarse-grain architectures.