In 2013, the International Conference on Advanced Information Systems Engineering (CAiSE) turns 25. Initially launched in 1989, the conference has for all these years provided a broad forum for researchers working in the area of Information Systems Engineering. To reflect on the work done so far and to examine prospects for future work, the CAiSE Steering Committee decided to present a selection of seminal papers published at the conference during these years and to ask their authors, all prominent researchers in the field, to comment on their work and how it has developed over the years. The papers selected cover a broad range of topics related to modeling and designing information systems and to collecting and managing requirements, with special attention to how information systems are engineered towards their final development and deployment as software components. With this approach, the book provides not only a historical analysis of how information systems engineering evolved over the years, but also a fascinating social network analysis of the research community. Additionally, many inspiring ideas for future research and new perspectives in this area are sparked by the intriguing comments of the renowned authors.
This book provides a coherent overview of the most important modelling-related security techniques available today, and demonstrates how to combine them. Further, it describes an integrated set of systematic practices that can be used to achieve increased security for software from the outset, and combines practical ways of working with practical ways of distilling, managing, and making security knowledge operational. The book addresses three main topics: (1) security requirements engineering, including security risk management, major activities, asset identification, security risk analysis and defining security requirements; (2) secure software system modelling, including modelling of context and protected assets, security risks, and decisions regarding security risk treatment using various modelling languages; and (3) secure system development, including effective approaches, pattern-driven development, and model-driven security. The primary target audience of this book is graduate students studying cyber security, software engineering and system security engineering. The book will also benefit practitioners interested in learning about the need to consider the decisions behind secure software systems. Overall it offers the ideal basis for educating future generations of security experts.
The advent of new architectures and computing platforms means that synchronization and concurrent computing are among the most important topics in computing science. Concurrent programs are made up of cooperating entities -- processors, processes, agents, peers, sensors -- and synchronization is the set of concepts, rules and mechanisms that allow them to coordinate their local computations in order to realize a common task. This book is devoted to the most difficult part of concurrent programming, namely synchronization concepts, techniques and principles when the cooperating entities are asynchronous, communicate through a shared memory, and may experience failures. Synchronization is no longer a set of tricks but, due to research results in recent decades, it relies today on sound scientific foundations, as explained in this book. The author explains synchronization and the implementation of concurrent objects, presenting in a uniform and comprehensive way the major theoretical and practical results of the past 30 years. Among the key features of the book are a new look at lock-based synchronization (mutual exclusion, semaphores, monitors, path expressions); an introduction to the atomicity consistency criterion and its properties, and a specific chapter on transactional memory; an introduction to mutex-freedom and associated progress conditions such as obstruction-freedom and wait-freedom; a presentation of Lamport's hierarchy of safe, regular and atomic registers and associated wait-free constructions; a description of numerous wait-free constructions of concurrent objects (queues, stacks, weak counters, snapshot objects, renaming objects, etc.); a presentation of the computability power of concurrent objects including the notions of universal construction, consensus number and the associated Herlihy hierarchy; and a survey of failure detector-based constructions of consensus objects. The book is suitable for advanced undergraduate students and graduate students in computer science or computer engineering, graduate students in mathematics interested in the foundations of process synchronization, and practitioners and engineers who need to produce correct concurrent software. The reader should have a basic knowledge of algorithms and operating systems.
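To make the book's core notion concrete, here is a minimal sketch (not taken from the book, written in Python purely for illustration) of lock-based mutual exclusion: several threads update a shared counter, and a lock guarantees that the read-modify-write step in the critical section is never interleaved.

```python
# Minimal illustration of lock-based synchronization (not from the book):
# four threads increment a shared counter; the lock enforces mutual exclusion
# so the read-modify-write sequence in the critical section never interleaves.
import threading

counter = 0
lock = threading.Lock()

def worker(increments: int) -> None:
    global counter
    for _ in range(increments):
        with lock:          # critical section: only one thread at a time
            counter += 1

threads = [threading.Thread(target=worker, args=(100_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # always 400000 with the lock; a lock-free version needs care
```

Removing the lock turns the increment into a data race, which is exactly the class of problem that the lock-based and wait-free constructions surveyed in the book address rigorously.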
Design thinking as a user-centric innovation method has become more and more widespread during the past years. An increasing number of people and institutions have experienced its innovative power, while at the same time the demand has grown for a deep, evidence-based understanding of the way design thinking functions. This challenge is addressed by the Design Thinking Research Program between Stanford University, Palo Alto, USA, and the Hasso Plattner Institute, Potsdam, Germany. Summarizing the outcomes of the 5th program year, this book imparts the scientific findings gained by the researchers through their investigations, experiments and studies. The method of design thinking works when applied with diligence and insight. With this book and the underlying research projects, we aim to understand the innovation process of design thinking and the people behind it. The contributions ultimately center on the issue of building innovators. The focus of the investigation is on what people are doing and thinking when engaged in creative design innovation and how their innovation work can be supported. Therefore, within three topic areas, various frameworks, methodologies, mind sets, systems and tools are explored and further developed. The book begins with an assessment of crucial factors for innovators such as empathy and creativity; the second part addresses the improvement of team collaboration; and finally we turn to specific tools and approaches which ensure information transfer during the design process. All in all, the contributions shed light on, and offer deeper insights into, how to support the work of design teams in order to systematically and successfully develop innovations and design progressive solutions for tomorrow.
Users increasingly demand more from their software than ever before: more features, fewer errors, faster runtimes. To deliver the best quality products possible, software engineers are constantly employing novel tools in developing the latest software applications. Progressions and Innovations in Model-Driven Software Engineering investigates the most recent and relevant research on model-driven engineering. Within its pages, researchers and professionals in the field of software development, as well as academics and students of computer science, will find an up-to-date discussion of the scientific literature on the topic, identifying the opportunities and advantages, as well as the complexities and challenges, inherent in the future of software engineering.
Explores the history of telepresence from the 1948 developments in master-slave manipulation through to current telepresence technology used in space, undersea, surgery and telemedicine, operations in nuclear and other hazardous environments, policing and surveillance, agriculture, construction, mining, warehousing, education, amusement, social media and other contexts. Reviews the differing technologies for visual, haptic and tactile remote sensing at the remote site, and the corresponding means of display to the human operator. Reviews the sensing and control technology, its history and likely future, and discusses the many research and policy issues. Reviews psychological experiments in telepresence in relation to virtual and augmented reality. Examines social and ethical concerns: ease of spying, mischief, and crime via remote control of an avatar.
The idea for this workshop originated when I came across and read Martin Zelkowitz's book on Requirements for Software Engineering Environments (the proceedings of a small workshop held at the University of Maryland in 1986). Although stimulated by the book, I was also disappointed in that it didn't adequately address two important questions - "Whose requirements are these?" and "Will the environment which meets all these requirements be usable by software engineers?" And thus was the decision made to organise this workshop, which would explicitly address these two questions. As the preparations progressed, it became clear that our workshop would happen more than five years after the Maryland workshop and thus, at the same time as addressing the two questions above, this workshop would attempt to update the Zelkowitz approach. Hence the workshop acquired two halves, one dominated by discussion of what we already know about usability problems in software engineering and the other by discussion of existing solutions (technical and otherwise) to these problems. This scheme also provided a good format for bringing together those in the HCI community concerned with the human factors of software engineering and those building tools to solve acknowledged, but rarely understood, problems.
"Iterating Infusion: Clearer Views of Objects, Classes, and Systems" is a one-of-a-kind book, not dependent on any single technology. Rather, it provides a way to integrate the most efficient techniques from a variety of programming methods, in a manner that makes designing and programming software look easy. "Iterating Infusion" presents comprehensive tools for you to best manage and work with object orientation. These include simplified fundamental concepts, popular language comparisons, advanced designing strategies, a broad usage progression, thorough design notations (interaction algebra), and data-oriented (fundamentally-OO) languages. The title, "Iterating Infusion," alludes to the fact that any system has multiple, coexisting functional levels and that new levelsboth lower and higherare continually added to the same functional area. The practical effect is to bring processes into more focus, always clarifying the vague. The extreme form of this is when separate but compatible technologies are brought together to create advancements; these can be baby-steps or great leaps, with varying amounts of effort. In more general terms, the same thing in a different context can take on much more power. And actually, this phenomenon is at the heart of object-oriented software. Readers have been confirming that, compared to books on just low-level details, "Iterating Infusion" presents cohesive insights that allow you to solve more problems with the same effort in more key places.
The Certified Function Point Specialist Examination Guide provides a complete and authoritative review of the rules and guidelines prescribed in release 4.3 of the Function Point Counting Practices Manual (CPM). Providing a fundamental understanding of the IFPUG Functional Size Measurement method, it is the ideal study guide for the CFPS examination.
Active members of the Counting Practices Committee and a past president of the IFPUG supply time-tested insight on how to use the CPM manual effectively and efficiently during the exam. The two sample exams and detailed examples throughout the text help to ensure readers develop the comprehension required to attain certification the first time around. Following certification, this book is a valuable reference for applying the IFPUG sizing method throughout proficient software design, development, and deployment. Praise for the book: "While there are a number of solid books on counting function points, this new book fills a gap in the function point literature by providing useful information on the specifics of becoming a certified function point counter. The authors are all qualified for the work at hand, and indeed have contributed to the function point counting methodology."
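As a rough illustration of what functional size measurement involves, the following sketch computes an unadjusted function point count using the commonly cited IFPUG complexity weights; the component counts are invented for the example and are not drawn from the exam guide (Python is used here only as a convenient calculator).

```python
# Illustrative sketch of an unadjusted function point (UFP) count using the
# commonly cited IFPUG complexity weights; the counts below are invented
# purely for this example and do not come from the exam guide.
WEIGHTS = {
    "EI":  {"low": 3, "average": 4, "high": 6},    # external inputs
    "EO":  {"low": 4, "average": 5, "high": 7},    # external outputs
    "EQ":  {"low": 3, "average": 4, "high": 6},    # external inquiries
    "ILF": {"low": 7, "average": 10, "high": 15},  # internal logical files
    "EIF": {"low": 5, "average": 7, "high": 10},   # external interface files
}

# Hypothetical application: (function type, complexity, number of instances)
counts = [
    ("EI", "low", 6), ("EO", "average", 4), ("EQ", "low", 3),
    ("ILF", "average", 2), ("EIF", "low", 1),
]

ufp = sum(WEIGHTS[ftype][cx] * n for ftype, cx, n in counts)
print(ufp)  # 18 + 20 + 9 + 20 + 5 = 72 unadjusted function points
```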
Welcome to the 5th International Conference on Open Source Systems! It is quite an achievement to reach the five-year mark - that's the sign of a successful enterprise. This annual conference is now being recognized as the primary event for the open source research community, not only attracting high-quality papers, but also building a community around a technical program, a collection of workshops, and (starting this year) a Doctoral Consortium. Reaching this milestone reflects the efforts of many people, including the conference founders, as well as the organizers and participants in the previous conferences. My task has been easy, and has been greatly aided by the hard work of Kevin Crowston and Cornelia Boldyreff, the Program Committee, and the Organizing Team led by Bjoern Lundell. All of us are also grateful to our attendees, especially in the difficult economic climate of 2009. We hope the participants found the conference valuable both for its technical content and for its personal networking opportunities. To me, it is interesting to look back over the past five years, not just at this conference, but at the development and acceptance of open source software. Since 2004, the business and commercial side of open source has grown enormously. At that time, there were only a handful of open source businesses, led by Red Hat and its Linux distribution. Companies such as MySQL and JBoss were still quite small.
Take the pain out of managing serverless applications. Knative, a collection of Kubernetes extensions curated by Google, simplifies building and running serverless systems. Knative in Action guides you through the Knative toolkit, showing you how to launch, modify, and monitor event-based apps built using cloud-hosted functions like AWS Lambda. You'll learn how to use Knative Serving to develop software that is easily deployed and autoscaled, how to use Knative Eventing to wire together disparate systems into a consistent whole, and how to integrate Knative into your shipping pipeline. About the technology: With Knative, managing a serverless application's full lifecycle is a snap. Knative builds on Kubernetes orchestration features, making it easy to deploy and run serverless apps. It handles low-level chores, such as starting and stopping instances, so you can concentrate on features and behavior. About the book: Knative in Action teaches you to build complex and efficient serverless applications. You'll dive into Knative's unique design principles and grasp cloud native concepts like handling latency-sensitive workloads. You'll deliver updates with Knative Serving and interlink apps, services, and systems with Knative Eventing. To keep you moving forward, every example includes deployment advice and tips for debugging. What's inside: deploy a service with Knative Serving; connect systems with Knative Eventing; autoscale responses for different traffic surges; develop, ship, and operate software. About the reader: For software developers comfortable with CLI tools and an OO language like Java or Go. About the author: Jacques Chester has worked in Pivotal and VMware R&D since 2014, contributing to Knative and other projects.
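For a sense of what Knative Serving actually runs, here is a minimal sketch (not taken from the book) of a container workload: an HTTP service that listens on the port supplied via the PORT environment variable, the convention Knative Serving uses when it starts an instance. Python is used here purely for illustration; the book's own examples may use other languages.

```python
# Minimal sketch of an HTTP workload suitable for Knative Serving: the process
# listens on the port passed in the PORT environment variable (Knative's
# convention, defaulting to 8080) and answers every GET request.
import os
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"Hello from a scale-to-zero service\n"
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    port = int(os.environ.get("PORT", "8080"))  # injected by Knative Serving
    HTTPServer(("", port), Handler).serve_forever()
```

Packaged into a container image and deployed as a Knative Service, a process like this can be scaled to zero when idle and scaled out automatically under load.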
This book is about a significant step forward in software development. It brings state-of-the-art ontology reasoning into mainstream software development and its languages. Ontology Driven Software Development is the essential, comprehensive resource on enabling technologies, consistency checking and process guidance for ontology-driven software development (ODSD). It demonstrates how to apply ontology reasoning in the lifecycle of software development, using current and emerging standards and technologies. You will learn new methodologies and infrastructures, illustrated using detailed industrial case studies. The book will help you: learn how ontology reasoning enables validation of structure models and key tasks in behavior models; understand how to develop ODSD guidance engines for important software development activities, such as requirements engineering, domain modeling and process refinement; become familiar with semantic standards, such as the Web Ontology Language (OWL) and the SPARQL query language; and make use of ontology reasoning, querying and justification techniques to integrate software models and to offer guidance and traceability support. This book is helpful for undergraduate students and professionals who are interested in studying how ontologies and related semantic reasoning can be applied to the software development process. In addition, it will also be useful for postgraduate students, professionals and researchers who are going to embark on research in areas related to ontology or software engineering.
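As a small, hedged illustration of the semantic standards mentioned above, the sketch below runs a SPARQL query over an OWL ontology using the rdflib Python library; rdflib and the file name are our own choices for the example and are not prescribed by the book.

```python
# Hedged sketch: querying an OWL ontology with SPARQL via the rdflib library.
# The ontology file name is hypothetical and used only for illustration.
from rdflib import Graph

g = Graph()
g.parse("domain-model.owl", format="xml")  # hypothetical RDF/XML ontology file

# List every OWL class declared in the model.
query = """
PREFIX owl: <http://www.w3.org/2002/07/owl#>
SELECT ?cls WHERE { ?cls a owl:Class }
"""
for (cls,) in g.query(query):
    print(cls)
```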
"Specification and transformation of programs" is short for a methodology of software development where, from a formal specification of a problem to be solved, programs correctly solving that problem are constructed by stepwise application of formal, semantics-preserving transformation rules. The approach considers programming as a formal activity. Consequently, it requires some mathematical maturity and, above all, the will to try something new. A somewhat experienced programmer or a third- or fourth-year student in computer science should be able to master most of this material - at least, this is the level I have aimed at. This book is primarily intended as a general introductory textbook on transformational methodology. As with any methodology, reading and understanding is necessary but not sufficient. Therefore, most of the chapters contain a set of exercises for practising as homework. Solutions to these exercises exist and can, in principle, be obtained at nominal cost from the author upon request on appropriate letterhead. In addition, the book also can be seen as a comprehensive account of the particular transformational methodology developed within the Munich CIP project.
This book examines the requirements, risks, and solutions to improve the security and quality of complex cyber-physical systems (C-CPS), such as production systems, power plants, and airplanes, in order to ascertain whether it is possible to protect engineering organizations against cyber threats and to ensure engineering project quality. The book consists of three parts that logically build upon each other. Part I "Product Engineering of Complex Cyber-Physical Systems" discusses the structure and behavior of engineering organizations producing complex cyber-physical systems, providing insights into processes and engineering activities, and highlighting the requirements and border conditions for secure and high-quality engineering. Part II "Engineering Quality Improvement" addresses quality improvements with a focus on engineering data generation, exchange, aggregation, and use within an engineering organization, and the need for proper data modeling and engineering-result validation. Lastly, Part III "Engineering Security Improvement" considers security aspects concerning C-CPS engineering, including engineering organizations' security assessments and engineering data management, security concepts and technologies that may be leveraged to mitigate the manipulation of engineering data, as well as design and run-time aspects of secure complex cyber-physical systems. The book is intended for several target groups: it enables computer scientists to identify research issues related to the development of new methods, architectures, and technologies for improving quality and security in multi-disciplinary engineering, pushing forward the current state of the art. It also allows researchers involved in the engineering of C-CPS to gain a better understanding of the challenges and requirements of multi-disciplinary engineering that will guide them in their future research and development activities. Lastly, it offers practicing engineers and managers with engineering backgrounds insights into the benefits and limitations of applicable methods, architectures, and technologies for selected use cases.
"Meta-Programming and Model-Driven Meta-Program Development: Principles, Processes and Techniques" presents an overall analysis of meta-programming, focusing on insights of meta-programming techniques, heterogeneous meta-program development processes in the context of model-driven, feature-based and transformative approaches. The fundamental concepts of meta-programming are still not thoroughly understood, in this well organized book divided into three parts the authors help to address this. Chapters include: Taxonomy of fundamental concepts of meta-programming; Concept of structural heterogeneous meta-programming based on the original meta-language; Model-driven concept and feature-based modeling to the development process of meta-programs; Equivalent meta-program transformations and metrics to evaluate complexity of feature-based models and meta-programs; Variety of academic research case studies within different application domains to experimentally verify the soundness of the investigated approaches. Both authors are professors at Kaunas University of Technology with 15 years research and teaching experience in the field. "Meta-Programming and Model-Driven Meta-Program Development: Principles, Processes and Techniques" is aimed at post-graduates in computer science and software engineering and researchers and program system developers wishing to extend their knowledge in this rapidly evolving sector of science and technology.
For real-time systems, the worst-case execution time (WCET) is the key objective to be considered. Traditionally, code for real-time systems is generated without taking this objective into account, and the WCET is computed only after code generation. Worst-Case Execution Time Aware Compilation Techniques for Real-Time Systems presents the first comprehensive approach integrating WCET considerations into the code generation process. Based on the proposed reconciliation between a compiler and a timing analyzer, a wide range of novel optimization techniques is provided. Among others, the techniques cover source code and assembly level optimizations, exploit machine learning techniques and address the design of modern systems that have to meet multiple objectives. Using these optimizations, the WCET of real-time applications can be reduced by about 30% to 45% on average. This opens opportunities for decreasing clock speeds, costs and energy consumption of embedded processors. The proposed techniques can be used for all types of real-time systems, including automotive and avionics IT systems.
At the dawn of the 21st century and the information age, communication and computing power are becoming increasingly available, virtually pervading almost every aspect of modern socio-economic interactions. Consequently, the potential for realizing a significantly greater number of technology-mediated activities has emerged. Indeed, many of our modern activity fields are heavily dependent upon various underlying systems and software-intensive platforms. Such technologies are commonly used in everyday activities such as commuting, traffic control and management, mobile computing, navigation and mobile communication. Thus, the correct function of the aforementioned computing systems becomes a major concern. This is all the more important since, in spite of the numerous updates, patches and firmware revisions being constantly issued, newly discovered logical bugs in a wide range of modern software platforms (e.g., operating systems) and software-intensive systems (e.g., embedded systems) are just as frequently being reported. In addition, many of today's products and services are presently being deployed in a highly competitive environment wherein a product or service succeeds in most cases thanks to its quality-to-price ratio for a given set of features. Accordingly, a number of critical aspects have to be considered, such as the ability to pack as many features as needed into a given product or service while concurrently maintaining high quality, reasonable price, and short time-to-market.
The subject of this book is the control of software engineering. The rapidly increasing demand for software is accompanied by a growth in the number of products on the market, as well as in their size and complexity. Our ability to control software engineering is hardly keeping pace with this growth. As a result, software projects are often late, software products sometimes lack the required quality, and the productivity improvements achieved by software engineering are insufficient to keep up with the demand. This book describes ways to improve software engineering control. It argues that this control should be expanded to include the development, maintenance and reuse of software, thus making it possible to apply many of the ideas and concepts that originate in production control and quality control. The book is based on research and experience accumulated over a number of years. During this period I had two employers: Eindhoven University of Technology and Philips Electronics. Research is not a one-man activity, and I would like to thank the following persons for their contributions to the successful completion of this project. First and foremost, my Ph.D. advisers Theo Bemelmans, Hans van Vliet and Fred Heemstra, whose insights and experience proved invaluable at every stage. Many thanks are also due to Rob Kusters and Fred Heemstra for their patience in listening to my sometimes wild ideas and for being such excellent colleagues.
This book analyses quantitative open source software (OSS) reliability assessment and its applications, focusing on three major topic areas: the fundamentals of OSS quality/reliability measurement and assessment; the practical applications of OSS reliability modelling; and recent developments in OSS reliability modelling. Offering an ideal reference guide for graduate students and researchers in reliability for open source software (OSS) and modelling, the book introduces several methods of reliability assessment for OSS, including component-oriented reliability analysis based on the analytic hierarchy process (AHP), analytic network process (ANP) and non-homogeneous Poisson process (NHPP) models, as well as stochastic differential equation models and hazard rate models. These measurement and management technologies are essential to producing and maintaining quality/reliable systems using OSS.
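To give a flavour of NHPP-based reliability modelling, here is a small sketch (not taken from the book) of the classic Goel-Okumoto mean value function, m(t) = a(1 - exp(-b t)), which gives the expected cumulative number of faults detected by testing time t; the parameter values below are invented purely for illustration.

```python
# Illustrative sketch: the Goel-Okumoto NHPP model, a classic non-homogeneous
# Poisson process form used in software reliability growth modelling.
# m(t) = a * (1 - exp(-b * t)) is the expected cumulative number of faults
# detected by time t; a (total expected faults) and b (detection rate) are
# invented example parameters, not values from the book.
import math

def goel_okumoto_mean(t: float, a: float = 120.0, b: float = 0.05) -> float:
    """Expected cumulative faults detected by testing time t."""
    return a * (1.0 - math.exp(-b * t))

for t in (10, 50, 100):
    print(t, round(goel_okumoto_mean(t), 1))
```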
Drones are taking the world by storm. The technology and laws governing them change faster than we can keep up with. The Big Book of Drones covers everything from drone law and privacy law to the history and evolution of drones up to where we are today. If you are new to piloting, it also covers how to fly a drone, including a pre-flight checklist. For those who are interested in taking drones to the next level, we discuss how to build your own using a 3D printer, as well as many challenging projects for your drone. For the truly advanced, The Big Book of Drones discusses how to hack a drone, including how to perform a replay attack, how to perform a denial of service attack, and how to detect a drone and take it down. Finally, the book also covers drone forensics. This is a new field of study, but one that is steadily growing and will be an essential area of inquiry as drones become more prevalent.
Software product lines are emerging as a critical new paradigm for software development. Product lines are enabling organizations to achieve impressive time-to-market gains and cost reductions. With the increasing number of product lines and product-line researchers and practitioners, the time is right for a comprehensive examination of the issues surrounding the software product line approach. The Software Engineering Institute at Carnegie Mellon University is proud to sponsor the first conference on this important subject. This book comprises the proceedings of the First Software Product Line Conference (SPLC1), held August 28-31, 2000, in Denver, Colorado, USA. The twenty-seven papers of the conference technical program present research results and experience reports that cover all aspects of software product lines. Topics include business issues, enabling technologies, organizational issues, and life-cycle issues. Emphasis is placed on experiences in the development and fielding of product lines of complex systems, especially those that expose problems in the design, development, or evolution of software product lines. The book will be essential reading for researchers and practitioners alike.
This book draws new attention to domain-specific conceptual modeling by presenting the work of thought leaders who have designed and deployed specific modeling methods. It provides hands-on guidance on how to build models in a particular domain, such as requirements engineering, business process modeling or enterprise architecture. In addition to these results, it also puts forward ideas for future developments. All this is enriched with exercises, case studies, detailed references and further related information. All domain-specific methods described in this volume also have a tool implementation within the OMiLAB Collaborative Environment - a dedicated research and experimentation space for modeling method engineering at the University of Vienna, Austria - making these advances accessible to a wider community of further developers and users. The collection of works presented here will benefit experts and practitioners from academia and industry alike, including members of the conceptual modeling community as well as lecturers and students.
This textbook introduces the concept of embedded systems with exercises using Arduino Uno. It is intended for advanced undergraduate and graduate students in computer science, computer engineering, and electrical engineering programs. It contains a balanced discussion on both hardware and software related to embedded systems, with a focus on co-design aspects. Embedded systems have applications in Internet-of-Things (IoT), wearables, self-driving cars, smart devices, cyberphysical systems, drones, and robotics. The hardware chapter discusses various microcontrollers (including popular microcontroller hardware examples), sensors, amplifiers, filters, actuators, wired and wireless communication topologies, schematic and PCB designs, and much more. The software chapter describes OS-less programming, bitmath, polling, interrupt, timer, sleep modes, direct memory access, shared memory, mutex, and smart algorithms, with lots of C-code examples for Arduino Uno. Other topics discussed are prototyping, testing, verification, reliability, optimization, and regulations. Appropriate for courses on embedded systems, microcontrollers, and instrumentation, this textbook teaches budding embedded system programmers practical skills with fun projects to prepare them for industry products. Introduces embedded systems for wearables, Internet-of-Things (IoT), robotics, and other smart devices; Offers a balanced focus on both hardware and software co-design of embedded systems; Includes exercises, tutorials, and assignments.
Scientific applications involve very large computations that strain the resources of whatever computers are available. Such computations implement sophisticated mathematics, require deep scientific knowledge, depend on subtle interplay of different approximations, and may be subject to instabilities and sensitivity to external input. Software able to succeed in this domain invariably embeds significant domain knowledge that should be tapped for future use. Unfortunately, most existing scientific software is designed in an ad hoc way, resulting in monolithic codes understood by only a few developers. Software architecture refers to the way software is structured to promote objectives such as reusability, maintainability, extensibility, and feasibility of independent implementation. Such issues have become increasingly important in the scientific domain, as software gets larger and more complex, constructed by teams of people, and evolved over decades. In the context of scientific computation, the challenge facing mathematical software practitioners is to design, develop, and supply computational components which deliver these objectives when embedded in end-user application codes. The Architecture of Scientific Software addresses emerging methodologies and tools for the rational design of scientific software, including component integration frameworks, network-based computing, formal methods of abstraction, application programmer interface design, and the role of object-oriented languages. This book comprises the proceedings of the International Federation for Information Processing (IFIP) Conference on the Architecture of Scientific Software, which was held in Ottawa, Canada, in October 2000. It will prove invaluable reading for developers of scientific software, as well as for researchers in computational sciences and engineering.
This edited book presents scientific results of the 21st ACIS International Winter Conference on Software Engineering, Artificial Intelligence, Networking and Parallel/Distributed Computing (SNPD2021-Winter), which was held on January 28-30, 2021, in Ho Chi Minh City, Vietnam. The aim of the conference was to bring together researchers and scientists, businessmen and entrepreneurs, teachers, engineers, computer users, and students to discuss the many fields of computer science, to share their experiences and exchange new ideas, information and research results about all aspects (theory, applications, and tools) of computer and information science, and to discuss the practical challenges encountered along the way and the solutions adopted to solve them. The conference organizers selected the best papers from those accepted for presentation at the conference. The papers were chosen based on review scores submitted by members of the program committee and underwent further rigorous rounds of review. From this second round of review, the 18 most promising papers were selected for publication in this Springer book rather than in the conference proceedings. We eagerly await the important contributions that we know these authors will bring to the field of computer and information science.