This book looks at the two most popular ways of using Java SE 6 to write 3D games on PCs: Java 3D (a high-level scene graph API) and JOGL (a Java layer over OpenGL). Written by Java gaming expert Andrew Davison, it is the first Java game book to use the new Java SE 6 platform and its features, including splash screens, scripting, and the desktop tray interface. It is also the first, and perhaps only, book to cover Java game development using both the Java 3D API and Java for OpenGL, two libraries critical to Java-based 3D game development.
The book discusses the main issues of coordination in complex sociotechnical systems, covering distributed, self-organising, and pervasive systems. A chemistry-inspired model of coordination, a situated architecture and coordination language, and a cognitive model of interaction are the ingredients of the Molecules of Knowledge (MoK) model for self-organisation of knowledge presented in this book. The MoK technology is discussed, along with some case studies in the fields of collaborative systems, academic research, and citizen journalism. The target audience includes researchers and practitioners in the field of complex software systems engineering. The book is also appropriate for graduate and late undergraduate students in computer science and engineering.
Communication is one of the main activities in software projects; many such projects fail or encounter serious problems because the stakeholders involved have different understandings of the problem domain and/or use different terminologies. Ontologies can help to mitigate these communication problems. Calero and her coeditors mainly cover two applications of ontologies in software engineering and software technology: sharing knowledge of the problem domain and using a common terminology among all stakeholders, and filtering the knowledge when defining models and metamodels. The editors structured the contributions into three parts: first, a detailed introduction to the use of ontologies in software engineering and software technology in general; second, the use of ontologies to conceptualize different process-related domains such as software maintenance, software measurement, or the IEEE-initiated SWEBOK; third, the use of ontologies as artifacts in several software processes, for example in OMG's MOF or MDA. By presenting the advanced use of ontologies in software research and software projects, this book benefits software engineering researchers in both academia and industry.
Over the last two decades, a major challenge for researchers working on modeling and evaluation of computer-based systems has been the assessment of system non-functional properties (NFPs) such as performance, scalability, dependability, and security. In this book, the authors present cutting-edge model-driven techniques for modeling and analysis of software dependability. Most of them are based on the use of UML as the software specification language. From the software system specification point of view, such techniques exploit the standard extension mechanisms of UML (i.e., UML profiling). UML profiles enable software engineers to add non-functional properties to the software model, in addition to the functional ones. The authors detail the state of the art in UML profile proposals for dependability specification and rigorously describe the trade-offs they make. The focus is mainly on RAMS (reliability, availability, maintainability and safety) properties. Among the existing profiles, they emphasize the DAM (Dependability Analysis and Modeling) profile, which attempts to unify, under a common umbrella, the previous UML profiles from the literature, providing capabilities for dependability specification and analysis. In addition, they describe two prominent model-to-model transformation techniques, which support the generation of the analysis model and allow for further assessment of different RAMS properties. Case studies from different domains are also presented, in order to provide practitioners with examples of how to apply the aforementioned techniques. Researchers and students will learn basic dependability concepts and how to model them using UML and its extensions. They will also gain insights into dependability analysis techniques through the use of appropriate modeling formalisms as well as model-to-model transformation techniques for deriving dependability analysis models from UML specifications. Moreover, software practitioners will find a unified framework for the specification of dependability requirements and properties in UML, and will benefit from the detailed case studies.
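To give a flavor of the kind of RAMS measure that such analysis models ultimately produce, the snippet below computes the classic steady-state availability of a repairable component from its mean time to failure and mean time to repair. This is a minimal, generic sketch with made-up figures; it is not code from the book, the DAM profile, or any associated tool chain.

```cpp
#include <iostream>

// Steady-state availability of a repairable component:
// A = MTTF / (MTTF + MTTR). A textbook RAMS formula, shown here only
// as an illustration of one dependability measure.
double steadyStateAvailability(double mttfHours, double mttrHours) {
    return mttfHours / (mttfHours + mttrHours);
}

int main() {
    // Hypothetical figures: a component that fails on average every
    // 10,000 hours and takes 2 hours to repair.
    double a = steadyStateAvailability(10000.0, 2.0);
    std::cout << "Availability: " << a << '\n';   // ~0.9998
    return 0;
}
```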
This book is about software product lines (SPLs) designed and developed with UML diagrams as the primary basis, modeled according to a rigorous approach composed of a UML profile and a systematic process for variability management activities, forming the Stereotype-based Management of Variability (SMarty) approach. The book consists of five parts. Part I provides essential concepts on SPLs in terms of the first development methodologies. It also introduces variability concepts and discusses SPL architectures, finishing with the SMarty approach. Part II focuses on the design, verification, and validation of SMarty SPLs, and Part III concentrates on SPL architecture evolution based on ISO/IEC metrics, the SystEM-PLA method, optimization with the MOA4PLA method, and feature interaction prevention. Next, Part IV presents SMarty as a basis for SPL development, covering the M-SPLearning SPL for mobile learning applications, the PLeTs SPL for testing tools, the PlugSPL plugin environment for supporting the SPL life cycle, the SyMPLES approach for designing embedded systems with SysML, the SMartySPEM approach for software process lines (SPrL), and the re-engineering of class diagrams into an SPL. Finally, Part V promotes controlled experimentation in UML-based SPLs, presenting essential concepts on how to plan, conduct, and document experiments, as well as showing several experiments carried out with SMarty. This book is aimed at lecturers, graduate students, and experienced practitioners. Lecturers might use the book for graduate-level courses about SPL fundamentals and tools; students will learn about the SPL engineering process, variability management, and mass customization; and practitioners will see how to plan the transition from single-product development to an SPL-based process, how to document inherent variability in a given domain, and how to apply controlled experiments to SPLs.
This textbook is about systematic problem solving and systematic reasoning using type-driven design. Two problem-solving techniques are emphasized throughout the book: divide and conquer and iterative refinement. Divide and conquer is the process by which a large problem is broken into two or more smaller problems that are easier to solve, and the solutions for the smaller pieces are then combined to create an answer to the whole problem. Iterative refinement is the process by which a solution to a problem is gradually made better, like the drafts of an essay. Mastering these techniques is essential to becoming a good problem solver and programmer. The book is divided into five parts. Part I focuses on the basics. It starts with how to write expressions and subsequently leads to decision making and functions as the basis for problem solving. Part II then introduces compound data of finite size, while Part III covers compound data of arbitrary size, such as lists, intervals, natural numbers, and binary trees. It also introduces structural recursion, a powerful data-processing strategy that uses divide and conquer to process data whose size is not fixed. Next, Part IV delves into abstraction and shows how to eliminate repetitions in solutions to problems. It also introduces generic programming, which is abstraction over the type of data processed. This leads to the realization that functions are data and, perhaps more surprisingly, that data are functions, which in turn naturally leads to object-oriented programming. Part V introduces distributed programming, i.e., using multiple computers to solve a problem. The book promises that by the end of it readers will have designed and implemented a multiplayer video game that they can play with their friends over the internet. To achieve this, however, there is a lot about problem solving and programming that must be learned first. The game is developed using iterative refinement. The reader learns step by step about programming and how to apply new knowledge to develop increasingly better versions of the video game. This way, readers practice modern trends that are likely to be common throughout a professional career and beyond.
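Purely as an illustration of the divide-and-conquer strategy described above (the book itself develops these ideas in its own language and examples), a merge sort splits a list in half, solves each half recursively, and then combines the partial results; the sketch below shows the pattern in C++.

```cpp
#include <algorithm>
#include <iostream>
#include <iterator>
#include <vector>

// Divide and conquer illustrated with merge sort: split the problem in
// half, solve each half recursively, then combine the partial results.
std::vector<int> mergeSort(const std::vector<int>& v) {
    if (v.size() <= 1) return v;                              // trivially solved
    auto mid = v.begin() + static_cast<std::ptrdiff_t>(v.size() / 2);
    std::vector<int> left(v.begin(), mid);                    // divide
    std::vector<int> right(mid, v.end());
    left = mergeSort(left);                                   // solve sub-problems
    right = mergeSort(right);
    std::vector<int> merged;                                  // combine
    std::merge(left.begin(), left.end(), right.begin(), right.end(),
               std::back_inserter(merged));
    return merged;
}

int main() {
    for (int x : mergeSort({5, 2, 9, 1, 7})) std::cout << x << ' ';
    std::cout << '\n';   // prints: 1 2 5 7 9
    return 0;
}
```

The same shape, splitting into sub-problems whose answers are combined, is what structural recursion applies to data whose size is not fixed.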
The Lean Approach to Digital Transformation: From Customer to Code and From Code to Customer is organized into three parts that expose and develop the three capabilities essential for a successful digital transformation:
1. Understanding how to co-create digital services with users, whether they are customers or future customers. This ability combines observation, dialogue, and iterative experimentation. The approach proposed in this book is based on the Lean Startup approach, according to an extended vision that combines Design Thinking and Growth Hacking. Companies must become truly "customer-centric", from observation and listening to co-development. The revolution of the 21st-century digital age is that customer orientation has become both more imperative (because of the era of abundance, the pace of change in usage, the complexity of experiences, and the shift of power towards communities) and easier to achieve, using digital tools and digital communities.
2. Developing an information system (IS) that is the backbone of the digital transformation, called an "exponential information system" to designate an IS that is open (in particular at its borders), capable of interfacing and combining with external services, positioned as a player in software ecosystems, and built for processing scalable and dynamic data flows. The exponential information system is constantly changing and continuously absorbs the best of information processing technology, such as artificial intelligence and machine learning.
3. Building software "micro-factories" that produce service platforms, which are called "Lean software factories." This "software factory" concept covers the integration of agile methods, tooling, and continuous integration and deployment practices, a customer-oriented product approach, and a platform approach based on modularity, as well as API-based architecture and openness to external stakeholders. This software micro-factory is the foundation that continuously produces and provides constantly evolving services.
These three capabilities are not unique or specific to this book; they are linked to other concepts such as agile methods, product development according to lean principles, and software production approaches such as CI/CD (continuous integration and deployment) or DevOps. This book weaves a common frame of reference for all these approaches in order to derive more value from the digital transformation and to facilitate its implementation. The title of the book refers to the "lean approach to digital transformation" because the two underlying frameworks, Lean Startup and Lean Software Factory, are directly inspired by Lean in the sense of the Toyota Way. The Lean approach is present from the beginning to the end of this book: it provides the framework for customer orientation and the love of a job well done, which are the conditions for the success of a digital transformation.
With this book, Christopher Kormanyos delivers a highly practical guide to programming real-time embedded microcontroller systems in C++. It is divided into three parts plus several appendices. Part I provides a foundation for real-time C++ by covering language technologies, including object-oriented methods, template programming, and optimization. Next, Part II presents detailed descriptions of a variety of C++ components that are widely used in microcontroller programming. It details some of C++'s most powerful language elements, such as class types, templates, and the STL, to develop components for microcontroller register access, low-level drivers, custom memory management, embedded containers, multitasking, etc. Finally, Part III describes mathematical methods and generic utilities that can be employed to solve recurring problems in real-time C++. The appendices include a brief C++ language tutorial, information on the real-time C++ development environment, and instructions for building GNU GCC cross-compilers and a microcontroller circuit. For this fourth edition, the most recent specification of C++20 is used throughout the text. Several sections on new C++20 functionality have been added, and various others have been reworked to reflect changes in the standard. Several new example projects, ranging from introductory to advanced level, have been included, existing ones have been extended, and various reader suggestions have been incorporated. Efficiency is always in focus, and numerous examples are backed up with runtime measurements and size analyses that quantify the true costs of the code down to the very last byte and microsecond. The target audience of this book mainly consists of students and professionals interested in real-time C++. Readers should be familiar with C or another programming language and will benefit most if they have had some previous experience with microcontroller electronics and with the performance and size issues prevalent in embedded systems programming.
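To give a flavor of template-based register access of the kind the book covers, the sketch below wraps a memory-mapped register in a class template. This is my own minimal sketch, not the author's code; the register address, register width, and bit position are hypothetical and would come from a specific device's data sheet in practice.

```cpp
#include <cstdint>

// Minimal sketch of template-based, memory-mapped register access.
// RegisterType is the register's width, Address its memory-mapped location.
template <typename RegisterType, std::uintptr_t Address>
struct RegisterAccess {
    static void write(RegisterType value) {
        *reinterpret_cast<volatile RegisterType*>(Address) = value;
    }
    static RegisterType read() {
        return *reinterpret_cast<volatile RegisterType*>(Address);
    }
    static void setBit(unsigned bit) {
        write(static_cast<RegisterType>(read() | (RegisterType{1} << bit)));
    }
};

// Made-up 8-bit port register at a made-up address; on a real
// microcontroller the address would map to a peripheral, while on a
// desktop host this access would simply fault.
using PortB = RegisterAccess<std::uint8_t, 0x40000025U>;

inline void ledOn() { PortB::setBit(5); }   // drive hypothetical pin 5 high
```

Because the address and width are compile-time template parameters, the compiler can typically reduce such accesses to a single load or store, which is the kind of zero-overhead abstraction the book emphasizes.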
Digital Systems Design and Prototyping: Using Field Programmable Logic and Hardware Description Languages, Second Edition covers the subject of digital systems design using two important technologies: Field Programmable Logic Devices (FPLDs) and Hardware Description Languages (HDLs). These two technologies are combined to aid in the design, prototyping, and implementation of a whole range of digital systems, from very simple ones replacing traditional glue logic to very complex ones customized as the application requires. Three HDLs are presented: VHDL and Verilog, the widely used standard languages, and the proprietary Altera HDL (AHDL). The chapters on these languages serve as tutorials, and comparisons are made that show the strengths and weaknesses of each language. A large number of examples are used in the description of each language, providing insight into the design and implementation of FPLDs. The CD-ROM included with the book contains the Altera MAX+PLUS II development environment, which is ready to compile and simulate all the examples. With the addition of the Altera UP-1 prototyping board, all examples can be tested and verified in a real FPLD. The book is designed as an advanced-level textbook as well as a reference for the professional engineer.
Software quality is vitally important to the success of a business. A single undetected error or defect during the software development process could have disastrous consequences during a business operation. Software review is one of the methods used to detect defects. This process maintains the quality of the product by reviewing interim deliverables during development. "Modern Software Review: Techniques and Technologies" provides an understanding of the critical factors affecting software review performance and gives practical guidelines for software reviews.
Contents:
1. Background and Introduction: 1.1 The Problem; 1.2 Concepts and Definitions; 1.3 Research Activities; 1.4 Status of Reuse Practice; 1.5 Scope and Organization of this Book; 1.6 References.
2. Managerial Guidelines: 2.1 Managerial Issues and Approaches (2.1.1 Organizational Management and Structure; 2.1.2 Organizational Behavior; 2.1.3 Contractual and Legal Considerations; 2.1.4 Financial Considerations; 2.1.5 Case Study: Reuse Program at Hartford Insurance Group); 2.2 Software Development and Maintenance Incorporating Reuse (2.2.1 The Software Process; 2.2.2 Life-Cycle Models; 2.2.3 A Generic Reuse/Reusability Model; 2.2.4 Establishing a Process; 2.2.5 Case Study: JIAWG Reuse-Based Process Plan); 2.3 References.
3. Technical Guidelines: 3.1 Domain Analysis (3.1.1 Overview; 3.1.2 Case Study: The Domain Analysis Project at the Software Engineering Institute (SEI)); 3.2 Creating Reusable Components (3.2.1 Spanning the Life Cycle; 3.2.2 Requirements and Designs: 3.2.2.1 Overview, 3.2.2.2 Object-Oriented Approaches; 3.2.3 Code Components: 3.2.3.1 Code Component Structures, 3.2.3.2 Programming Style; 3.2.4 Component Quality; 3.2.5 Classifying and Storing Components; 3.2.6 Case Study: A Design Study of Telephony Software at Ericsson Telecom); 3.3 Reusing Components (3.3.1 Cognitive Aspects; 3.3.2 Searching and Retrieving; 3.3.3 Understanding and Assessing Components; 3.3.4 Adapting Components; 3.3.5 Composition of Code Components; 3.3.6 Case Study: A Quantitative Study of Spacecraft Control Software Reuse at GSFC; 3.3.7 Case Study: The Reusable Software Library (RSL) at Intermetrics, Inc.); 3.4 Tools and Environments; 3.5 References.
4. Getting Started: 4.1 Discussion; 4.2 A Phased Approach; 4.3 References.
Appendix A: Collected Guidelines. Appendix B: Guidelines for Reusable Ada Code.
This edited book presents scientific results of the 16th IEEE/ACIS International Conference on Software Engineering, Artificial Intelligence, Networking and Parallel/Distributed Computing (SNPD 2015), which was held on June 1-3, 2015 in Takamatsu, Japan. The aim of this conference was to bring together researchers and scientists, businessmen and entrepreneurs, teachers, engineers, computer users, and students to discuss the numerous fields of computer science, to share their experiences and exchange new ideas and information in a meaningful way, to present research results on all aspects (theory, applications, and tools) of computer and information science, and to discuss the practical challenges encountered along the way and the solutions adopted to solve them.
This CISA study guide is for those interested in achieving CISA certification and provides complete coverage of ISACA's latest CISA Review Manual (2019), with practical examples and over 850 exam-oriented practice questions.
Key Features: Gain tactical skills in auditing, control, and security to pass the CISA examination; get up to speed with auditing business IT systems; increase your value to organizations and be at the forefront of an evolving business landscape by achieving CISA certification.
Book Description: Are you looking to prepare for the CISA exam and understand the roles and responsibilities of an information systems (IS) auditor? The CISA - Certified Information Systems Auditor Study Guide is here to help you get started with CISA exam prep. This book covers all five CISA domains in detail to help you pass the exam. You'll start by getting up and running with the practical aspects of an information systems audit. The book then shows you how to govern and manage IT, before getting you up to speed with acquiring information systems. As you progress, you'll gain knowledge of information systems operations and understand how to maintain business resilience, which will help you tackle various real-world business problems. Finally, you'll be able to assist your organization in effectively protecting and controlling information systems with IT audit standards. By the end of this CISA book, you'll not only have covered the essential concepts and techniques you need to know to pass the CISA certification exam but also have the ability to apply them in the real world.
What you will learn: Understand the information systems auditing process; get to grips with IT governance and management; gain knowledge of information systems acquisition; assist your organization in protecting and controlling information systems with IT audit standards; understand information systems operations and how to ensure business resilience; evaluate your organization's security policies, standards, and procedures to meet its objectives.
Who this book is for: This CISA exam study guide is designed for those with a non-technical background who are interested in achieving CISA certification and are currently employed or looking to gain employment in IT audit and security management positions.
This book is the first attempt to bring together current research findings in the domain of interactive horizontal displays. The compilation integrates and summarises findings from the most important international tabletop research teams. It provides a state-of-the-art overview of this research domain and therefore allows for discussion of emerging and future directions in research and technology of interactive horizontal displays. The latest advances in interaction and software technologies, and their increasing availability beyond research labs, refuel the interest in interactive horizontal displays. In the early 1990s, Mark Weiser's vision of Ubiquitous Computing redefined the notion of Human Computer Interaction. Interaction was no longer considered to happen only with standard desktop computers but also with elements of their environment. This book is structured in three major areas: under, on/above, and around tabletops. These areas are associated with different research disciplines such as hardware/software and computer science, Human Computer Interaction (HCI), and Computer Supported Collaborative Work (CSCW). However, the comprehensive and compelling presentation of the topic results from the book's interdisciplinary character. The book addresses fellow researchers who are interested in this domain and practitioners considering interactive tabletops in real-world projects. It will also be a useful introduction to tabletop research for the academic curriculum.
Creativity and rationale comprise an essential tension in design. They are two sides of the same coin: contrary, complementary, but perhaps also interdependent. Designs always serve purposes. They always have an internal logic. They can be queried, explained, and evaluated. These characteristics are what design rationale is about. But at the same time, designs always provoke experiences and insights. They open up possibilities, raise questions, and engage human sense-making. Design is always about creativity. "Creativity and Rationale: Enhancing Human Experience by Design" comprises 19 complementary chapters by leading experts in the areas of human-computer interaction design, sociotechnical systems design, requirements engineering, information systems, and artificial intelligence. Researchers, research students, and practitioners in human-computer interaction and software design will find this state-of-the-art volume invaluable.
Formal methods for the specification and verification of hardware and software systems are becoming more and more important as systems increase in size and complexity. The aim of the book is to illustrate progress in formal methods based on Petri net formalisms. It contains a collection of examples arising from different fields, such as flexible manufacturing, telecommunication, and workflow management systems. The book covers the main phases in the life cycle of design and implementation of a system, i.e., specification, model-checking techniques for verification, analysis of properties, code generation, and execution of models. These techniques and their tool support are discussed in detail, including practical issues. Amongst others, fundamental concepts such as composition, abstraction, and reusability of models, model verification, and verification of properties are systematically introduced.
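As a minimal, generic illustration of the Petri net semantics underlying these formalisms (not tied to any tool or case study in the book), a transition is enabled when every input place holds a token, and firing it moves tokens from its input places to its output places; the sketch below assumes arc weights of one and made-up place names.

```cpp
#include <iostream>
#include <map>
#include <string>
#include <vector>

// Marking: number of tokens currently held by each place.
using Marking = std::map<std::string, int>;

// A transition consumes one token from each input place and
// produces one token in each output place (all arc weights are 1).
struct Transition {
    std::vector<std::string> inputs;
    std::vector<std::string> outputs;
};

bool enabled(const Transition& t, const Marking& m) {
    for (const auto& p : t.inputs)
        if (m.count(p) == 0 || m.at(p) < 1) return false;
    return true;
}

void fire(const Transition& t, Marking& m) {
    for (const auto& p : t.inputs)  --m[p];
    for (const auto& p : t.outputs) ++m[p];
}

int main() {
    // Hypothetical two-place net: "produce" moves a token from "free" to "buffer".
    Marking m{{"free", 1}, {"buffer", 0}};
    Transition produce{{"free"}, {"buffer"}};
    if (enabled(produce, m)) fire(produce, m);
    std::cout << "buffer tokens: " << m["buffer"] << '\n';   // prints: 1
    return 0;
}
```

Model checking, as discussed in the book, essentially explores all markings reachable through such firings and checks properties over that state space.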
Software exists in a wide array of products, ranging from toys, entertainment systems, medical systems, and home appliances to large-scale products such as aircraft and communication systems. Knowledge Engineering for Software Development Life Cycles: Support Technologies and Applications bridges the best practices and design principles successfully employed over the last two decades with modern Knowledge Engineering (KE), which has provided some of the most valuable techniques and tools to support the encoding of knowledge and experience. Through its identification and exploration of software development practices, captured as software guidelines that can be represented for automated software development, decision making, and knowledge management, this book brings industry and academia together to address the need for growing applications of, and support for, knowledge-based approaches to software development.
This is a book about the development of dependable, embedded software. It is for systems designers, implementers, and verifiers who are experienced in general embedded software development, but who are now facing the prospect of delivering a software-based system for a safety-critical application. It is aimed at those creating a product that must satisfy one or more of the international standards relating to safety-critical applications, including IEC 61508, ISO 26262, EN 50128, EN 50657, IEC 62304, or related standards. Of the first edition, Stephen Thomas, PE, Founder and Editor of FunctionalSafetyEngineer.com, said, "I highly recommend Mr. Hobbs' book."
Base stations developed according to the 3GPP Long Term Evolution (LTE) standard require unprecedented processing power. 3GPP LTE enables data rates beyond hundreds of Mbit/s by using advanced technologies, necessitating a highly complex LTE physical layer. The operating power of base stations is a significant cost for operators, and it is currently optimized using state-of-the-art hardware solutions, such as heterogeneous distributed systems. The traditional system design method of porting algorithms to heterogeneous distributed systems based on test-and-refine methods is a manual, and thus time-expensive, task. "Physical Layer Multi-Core Prototyping: A Dataflow-Based Approach" provides a clear introduction to the 3GPP LTE physical layer and to dataflow-based prototyping and programming. The difficulties in porting the 3GPP LTE physical layer are outlined, with particular focus on automatic partitioning and scheduling, load balancing, and computation latency reduction, specifically in systems based on heterogeneous multi-core Digital Signal Processors. Multi-core prototyping methods based on algorithm dataflow modeling and architecture system-level modeling are assessed with the goal of automating and optimizing algorithm porting. With its analysis of physical layer processing and its proposals of parallel programming methods, which include automatic partitioning and scheduling, the book is a key resource for researchers and students. This study of LTE algorithms, which require dynamic or static assignment and dynamic or static scheduling, allows readers to reassess and expand their knowledge of this vital component of LTE base station design.
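To sketch what static scheduling of a dataflow model can look like in its simplest form (a single-core topological ordering, far simpler than the multi-core partitioning the book addresses, and using made-up actor names rather than the book's LTE model), the snippet below orders actors so that each runs only after its producers.

```cpp
#include <iostream>
#include <map>
#include <queue>
#include <string>
#include <vector>

// Toy dataflow graph: each actor maps to the actors that consume its output.
// A topological sort yields one valid static, single-core execution order.
std::vector<std::string> staticSchedule(
        const std::map<std::string, std::vector<std::string>>& graph) {
    std::map<std::string, int> indegree;
    for (const auto& [actor, consumers] : graph) {
        indegree.emplace(actor, 0);
        for (const auto& c : consumers) ++indegree[c];
    }
    std::queue<std::string> ready;
    for (const auto& [actor, deg] : indegree)
        if (deg == 0) ready.push(actor);

    std::vector<std::string> order;
    while (!ready.empty()) {
        std::string a = ready.front(); ready.pop();
        order.push_back(a);
        auto it = graph.find(a);
        if (it == graph.end()) continue;
        for (const auto& c : it->second)
            if (--indegree[c] == 0) ready.push(c);
    }
    return order;
}

int main() {
    // Hypothetical receiver chain: estimation and demodulation feed
    // equalization, which feeds decoding.
    std::map<std::string, std::vector<std::string>> g{
        {"channel_estimate", {"equalize"}},
        {"demodulate", {"equalize"}},
        {"equalize", {"decode"}},
        {"decode", {}}};
    for (const auto& a : staticSchedule(g)) std::cout << a << ' ';
    std::cout << '\n';   // channel_estimate demodulate equalize decode
    return 0;
}
```

Multi-core prototyping extends this idea by also deciding which core runs each actor and by balancing load and communication latency, which is where the automated methods discussed in the book come in.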
As we entered the 21st century, the rapid growth of information technology changed our lives more profoundly than we had ever speculated. Recently, in all fields of industry, heterogeneous technologies have converged with information technology, resulting in a new paradigm: information technology convergence. In the process of information technology convergence, the latest issues in the structure of data, systems, networks, and infrastructure have become the most challenging tasks. Proceedings of the International Conference on IT Convergence and Security 2011 approaches the subject matter by looking at problems in technical convergence and in the convergence of security technology, and at the new issues that arise as techniques converge. The general scope is convergence security and the latest information technology, with the following most important features and benefits: 1. introduction of the most recent information technology and its related ideas; 2. applications and problems related to technology convergence, and its case studies; 3. introduction of converging existing security techniques through convergence security. Overall, after reading these proceedings, readers will understand the most up-to-date information strategies and technologies of convergence security.
The main objective of this book is to provide quick and essential knowledge of the subject with the help of summaries and solved questions/case studies, without going into detailed discussion. The book will be helpful to students as a supplementary text/workbook, and to non-computer professionals who deal with systems analysis and design as part of their business. Such a problem-solving approach provides practical knowledge of the subject and similar learning outcomes without lengthy discussions. Though the book is conceived as a supplementary text/workbook, the topics are selected and arranged in such a way that it can provide complete and sufficient knowledge of the subject.
Evolutionary Computation and Optimization Algorithms in Software Engineering: Applications and Techniques lays the foundation for the successful integration of evolutionary computation into software engineering. It surveys techniques ranging from genetic algorithms to swarm optimization theory to ant colony optimization, demonstrating their uses and capabilities. These techniques are applied to aspects of software engineering such as software testing, quality assessment, reliability assessment, and fault prediction models, among others, providing researchers, scholars, and students with the knowledge needed to expand this burgeoning field of application.
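As a minimal sketch of how a genetic algorithm might be applied to one such software testing task, the loop below evolves bit strings that select test cases to cover a set of requirements. The coverage matrix, fitness weights, and GA parameters are entirely made up for illustration; this is a generic sketch, not a method or data set taken from the book.

```cpp
#include <algorithm>
#include <iostream>
#include <random>
#include <vector>

// Genetic-algorithm sketch for test-suite selection (hypothetical data).
using Individual = std::vector<int>;   // gene t: 1 = test t selected

int main() {
    // coverage[t][r] == 1 if test t covers requirement r (made-up matrix).
    const std::vector<std::vector<int>> coverage = {
        {1, 0, 0, 1}, {0, 1, 0, 0}, {0, 1, 1, 0}, {0, 0, 1, 1}, {1, 1, 0, 0}};
    const int numTests = static_cast<int>(coverage.size());
    const int numReqs  = static_cast<int>(coverage[0].size());
    const int popSize  = 20;

    std::mt19937 rng(42);
    std::bernoulli_distribution coin(0.5), mutate(0.1);
    std::uniform_int_distribution<int> anyIndex(0, popSize - 1);

    // Fitness: requirements covered minus a small penalty per selected test.
    auto fitness = [&](const Individual& ind) {
        double score = 0.0;
        for (int r = 0; r < numReqs; ++r)
            for (int t = 0; t < numTests; ++t)
                if (ind[t] && coverage[t][r]) { score += 1.0; break; }
        for (int t = 0; t < numTests; ++t) score -= 0.1 * ind[t];
        return score;
    };

    // Random initial population.
    std::vector<Individual> pop(popSize, Individual(numTests));
    for (auto& ind : pop)
        for (int& gene : ind) gene = coin(rng) ? 1 : 0;

    for (int gen = 0; gen < 100; ++gen) {
        std::vector<Individual> next;
        while (static_cast<int>(next.size()) < popSize) {
            // Tournament selection: keep the better of two random individuals.
            auto pick = [&]() -> const Individual& {
                const Individual& a = pop[anyIndex(rng)];
                const Individual& b = pop[anyIndex(rng)];
                return fitness(a) >= fitness(b) ? a : b;
            };
            const Individual& p1 = pick();
            const Individual& p2 = pick();
            // Uniform crossover followed by bit-flip mutation.
            Individual child(numTests);
            for (int t = 0; t < numTests; ++t) {
                child[t] = coin(rng) ? p1[t] : p2[t];
                if (mutate(rng)) child[t] = 1 - child[t];
            }
            next.push_back(child);
        }
        pop = std::move(next);
    }

    const Individual& best = *std::max_element(
        pop.begin(), pop.end(), [&](const Individual& a, const Individual& b) {
            return fitness(a) < fitness(b);
        });
    std::cout << "Selected tests:";
    for (int t = 0; t < numTests; ++t)
        if (best[t]) std::cout << ' ' << t;
    std::cout << '\n';
    return 0;
}
```

The same evolve-evaluate-select loop underlies the other applications the book surveys; only the encoding and the fitness function change.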
Visual languages have long been a pursuit of effective communication between human and machine. Today, they are successfully employed for end-user programming, modeling, rapid prototyping, and design activities by people of many disciplines, including architects, artists, children, engineers, and scientists. Furthermore, with rapid advances of the Internet and Web technology, human-human communication through the Web or electronic mobile devices is becoming more and more prevalent. This manuscript provides a comprehensive introduction to diagrammatical visual programming languages and the technology of automatic generation of such languages. It covers a broad range of contents, from the underlying theory of graph grammars to applications in various domains. The contents were extracted from the papers that my Ph.D. students and I have published in the last 10 years, and are updated and organized in a coherent fashion. The manuscript gives an in-depth treatment of all the topic areas. Pointers to related work and further readings are also provided at the end of every chapter except Chapter 9. Rather than describing how to program visually, the manuscript discusses what visual programming languages are, and how such languages and their underlying foundations can be usefully applied to other fields in computer science that need graphs as the primary means of representation. Assuming basic knowledge of computer programming and compiler construction, the manuscript can be used as a textbook for senior or graduate computer science classes on visual languages, or as a reference book for programming language classes, practitioners, and researchers in the related field. The manuscript could not have been completed without the help of many people.