This book lets you master C++ 2008 as quickly and easily as possible by using all the time- and work-saving features of Visual Studio 2008. That's true whether you are a Java developer who wants to learn C++, a C# or Visual Basic developer who wants to master another .NET language, a C++ developer who wants to move into .NET, or a programming novice who's starting from scratch. When you are done, you will know how to use C++ 2008 to create bullet-proof applications like the best professionals do. You will know how to develop object-oriented applications using business classes, inheritance, polymorphism, interfaces, and generics the way they are used in the real world. You will know how to compile, run, and enhance legacy C and native C++ code on the .NET platform. You will be prepared to learn more about native C++ if you should ever need to do that. And you will have another set of skills that will make you more valuable on the job. To ensure mastery, this book presents 12 complete, real-world applications that demonstrate best programming practices. And all of the information is presented in the distinctive Murach "paired-pages" style that allows for self-paced training and reference.
In recent years, digital technologies have become more ubiquitous and integrated into everyday life. While once reserved mostly for personal uses, video games and similar innovations are now implemented across a variety of fields. Transforming Gaming and Computer Simulation Technologies across Industries is a pivotal reference source for the latest research on emerging simulation technologies and gaming innovations to enhance industry performance and dependability. Featuring extensive coverage across a range of relevant perspectives and topics, such as user research, player identification, and multi-user virtual environments, this book is ideally designed for engineers, professionals, practitioners, upper-level students, and academics seeking current research on gaming and computer simulation technologies across different industries. Topics Covered: Digital vs. Non-Digital Platforms; Ludic Simulations; Mathematical Simulations; Medical Gaming; Multi-User Virtual Environments; Player Experiences; Player Identification; User Research.
This unique volume explores cutting-edge management approaches to developing complex software that is efficient, scalable, sustainable, and suitable for distributed environments. Practical insights are offered by an international selection of pre-eminent authorities, including case studies, best practices, and balanced corporate analyses. Emphasis is placed on the use of the latest software technologies and frameworks for life-cycle methods, including the design, implementation and testing stages of software development. Topics and features:
* Reviews approaches for reusability, cost and time estimation, and for functional size measurement of distributed software applications
* Discusses the core characteristics of a large-scale defense system, and the design of software project management (SPM) as a service
* Introduces the 3PR framework, research on crowdsourcing software development, and an innovative approach to modeling large-scale multi-agent software systems
* Examines a system architecture for ambient assisted living, and an approach to cloud migration and management assessment
* Describes a software error proneness mechanism, a novel Scrum process for use in the defense domain, and an ontology annotation for SPM in distributed environments
* Investigates the benefits of agile project management for higher education institutions, and SPM that combines software and data engineering
This important text/reference is essential reading for project managers and software engineers involved in developing software for distributed computing environments. Students and researchers interested in SPM technologies and frameworks will also find the work to be an invaluable resource. Prof. Zaigham Mahmood is a Senior Technology Consultant at Debesis Education UK and an Associate Lecturer (Research) at the University of Derby, UK. He also holds positions as Foreign Professor at NUST and IIU in Islamabad, Pakistan, and Professor Extraordinaire at the North West University Potchefstroom, South Africa.
The inclusion of experts in communicability in the software industry has allowed timeframes to speed up in the commercialization of new technological products worldwide. However, this constant evolution of software in the face of the hardware revolution opens up a host of new horizons to maintain and increase the quality of the interactive systems following a set of standardized norms and rules for the production of interactive software. Currently, we see some efforts towards this goal, but they are still partial solutions, incomplete, and flawed from the theoretical as well as practical points of view. If the quality of the interactive design is analyzed, it is left to professionals to generate systems that are efficient, reliable, user-friendly, and cutting-edge. The Handbook of Research on Software Quality Innovation in Interactive Systems analyzes the quality of the software applied to interactive systems and considers the constant advances in the software industry. This book reviews the past and present of information and communication technologies with a projection towards the future, along with analyses of software, software design, phrases to use, and the purposes for software applications in interactive systems. This book is ideal for students, professors, researchers, programmers, analysts of systems, computer engineers, interactive designers, managers of software quality, and evaluators of interactive systems.
Both object orientation and parallelism are modern programming paradigms which have gained much popularity in the last 10-15 years. Object orientation raises hopes for increased productivity of software generation and maintenance methods. Parallelism can serve to structure a problem but also promises faster program execution. The two areas of computing science in which these paradigms play the most prominent role are programming languages and databases. In programming languages, one can take an academic approach with a primary focus on the generality of the semantics of the language constructs which support the respective paradigm. In databases, one is willing to restrict the power of the constructs in the interest of increased efficiency. Inter- and intra-object parallelism have received an increasing amount of attention in the last few years by researchers in the area of object-oriented programming. At first glance, an object is very similar to a process which offers services to other processes and demands services from them. It has, however, transpired that object-oriented concepts cause problems when combined with parallelism. In programming languages, the introduction of parallelism and the synchronization constraints it brings with it can get in the way of code reusability. In databases, the combination of object orientation and parallelism requires, for example, a generalization of the transaction model, new approaches to the specification of information systems, an implementation model of object communication, and the design of an overall system architecture. There has been insufficient communication between researchers in programming languages and in databases on these issues. Object Orientation with Parallelism and Persistence grew out of a Dagstuhl Seminar of the same title in April 1995 whose goal it was to put the new research area 'object orientation with parallelism' on an interdisciplinary basis. Object Orientation with Parallelism and Persistence will be of interest to researchers and professionals working in software engineering, programming languages, and database systems.
While most discoverability evaluation studies in the Library and Information Science field discuss the intersection of discovery layers and library systems, this book looks specifically at digital repositories, examining discoverability from the lenses of system structure, user searches, and external discovery avenues. Discoverability, the ease with which information can be found by a user, is the cornerstone of all successful digital information platforms. Yet, most digital repository practitioners and researchers lack a holistic and comprehensive understanding of how and where discoverability happens. This book brings together current understandings of user needs and behaviors and poses them alongside a deeper examination of digital repositories around the theme of discoverability. It examines discoverability in digital repositories from both user and system perspectives by exploring how users access content (including their search patterns and habits, need for digital content, effects of outreach, or integration with Wikipedia and other web-based tools) and how systems support or prevent discoverability through the structure or quality of metadata, system interfaces, exposure to search engines or lack thereof, and integration with library discovery tools. Discoverability in Digital Repositories will be particularly useful to digital repository managers, practitioners, and researchers, metadata librarians, systems librarians, and user studies, usability and user experience librarians. Additionally, and perhaps most prominently, this book is composed with the emerging practitioner in mind. Instructors and students in Library and Information Science and Information Management programs will benefit from this book that specifically addresses discoverability in digital repository systems and services.
In today's world, services and data are integrated in ever new constellations, requiring the easy, flexible and scalable integration of autonomous, heterogeneous components into complex systems at any time. Event-based architectures inherently decouple system components. Event-based components are not designed to work with specific other components in a traditional request/reply mode, but separate communication from computation through asynchronous communication mechanisms via a dedicated notification service. Mühl, Fiege, and Pietzuch provide the reader with an in-depth description of event-based systems. They cover the complete spectrum of topics, ranging from a treatment of local event matching and distributed event forwarding algorithms, through a more practical discussion of software engineering issues raised by the event-based style, to a presentation of state-of-the-art research topics in event-based systems, such as composite event detection and security. Their presentation gives researchers a comprehensive overview of the area and lots of hints for future research. In addition, they show the power of event-based architectures in modern system design, thus encouraging professionals to exploit this technique in next generation large-scale distributed applications like information dissemination, network monitoring, enterprise application integration, or mobile systems.
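As a minimal illustration of the decoupling the authors describe (a sketch under assumed semantics, not code from the book), a dedicated notification service lets publishers and subscribers communicate asynchronously without ever referencing each other:

```python
# Minimal publish/subscribe sketch: components never call each other
# directly; a notification service matches events to subscriptions and
# delivers them asynchronously (here via a simple in-process queue).
from collections import defaultdict
from queue import Queue

class NotificationService:
    def __init__(self):
        self.subscriptions = defaultdict(list)  # topic -> callbacks
        self.pending = Queue()                  # decouples publish from delivery

    def subscribe(self, topic, callback):
        self.subscriptions[topic].append(callback)

    def publish(self, topic, event):
        self.pending.put((topic, event))        # publisher returns immediately

    def deliver_all(self):
        while not self.pending.empty():
            topic, event = self.pending.get()
            for callback in self.subscriptions[topic]:
                callback(event)

service = NotificationService()
service.subscribe("stock/acme", lambda e: print("alert:", e))
service.publish("stock/acme", {"price": 101.5})
service.deliver_all()  # prints: alert: {'price': 101.5}
```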
IFIP's Working Group 2.7(13.4)* has, since its establishment in 1974, concentrated on the software problems of user interfaces. From its original interest in operating systems interfaces the group has gradually shifted emphasis towards the development of interactive systems. The group has organized a number of international working conferences on interactive software technology, the proceedings of which have contributed to the accumulated knowledge in the field. The current title of the Working Group is 'User Interface Engineering', with the aim of investigating the nature, concepts, and construction of user interfaces for software systems. The scope of work involved is:
- to increase understanding of the development of interactive systems;
- to provide a framework for reasoning about interactive systems;
- to provide engineering models for their development.
This report addresses all three aspects of the scope, as further described below. In 1986 the working group published a report (Beech, 1986) with an object-oriented reference model for describing the components of operating systems interfaces. The model was implementation oriented and built on an object concept and the notion of interaction as consisting of commands and responses. Through working with that model the group addressed a number of issues, such as multi-media and multi-modal interfaces, customizable interfaces, and history logging. However, a conclusion was reached that many software design considerations and principles are independent of implementation models, but do depend on the nature of the interaction process.
This book provides the most complete formal specification of the semantics of the Business Process Model and Notation 2.0 standard (BPMN) available to date, in a style that is easily understandable for a wide range of readers - not only for experts in formal methods, but e.g. also for developers of modeling tools, software architects, or graduate students specializing in business process management. BPMN - issued by the Object Management Group - is a widely used standard for business process modeling. However, major drawbacks of BPMN include its limited support for organizational modeling, its only implicit expression of modalities, and its lack of integrated user interaction and data modeling. Further, in many cases the syntactical and, in particular, semantic definitions of BPMN are inaccurate, incomplete or inconsistent. The book addresses concrete issues concerning the execution semantics of business processes and provides a formal definition of BPMN process diagrams, which can serve as a sound basis for further extensions, i.e., in the form of horizontal refinements of the core language. To this end, the Abstract State Machines (ASM) method is used to formalize the semantics of BPMN. ASMs have demonstrated their value in various domains, e.g. specifying the semantics of programming or modeling languages, verifying the specification of the Java Virtual Machine, or formalizing the ITIL change management process. This kind of improvement promotes more consistency in the interpretation of comprehensive models, as well as real exchangeability of models between different tools. In the outlook at the end of the book, the authors conclude with proposing extensions that address actor modeling (including an intuitive way to denote permissions and obligations), integration of user-centric views, a refined communication concept, and data integration.
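To give a flavor of the ASM style the book builds on (an illustrative sketch only; the names and the encoding are assumptions, not the book's formalization), a single BPMN token-flow rule can be read as a guarded update of an abstract state:

```python
# Hedged sketch of one ASM-style rule for BPMN token flow: an activity is
# enabled when its incoming sequence flow carries a token; firing the rule
# consumes that token and produces one on the outgoing flow.
tokens = {"flow_in": 1, "flow_out": 0}  # hypothetical process marking

def activity_rule(tokens):
    if tokens["flow_in"] >= 1:          # guard: activity is enabled
        tokens["flow_in"] -= 1          # consume the incoming token
        tokens["flow_out"] += 1         # produce the outgoing token

activity_rule(tokens)
print(tokens)  # {'flow_in': 0, 'flow_out': 1}
```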
Your secret weapon to understanding--and using!--one of the most powerful influences in the world today. From your Facebook News Feed to your most recent insurance premiums--even making toast!--algorithms play a role in virtually everything that happens in modern society and in your personal life. And while they can seem complicated from a distance, the reality is that, with a little help, anyone can understand--and even use--these powerful problem-solving tools! In Algorithms For Dummies, you'll discover the basics of algorithms, including what they are, how they work, where you can find them (spoiler alert: everywhere!), who invented the most important ones in use today (a Greek philosopher is involved), and how to create them yourself. You'll also find:
* Dozens of graphs and charts that help you understand the inner workings of algorithms
* Links to an online repository called GitHub for constant access to updated code
* Step-by-step instructions on how to use Google Colaboratory, a zero-setup coding environment that runs right from your browser
Whether you're a curious internet user wondering how Google seems to always know the right answer to your question or a beginning computer science student looking for a head start on your next class, Algorithms For Dummies is the can't-miss resource you've been waiting for.
Software and Systems Traceability provides a comprehensive description of the practices and theories of software traceability across all phases of the software development lifecycle. The term software traceability is derived from the concept of requirements traceability. Requirements traceability is the ability to track a requirement all the way from its origins to the downstream work products that implement that requirement in a software system. Software traceability is defined as the ability to relate the various types of software artefacts created during the development of software systems. Traceability relations can improve the quality of a product being developed, and reduce the time and cost of development. More specifically, traceability relations can support evolution of software systems, reuse of parts of a system by comparing components of new and existing systems, validation that a system meets its requirements, understanding of the rationale for certain design and implementation decisions, and analysis of the implications of changes in the system.
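As a toy illustration of what such relations look like in practice (an assumed encoding, not the book's notation), traceability can be pictured as a directed graph of typed links between artefacts:

```python
# Illustrative sketch: traceability relations as typed links from a
# requirement down to the artefacts that design, implement, and verify it.
trace_links = [
    ("REQ-1", "refined_by", "design/auth_module"),
    ("design/auth_module", "implemented_by", "src/auth.py"),
    ("REQ-1", "verified_by", "tests/test_auth.py"),
]

def trace_from(artefact, links):
    # Follow links downstream from an origin artefact, e.g. a requirement.
    return [(relation, target) for source, relation, target in links
            if source == artefact]

print(trace_from("REQ-1", trace_links))
# [('refined_by', 'design/auth_module'), ('verified_by', 'tests/test_auth.py')]
```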
M. CARPENTIER, Director General, DG XIII, Telecommunications, Information Industries and Innovation of the Commission of the European Communities: It is with great pleasure that I introduce and recommend this collection of guidelines produced by EWICS TC7. This Technical Committee has consistently attracted technical experts of high quality from all over Europe and the standard of the Committee's work has reflected this. The Committee has been sponsored by the Commission of the European Communities since 1978. During this period, there has been the opportunity to observe the enthusiasm and dedication in the activities of the group, the expertise and effort invested in its work, the discipline in meeting objectives and the quality of the resulting guidelines. It is no surprise that these guidelines have influenced the work of international standardisation bodies. Now the first six of EWICS TC7's guidelines are being made available as a book. I am convinced that all computer system developers who use them will greatly enhance their chances of achieving quality systems. Acknowledgements: In the preparation of this book, the editor is grateful to P. Bishop, G. Covington II, C. Goring, and W. Quirk for their help in editing the guidelines. In addition, he would like to thank S. Bologna, W. Ehrenberger, M. Ould, J. Rata, L. Sintonen and J. Zalewski for reviewing the chapters and providing additional material.
Looking to become more efficient using Unity? How to Cheat in Unity 5 takes a no-nonsense approach to help you achieve fast and effective results with Unity 5. Geared towards the intermediate user, HTC in Unity 5 provides content beyond what an introductory book offers, and allows you to work more quickly and powerfully in Unity. Packed full with easy-to-follow methods to get the most from Unity, this book explores time-saving features for interface customization and scene management, along with productivity-enhancing ways to work with rendering and optimization. In addition, this book features a companion website at www.alanthorn.net, where you can download the book's companion files and also watch bonus tutorial video content.
* Learn bite-sized tips and tricks for effective Unity workflows
* Become a more powerful Unity user through interface customization
* Enhance your productivity with rendering tricks, better scene organization and more
* Better understand Unity asset and import workflows
* Learn techniques to save you time and money during development
Soft computing embraces methodologies for the development of intelligent systems that have been successfully applied to a large number of real-world problems. This collection of keynote papers, presented at the 7th On-line World Conference on Soft Computing in Engineering Design and Manufacturing, provides a comprehensive overview of recent advances in fuzzy, neural and evolutionary computing techniques and applications in engineering design and manufacturing. Features:
- New and highly advanced research results at the forefront of soft computing in engineering design and manufacturing
- Keynote papers by world-renowned researchers in the field
- A good overview of current soft computing research around the world
A collection of methodologies aimed at researchers and professional design and manufacturing engineers who develop and apply intelligent systems in computer engineering.
A resource like no other—the first comprehensive guide to phase unwrapping. Phase unwrapping is a mathematical problem-solving technique increasingly used in synthetic aperture radar (SAR) interferometry, optical interferometry, adaptive optics, and medical imaging. In Two-Dimensional Phase Unwrapping, two internationally recognized experts sort through the multitude of ideas and algorithms cluttering current research, explain clearly how to solve phase unwrapping problems, and provide practicable algorithms that can be applied to problems encountered in diverse disciplines. The book is complete with case studies and examples, as well as hundreds of images and figures illustrating the concepts.
Two-Dimensional Phase Unwrapping skillfully integrates concepts, algorithms, software, and examples into a powerful benchmark against which new ideas and algorithms for phase unwrapping can be tested. This unique introduction to a dynamic, rapidly evolving field is essential for professionals and graduate students in SAR interferometry, optical interferometry, adaptive optics, and magnetic resonance imaging (MRI).
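For readers new to the topic, a one-dimensional toy example (the book's subject is the much harder two-dimensional case) shows what unwrapping recovers: wrapped phase is known only modulo 2π, and unwrapping corrects the artificial jumps:

```python
# 1-D illustration of phase unwrapping: the wrapped signal lives in
# (-pi, pi]; unwrapping adds multiples of 2*pi to remove false jumps.
import numpy as np

true_phase = np.linspace(0, 6 * np.pi, 200)  # smooth, continuous phase
wrapped = np.angle(np.exp(1j * true_phase))  # wrapped into (-pi, pi]
unwrapped = np.unwrap(wrapped)               # jump-by-jump correction

print(np.allclose(unwrapped, true_phase))    # True
```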
Web services and Service-Oriented Computing (SOC) have become thriving areas of academic research, joint university/industry research projects, and novel IT products on the market. SOC is the computing paradigm that uses Web services as building blocks for the engineering of composite, distributed applications out of the reusable application logic encapsulated by Web services. Web services could be considered the best-known and most standardized technology in use today for distributed computing over the Internet. This book is the second installment of a two-book collection covering the state-of-the-art of both theoretical and practical aspects of Web services and SOC research and deployments. Advanced Web Services specifically focuses on advanced topics of Web services and SOC and covers topics including Web services transactions, security and trust, Web service management, real-world case studies, and novel perspectives and future directions. The editors present foundational topics in the first book of the collection, Web Services Foundations (Springer, 2013). Together, both books comprise approximately 1400 pages and are the result of an enormous community effort that involved more than 100 authors, comprising the world's leading experts in this field.
The book presented to the reader is devoted to time-dependent scheduling. Scheduling problems, in general, consist in the allocation of resources over time in order to perform a set of jobs. Any allocation that meets all requirements concerning the jobs and resources is called a feasible schedule. The quality of a schedule is measured by a criterion function. The aim of scheduling is to find, among all feasible schedules, a schedule that optimizes the criterion function. A solution to an arbitrary scheduling problem consists in giving a polynomial-time algorithm generating either an optimal schedule or a schedule that is close to the optimal one, if the given scheduling problem has been proved to be computationally intractable. Scheduling problems are the subject of interest of scheduling theory, which originated in the mid-fifties of the twentieth century. The theory has been developing dynamically and new research areas constantly come into existence. The subject of this book, time-dependent scheduling, is one such area. In time-dependent scheduling, the processing time of a job is variable and depends on the starting time of the job. This crucial assumption allows us to apply the scheduling theory to a broader spectrum of problems. For example, in the framework of the time-dependent scheduling theory we may consider the problems of repayment of multiple loans, fire fighting and maintenance assignments. In this book, we will discuss algorithms and complexity issues concerning various time-dependent scheduling problems.
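To make the crucial assumption concrete, here is a small sketch using the linear-deterioration model common in the time-dependent scheduling literature, p_j(s) = a_j + b_j * s (an illustrative choice; the book covers many such models). Because processing times grow with start times, the job order changes the makespan:

```python
# Two jobs with start-time-dependent processing times p_j(s) = a_j + b_j*s.
jobs = [(2.0, 0.5), (4.0, 0.1)]  # (a_j, b_j) pairs, values assumed

def makespan(order):
    t = 0.0
    for a, b in order:
        t += a + b * t           # processing time depends on start time t
    return t

print(makespan(jobs))            # ~6.2
print(makespan(jobs[::-1]))      # 8.0: same jobs, worse order
```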
The great challenge of reverse engineering is recovering design information from legacy code: the concept recovery problem. This monograph describes our research effort in attacking this problem. It discusses our theory of how a constraint-based approach to program plan recognition can efficiently extract design concepts from source code, and it details experiments in concept recovery that support our claims of scalability. Importantly, we present our models and experiments in sufficient detail so that they can be easily replicated. This book is intended for researchers or software developers concerned with reverse engineering or reengineering legacy systems. However, it may also interest those researchers who are interested in using plan recognition techniques or constraint-based reasoning. We expect the reader to have a reasonable computer science background (i.e., familiarity with the basics of programming and algorithm analysis), but we do not require familiarity with the fields of reverse engineering or artificial intelligence (AI). To this end, we carefully explain all the AI techniques we use. This book is designed as a reference for advanced undergraduate or graduate seminar courses in software engineering, reverse engineering, or reengineering. It can also serve as a supplementary textbook for software engineering-related courses, such as those on program understanding or design recovery, for AI-related courses, such as those on plan recognition or constraint satisfaction, and for courses that cover both topics, such as those on AI applications to software engineering. ORGANIZATION: The book comprises eight chapters.
This book takes you through all the basic steps of character design for games and animation, from brainstorming and references through to the development phase and final render. It covers a range of styles such as cartoon, stylized and semi-realistic, and explains how to differentiate between them and use them effectively. Using a step-by-step approach for each stage of the process, this book guides you through the process of creating a new character from scratch. It contains a wealth of design tips and tricks as well as checklists and worksheets for you to use in your own projects. The book covers how to work with briefs, as well as providing advice and practical strategies for working with clients and creating art as a product that can be tailored and sold. This book will be a valuable resource for all junior artists, hobby artists, and art students looking to develop and improve their character development skills for games and animation.
This book proposes a purely classical first-order logical approach to the theory of programming. The authors, leading members of the famous "Hungarian school," use this approach to give a unified and systematic presentation of the theory. This approach provides formal methods and tools for reasoning about computer programs and programming languages by allowing the syntactic and semantic characterization of programs, the description of program properties, and ways to check whether a given program satisfies certain properties. The basic methods are logical extension, inductive definition and their combination, all of which admit an appropriate first-order representation of data and time. The framework proposed by the authors allows the investigation and development of different programming theories and logics from a unified point of view. Dynamic and temporal logics, for example, are investigated and compared with respect to their expressive and proof-theoretic powers. The book should appeal to both theoretical researchers and students. For researchers in computer science the book provides a coherent presentation of a new approach which permits the solution of various problems in programming theory in a unified manner by the use of first-order logical tools. The book may serve as a basis for graduate courses in programming theory and logic as it covers all important questions arising between the theory of computation and formal descriptive languages and presents an appropriate derivation system.
This thesis introduces a new integrated algorithm for the detection of lane-level irregular driving. To date, there has been very little improvement in the ability to detect lane level irregular driving styles, mainly due to a lack of high performance positioning techniques and suitable driving pattern recognition algorithms. The algorithm combines data from the Global Positioning System (GPS), Inertial Measurement Unit (IMU) and lane information using advanced filtering methods. The vehicle state within a lane is estimated using a Particle Filter (PF) and an Extended Kalman Filter (EKF). The state information is then used within a novel Fuzzy Inference System (FIS) based algorithm to detect different types of irregular driving. Simulation and field trial results are used to demonstrate the accuracy and reliability of the proposed irregular driving detection method.
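As a hedged, much-simplified illustration of the estimation step (a one-dimensional toy model, not the thesis's PF/EKF combination), a scalar Kalman-style update shows how noisy lateral-offset measurements refine the vehicle's within-lane state:

```python
# Toy scalar Kalman update for lateral position within a lane.
def kalman_update(x, p, z, r):
    # x, p: state estimate and variance; z, r: measurement and its variance
    k = p / (p + r)              # gain: trust in measurement vs. prior
    return x + k * (z - x), (1 - k) * p

x, p = 0.0, 1.0                  # prior: lane center, variance 1 m^2
for z in [0.4, 0.5, 0.45]:       # noisy lateral-offset measurements (m)
    x, p = kalman_update(x, p, z, r=0.25)
print(round(x, 2), round(p, 3))  # ~0.42 m offset, shrinking uncertainty
```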
Computers are currently used in a variety of critical applications, including systems for nuclear reactor control, flight control (both aircraft and spacecraft), and air traffic control. Moreover, experience has shown that the dependability of such systems is particularly sensitive to that of their software components, both the system software of the embedded computers and the application software they support. Software Performability: From Concepts to Applications addresses the construction and solution of analytic performability models for critical-application software. The book includes a review of general performability concepts along with notions which are peculiar to software performability. Since fault tolerance is widely recognized as a viable means for improving the dependability of computer systems (beyond what can be achieved by fault prevention), the examples considered are fault-tolerant software systems that incorporate particular methods of design diversity and fault recovery. Software Performability: From Concepts to Applications will be of direct benefit to both practitioners and researchers in the area of performance and dependability evaluation, fault-tolerant computing, and dependable systems for critical applications. For practitioners, it supplies a basis for defining combined performance-dependability criteria (in the form of objective functions) that can be used to enhance the performability (performance/dependability) of existing software designs. For those with research interests in model-based evaluation, the book provides an analytic framework and a variety of performability modeling examples in an application context of recognized importance. The material contained in this book will both stimulate future research on related topics and, for teaching purposes, serve as a reference text in courses on computer system evaluation, fault-tolerant computing, and dependable high-performance computer systems.
This book focuses on new and emerging data mining solutions that offer a greater level of transparency than existing solutions. Transparent data mining solutions with desirable properties (e.g. effective, fully automatic, scalable) are covered in the book. Experimental findings of transparent solutions are tailored to different domain experts, and experimental metrics for evaluating algorithmic transparency are presented. The book also discusses societal effects of black box vs. transparent approaches to data mining, as well as real-world use cases for these approaches. As algorithms increasingly support different aspects of modern life, a greater level of transparency is sorely needed, not least because discrimination and biases have to be avoided. With contributions from domain experts, this book provides an overview of an emerging area of data mining that has profound societal consequences, and provides the technical background for readers to contribute to the field or to put existing approaches to practical use.
The development of successful, usable Web-based systems and applications requires careful consideration of problems, needs, and unique circumstances within and among organizations. Uniting research from a number of different disciplines, Web engineering seeks to develop solutions and uncover new trends in the rapidly growing body of literature on Web system design, modeling, and methodology. Models for Capitalizing on Web Engineering Advancements: Trends and Discoveries contains research on new developments and existing applications made possible by the principles of Web engineering. With selections focused on a broad range of applications from telemedicine to geographic information retrieval, this book provides a foundation for further study of the unique challenges faced by Web application designers.
Three powerful technologies are combined in a single book: Remoting, Reflection, and Threading. When these technologies come together, readers are faced with a powerful range of tools that allows them to run code faster, more securely, and more flexibly, so they'll be able to code applications across the spectrum--from a single machine to an entire network.