The goal of this book is to present the most advanced research work in realistic computer-generated images. It is made up of the papers presented at a Eurographics workshop held in Rennes, France, in June 1990. Although realism in computer graphics has existed for many years, two research directions can now clearly be identified. One makes use of empirical methods to efficiently create images that look real. The other, by contrast, makes use of physics to produce images that are exact representations of the real world (at the expense of additional processing time), hence the term photosimulation, which is the subject of this book. The objective of the workshop was to assemble experts from physics and computer graphics in order to contribute to the introduction of physics-based approaches in the field of computer-generated images. As the first workshop entirely devoted to this topic, it was a gamble, and fortunately it turned out to be a success. The contents of the book are organized in five chapters: Efficient Ray Tracing Methods, Theory of Global Illumination Models, Photometric Algorithms, Form-Factor Calculations, and Physics-Based Methods.
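For readers new to the area, the physics-based direction described above is commonly formalized through the rendering equation; this is standard background added here for context, not an excerpt from the book:

```latex
% Rendering equation (standard background, not quoted from the book):
% outgoing radiance = emitted radiance + reflected incident radiance
L_o(x, \omega_o) = L_e(x, \omega_o)
  + \int_{\Omega} f_r(x, \omega_i, \omega_o)\, L_i(x, \omega_i)\, (\omega_i \cdot n)\, d\omega_i
```

Global illumination methods such as radiosity approximate this integral; the form factors mentioned in the chapter list are the geometric coupling terms of that approximation.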
This new edition includes the latest advances and developments in computational probability involving A Probability Programming Language (APPL). The book examines and presents, in a systematic manner, computational probability methods that encompass data structures and algorithms. The developed techniques address problems that require exact probability calculations, many of which have been considered intractable in the past. The book addresses the plight of the probabilist by providing algorithms to perform calculations associated with random variables. Computational Probability: Algorithms and Applications in the Mathematical Sciences, 2nd Edition begins with an introductory chapter that contains short examples involving the elementary use of APPL. Chapter 2 reviews the Maple data structures and functions necessary to implement APPL. This is followed by a discussion of the development of the data structures and algorithms (Chapters 3-6 for continuous random variables and Chapters 7-9 for discrete random variables) used in APPL. The book concludes with Chapters 10-15 introducing a sampling of various applications in the mathematical sciences. This book should appeal to researchers in the mathematical sciences with an interest in applied probability and instructors using the book for a special topics course in computational probability taught in a mathematics, statistics, operations research, management science, or industrial engineering department.
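APPL itself runs inside Maple (see Chapter 2), so any snippet here can only be an analogue. As a minimal sketch of the same idea, exact symbolic probability calculation on random variables, here is an illustration in Python with sympy.stats, which is an assumption of this description rather than code from the book:

```python
# Exact probability calculations on random variables, in the spirit of
# APPL but using sympy.stats as an illustrative stand-in for Maple.
from sympy.stats import Die, E, P, variance

X = Die("X", 6)     # a fair six-sided die as a symbolic random variable
Y = Die("Y", 6)
S = X + Y           # the sum is itself a random variable

print(P(S > 9))     # exact probability 1/6, not a floating-point estimate
print(E(S))         # exact expectation 7
print(variance(S))  # exact variance 35/6
```

The point, as in APPL, is that the answers come out as exact rational quantities rather than simulation-based approximations.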
"Proceedings of the 2012 International Conference on Information
Technology and Software Engineering" presents selected articles
from this major event, which was held in Beijing, December 8-10,
2012. This book presents the latest research trends, methods and
experimental results in the fields of information technology and
software engineering, covering various state-of-the-art research
theories and approaches. The subjects range from intelligent
computing to information processing, software engineering, Web,
unified modeling language (UML), multimedia, communication
technologies, system identification, graphics and visualizing,
etc.
With New Perspectives' critical-thinking, problem-solving approach, students will gain a comprehensive understanding of Microsoft Office Access 2010 and will learn how to take advantage of the flexibility it offers. Case-based tutorials challenge students to apply what they are learning to real-life tasks, preparing them to transfer skills easily to new situations. With New Perspectives, students understand why they're learning what they're learning and are better positioned to retain skills beyond the classroom.
Geocomputation with R is for people who want to analyze, visualize, and model geographic data with open source software. It is based on R, a statistical programming language that has powerful data processing, visualization, and geospatial capabilities. The book equips you with the knowledge and skills to tackle a wide range of issues manifested in geographic data, including those with scientific, societal, and environmental implications. It will interest people from many backgrounds, especially Geographic Information Systems (GIS) users interested in applying their domain-specific knowledge in a powerful open source language for data science, and R users interested in extending their skills to handle spatial data. The book is divided into three parts: (I) Foundations, aimed at getting you up to speed with geographic data in R; (II) Extensions, which covers advanced techniques; and (III) Applications, which tackles real-world problems. The chapters cover progressively more advanced topics, with early chapters providing strong foundations on which the later chapters build. Part I describes the nature of spatial datasets in R and methods for manipulating them. It also covers geographic data import/export and transforming coordinate reference systems. Part II presents methods that build on these foundations. It covers advanced map making (including web mapping), "bridges" to GIS, sharing reproducible code, and how to do cross-validation in the presence of spatial autocorrelation. Part III applies the knowledge gained to tackle real-world problems, including representing and modeling transport systems, finding optimal locations for stores or services, and ecological modeling. Exercises at the end of each chapter give you the skills needed to tackle a range of geospatial problems. Solutions for each chapter and supplementary materials providing extended examples are available at https://geocompr.github.io/geocompkg/articles/. Dr. Robin Lovelace is a University Academic Fellow at the University of Leeds, where he has taught R for geographic research over many years, with a focus on transport systems. Dr. Jakub Nowosad is an Assistant Professor in the Department of Geoinformation at Adam Mickiewicz University in Poznan, where his focus is on the analysis of large datasets to understand environmental processes. Dr. Jannes Muenchow is a Postdoctoral Researcher in the GIScience Department at the University of Jena, where he develops and teaches a range of geographic methods, with a focus on ecological modeling, statistical geocomputing, and predictive mapping. All three are active developers and work on a number of R packages, including stplanr, sabre, and RQGIS.
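The book's own examples are written in R (with packages such as sf), so the following is only a hypothetical analogue: a minimal sketch of the coordinate-reference-system transformation mentioned for Part I, written in Python with geopandas:

```python
# Coordinate reference system (CRS) transformation, illustrated with
# geopandas as a stand-in for the R workflow used in the book.
import geopandas as gpd
from shapely.geometry import Point

# One point in geographic coordinates (longitude/latitude, EPSG:4326)
gdf = gpd.GeoDataFrame({"name": ["Leeds"]},
                       geometry=[Point(-1.55, 53.80)],
                       crs="EPSG:4326")

# Reproject to Web Mercator (EPSG:3857), a projected CRS in metres
projected = gdf.to_crs("EPSG:3857")
print(projected.geometry.iloc[0])  # coordinates now in metres
```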
This book covers the distinguishing characteristics and tropes of visual novels (VNs) as choice-based games, using VNs such as 999: Nine Hours, Nine Persons, Nine Doors; Hatoful Boyfriend; and Monster Prom, some of the best examples of the genre, as illustrations. The author covers structuring branching narrative and plot, designing impactful and compelling choices, writing entertaining relationships and character interactions, understanding the importance of a VN's prose, and planning a VN's overall narrative design and story delivery. The book contains exercises at the end of chapters to practice the techniques discussed; a reader who finishes all the exercises may end up with several portfolio pieces or a significant portion of their own VN project designed. Features: discusses different aspects and genres of VNs, what makes them enjoyable, and successful techniques developers can incorporate into their own games; analyzes various VNs and choice-based games that use these successful techniques; and shares tips from developers on portfolio pieces, hiring a team to work on VNs, and plotting and outlining VNs. Branching Story, Unlocked Dialogue: Designing and Writing Visual Novels is a valuable resource for developers and narrative designers interested in working on VNs. It shows them how to design their own VN projects, design branching narratives, develop entertaining plots and relationships, design impactful and compelling choices, and write prose that's a pleasure to read.
This textbook provides a comprehensive introduction to probability and stochastic processes, and shows how these subjects may be applied in computer performance modeling. The author's aim is to derive probability theory in a way that highlights the complementary nature of its formal, intuitive, and applicative aspects while illustrating how the theory is applied in a variety of settings. Readers are assumed to be familiar with elementary linear algebra and calculus, including being conversant with limits, but otherwise, this book provides a self-contained approach suitable for graduate or advanced undergraduate students. The first half of the book covers the basic concepts of probability, including combinatorics, expectation, random variables, and fundamental theorems. In the second half of the book, the reader is introduced to stochastic processes. Subjects covered include renewal processes, queueing theory, Markov processes, matrix geometric techniques, reversibility, and networks of queues. Examples and applications are drawn from problems in computer performance modeling. Throughout, large numbers of exercises of varying degrees of difficulty will help to secure a reader's understanding of these important and fascinating subjects.
The third volume of The Art of Hearthstone chronicles the artistic achievements that infused the Year of the Mammoth with charm, character, and beauty. Through vivid illustrations and behind-the-scenes interviews with artists and game designers, The Art of Hearthstone draws back the curtain on a massive creative undertaking, showing how a huge team came together to deliver one of Hearthstone's most impressive years ever.
This book presents the state of the art in high-performance computing and simulation on modern supercomputer architectures. It covers trends in hardware and software development in general, and the future of high-performance systems and heterogeneous architectures in particular. The application-related contributions cover computational fluid dynamics, material science, medical applications, and climate research; innovative fields such as coupled multi-physics and multi-scale simulations are highlighted. All papers were chosen from presentations given at the 18th Workshop on Sustained Simulation Performance, held at HLRS, University of Stuttgart, Germany, in October 2013, and at the subsequent workshop of the same name held at Tohoku University in March 2014.
For professionals who need to design, implement, or manage a quality software program, this volume identifies ten major components that make up a solid program in line with ISO 9001 quality management precepts. This second edition is expanded by over 20 per cent, with updated references, text revisions, and new chapters on software safety and software risk management. It seeks to provide the starting points for a standardized documentation system and a better understanding of the individual program components and how they integrate to form the whole system.
Multimedia Mining: A Highway to Intelligent Multimedia Documents brings together experts in digital media content analysis, state-of-the-art data mining and knowledge discovery in multimedia database systems, knowledge engineers, and domain experts from diverse applied disciplines. Multimedia documents are ubiquitous and often required, if not essential, in many applications today, and collections of them have become widespread and extremely large. There are tools for managing and searching within these collections, but the need for tools to extract hidden, useful knowledge embedded within multimedia objects is becoming pressing and central for many decision-making applications. The tools needed today are tools for discovering relationships between objects or segments within multimedia document components, such as classifying images based on their content, extracting patterns in sound, categorizing speech and music, and recognizing and tracking objects in video streams.
Within 50 years, computers could have capabilities rivaling those of the human brain. Effective utilization of such new technologies poses a significant challenge to the computer science community, which finds an ever increasing number of complex applications within its technological grasp. In addition to increased complexity, most, if not all, of these applications are accompanied by an inherent increase in the consequences associated with their failure, resulting in the construction of increasingly high-consequence complex systems. Systems in this domain are beyond our ability to construct in a brute-force manner. There are two major challenges in developing such systems: managing complexity, and providing sufficient evidence that the system satisfies its dependability constraints. Society is tacitly relying on the research community to solve these problems on a timetable that satisfies the needs of industry. While impressive results have been obtained, the research community is still, to some extent, hamstrung by the lack of realistic case study problems against which to benchmark new techniques and approaches. The purpose of High Integrity Software is to explore a cross-section of some of the most promising areas of research in the construction of high-consequence complex systems, for example through a case study involving the Bay Area Rapid Transit (BART) system. Because of its scope and complexity, the BART case study is recognized by many in the formal methods community as one of the definitive case study problems, and as such it provides valuable insight into the challenges that must be faced in the coming years. High Integrity Software is suitable as a secondary text for a graduate-level course and as a reference for researchers and practitioners in industry.
This book is the first comprehensive survey of the field of constraint databases, written by leading researchers. Constraint databases are a fairly new and active area of database research. The key idea is that constraints, such as linear or polynomial equations, are used to represent large, or even infinite, sets in a compact way. The ability to deal with infinite sets makes constraint databases particularly promising as a technology for integrating spatial and temporal data with standard relational databases. Constraint databases bring techniques from a variety of fields, such as logic and model theory, algebraic and computational geometry, as well as symbolic computation, to the design and analysis of data models and query languages.
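As a simple illustration of that key idea (our example, not the book's): the infinitely many points of a triangle can be stored as a single finitely described relation, a conjunction of linear constraints over the attributes x and y:

```latex
% A finitely represented infinite set: every point of a triangle,
% stored as one "generalized tuple" of linear constraints.
T = \{\, (x, y) \mid x \ge 0 \;\wedge\; y \ge 0 \;\wedge\; x + y \le 1 \,\}
```

Queries are then evaluated by symbolic manipulation of the constraints (for example, selection adds a constraint and projection eliminates a variable) rather than by enumerating tuples.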
Games and simulations have emerged as new and effective tools for educational learning, providing interactivity and integration with online resources that are typically unavailable with traditional educational resources. Design, Utilization, and Analysis of Simulations and Game-Based Educational Worlds presents developments and evaluations of games and computer-mediated simulations drawn from multiple studies in order to build a better understanding of the role of electronic games in education. This book helps researchers, practitioners, and policymakers gain a deeper comprehension of the relationship between the research and practice of electronic gaming and simulations in the educational environment.
AI is an integral part of every video game, and this book helps game developers keep up with constantly evolving technological advances to create robust AI. The authors draw on their considerable experience and use case studies from real games to provide a complete reference. Also included are exercises so readers can test their comprehension and understanding of the concepts and practices presented. This revised and updated Third Edition includes new techniques, algorithms, data structures, and representations needed to create powerful AI in games. It helps experienced game developers learn new techniques and provides students with a solid understanding of game AI that will help them jumpstart their careers.
The PACE System: An Expert Consulting System for Nursing provides a case study of the research, design, implementation, and commercial distribution of this decision support system. Beginning with a summary of PACE's twenty-year development and its start as a university-based system, the author contrasts the original system with its current version to give a concrete understanding of its evolution. Further chapters discuss issues in the initial development of the knowledge base; the specific activities and efforts needed to acquire, maintain, and manage the knowledge base; and how the network is maintained. The implementation, distribution, and subsequent validation of the entire clinical system are reviewed. Principles and lessons from the development of PACE are analyzed in the hope that today's researchers and developers will glean useful information from these earlier efforts.
John Chambers turns his attention to R, the enormously successful open-source system based on the S language. His book guides the reader through programming with R, beginning with simple interactive use and progressing by gradual stages, starting with simple functions. More advanced programming techniques can be added as needed, allowing users to grow into software contributors, benefiting their careers and the community. R packages provide a powerful mechanism for contributions to be organized and communicated. This is the only advanced programming book on R, written by the author of the S language from which R evolved.
This volume introduces machine learning techniques that are particularly powerful and effective for modeling multimedia data and for common tasks in multimedia content analysis. It systematically covers key machine learning techniques in an intuitive fashion and demonstrates their applications through case studies. Coverage includes examples of unsupervised learning, generative models, and discriminative models. In addition, the book examines Maximum Margin Markov (M3) networks, which strive to combine the advantages of both graphical models and support vector machines (SVMs).
This book contains contributions presented during the international conference on Model-Based Reasoning (MBR012), held June 21-23, 2012, in Sestri Levante, Italy. In this volume, interdisciplinary researchers discuss how scientific cognition and other kinds of cognition make use of models, abduction, and explanatory reasoning in order to produce important or creative changes in theories and concepts. Some of the contributions analyze the problem of model-based reasoning in technology and stress issues of scientific and technological innovation. The book is divided into three main parts: models, mental models, and representations; abduction, problem solving, and practical reasoning; and historical, epistemological, and technological issues.
Machine learning methods are now an important tool for scientists, researchers, engineers, and students in a wide range of areas. This book is written for people who want to adopt and use the main tools of machine learning but aren't necessarily going to want to be machine learning researchers. Intended for students in final-year undergraduate or first-year graduate computer science programs in machine learning, this textbook is a machine learning toolkit. Applied Machine Learning covers many topics for people who want to use machine learning processes to get things done, with a strong emphasis on using existing tools and packages rather than writing one's own code. A companion to the author's Probability and Statistics for Computer Science (PSCS), this book picks up where the earlier book left off (but also supplies a summary of probability that the reader can use). Emphasizing the usefulness of standard machinery from applied statistics, this textbook gives an overview of the major applied areas in learning, including coverage of:

* classification using standard machinery (naive Bayes; nearest neighbor; SVM), a minimal sketch of which follows this list
* clustering and vector quantization (largely as in PSCS)
* PCA (largely as in PSCS)
* variants of PCA (NIPALS; latent semantic analysis; canonical correlation analysis)
* linear regression (largely as in PSCS)
* generalized linear models, including logistic regression
* model selection with the lasso and elastic net
* robustness and M-estimators
* Markov chains and HMMs (largely as in PSCS)
* EM in fairly gory detail; long experience teaching this suggests one detailed example is required, which students hate, but once they've been through that, the next one is easy
* simple graphical models (in the variational inference section)
* classification with neural networks, with a particular emphasis on image classification
* autoencoding with neural networks
* structure learning
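As a taste of that first item, here is a minimal sketch of classification with standard machinery, written in Python with scikit-learn; this is our illustration of the approach the book advocates (existing packages over hand-rolled code), not code taken from the text:

```python
# "Classification using standard machinery": Gaussian naive Bayes on the
# classic iris dataset, relying on an existing package end to end.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

model = GaussianNB().fit(X_train, y_train)
print(f"test accuracy: {model.score(X_test, y_test):.2f}")
```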
Information and communication technology (ICT) is permeating all aspects of service management; in the public sector, ICT is improving the capacity of government agencies to provide a wide array of innovative services that benefit citizens. E-Government is emerging as a multidisciplinary field of research, based initially on empirical insights from practice. Efforts to theoretically anchor the field have opened perspectives from multiple research domains, as demonstrated in Practical Studies in E-Government. In this volume, the editors and contributors consider the evolution of the e-government field from both practical and research perspectives. Featuring in-depth case studies of initiatives in eight countries, the book deals with such technology-oriented issues as interoperability, prototyping, data quality, and advanced interfaces, and with such management-oriented issues as e-procurement, e-identification, election results verification, and information privacy. The book features best practices, tools for measuring and improving performance, and analytical methods for researchers.
Enterprise Resource Planning (ERP), Supply Chain Management (SCM), Customer Relationship Management (CRM), Business Intelligence (BI), and Big Data Analytics (BDA) are business-related tasks and processes that are supported by standardized software solutions. The book explains that this requires business-oriented thinking and acting from IT specialists and data scientists, and that it is a good idea to let students experience this directly from the business perspective, for example as executives of a virtual company in a role-playing game. The second edition of the book has been completely revised, restructured, and supplemented with current topics such as blockchains in supply chains and the relationship between Big Data analytics, artificial intelligence, and machine learning. The structure of the book is based on the gradual implementation and integration of the respective information systems from the business and management perspectives. Part I contains chapters with detailed descriptions of the topics, supplemented by online tests and exercises. Part II introduces the role play and the online gaming and simulation environment. Supplementary teaching material, presentations, templates, and video clips are available online in the gaming area. The gaming and business simulation Kdibisglobal.com, newly created for this book, now includes a beer division, a bottled water division, a soft drink division, and a manufacturing division for barcode cash-register scanners, with their specific business processes and supply chains.
You may like...
* Financial Analysis With Microsoft Excel by Timothy Mayes (Paperback)
* Essential Java for Scientists and… by Brian Hahn, Katherine Malan (Paperback), R1,296
* Cybersecurity Issues and Challenges for… by Saqib Saeed, Abdullah M. Almuhaideb, … (Hardcover), R8,595
* Data Communication and Computer Networks… by Jill West, Curt M. White (Paperback)
* Database Systems - Design… by Carlos Coronel, Steven Morris (Paperback)