SUCCEEDING IN BUSINESS WITH MICROSOFT OFFICE EXCEL 2010, International Edition prepares your students to solve business problems by moving beyond basic "point and click" skills to think critically about realistic business situations. When students combine software analysis with their own decision-making abilities, they are more likely to meet any business challenge with success. The Succeeding in Business Series emphasizes problem-solving, critical thinking, and analysis, challenging students to find efficient and effective solutions.
This book describes various new computer-based approaches which can be exploited for the (digital) reconstruction, recognition, restoration, presentation and classification of digital heritage. They are based on applications of virtual reality, augmented reality and artificial intelligence, to be used for storing and retrieving historical artifacts, digital reconstruction, or virtual viewing. The book is divided into three sections: "Classification of Heritage Data" presents chapters covering various domains and aspects including text categorization, image retrieval and classification, and object spotting in historical documents. Next, in "Detection and Recognition of Digital Heritage Artifacts", techniques like neural networks or deep learning are used for the restoration of degraded heritage documents, the recognition of Tamil palm leaf characters, the reconstruction of heritage images, the selection of suitable images for 3D reconstruction, and the classification of Indian landmark heritage images. Lastly, "Applications of Modern Tools in Digital Heritage" highlights some example applications for dance transcription, the architectural geometry of early temples by digital reconstruction, and computer vision based techniques for collecting and integrating knowledge on flora. This book is mainly written for researchers and graduate students in digital preservation and heritage, or computer scientists looking for applications of virtual reality, computer vision, and artificial intelligence techniques.
This book stages a dialogue between international researchers from the broad fields of complexity science and narrative studies. It presents an edited collection of chapters on how narrative theory from the humanities may be exploited to understand, explain, describe, and communicate aspects of complex systems, such as their emergent properties, feedbacks, and downward causation; and how ideas from complexity science can inform narrative theory, and help explain, understand, and construct new, more complex models of narrative as a cognitive faculty and as a pervasive cultural form in new and old media. The book is suitable for academics, practitioners, professionals, and postgraduates in complex systems, narrative theory, literary and film studies, new media and game studies, and science communication.
This book presents the best papers from the 1st International Conference on Mathematical Research for Blockchain Economy (MARBLE) 2019, held in Santorini, Greece. While most blockchain conferences and forums are dedicated to business applications, product development or Initial Coin Offering (ICO) launches, this conference focused on the mathematics behind blockchain to bridge the gap between practice and theory. Every year, thousands of blockchain projects are launched and circulated in the market, and there is a tremendous wealth of blockchain applications, from finance to healthcare, education, media, logistics and more. However, due to theoretical and technical barriers, most of these applications are impractical for use in a real-world business context. The papers in this book reveal the challenges and limitations, such as scalability, latency, privacy and security, and showcase solutions and developments to overcome them.
As schools continue to explore the transition from traditional education to teaching and learning online, new instructional design frameworks are needed that can support the development of e-learning content. The e-learning frameworks examined within this book have eight dimensions: (1) institutional, (2) pedagogical, (3) technological, (4) interface design, (5) evaluation, (6) management, (7) resource support, and (8) ethical. Each of these dimensions contains a group of concerns or issues that need to be examined to assess and develop an institution's e-capability in order to introduce the best e-learning practices. Challenges and Opportunities for the Global Implementation of E-Learning Frameworks presents global perspectives on the latest best practices and success stories of institutions that were able to effectively implement e-learning frameworks. An e-learning framework is used as a guide to examine e-learning practices in countries around the globe and to reflect on opportunities and challenges for implementing quality learning. The book therefore presents tips on success factors and issues relevant to failures, along with an analysis of similarities and differences between several countries and the educational lessons drawn from them. While highlighting topics such as course design and development, ICT use in the classroom, and e-learning for different subjects, this book is ideal for university leaders, practitioners in e-learning, continuing education institutions, government agencies, course developers, in-service and preservice teachers, administrators, practitioners, stakeholders, researchers, academicians, and students seeking knowledge on how e-learning frameworks are being implemented across the globe.
Geometry is the cornerstone of computer graphics and computer animation, and provides the framework and tools for solving problems in two and three dimensions. This may be in the form of describing simple shapes such as a circle, ellipse, or parabola, or complex problems such as rotating 3D objects about an arbitrary axis. Geometry for Computer Graphics draws together a wide variety of geometric information that will provide a sourcebook of facts, examples, and proofs for students, academics, researchers, and professional practitioners. The book is divided into four sections: the first summarizes hundreds of formulae used to solve 2D and 3D geometric problems. The second section places these formulae in context in the form of worked examples. The third provides the origin and proofs of these formulae, and communicates mathematical strategies for solving geometric problems. The last section is a glossary of terms used in geometry.
When managers and ecologists need to make decisions about the environment, they use models to simulate the dynamic systems that interest them. All management decisions affect certain landscapes over time, and those landscapes are composed of intricate webs of dynamic processes that need to be considered in relation to each other. With widespread use of Geographic Information Systems (GIS), there is a growing need for complex models incorporating an increasing amount of data. The open-source Spatial Modeling Environment (SME) was developed to build upon common modeling software, such as STELLA (R) and Powersim (R), among others, to create, run, analyze, and present spatial models of ecosystems, watersheds, populations, and landscapes. In this book, the creators of the Spatial Modeling Environment discuss and illustrate the uses of SME as a modeling tool for all kinds of complex spatial systems. The authors demonstrate the entire process of spatial modeling, beginning with the conceptual design, continuing through formal implementation and analysis, and concluding with the interpretation and presentation of the results. A variety of applications and case studies address particular types of ecological and management problems and help to identify potential problems for modelers. Researchers and students interested in spatial modeling will learn how to simulate the complex dynamics of landscapes. Managers and decision makers will acquire tools for predicting changes in landscapes while learning about both the possibilities and the limitations of simulation models. The enclosed CD contains SME, color illustrations, and models and data from the examples in the book.
Describing novel mathematical concepts for recommendation engines, Realtime Data Mining: Self-Learning Techniques for Recommendation Engines features a sound mathematical framework unifying approaches based on control and learning theories, tensor factorization, and hierarchical methods. Furthermore, it presents promising results of numerous experiments on real-world data. The area of realtime data mining is currently developing at an exceptionally dynamic pace, and realtime data mining systems are the counterpart of today's "classic" data mining systems. Whereas the latter learn from historical data and then use it to deduce necessary actions, realtime analytics systems learn and act continuously and autonomously. In the vanguard of these new analytics systems are recommendation engines. They are principally found on the Internet, where all information is available in realtime and immediate feedback is guaranteed. This monograph appeals to computer scientists and specialists in machine learning, especially from the area of recommender systems, because it conveys a new way of realtime thinking by considering recommendation tasks as control-theoretic problems. Realtime Data Mining: Self-Learning Techniques for Recommendation Engines will also interest application-oriented mathematicians because it consistently combines some of the most promising mathematical areas, namely control theory, multilevel approximation, and tensor factorization.
Matthias Schu examines three main topics in his research: the intention of store-based retail and wholesale companies to open their own online channel, the factors determining the foreign market selection behavior of online retailers, and the factors affecting the speed of the internationalization process of online retailers. New insights for retail research and management are presented and contribute to existing knowledge; the study is valuable for academic researchers and for practitioners who are interested in a thorough analysis of online retailing from a strategic and theoretical perspective.
This book features the latest research in the area of immersive technologies as presented at the 7th International Extended Reality (XR) Conference, held in Lisbon, Portugal in 2022. Bridging the gap between academia and industry, it showcases the latest advances in augmented reality (AR), virtual reality (VR), extended reality (XR) and metaverse and their applications in various sectors such as business, marketing, retail, education, healthcare, tourism, events, fashion, entertainment, and gaming. The volume gathers selected research papers by prominent AR, VR, XR and metaverse scholars from around the world. Presenting the most significant topics and latest findings in the fields of augmented reality, virtual reality, extended reality and metaverse, it will be a valuable asset for academics and practitioners alike.
Ontologies tend to be found everywhere. They are viewed as the silver bullet for many applications, such as database integration, peer-to-peer systems, e-commerce, semantic web services, or social networks. However, in open or evolving systems, such as the semantic web, different parties would, in general, adopt different ontologies. Thus, merely using ontologies, like using XML, does not reduce heterogeneity: it just raises heterogeneity problems to a higher level. Euzenat and Shvaiko's book is devoted to ontology matching as a solution to the semantic heterogeneity problem faced by computer systems. Ontology matching aims at finding correspondences between semantically related entities of different ontologies. These correspondences may stand for equivalence as well as other relations, such as consequence, subsumption, or disjointness, between ontology entities. Many different matching solutions have been proposed so far from various viewpoints, e.g., databases, information systems, and artificial intelligence. The second edition of Ontology Matching has been thoroughly revised and updated to reflect the most recent advances in this quickly developing area, which resulted in more than 150 pages of new content. In particular, the book includes a new chapter dedicated to the methodology for performing ontology matching. It also covers emerging topics, such as data interlinking, ontology partitioning and pruning, context-based matching, matcher tuning, alignment debugging, and user involvement in matching, to mention a few. More than 100 state-of-the-art matching systems and frameworks were reviewed. With Ontology Matching, researchers and practitioners will find a reference book that presents currently available work in a uniform framework. In particular, the work and the techniques presented in this book can be equally applied to database schema matching, catalog integration, XML schema matching and other related problems. 
The objectives of the book include presenting (i) the state of the art and (ii) the latest research results in ontology matching by providing a systematic and detailed account of matching techniques and matching systems from theoretical, practical and application perspectives.
This book focuses on the developing field of building probability models with the power of symbolic algebra systems. It combines the uses of symbolic algebra with probabilistic/stochastic applications and highlights those applications in a variety of contexts. The research explored in each chapter is unified by the use of A Probability Programming Language (APPL) to achieve the modeling objectives. APPL, as a research tool, gives a probabilist or statistician the ability to explore new ideas, methods, and models. Furthermore, as an open-source language, it sets the foundation for future algorithms to augment the original code. Computational Probability Applications comprises fifteen chapters, each presenting a specific application of computational probability using the APPL modeling and computer language. The chapter topics include using the inverse gamma as a survival distribution, linear approximations of probability density functions, and moment-ratio diagrams for univariate distributions. These works highlight interesting examples, often produced by undergraduate and graduate students, that can serve as templates for future work. In addition, this book should appeal to researchers and practitioners in a range of fields including probability, statistics, engineering, finance, neuroscience, and economics.
Multilevel and Longitudinal Modeling with IBM SPSS, Third Edition, demonstrates how to use the multilevel and longitudinal modeling techniques available in IBM SPSS Versions 25-27. Annotated screenshots with all relevant output provide readers with a step-by-step understanding of each technique as they are shown how to navigate the program. Throughout, diagnostic tools, data management issues, and related graphics are introduced. SPSS commands show the flow of the menu structure and how to facilitate model building, while annotated syntax is also available for those who prefer this approach. Extended examples illustrating the logic of model development and evaluation are included throughout the book, demonstrating the context and rationale of the research questions and the steps around which the analyses are structured. The book opens with the conceptual and methodological issues associated with multilevel and longitudinal modeling, followed by a discussion of SPSS data management techniques that facilitate working with multilevel, longitudinal, or cross-classified data sets. The next few chapters introduce the basics of multilevel modeling, developing a multilevel model, extensions of the basic two-level model (e.g., three-level models, models for binary and ordinal outcomes), and troubleshooting techniques for everyday-use programming and modeling problems along with potential solutions. Models for investigating individual and organizational change are next developed, followed by models with multivariate outcomes and, finally, models with cross-classified and multiple membership data structures. The book concludes with thoughts about ways to expand on the various multilevel and longitudinal modeling techniques introduced and issues (e.g., missing data, sample weights) to keep in mind in conducting multilevel analyses. Key features of the third edition: Thoroughly updated throughout to reflect IBM SPSS Versions 26-27. 
Introduction to fixed-effects regression for examining change over time where random-effects modeling may not be an optimal choice. Additional treatment of key topics specifically aligned with multilevel modeling (e.g., models with binary and ordinal outcomes). Expanded coverage of models with cross-classified and multiple membership data structures. Added discussion on model checking for improvement (e.g., examining residuals, locating outliers). Further discussion of alternatives for dealing with missing data and the use of sample weights within multilevel data structures. Supported by online data sets, the book's practical approach makes it an essential text for graduate-level courses on multilevel, longitudinal, latent variable modeling, multivariate statistics, or advanced quantitative techniques taught in departments of business, education, health, psychology, and sociology. The book will also prove appealing to researchers in these fields. The book is designed to provide an excellent supplement to Heck and Thomas's An Introduction to Multilevel Modeling Techniques, Fourth Edition; however, it can also be used with any multilevel or longitudinal modeling book or as a stand-alone text.
This book presents a coherent description of the theoretical and practical aspects of Coloured Petri Nets (CP-nets or CPN). It shows how CP-nets have been developed - from being a promising theoretical model to being a full-fledged language for the design, specification, simulation, validation and implementation of large software systems (and other systems in which human beings and/or computers communicate by means of some more or less formal rules). The book contains the formal definition of CP-nets and the mathematical theory behind their analysis methods. However, it has been the intention to write the book in such a way that it also becomes attractive to readers who are more interested in applications than the underlying mathematics. This means that a large part of the book is written in a style which is closer to an engineering textbook (or a users' manual) than it is to a typical textbook in theoretical computer science. The book consists of three separate volumes. The first volume defines the net model (i.e., hierarchical CP-nets) and the basic concepts (e.g., the different behavioural properties such as deadlocks, fairness and home markings). It gives a detailed presentation of many small examples and a brief overview of some industrial applications. It introduces the formal analysis methods. Finally, it contains a description of a set of CPN tools which support the practical use of CP-nets.
What the book is about This book is about the theory and practice of the use of multimedia, multimodal interfaces for learning. Yet it is not about technology as such, at least in the sense that the authors do not subscribe to the idea that one should do something just because it is technologically possible. 'Multimedia' has been adopted in some commercial quarters to mean little more than a computer with some form of audio or (more usually) video attachment. This is a trend which ought to be resisted, as exemplified by the material in this book. Rather than merely using a new technology 'because it is there', there is a need to examine how people learn and communicate, and to study diverse ways in which computers can harness text, sounds, speech, images, moving pictures, gestures, touch, etc., to promote effective human learning. We need to identify which media, in which combinations, using what mappings of domain to representation, are appropriate for which educational purposes. The word 'multimodal' in the title underlies this perspective. The intention is to focus attention less on the technology and more on how to structure different kinds of information via different sensory channels in order to yield the best possible quality of communication and educational interaction. (Though the reader should refer to Chapter 1 for a discussion of the use of the word 'multimodal'.) Historically there was little problem.
In the third paper in this chapter, Mike Pratt provides an historical introduction to solid modeling. He presents the development of the three most frequently used techniques: cellular subdivision, constructive solid modeling and boundary representation. Although each of these techniques developed more or less independently, today the designer's needs dictate that a successful system allows access to all of these methods. For example, sculptured surfaces are generally represented using a boundary representation. However, the design of a complex vehicle generally dictates that a sculptured surface representation is most efficient for the 'skin' while a constructive solid geometry representation is most efficient for the internal mechanism. Pratt also discusses the emerging concept of design by 'feature line'. Finally, he addresses the very important problem of data exchange between solid modeling systems and the progress that is being made towards developing an international standard. With the advent of reasonably low cost scientific workstations with reasonable to outstanding graphics capabilities, scientists and engineers are increasingly turning to computer analysis for answers to fundamental questions and to computer graphics for presentation of those answers. Although the current crop of workstations exhibit quite impressive computational capability, they are still not capable of solving many problems in a reasonable time frame, e.g., executing computational fluid dynamics and finite element codes or generating complex ray traced or radiosity based images. In the sixth chapter Mike Muuss of the U.S.
This book introduces the Vienna Simulator Suite for 3rd-Generation Partnership Project (3GPP)-compatible Long Term Evolution-Advanced (LTE-A) simulators and presents applications to demonstrate their uses for describing, designing, and optimizing wireless cellular LTE-A networks. Part One addresses LTE and LTE-A link level techniques. As there has been high demand for the downlink (DL) simulator, it constitutes the central focus of the majority of the chapters. This part of the book reports on relevant highlights, including single-user (SU), multi-user (MU) and single-input-single-output (SISO) as well as multiple-input-multiple-output (MIMO) transmissions. Furthermore, it summarizes the optimal pilot pattern for high-speed communications as well as different synchronization issues. One chapter is devoted to experiments that show how the link level simulator can provide input to a testbed. This section also uses measurements to present and validate fundamental results on orthogonal frequency division multiplexing (OFDM) transmissions that are not limited to LTE-A. One chapter exclusively deals with the newest tool, the uplink (UL) link level simulator, and presents cutting-edge results. In turn, Part Two focuses on system-level simulations. From early on, system-level simulations have been in high demand, as people are naturally seeking answers when scenarios with numerous base stations and hundreds of users are investigated. This part not only explains how mathematical abstraction can be employed to speed up simulations by several hundred times without sacrificing precision, but also illustrates new theories on how to abstract large urban heterogeneous networks with indoor small cells. It also reports on advanced applications such as train and car transmissions to demonstrate the tools' capabilities.
Here's to a good Word! Do you finally want to make your Word workflows more effective? Then this paperback with the aha effect is just the thing for you! Rainer Schwabe shows, concisely and to the point, how to tailor Word to your own requirements and wishes, and which secret tricks make everything go even faster. Stand out from the Word crowd with, for example, proper mail merges, attractive layouts, and clearly organized tables. Make quick entries and create your own keyboard shortcuts and commands. Draw on everything Word has to offer!
YOU HAVE TO OWN THIS BOOK! "Software Exorcism: A Handbook for Debugging and Optimizing Legacy Code" takes an unflinching, no bulls$&# look at behavioral problems in the software engineering industry, shedding much-needed light on the social forces that make it difficult for programmers to do their job. Do you have a co-worker who perpetually writes bad code that "you" are forced to clean up? This is your book. While there are plenty of books on the market that cover debugging and short-term workarounds for bad code, Reverend Bill Blunden takes a revolutionary step beyond them by bringing our attention to the underlying illnesses that plague the software industry as a whole. Further, "Software Exorcism" discusses tools and techniques for effective and aggressive debugging, gives optimization strategies that appeal to all levels of programmers, and presents in-depth treatments of technical issues with honest assessments that are not biased toward proprietary solutions.
The five-volume set IFIP AICT 630, 631, 632, 633, and 634 constitutes the refereed proceedings of the International IFIP WG 5.7 Conference on Advances in Production Management Systems, APMS 2021, held in Nantes, France, in September 2021. The conference was held online. The 378 papers presented were carefully reviewed and selected from 529 submissions. They discuss artificial intelligence techniques, decision aid and new and renewed paradigms for sustainable and resilient production systems at four-wall factory and value chain levels. The papers are organized in the following topical sections:
Part I: artificial intelligence based optimization techniques for demand-driven manufacturing; hybrid approaches for production planning and scheduling; intelligent systems for manufacturing planning and control in the industry 4.0; learning and robust decision support systems for agile manufacturing environments; low-code and model-driven engineering for production system; meta-heuristics and optimization techniques for energy-oriented manufacturing systems; metaheuristics for production systems; modern analytics and new AI-based smart techniques for replenishment and production planning under uncertainty; system identification for manufacturing control applications; and the future of lean thinking and practice.
Part II: digital transformation of SME manufacturers: the crucial role of standard; digital transformations towards supply chain resiliency; engineering of smart-product-service-systems of the future; lean and Six Sigma in services healthcare; new trends and challenges in reconfigurable, flexible or agile production system; production management in food supply chains; and sustainability in production planning and lot-sizing.
Part III: autonomous robots in delivery logistics; digital transformation approaches in production management; finance-driven supply chain; gastronomic service system design; modern scheduling and applications in industry 4.0; recent advances in sustainable manufacturing; regular session: green production and circularity concepts; regular session: improvement models and methods for green and innovative systems; regular session: supply chain and routing management; regular session: robotics and human aspects; regular session: classification and data management methods; smart supply chain and production in society 5.0 era; and supply chain risk management under coronavirus.
Part IV: AI for resilience in global supply chain networks in the context of pandemic disruptions; blockchain in the operations and supply chain management; data-based services as key enablers for smart products, manufacturing and assembly; data-driven methods for supply chain optimization; digital twins based on systems engineering and semantic modeling; digital twins in companies first developments and future challenges; human-centered artificial intelligence in smart manufacturing for the operator 4.0; operations management in engineer-to-order manufacturing; product and asset life cycle management for smart and sustainable manufacturing systems; robotics technologies for control, smart manufacturing and logistics; serious games analytics: improving games and learning support; smart and sustainable production and supply chains; smart methods and techniques for sustainable supply chain management; the new digital lean manufacturing paradigm; and the role of emerging technologies in disaster relief operations: lessons from COVID-19.
Part V: data-driven platforms and applications in production and logistics: digital twins and AI for sustainability; regular session: new approaches for routing problem solving; regular session: improvement of design and operation of manufacturing systems; regular session: crossdock and transportation issues; regular session: maintenance improvement and lifecycle management; regular session: additive manufacturing and mass customization; regular session: frameworks and conceptual modelling for systems and services efficiency; regular session: optimization of production and transportation systems; regular session: optimization of supply chain agility and reconfigurability; regular session: advanced modelling approaches; regular session: simulation and optimization of systems performances; regular session: AI-based approaches for quality and performance improvement of production systems; and regular session: risk and performance management of supply chains.
Electronic mail and message handling is a rapidly expanding field which incorporates both telecommunications and computer technologies. It combines the old technologies of telex, telegram, and analog facsimile with the new systems of Teletex, digital facsimile, and computer-based messaging. This book answers the questions What is electronic mail? and Why is it important? The major systems, services, technology, and standardization issues are comprehensively surveyed.
OpenGL (R): A Primer is a concise presentation of fundamental OpenGL, providing readers with a succinct introduction to essential OpenGL commands as well as detailed listings of OpenGL functions and parameters. Angel uses a top-down philosophy to teach computer graphics based on the idea that students learn modern computer graphics best if they can start programming significant applications as soon as possible. The book makes it easy for students to find functions and their descriptions, and supplemental examples are included in every chapter to illustrate core concepts. This primer can be used both as a companion to a book introducing computer graphics principles and as a stand-alone guide and reference to OpenGL for programmers with a background in computer graphics.