A key focus in recent years has been on sustainable development and promoting environmentally conscious practices. In today's rapidly evolving technological world, it is important to consider how technology can be applied to solve problems in these areas across disciplines and fields. Further study is needed to understand how technology can be applied to sustainability, along with the best practices, considerations, and challenges that follow. Futuristic Trends for Sustainable Development and Sustainable Ecosystems discusses recent advances and innovative research in the area of information and communication technology for sustainable development, and covers practices in several artificial intelligence fields such as knowledge representation and reasoning, natural language processing, machine learning, and the semantic web. Covering topics such as blockchain, deep learning, and renewable energy, this reference work is ideal for computer scientists, industry professionals, researchers, academicians, scholars, instructors, and students.
The book 'National Cyber Olympiad' is divided into five sections, namely Computer and IT, Logical Reasoning, Achievers Section, Subjective Section, and Model Papers. In every chapter, the theory is explained through solved examples, illustrations, and diagrams wherever required. To enhance candidates' problem-solving skills, Multiple Choice Questions (MCQs) with detailed solutions are provided at the end of each chapter. The questions in the Achievers' Section are set to evaluate the computer skills of brilliant students, while the Subjective Section includes questions of a descriptive nature. Two Model Papers have been included for practice purposes. A CD containing a Study Chart for systematic preparation, Tips & Tricks to crack the Cyber Olympiad, the pattern of the exam, and links to previous years' papers accompanies this book.
The discovery and development of new computational methods have expanded the capabilities and uses of simulations. With agent-based models, the applications of computer simulations are significantly enhanced. Multi-Agent-Based Simulations Applied to Biological and Environmental Systems is a pivotal reference source for the latest research on the implementation of autonomous agents in computer simulation paradigms. Featuring extensive coverage on relevant applications, such as biodiversity conservation, pollution reduction, and environmental risk assessment, this publication is an ideal source for researchers, academics, engineers, practitioners, and professionals seeking material on various issues surrounding the use of agent-based simulations.
Developments in Technologies for Human-Centric Mobile Computing and Applications is a comprehensive collection of knowledge and practice in the development of human-centric mobile technology. This book focuses on the developmental aspects of mobile technology, bringing together researchers, educators, and practitioners to encourage readers to think outside the box.
Multimedia technologies are becoming more sophisticated, enabling the Internet to accommodate a rapidly growing audience with a full range of services and efficient delivery methods. Although the Internet now puts communication, education, commerce, and socialization at our fingertips, its rapid growth has raised some weighty security concerns with respect to multimedia content. The owners of this content face enormous challenges in safeguarding their intellectual property, while still exploiting the Internet as an important resource for commerce.
Take an active role in managing technology! From new business models to new types of business, information technology has become a key driver of business and an essential component of corporate strategy. But simply acquiring technology is not enough; organizations must manage IT effectively to gain a competitive advantage. Henry Lucas's Information Technology: Strategic Decision Making for Managers focuses on the key knowledge and skills you need to take an active role in managing technology and obtain the maximum benefit from investing in IT. Offering streamlined, up-to-date coverage, the text is ideally suited for MBA students or anyone who wants to learn more about how to gain a competitive advantage by successfully managing IT. Features:
- Focuses on managerial issues: explores the many real technology issues confronting today's managers, such as what to do with legacy systems, when to outsource, and how to choose a source of processing and services.
- Shows how to evaluate IT investments: two full chapters cover the value of information technology and how to evaluate IT project proposals using both net present value and real options approaches.
- Balances technical and managerial coverage: this balance helps you understand how diverse companies have developed their IT architectures and environments.
- Explains the various applications of technology: concrete examples illustrate major IT applications, such as e-commerce, ERP, CRM, decision and intelligent systems, and knowledge management.
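The net-present-value approach mentioned in the features can be illustrated with a short calculation (a generic sketch with hypothetical cash-flow figures, not an example taken from the text):

```python
def npv(rate, cash_flows):
    """Net present value of a series of cash flows.

    cash_flows[0] is the up-front cost (negative); subsequent
    entries are yearly returns, each discounted by (1 + rate)**t.
    A positive NPV suggests the investment creates value.
    """
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

# Hypothetical IT project: 1,000,000 up front, 400,000/year for 4 years.
project = [-1_000_000, 400_000, 400_000, 400_000, 400_000]
print(round(npv(0.10, project), 2))  # 267946.18 at a 10% discount rate
```

Real options analysis, the book's other approach, extends this by valuing the flexibility to defer, expand, or abandon the project.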
The main purpose of this book is not only to present recent studies and advances in the field of social science research, but also to stimulate discussion on related practical issues concerning statistics, mathematics, and economics. Accordingly, a broad range of tools and techniques that can be used to solve problems on these topics are presented in detail in this book, which offers an ideal reference work for all researchers interested in effective quantitative and qualitative tools. The content is divided into three major sections. The first, which is titled "Social work", collects papers on problems related to the social sciences, e.g. social cohesion, health, and digital technologies. Papers in the second part, "Education and teaching issues," address qualitative aspects, education, learning, violence, diversity, disability, and ageing, while the book's final part, "Recent trends in qualitative and quantitative models for socio-economic systems and social work", features contributions on both qualitative and quantitative issues. The book is based on a scientific collaboration, in the social sciences, mathematics, statistics, and economics, among experts from the "Pablo de Olavide" University of Seville (Spain), the "University of Defence" of Brno (Czech Republic), the "G. D'Annunzio" University of Chieti-Pescara (Italy) and "Alexandru Ioan Cuza University" of Iasi (Romania). The contributions, which have been selected using a peer-review process, examine a wide variety of topics related to the social sciences in general, while also highlighting new and intriguing empirical research conducted in various countries. Given its scope, the book will appeal, in equal measure, to sociologists, mathematicians, statisticians and philosophers, and more generally to scholars and specialists in related fields.
This book addresses the experimental calibration of best-estimate numerical simulation models. The results of measurements and computations are never exact. Therefore, knowing only the nominal values of experimentally measured or computed quantities is insufficient for applications, particularly since the respective experimental and computed nominal values seldom coincide. In the author's view, the objective of predictive modeling is to extract "best estimate" values for model parameters and predicted results, together with "best estimate" uncertainties for these parameters and results. To achieve this goal, predictive modeling combines imprecisely known experimental and computational data, which calls for reasoning on the basis of incomplete, error-rich, and occasionally discrepant information. The customary methods used for data assimilation combine experimental and computational information by minimizing an a priori, user-chosen, "cost functional" (usually a quadratic functional that represents the weighted errors between measured and computed responses). In contrast to these user-influenced methods, the BERRU (Best Estimate Results with Reduced Uncertainties) Predictive Modeling methodology developed by the author relies on the thermodynamics-based maximum entropy principle to eliminate the need for relying on minimizing user-chosen functionals, thus generalizing the "data adjustment" and/or the "4D-VAR" data assimilation procedures used in the geophysical sciences. The BERRU predictive modeling methodology also provides a "model validation metric" which quantifies the consistency (agreement/disagreement) between measurements and computations. This "model validation metric" (or "consistency indicator") is constructed from parameter covariance matrices, response covariance matrices (measured and computed), and response sensitivities to model parameters. 
Traditional methods for computing response sensitivities are hampered by the "curse of dimensionality," which makes them impractical for applications to large-scale systems that involve many imprecisely known parameters. Reducing the computational effort required for precisely calculating the response sensitivities is paramount, and the comprehensive adjoint sensitivity analysis methodology developed by the author shows great promise in this regard, as shown in this book. After discarding inconsistent data (if any) using the consistency indicator, the BERRU predictive modeling methodology provides best-estimate values for predicted parameters and responses along with best-estimate reduced uncertainties (i.e., smaller predicted standard deviations) for the predicted quantities. Applying the BERRU methodology yields optimal, experimentally validated, "best estimate" predictive modeling tools for designing new technologies and facilities, while also improving on existing ones.
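The user-chosen quadratic cost functional that BERRU dispenses with typically takes the following generic form (notation chosen here for illustration; it is not reproduced from the book):

```latex
Q(\boldsymbol{\alpha}) =
  \left[\mathbf{r}^{m} - \mathbf{r}^{c}(\boldsymbol{\alpha})\right]^{\top}
  \mathbf{C}_{r}^{-1}
  \left[\mathbf{r}^{m} - \mathbf{r}^{c}(\boldsymbol{\alpha})\right]
```

where \(\mathbf{r}^{m}\) denotes the measured responses, \(\mathbf{r}^{c}(\boldsymbol{\alpha})\) the responses computed from model parameters \(\boldsymbol{\alpha}\), and \(\mathbf{C}_{r}\) a covariance matrix weighting the response errors. Minimizing such a functional is the "data adjustment"/4D-VAR route; BERRU replaces this user choice with the maximum entropy principle.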
The Complete "Tool Kit" for the Hottest Area in RF/Wireless Design!
In today's society, the professional development of teachers is urgent due to constant change in working conditions and the impact that information and communication technologies have on teaching practices. "Online Learning Communities and Teacher Professional Development: Methods for Improved Education Delivery" features innovative applications and solutions useful for teachers in developing knowledge and skills for the integration of technology into everyday teaching practices. This defining collection of field research discusses how technology itself can serve as an important resource in terms of providing arenas for professional development.
This book presents recently developed computational approaches for the study of reactive materials under extreme physical and thermodynamic conditions. It delves into cutting-edge developments in simulation methods for reactive materials, ranging from quantum calculations spanning nanometer length scales and picosecond timescales to reactive force fields, coarse-grained approaches, and machine learning methods spanning microns and nanoseconds and beyond. These methods are discussed in the context of a broad range of fields, including prebiotic chemistry in impacting comets, studies of planetary interiors, high-pressure synthesis of new compounds, and detonations of energetic materials. The book takes a pedagogical approach to these state-of-the-art methods, compiled into a single source for the first time. Ultimately, the volume aims to make valuable research tools accessible to experimentalists and theoreticians alike for any number of scientific efforts, spanning many different types of compounds and reactive conditions.
As today's world continues to advance, Artificial Intelligence (AI) has become a staple of technological development and has led to the advancement of numerous professional industries. An application within AI that has gained attention is machine learning. Machine learning uses statistical techniques and algorithms to give computer systems the ability to learn from data, and its popularity has spread across many industries. Understanding this technology and its countless implementations is pivotal for scientists and researchers across the world. The Handbook of Research on Emerging Trends and Applications of Machine Learning provides a high-level understanding of various machine learning algorithms along with modern tools and techniques using Artificial Intelligence. In addition, this book explores the critical role that machine learning plays in a variety of professional fields including healthcare, business, and computer science. Highlighting topics including image processing, predictive analytics, and smart grid management, this book is ideally designed for developers, data scientists, business analysts, information architects, finance agents, healthcare professionals, researchers, retail traders, professors, and graduate students seeking current research on the benefits, implementations, and trends of machine learning.
This handbook is organized into three major parts. The first part deals with multimedia security for emerging applications. Its chapters cover basic concepts of multimedia tools and applications, biological and behavioral biometrics, effective multimedia encryption and secure watermarking techniques for emerging applications, an adaptive face identification approach for Android mobile devices, and multimedia security using chaotic and perceptual hashing functions. The second part focuses on multimedia processing for various potential applications. Its chapters include a detailed survey of image-processing-based automated glaucoma detection techniques and the role of de-noising, a recent study of dictionary-learning-based image reconstruction techniques for analyzing big medical data, a brief introduction to quantum image processing and its applications, a segmentation-less efficient Alzheimer's detection approach, object recognition, image enhancement and de-noising techniques for emerging applications, improved performance of an image compression approach, and automated detection of eye-related diseases using digital image processing. The third part introduces multimedia applications. Its chapters include an extensive survey of the role of multimedia in medicine and multimedia forensics classification, a finger-based authentication system for e-health security, and an analysis of recently developed deep learning techniques for emotion and activity recognition. Further, the book introduces a case study on changes in the ECG over time for user identification, and details the roles of multimedia in big data, cloud computing, the Internet of Things (IoT), and blockchain environments for real-life applications. This handbook targets researchers, policy makers, programmers, and industry professionals creating new knowledge for developing efficient techniques/frameworks for multimedia applications.
Advanced-level students of computer science, specifically security and multimedia, will find this book useful as a reference.
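The perceptual hashing functions mentioned in the first part can be illustrated with a minimal "average hash" sketch (purely illustrative; this is not an algorithm taken from the handbook):

```python
def average_hash(pixels):
    """Minimal perceptual 'average hash' of a small grayscale image.

    pixels: 2-D list of grayscale values (e.g. an 8x8 thumbnail).
    Each bit records whether a pixel is brighter than the image's
    mean, so small changes such as compression or mild noise leave
    most bits intact, unlike a cryptographic hash.
    """
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return ''.join('1' if p > mean else '0' for p in flat)

def hamming(a, b):
    """Number of differing bits; a small distance means perceptually similar."""
    return sum(x != y for x, y in zip(a, b))

img = [[10, 200], [200, 10]]    # toy 2x2 "image"
noisy = [[12, 198], [201, 9]]   # the same image with slight noise
print(hamming(average_hash(img), average_hash(noisy)))  # 0: hashes still match
```

Content owners can use such robust hashes to recognize their multimedia even after benign transformations.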
The past few years have seen a major change in computing systems, as growing data volumes and stalling processor speeds require more and more applications to scale out to clusters. Today, a myriad data sources, from the Internet to business operations to scientific instruments, produce large and valuable data streams. However, the processing capabilities of single machines have not kept up with the size of data. As a result, organizations increasingly need to scale out their computations over clusters. At the same time, the speed and sophistication required of data processing have grown. In addition to simple queries, complex algorithms like machine learning and graph analysis are becoming common. And in addition to batch processing, streaming analysis of real-time data is required to let organizations take timely action. Future computing platforms will need to not only scale out traditional workloads, but support these new applications too. This book, a revised version of the 2014 ACM Dissertation Award winning dissertation, proposes an architecture for cluster computing systems that can tackle emerging data processing workloads at scale. Whereas early cluster computing systems, like MapReduce, handled batch processing, our architecture also enables streaming and interactive queries, while keeping MapReduce's scalability and fault tolerance. And whereas most deployed systems only support simple one-pass computations (e.g., SQL queries), ours also extends to the multi-pass algorithms required for complex analytics like machine learning. Finally, unlike the specialized systems proposed for some of these workloads, our architecture allows these computations to be combined, enabling rich new applications that intermix, for example, streaming and batch processing. We achieve these results through a simple extension to MapReduce that adds primitives for data sharing, called Resilient Distributed Datasets (RDDs). 
We show that this is enough to capture a wide range of workloads. We implement RDDs in the open source Spark system, which we evaluate using synthetic and real workloads. Spark matches or exceeds the performance of specialized systems in many domains, while offering stronger fault tolerance properties and allowing these workloads to be combined. Finally, we examine the generality of RDDs from both a theoretical modeling perspective and a systems perspective. This version of the dissertation makes corrections throughout the text and adds a new section on the evolution of Apache Spark in industry since 2014. In addition, editing, formatting, and links for the references have been added.
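The RDD idea described above, remembering lineage rather than replicating data, can be sketched in miniature (a toy single-machine illustration, not Apache Spark's actual API):

```python
class ToyRDD:
    """Toy sketch of a Resilient Distributed Dataset (illustrative only;
    real RDDs in Apache Spark are partitioned across a cluster).

    Instead of checkpointing data, each RDD remembers its lineage: the
    parent dataset and the transformation that produced it. A lost
    partition can then be recomputed from its parent on demand.
    """
    def __init__(self, source, fn=lambda x: [x]):
        self.source = source  # parent ToyRDD, or the raw input list
        self.fn = fn          # transformation applied per element

    def map(self, f):
        return ToyRDD(self, lambda x: [f(x)])

    def filter(self, pred):
        return ToyRDD(self, lambda x: [x] if pred(x) else [])

    def collect(self):
        # Recompute from lineage: walk back to the raw data,
        # then replay each transformation in order.
        data = (self.source.collect()
                if isinstance(self.source, ToyRDD) else self.source)
        return [y for x in data for y in self.fn(x)]

nums = ToyRDD([1, 2, 3, 4, 5])
result = nums.map(lambda x: x * x).filter(lambda x: x % 2 == 1).collect()
print(result)  # [1, 9, 25]
```

Because every dataset carries its recipe, fault recovery is just recomputation, which is the key to RDDs' fault tolerance without costly replication.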
This book provides a survey of research, development, and trends in innovative computing in communications engineering and computer science. It features selected and expanded papers from the EAI International Conference on Computer Science and Engineering 2018 (COMPSE 2018), with contributions by top global researchers and practitioners in the field. The content is of relevance to computer science graduates, researchers, and academicians in computer science and engineering. The authors discuss new technologies in computer science and engineering that have reduced gaps in data coverage across domains worldwide. They discuss how these advances have also contributed to stronger prediction, analysis, and decision-making in areas such as technology, management, social computing, green computing, and telecom. Contributions show how nurturing research in technology and computing is essential to finding the right pattern in the ocean of data. Focuses on research areas of innovative computing and its application in engineering and technology; includes contributions from researchers in computing and engineering from around the world; features selected and expanded papers from the EAI International Conference on Computer Science and Engineering 2018 (COMPSE 2018).
This book provides a comprehensive guide to the state-of-the-art in cardiovascular computing and highlights novel directions and challenges in this constantly evolving multidisciplinary field. The topics covered span a wide range of methods and clinical applications of cardiovascular computing, including advanced technologies for the acquisition and analysis of signals and images, cardiovascular informatics, and mathematical and computational modeling.
Complexes of physically interacting proteins constitute fundamental functional units that drive almost all biological processes within cells. A faithful reconstruction of the entire set of protein complexes (the "complexosome") is therefore important not only to understand the composition of complexes but also the higher level functional organization within cells. Advances over the last several years, particularly through the use of high-throughput proteomics techniques, have made it possible to map substantial fractions of protein interactions (the "interactomes") from model organisms including Arabidopsis thaliana (a flowering plant), Caenorhabditis elegans (a nematode), Drosophila melanogaster (fruit fly), and Saccharomyces cerevisiae (budding yeast). These interaction datasets have enabled systematic inquiry into the identification and study of protein complexes from organisms. Computational methods have played a significant role in this context, by contributing accurate, efficient, and exhaustive ways to analyze the enormous amounts of data. These methods have helped to compensate for some of the limitations in experimental datasets including the presence of biological and technical noise and the relative paucity of credible interactions. In this book, we systematically walk through computational methods devised to date (approximately between 2000 and 2016) for identifying protein complexes from the network of protein interactions (the protein-protein interaction (PPI) network). We present a detailed taxonomy of these methods, and comprehensively evaluate them for protein complex identification across a variety of scenarios including the absence of many true interactions and the presence of false-positive interactions (noise) in PPI networks. 
Based on this evaluation, we highlight challenges faced by the methods, for instance in identifying sparse complexes, sub-complexes, and small complexes and in discerning overlapping complexes, and reveal how a combination of strategies is necessary to accurately reconstruct the entire complexosome.
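As a concrete illustration of the kind of computational method the book surveys, a naive density-based search over a toy PPI network can be sketched as follows (the function, thresholds, and network are hypothetical, and real methods such as those in the book's taxonomy use far more scalable heuristics):

```python
from itertools import combinations

def dense_subgraphs(edges, min_size=3, min_density=1.0):
    """Naive sketch of protein complex prediction on a toy PPI network.

    Enumerates candidate protein sets and keeps those whose internal
    edge density meets the threshold (density 1.0 means a clique),
    then discards candidates contained in a larger one. Exponential
    in network size, so usable only on tiny illustrative graphs.
    """
    adj = {}
    for a, b in edges:
        adj.setdefault(a, set()).add(b)
        adj.setdefault(b, set()).add(a)
    proteins = sorted(adj)
    found = []
    for size in range(min_size, len(proteins) + 1):
        for group in combinations(proteins, size):
            possible = size * (size - 1) / 2
            present = sum(1 for a, b in combinations(group, 2) if b in adj[a])
            if present / possible >= min_density:
                found.append(set(group))
    # Keep only maximal candidates (not strictly contained in another).
    return [g for g in found if not any(g < h for h in found)]

# Toy interactome: one triangle complex plus a loosely attached protein.
ppi = [("A", "B"), ("B", "C"), ("A", "C"), ("C", "D")]
print([sorted(g) for g in dense_subgraphs(ppi)])  # [['A', 'B', 'C']]
```

Lowering min_density mimics how practical methods tolerate missing (false-negative) interactions in noisy PPI data.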
This book introduces the applications of deep learning in various human-centric visual analysis tasks, including classical ones like face detection and alignment and newly rising tasks like fashion clothing parsing. Starting from an overview of current research in human-centric visual analysis, the book then presents a tutorial on the basic concepts and techniques of deep learning. In addition, the book systematically investigates the main human-centric analysis tasks at different levels, ranging from detection and segmentation to parsing and higher-level understanding. Finally, it presents state-of-the-art deep-learning-based solutions for every task, along with sufficient references and extensive discussions. Specifically, this book addresses four important research topics: 1) localizing persons in images, such as face and pedestrian detection; 2) parsing persons in detail, such as human pose and clothing parsing; 3) identifying and verifying persons, such as face and human identification; and 4) high-level human-centric tasks, such as person attributes and human activity understanding. This book can serve as reading material and a reference text for academic professors and students or industrial engineers working in the fields of vision surveillance, biometrics, and human-computer interaction, where human-centric visual analysis is indispensable in analyzing human identity, pose, attributes, and behavior.
This unique, new book covers the whole field of electronic warfare modeling and simulation at a systems level, including chapters that describe basic electronic warfare (EW) concepts. Written by a well-known expert in the field with more than 24 years of experience, the book explores EW applications and techniques and the radio frequency spectrum. A detailed resource for entry-level engineering personnel in EW, military personnel with no radio or communications engineering background, technicians, and software professionals, the work explains the basic concepts required for modeling and simulation that today's professionals need to understand. Practitioners find clear explanations of important mathematical concepts, such as decibel notation and spherical trigonometry, necessary for modeling and simulation. Moreover, the book describes specific types of EW equipment, how they work, and how each is mathematically modeled.
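Decibel notation, one of the mathematical concepts the book explains, can be sketched in a couple of lines (an illustrative aside, not material reproduced from the book):

```python
import math

def db(power_ratio):
    """Power ratio expressed in decibels: dB = 10 * log10(P_out / P_in)."""
    return 10 * math.log10(power_ratio)

def from_db(decibels):
    """Inverse conversion: recover the power ratio from a decibel value."""
    return 10 ** (decibels / 10)

print(db(100))               # 20.0 -> a 100x power gain is 20 dB
print(round(from_db(3), 2))  # 2.0  -> "+3 dB" is roughly double the power
```

Working in decibels turns the long chains of gains and losses in an EW link budget into simple additions.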