Welcome to Loot.co.za!
This book discusses machine learning and artificial intelligence (AI) for agricultural economics. It is written with a view towards bringing the benefits of advanced analytics and prognostics capabilities to small-scale farmers worldwide. This volume provides data science and software engineering teams with the skills and tools to fully utilize economic models to develop the software capabilities necessary for creating lifesaving applications. The book introduces essential agricultural economic concepts from the perspective of full-scale software development, with an emphasis on creating niche blue-ocean products. Chapters detail several agricultural economic and AI reference architectures with a focus on data integration, algorithm development, regression, prognostics model development and mathematical optimization. Upgrading traditional AI software development paradigms to function in dynamic agricultural and economic markets, this volume will be of great use to researchers and students in agricultural economics, data science, engineering, and machine learning as well as engineers and industry professionals in the public and private sectors.
This book focuses on the combination of IoT and data science, in particular how methods, algorithms, and tools from data science can effectively support IoT. The authors show how data science methodologies, techniques and tools can translate data into information, enabling the effectiveness and usefulness of new services offered by IoT stakeholders. The authors posit that if IoT is indeed the infrastructure of the future, data science is the key that can lead to a significant improvement of human life. The book aims to present innovative IoT applications as well as ongoing research that exploit modern data science approaches. Readers are offered issues and challenges in a cross-disciplinary scenario that involves both IoT and data science fields. The book features contributions from academics, researchers, and professionals from both fields.
This book highlights the design, use and structure of blockchain systems and decentralized ledger technologies (B/DLT) for use in the construction industry. Construction remains a fragmented change-resistant industry with chronic problems of underproductivity and a very low digitization factor compared to other fields. In parallel, the convergence, embedding and coordination of digital technologies in the physical world provides a unique opportunity for the construction industry to leap ahead and adopt fourth industrial revolution technologies. Within this context, B/DLT are an excellent fit for the digitization of the construction industry. B/DLT are effective in this as they organize and align digital and physical supply chains, produce stigmergic coordination out of decentralization, enable the governance of complex projects for multiple stakeholders, while enabling the creation of a new class of business models and legal instruments for construction.
Computational geometry as an area of research in its own right emerged in the early seventies of this century. Right from the beginning, it was obvious that strong connections of various kinds exist to questions studied in the considerably older field of combinatorial geometry. For example, the combinatorial structure of a geometric problem usually decides which algorithmic method solves the problem most efficiently. Furthermore, the analysis of an algorithm often requires a great deal of combinatorial knowledge. As it turns out, however, the connection between the two research areas commonly referred to as computational geometry and combinatorial geometry is not as lop-sided as it appears. Indeed, the interest in computational issues in geometry gives a new and constructive direction to the combinatorial study of geometry. It is the intention of this book to demonstrate that computational and combinatorial investigations in geometry are doomed to profit from each other. To reach this goal, I designed this book to consist of three parts: a combinatorial part, a computational part, and one that presents applications of the results of the first two parts. The choice of the topics covered in this book was guided by my attempt to describe the most fundamental algorithms in computational geometry that have an interesting combinatorial structure. In this early stage geometric transforms played an important role, as they reveal connections between seemingly unrelated problems and thus help to structure the field.
This book introduces state-of-the-art algorithms for data and computation privacy. It mainly focuses on searchable symmetric encryption algorithms and privacy-preserving multi-party computation algorithms. The book also introduces algorithms for breaking privacy, and gives intuition on how to design algorithms that counter privacy attacks. Some well-designed differential privacy algorithms are also included. Driven by lower cost, higher reliability, better performance, and faster deployment, data and computing services are increasingly outsourced to clouds. In this computing paradigm, one often has to store privacy-sensitive data at parties that one cannot fully trust, and perform privacy-sensitive computation with parties that, again, one cannot fully trust. For both scenarios, preserving data privacy and computation privacy is extremely important. After the Facebook-Cambridge Analytica data scandal and the implementation of the General Data Protection Regulation by the European Union, users are becoming more privacy-aware and more concerned with their privacy in this digital world. This book targets database engineers, cloud computing engineers and researchers working in this field. Advanced-level students studying computer science and electrical engineering will also find this book useful as a reference or secondary text.
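As a flavour of the differential privacy algorithms mentioned above, the classic Laplace mechanism fits in a few lines. This is an illustrative sketch, not drawn from the book itself; the function name and parameters are chosen for exposition only:

```python
import random

def laplace_mechanism(true_value, sensitivity, epsilon):
    """Release true_value plus Laplace(sensitivity/epsilon) noise.

    For any query whose answer changes by at most `sensitivity` when one
    individual's record is added or removed, the noisy release satisfies
    epsilon-differential privacy.
    """
    scale = sensitivity / epsilon
    # The difference of two i.i.d. exponentials with mean `scale`
    # is a Laplace(0, scale) random variable.
    noise = random.expovariate(1 / scale) - random.expovariate(1 / scale)
    return true_value + noise

# A counting query has sensitivity 1: one person changes the count by at most 1.
noisy_count = laplace_mechanism(true_value=100, sensitivity=1.0, epsilon=1.0)
```

Smaller epsilon means stronger privacy but noisier answers; repeatedly releasing the same query erodes the guarantee, which is why practical systems track a privacy budget.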
This book introduces the technological innovations of robotic vehicles. It presents the concepts required for self-driving cars on the road. In addition, readers can gain invaluable knowledge in the construction, programming, and control of the six-legged robot. The book also presents the controllers and aerodynamics of several different types of rotorcraft. It includes the simulation and flight of various kinds of rotor-propelled air vehicles, each under its own aerodynamic environment. The book is suitable for academia, educators, students, and researchers who are interested in autonomous vehicles, robotics, and rotor-propelled vehicles.
This book is written for software product teams that use AI to add intelligent models to their products or are planning to do so. As AI adoption grows, it is becoming important that AI-driven products can demonstrate they are not introducing bias into the decisions they make, and that they reduce any pre-existing bias or discrimination. The responsibility to ensure that AI models are ethical and make responsible decisions does not lie with the data scientists alone. The product owners and the business analysts are as important in ensuring bias-free AI as the data scientists on the team. This book addresses the part that these roles play in building a fair, explainable and accountable model, along with ensuring model and data privacy. Each chapter covers the fundamentals of its topic and then goes deep into the subject matter, providing the details that enable business analysts and data scientists to implement these fundamentals. AI research is one of the most active and growing areas of computer science and statistics. This book includes an overview of the many techniques that draw from the research or are created by combining different research outputs. Some of the techniques from relevant and popular libraries are covered, but deliberately not drawn on very heavily, as they are already well documented and new research is likely to replace some of them.
Books on computation in the marketplace tend to discuss the topics within specific fields. Many computational algorithms, however, share common roots. Great advantages emerge if numerical methodologies break the boundaries and find their uses across disciplines. Interdisciplinary Computing In Java Programming Language introduces readers of different backgrounds to the beauty of the selected algorithms. Serious quantitative researchers, writing customized codes for computation, enjoy cracking source codes as opposed to the black-box approach. Most C and Fortran programs, despite being slightly faster in program execution, lack built-in support for plotting and graphical user interface. This book selects Java as the platform where source codes are developed and applications are run, helping readers/users best appreciate the fun of computation. Interdisciplinary Computing In Java Programming Language is designed to meet the needs of a professional audience composed of practitioners and researchers in science and technology. This book is also suitable for senior undergraduate and graduate-level students in computer science, as a secondary text.
This book presents high-quality research papers presented at the International Conference on Smart Computing and Cyber Security: Strategic Foresight, Security Challenges and Innovation (SMARTCYBER 2020) held during July 7-8, 2020, in the Department of Smart Computing, Kyungdong University, Global Campus, South Korea. The book includes selected works from academics and industrial experts in the field of computer science, information technology, and electronics and telecommunication. The content addresses challenges of cyber security.
This volume deals with modern effective algorithms for the numerical solution of the most frequently occurring elliptic partial differential equations. From the point of view of implementation, attention is paid to algorithms for both classical sequential and parallel computer systems. The first two chapters are devoted to fast algorithms for solving the Poisson and biharmonic equations. In the third chapter, parallel algorithms for model parallel computer systems of the SIMD and MIMD types are described. The implementation aspects of parallel algorithms for solving model elliptic boundary value problems are outlined for systems with matrix, pipeline and multiprocessor parallel computer architectures. The multigrid computational principle, a modern and popular approach that offers a good opportunity for parallel realization, is described in the next chapter. Several parallel variants based on this idea are presented, whereby methods and assignment strategies for hypercube systems are treated in more detail. The last chapter presents VLSI designs for solving special tridiagonal linear systems of equations arising from finite-difference approximations of elliptic problems. The book is intended for researchers interested in the development and application of fast algorithms for solving elliptic partial differential equations using advanced computer systems.
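To make the setting concrete: after finite-difference discretization, the Poisson problems treated in the first chapters become large sparse linear systems, and even the simplest iterative scheme (Jacobi, the smoother underlying multigrid) fits in a few lines. This sketch is illustrative only and is not taken from the book; the function name is hypothetical:

```python
def jacobi_poisson_1d(f, n, iters=5000):
    """Solve -u'' = f on (0, 1) with u(0) = u(1) = 0 by Jacobi iteration
    on the standard second-order central-difference discretization."""
    h = 1.0 / (n + 1)
    u = [0.0] * (n + 2)                       # interior points plus boundary zeros
    rhs = [f((i + 1) * h) for i in range(n)]  # f sampled at interior grid points
    for _ in range(iters):
        new = u[:]
        for i in range(1, n + 1):
            # Jacobi update: relax each point from its neighbours' old values
            new[i] = 0.5 * (u[i - 1] + u[i + 1] + h * h * rhs[i - 1])
        u = new
    return u

# -u'' = 2 has the exact solution u(x) = x(1 - x)
u = jacobi_poisson_1d(lambda x: 2.0, n=19)
```

Jacobi needs on the order of n^2 sweeps to converge on an n-point grid, which is precisely the motivation for the fast Poisson solvers and multigrid methods the book develops: multigrid reaches comparable accuracy in work proportional to the number of unknowns.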
This open access book presents selected papers from the International Symposium on Mathematics, Quantum Theory, and Cryptography (MQC), held on September 25-27, 2019 in Fukuoka, Japan. The symposium addresses the mathematics and quantum theory underlying secure modeling of post-quantum cryptography, including, for example, the mathematical study of light-matter interaction models as well as quantum computing. The security of the most widely used RSA cryptosystem is based on the difficulty of factoring large integers. However, in 1994 Shor proposed a quantum polynomial-time algorithm for factoring integers, so the RSA cryptosystem is no longer secure in the quantum computing model. This vulnerability has prompted research into post-quantum cryptography using alternative mathematical problems that remain secure in the era of quantum computers. In this regard, the National Institute of Standards and Technology (NIST) began to standardize post-quantum cryptography in 2016. This book is suitable for postgraduate students in mathematics and computer science, as well as for experts in industry working on post-quantum cryptography.
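The dependence of RSA on factoring, noted above, is easy to see with toy numbers. This is a deliberately insecure illustration with textbook-sized primes, not material from the book; real keys use moduli of 2048 bits or more:

```python
# Toy RSA: the public key is (n, e); the private exponent d can be
# computed by anyone who knows the factorization n = p * q.
p, q = 61, 53              # secret primes (illustrative textbook values)
n = p * q                  # public modulus: 3233
phi = (p - 1) * (q - 1)    # Euler's totient; requires knowing p and q
e = 17                     # public exponent, coprime to phi
d = pow(e, -1, phi)        # private exponent (modular inverse, Python 3.8+)

m = 65                     # a message with 0 <= m < n
c = pow(m, e, n)           # encrypt with the public key
assert pow(c, d, n) == m   # decrypt with the private key

# Shor's algorithm factors n in quantum polynomial time; classically,
# trial division like this is only feasible for tiny moduli.
p_found = next(k for k in range(2, n) if n % k == 0)
assert {p_found, n // p_found} == {p, q}
```

Post-quantum schemes replace the factoring trapdoor with alternative problems, such as lattice problems, believed to remain hard even for quantum computers.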
This book presents a summary of artificial intelligence and machine learning techniques in its first two chapters. The remaining chapters cover everything one must know, from basic artificial intelligence to modern machine intelligence techniques, including hybrid computational intelligence, through several real-life solved examples, project designs and research ideas. The solved examples, with more than 200 illustrations, are a great help to instructors, students, non-AI professionals, and researchers. Each example is discussed in detail with encoding, normalization, architecture, detailed design, process flow, and sample input/output. The summary of fundamental concepts combined with solved examples is a unique highlight of this book.
This book aims to provide insights into recently developed bio-inspired algorithms within the emerging trends of fog computing, sentiment analysis, and data streaming, as well as a more comprehensive approach to big data management from the pre-processing to the analytics and visualization phases. The subject area of this book is within the realm of computer science, notably algorithms (meta-heuristic and, more particularly, bio-inspired algorithms). Although application domains of these new algorithms may be mentioned, the scope of this book is not the application of algorithms to specific or general domains but an update on recent research trends for bio-inspired algorithms within a specific application domain or emerging area. These areas include data streaming, fog computing, and the phases of big data management. One of the reasons for writing this book is that the bio-inspired approach does not receive much attention, yet shows considerable promise and diversity in terms of approaching many issues in big data and streaming. Novel aspects of this book include applying these algorithms to all phases of data management (not just a particular phase, such as data mining or business intelligence, as many books do), and demonstrating the effectiveness of a selected algorithm within a chapter against comparative algorithms using the experimental method. Another novel aspect is a brief overview and evaluation of traditional algorithms, both sequential and parallel, for use in data mining, in order to provide an overview of existing algorithms in use. This overview complements a further chapter on bio-inspired algorithms for data mining, enabling readers to make a more suitable choice of algorithm for data mining within a particular context. In all chapters, references for further reading are provided, and in selected chapters the authors also include ideas for future research.
This book comprises select proceedings of the 7th International Conference on Data Science and Engineering (ICDSE 2021). The contents of this book focus on responsible data science. This book tries to integrate research across diverse topics related to data science, such as fairness, trust, ethics, confidentiality, transparency, and accuracy. The chapters in this book represent research from different perspectives that offer novel theoretical implications that span multiple disciplines. The book will serve as a reference resource for researchers and practitioners in academia and industry.
Although rigidity has been studied since the time of Lagrange (1788) and Maxwell (1864), it is only in the last twenty-five years that it has begun to find applications in the basic sciences. The modern era starts with Laman (1970), who made the subject rigorous in two dimensions, followed by the development of computer algorithms that can test over a million sites in seconds and find the rigid regions, and the associated pivots, leading to many applications. This workshop was organized to bring together leading researchers studying the underlying theory, and to explore the various areas of science where applications of these ideas are being implemented.
This book highlights essential concepts in connection with the traditional bat algorithm and its recent variants, as well as its application to find optimal solutions for a variety of real-world engineering and medical problems. Today, swarm intelligence-based meta-heuristic algorithms are extensively being used to address a wide range of real-world optimization problems due to their adaptability and robustness. Developed in 2009, the bat algorithm (BA) is one of the most successful swarm intelligence procedures, and has been used to tackle optimization tasks for more than a decade. The BA's mathematical model is quite straightforward and easy to understand and enhance, compared to other swarm approaches. Hence, it has attracted the attention of researchers who are working to find optimal solutions in a diverse range of domains, such as N-dimensional numerical optimization, constrained/unconstrained optimization and linear/nonlinear optimization problems. Along with the traditional BA, its enhanced versions are now also being used to solve optimization problems in science, engineering and medical applications around the globe.
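The BA update rules the description calls straightforward are indeed compact. The following minimal sketch follows Yang's original formulation (frequency-driven velocity updates, a loudness-gated acceptance test, and a local walk around the current best); the parameter values and the 0.1 walk step are illustrative choices, not prescribed by the book:

```python
import math
import random

def bat_algorithm(objective, dim, n_bats=20, iters=200,
                  f_min=0.0, f_max=2.0, alpha=0.9, gamma=0.9,
                  lower=-5.0, upper=5.0):
    """Minimal bat algorithm (after Yang, 2010) minimizing `objective`."""
    x = [[random.uniform(lower, upper) for _ in range(dim)]
         for _ in range(n_bats)]
    v = [[0.0] * dim for _ in range(n_bats)]
    loud = [1.0] * n_bats        # loudness A_i, decreases on success
    rate = [0.5] * n_bats        # pulse emission rate r_i, increases on success
    best = min(x, key=objective)[:]
    for t in range(1, iters + 1):
        for i in range(n_bats):
            f = f_min + (f_max - f_min) * random.random()      # frequency
            v[i] = [v[i][d] + (x[i][d] - best[d]) * f for d in range(dim)]
            cand = [min(max(x[i][d] + v[i][d], lower), upper)
                    for d in range(dim)]
            if random.random() > rate[i]:
                # local random walk around the current best solution
                avg_loud = sum(loud) / n_bats
                cand = [best[d] + 0.1 * random.uniform(-1, 1) * avg_loud
                        for d in range(dim)]
            if random.random() < loud[i] and objective(cand) <= objective(x[i]):
                x[i] = cand
                loud[i] *= alpha                            # grow quieter
                rate[i] = 0.5 * (1 - math.exp(-gamma * t))  # emit more pulses
            if objective(x[i]) < objective(best):
                best = x[i][:]
    return best

# Sphere function: global minimum 0 at the origin
best = bat_algorithm(lambda p: sum(c * c for c in p), dim=2)
```

The loudness/pulse-rate coupling is what distinguishes BA from plain particle swarm optimization: successful bats become quieter (accept fewer moves) but pulse more often (exploit the best region more).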
The theory of parsing is an important application area of the theory of formal languages and automata. The evolution of modern high-level programming languages created a need for a general and theoretically clean methodology for writing compilers for these languages. It was perceived that the compilation process had to be "syntax-directed," that is, the functioning of a programming language compiler had to be defined completely by the underlying formal syntax of the language. A program text to be compiled is "parsed" according to the syntax of the language, and the object code for the program is generated according to the semantics attached to the parsed syntactic entities. Context-free grammars were soon found to be the most convenient formalism for describing the syntax of programming languages, and accordingly methods for parsing context-free languages were developed. Practical considerations led to the definition of various kinds of restricted context-free grammars that are parsable by means of efficient deterministic linear-time algorithms.
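A deterministic linear-time parser for such a restricted grammar is short enough to show in full. This sketch (not from the book) is a predictive recursive-descent parser for an LL(1) expression grammar, with the semantics, here arithmetic evaluation, attached directly to the parsed syntactic entities:

```python
import re

def evaluate(text):
    """Parse and evaluate an arithmetic expression in one left-to-right pass.

    Grammar:  expr   -> term (('+' | '-') term)*
              term   -> factor (('*' | '/') factor)*
              factor -> NUMBER | '(' expr ')'
    """
    tokens = re.findall(r"\d+|[()+\-*/]", text)
    pos = 0

    def peek():
        return tokens[pos] if pos < len(tokens) else None

    def expr():                      # expr -> term (('+'|'-') term)*
        nonlocal pos
        value = term()
        while peek() in ("+", "-"):
            op = tokens[pos]; pos += 1
            rhs = term()
            value = value + rhs if op == "+" else value - rhs
        return value

    def term():                      # term -> factor (('*'|'/') factor)*
        nonlocal pos
        value = factor()
        while peek() in ("*", "/"):
            op = tokens[pos]; pos += 1
            rhs = factor()
            value = value * rhs if op == "*" else value / rhs
        return value

    def factor():                    # factor -> NUMBER | '(' expr ')'
        nonlocal pos
        tok = peek()
        if tok == "(":
            pos += 1                 # consume '('
            value = expr()
            if peek() != ")":
                raise SyntaxError("expected ')'")
            pos += 1                 # consume ')'
            return value
        if tok is None or not tok.isdigit():
            raise SyntaxError(f"unexpected token {tok!r}")
        pos += 1
        return int(tok)

    result = expr()
    if pos != len(tokens):
        raise SyntaxError("trailing input")
    return result
```

Because each nonterminal chooses its production from a single lookahead token, the parser never backtracks: it runs in time linear in the input length, which is exactly the property the restricted grammar classes guarantee.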
The five-volume set IFIP AICT 630, 631, 632, 633, and 634 constitutes the refereed proceedings of the International IFIP WG 5.7 Conference on Advances in Production Management Systems, APMS 2021, held in Nantes, France, in September 2021.* The 378 papers presented were carefully reviewed and selected from 529 submissions. They discuss artificial intelligence techniques, decision aid and new and renewed paradigms for sustainable and resilient production systems at four-wall factory and value chain levels. The papers are organized in the following topical sections: Part I: artificial intelligence based optimization techniques for demand-driven manufacturing; hybrid approaches for production planning and scheduling; intelligent systems for manufacturing planning and control in industry 4.0; learning and robust decision support systems for agile manufacturing environments; low-code and model-driven engineering for production systems; meta-heuristics and optimization techniques for energy-oriented manufacturing systems; metaheuristics for production systems; modern analytics and new AI-based smart techniques for replenishment and production planning under uncertainty; system identification for manufacturing control applications; and the future of lean thinking and practice. Part II: digital transformation of SME manufacturers: the crucial role of standards; digital transformations towards supply chain resiliency; engineering of smart-product-service-systems of the future; lean and Six Sigma in services and healthcare; new trends and challenges in reconfigurable, flexible or agile production systems; production management in food supply chains; and sustainability in production planning and lot-sizing. Part III: autonomous robots in delivery logistics; digital transformation approaches in production management; finance-driven supply chain; gastronomic service system design; modern scheduling and applications in industry 4.0; recent advances in sustainable manufacturing; regular session: green production and circularity concepts; regular session: improvement models and methods for green and innovative systems; regular session: supply chain and routing management; regular session: robotics and human aspects; regular session: classification and data management methods; smart supply chain and production in the society 5.0 era; and supply chain risk management under coronavirus. Part IV: AI for resilience in global supply chain networks in the context of pandemic disruptions; blockchain in operations and supply chain management; data-based services as key enablers for smart products, manufacturing and assembly; data-driven methods for supply chain optimization; digital twins based on systems engineering and semantic modeling; digital twins in companies: first developments and future challenges; human-centered artificial intelligence in smart manufacturing for the operator 4.0; operations management in engineer-to-order manufacturing; product and asset life cycle management for smart and sustainable manufacturing systems; robotics technologies for control, smart manufacturing and logistics; serious games analytics: improving games and learning support; smart and sustainable production and supply chains; smart methods and techniques for sustainable supply chain management; the new digital lean manufacturing paradigm; and the role of emerging technologies in disaster relief operations: lessons from COVID-19. Part V: data-driven platforms and applications in production and logistics: digital twins and AI for sustainability; regular session: new approaches for routing problem solving; regular session: improvement of design and operation of manufacturing systems; regular session: crossdock and transportation issues; regular session: maintenance improvement and lifecycle management; regular session: additive manufacturing and mass customization; regular session: frameworks and conceptual modelling for systems and services efficiency; regular session: optimization of production and transportation systems; regular session: optimization of supply chain agility and reconfigurability; regular session: advanced modelling approaches; regular session: simulation and optimization of systems performances; regular session: AI-based approaches for quality and performance improvement of production systems; and regular session: risk and performance management of supply chains. *The conference was held online.
This book reviews research developments in diverse areas of reinforcement learning such as model-free actor-critic methods, model-based learning and control, information geometry of policy searches, reward design, and exploration in biology and the behavioral sciences. Special emphasis is placed on advanced ideas, algorithms, methods, and applications. The contributed papers gathered here grew out of a lecture course on reinforcement learning held by Prof. Jan Peters in the winter semester 2018/2019 at Technische Universität Darmstadt. The book is intended for reinforcement learning students and researchers with a firm grasp of linear algebra, statistics, and optimization. Nevertheless, all key concepts are introduced in each chapter, making the content self-contained and accessible to a broader audience.
This book presents constructive approximation theory: ordinary and fractional approximations by positive sublinear operators, and high-order approximation by multivariate generalized Picard, Gauss-Weierstrass, Poisson-Cauchy and trigonometric singular integrals. Constructive and computational fractional analysis has recently moved more and more to the center of mathematics because of its great applications in the real world. Everything presented in this book is original work by the author, given at a very general level to cover a maximum number of cases in various applications. The author applies generalized fractional differentiation techniques of the Riemann-Liouville, Caputo and Canavati types, and of fractional variable order, to various kinds of inequalities, such as those of Opial, Hardy and Hilbert-Pachpatte, and on the spherical shell. He continues with E. R. Love left- and right-side fractional integral inequalities. These are followed by fractional Landau inequalities, of left and right sides, univariate and multivariate, including ones for semigroups. These are developed in all possible directions, and right-side multivariate fractional Taylor formulae are proven for the purpose. The book continues with several Gronwall fractional inequalities of variable order. The results are expected to find applications in many areas of pure and applied mathematics. As such, this book is suitable for researchers, graduate students and seminars in the above disciplines, and belongs in all science and engineering libraries.
This book discusses the interplay between statistics, data science, machine learning and artificial intelligence, with a focus on environmental science, the natural sciences, and technology. It covers the state of the art from both a theoretical and a practical viewpoint and describes how to successfully apply machine learning methods, demonstrating the benefits of statistics for modeling and analyzing high-dimensional and big data. The book's expert contributions include theoretical studies of machine learning methods, expositions of general methodologies for sound statistical analyses of data as well as novel approaches to modeling and analyzing data for specific problems and areas. In terms of applications, the contributions deal with data as arising in industrial quality control, autonomous driving, transportation and traffic, chip manufacturing, photovoltaics, football, transmission of infectious diseases, Covid-19 and public health. The book will appeal to statisticians and data scientists, as well as engineers and computer scientists working in related fields or applications.
The burgeoning field of data analysis is expanding at an incredible pace due to the proliferation of data collection in almost every area of science. The enormous data sets now routinely encountered in the sciences provide an incentive to develop mathematical techniques and computational algorithms that help synthesize, interpret and give meaning to the data in the context of its scientific setting. A specific aim of this book is to integrate standard scientific computing methods with data analysis. By doing so, it brings together, in a self-consistent fashion, the key ideas from statistics, time-frequency analysis, and low-dimensional reductions. The blend of these ideas provides meaningful insight into the data sets one is faced with in every scientific subject today, including those generated from complex dynamical systems. This is a particularly exciting field, and much of the final part of the book is driven by intuitive examples from it, showing how the three areas can be used in combination to give critical insight into the fundamental workings of various problems. Data-Driven Modeling and Scientific Computation is a survey of practical numerical solution techniques for ordinary and partial differential equations as well as algorithms for data manipulation and analysis. Emphasis is on the implementation of numerical schemes to practical problems in the engineering, biological and physical sciences. An accessible introductory-to-advanced text, this book fully integrates MATLAB and its versatile and high-level programming functionality, while bringing together computational and data skills for both undergraduate and graduate students in scientific computing.
Representation learning in heterogeneous graphs (HG) is intended to provide a meaningful vector representation for each node so as to facilitate downstream applications such as link prediction, personalized recommendation, node classification, etc. This task, however, is challenging not only because of the need to incorporate heterogeneous structural (graph) information consisting of multiple types of node and edge, but also the need to consider heterogeneous attributes or types of content (e.g. text or image) associated with each node. Although considerable advances have been made in homogeneous (and heterogeneous) graph embedding, attributed graph embedding and graph neural networks, few are capable of simultaneously and effectively taking into account heterogeneous structural (graph) information as well as the heterogeneous content information of each node. In this book, we provide a comprehensive survey of current developments in HG representation learning. More importantly, we present the state-of-the-art in this field, including theoretical models and real applications that have been showcased at the top conferences and journals, such as TKDE, KDD, WWW, IJCAI and AAAI. The book has two major objectives: (1) to provide researchers with an understanding of the fundamental issues and a good point of departure for working in this rapidly expanding field, and (2) to present the latest research on applying heterogeneous graphs to model real systems and learning structural features of interaction systems. To the best of our knowledge, it is the first book to summarize the latest developments and present cutting-edge research on heterogeneous graph representation learning. To gain the most from it, readers should have a basic grasp of computer science, data mining and machine learning.
This book chronicles a 10-year introduction of blended learning into the delivery at a leading technological university with a longstanding tradition of technology-enabled teaching and learning and state-of-the-art infrastructure. Hence, both teachers and students were familiar with the idea of online courses. Despite this, the longitudinal experiment did not proceed as expected. Though there were few technical problems, it required behavioural changes from teachers and learners, thus unearthing a host of socio-technical issues, challenges, and conundrums. With the undercurrent of design ideals such as "tech for good", any industrial sector must examine whether digital platforms are credible substitutes or at best complementary. In this era of Industry 4.0, higher education, like any other industry, should not be about the creative destruction of what we value in universities, but about their digital transformation. The book concludes with an agenda for large, repeatable Randomised Controlled Trials (RCTs) to validate digital platforms that could fulfil the aspirations of the key stakeholder groups - students, faculty, and regulators - as well as delving into the role of Massive Open Online Courses (MOOCs) as surrogates for "fees-free" higher education, and whether the design of such a HiEd 4.0 platform is even a credible proposition. Specifically, the book examines the data-driven evidence within a design-based research methodology to present the outcomes of two alternative instructional designs evaluated: traditional lecturing and blended learning. Based on the research findings and statistical analysis, it concludes that the inexorable shift to online delivery of education must be guided by informed educational management and innovation.
RDF-based knowledge graphs require additional formalisms to be fully context-aware, which is presented in this book. This book also provides a collection of provenance techniques and state-of-the-art metadata-enhanced, provenance-aware, knowledge graph-based representations across multiple application domains, in order to demonstrate how to combine graph-based data models and provenance representations. This is important to make statements authoritative, verifiable, and reproducible, such as in biomedical, pharmaceutical, and cybersecurity applications, where the data source and generator can be just as important as the data itself. Capturing provenance is critical to ensure sound experimental results and rigorously designed research studies for patient and drug safety, pathology reports, and medical evidence generation. Similarly, provenance is needed for cyberthreat intelligence dashboards and attack maps that aggregate and/or fuse heterogeneous data from disparate data sources to differentiate between unimportant online events and dangerous cyberattacks, which is demonstrated in this book. Without provenance, data reliability and trustworthiness might be limited, causing data reuse, trust, reproducibility and accountability issues. This book primarily targets researchers who utilize knowledge graphs in their methods and approaches (this includes researchers from a variety of domains, such as cybersecurity, eHealth, data science, Semantic Web, etc.). This book collects core facts for the state of the art in provenance approaches and techniques, complemented by a critical review of existing approaches. New research directions are also provided that combine data science and knowledge graphs, for an increasingly important research topic.