Java Deep Learning Cookbook - Train neural networks for classification, NLP, and reinforcement learning using Deeplearning4j (Paperback)
Rahul Raj
R1,175 Discovery Miles 11 750 Ships in 10 - 15 working days

Use Java and Deeplearning4j to build robust, scalable, and highly accurate AI models from scratch.

Key Features
  • Install and configure Deeplearning4j to implement deep learning models from scratch
  • Explore recipes for developing, training, and fine-tuning your neural network models in Java
  • Model neural networks using datasets containing images, text, and time-series data

Book Description
Java is one of the most widely used programming languages in the world. With this book, you will see how to perform deep learning using Deeplearning4j (DL4J) - the most popular Java library for training neural networks efficiently. This book starts by showing you how to install and configure Java and DL4J on your system. You will then gain insights into deep learning basics and use your knowledge to create a deep neural network for binary classification from scratch. As you progress, you will discover how to build a convolutional neural network (CNN) in DL4J, and understand how to construct numeric vectors from text. This deep learning book will also guide you through performing anomaly detection on unsupervised data and help you set up neural networks in distributed systems effectively. In addition to this, you will learn how to import models from Keras and change the configuration in a pre-trained DL4J model. Finally, you will explore benchmarking in DL4J and optimize neural networks for optimal results. By the end of this book, you will have a clear understanding of how you can use DL4J to build robust deep learning applications in Java.

What you will learn
  • Perform data normalization and wrangling using DL4J
  • Build deep neural networks using DL4J
  • Implement CNNs to solve image classification problems
  • Train autoencoders to solve anomaly detection problems using DL4J
  • Perform benchmarking and optimization to improve your model's performance
  • Implement reinforcement learning for real-world use cases using RL4J
  • Leverage the capabilities of DL4J in distributed systems

Who this book is for
If you are a data scientist, machine learning developer, or a deep learning enthusiast who wants to implement deep learning models in Java, this book is for you. Basic understanding of Java programming as well as some experience with machine learning and neural networks is required to get the most out of this book.

Advanced Deep Learning with R - Become an expert at designing, building, and improving advanced neural network models using R (Paperback)
Bharatendra Rai
R1,310 Discovery Miles 13 100 Ships in 10 - 15 working days

Discover best practices for choosing, building, training, and improving deep learning models using the Keras-R and TensorFlow-R libraries.

Key Features
  • Implement deep learning algorithms to build AI models with the help of tips and tricks
  • Understand how deep learning models operate using expert techniques
  • Apply reinforcement learning, computer vision, GANs, and NLP using a range of datasets

Book Description
Deep learning is a branch of machine learning based on a set of algorithms that attempt to model high-level abstractions in data. Advanced Deep Learning with R will help you understand popular deep learning architectures and their variants in R, along with providing real-life examples for them. This deep learning book starts by covering the essential deep learning techniques and concepts for prediction and classification. You will learn about neural networks, deep learning architectures, and the fundamentals for implementing deep learning with R. The book will also take you through using important deep learning libraries such as Keras-R and TensorFlow-R to implement deep learning algorithms within applications. You will get up to speed with artificial neural networks, recurrent neural networks, convolutional neural networks, long short-term memory networks, and more using advanced examples. Later, you'll discover how to apply generative adversarial networks (GANs) to generate new images; autoencoder neural networks for image dimension reduction, de-noising, and correction; and transfer learning to prepare, define, train, and model a deep neural network. By the end of this book, you will be ready to implement your knowledge and newly acquired skills for applying deep learning algorithms in R through real-world examples.

What you will learn
  • Learn how to create binary and multi-class deep neural network models
  • Implement GANs for generating new images
  • Create autoencoder neural networks for image dimension reduction, image de-noising and image correction
  • Implement deep neural networks for performing efficient text classification
  • Learn to define a recurrent convolutional network model for classification in Keras
  • Explore best practices and tips for performance optimization of various deep learning models

Who this book is for
This book is for data scientists, machine learning practitioners, deep learning researchers and AI enthusiasts who want to develop their skills and knowledge to implement deep learning techniques and algorithms using the power of R. A solid understanding of machine learning and working knowledge of the R programming language are required.

Generative Adversarial Networks with Industrial Use Cases - Learning How to Build GAN Applications for Retail, Healthcare, Telecom, Media, Education, and HRTech (Paperback)
Navin K Manaswi
R511 Discovery Miles 5 110 Ships in 10 - 15 working days
Developments in Language Theory - 23rd International Conference, DLT 2019, Warsaw, Poland, August 5-9, 2019, Proceedings (Paperback, 1st ed. 2019)
Piotrek Hofman, Michal Skrzypczak
R1,569 Discovery Miles 15 690 Ships in 10 - 15 working days

This book constitutes the proceedings of the 23rd International Conference on Developments in Language Theory, DLT 2019, held in Warsaw, Poland, in August 2019. The 20 full papers presented together with three invited talks were carefully reviewed and selected from 30 submissions. The papers cover the following topics and areas: combinatorial and algebraic properties of words and languages; grammars, acceptors and transducers for strings, trees, graphs, arrays; algebraic theories for automata and languages; codes; efficient text algorithms; symbolic dynamics; decision problems; relationships to complexity theory and logic; picture description and analysis, polyominoes and bidimensional patterns; cryptography; concurrency; cellular automata; bio-inspired computing; quantum computing.

Natural Language Processing and Information Systems - 24th International Conference on Applications of Natural Language to Information Systems, NLDB 2019, Salford, UK, June 26-28, 2019, Proceedings (Paperback, 1st ed. 2019)
Elisabeth Metais, Farid Meziane, Sunil Vadera, Vijayan Sugumaran, Mohamad Saraee
R2,109 Discovery Miles 21 090 Ships in 10 - 15 working days

This book constitutes the refereed proceedings of the 24th International Conference on Applications of Natural Language to Information Systems, NLDB 2019, held in Salford, UK, in June 2019. The 21 full papers and 16 short papers were carefully reviewed and selected from 75 submissions. The papers are organized in the following topical sections: argumentation mining and applications; deep learning, neural languages and NLP; social media and web analytics; question answering; corpus analysis; semantic web, open linked data, and ontologies; natural language in conceptual modeling; natural language and ubiquitous computing; and big data and business intelligence.

The Singularity - Building a Better Future (Paperback)
Nishanth Mudkey
R1,249 R978 Discovery Miles 9 780 Save R271 (22%) Ships in 10 - 15 working days
Deep Learning-Based Approaches for Sentiment Analysis (Hardcover, 1st ed. 2020)
Basant Agarwal, Richi Nayak, Namita Mittal, Srikanta Patnaik
R4,784 Discovery Miles 47 840 Ships in 10 - 15 working days

This book covers deep-learning-based approaches for sentiment analysis, a relatively new, but fast-growing research area, which has significantly changed in the past few years. The book presents a collection of state-of-the-art approaches, focusing on the best-performing, cutting-edge solutions for the most common and difficult challenges faced in sentiment analysis research. Providing detailed explanations of the methodologies, the book is a valuable resource for researchers as well as newcomers to the field.

Multibiometric Watermarking with Compressive Sensing Theory - Techniques and Applications (Paperback, Softcover reprint of the original 1st ed. 2018)
Rohit M. Thanki, Vedvyas J. Dwivedi, Komal R. Borisagar
R1,557 Discovery Miles 15 570 Ships in 10 - 15 working days

This book presents multibiometric watermarking techniques for the security of biometric data. It also covers transform domain multibiometric watermarking techniques and their advantages and limitations. The authors have developed novel watermarking techniques that combine Compressive Sensing (CS) theory with watermarking to secure biometric data in the system database of a biometric system. The authors show how these techniques offer higher robustness, authenticity, better imperceptibility, increased payload capacity, and secure biometric watermarks. They show how to use CS theory to secure biometric watermarks before embedding them into the host biometric data. The suggested methods may find potential applications in securing biometric data in banking applications, access control for laboratories, nuclear power stations, military bases, and airports.

Linguistic Linked Data - Representation, Generation and Applications (Hardcover, 1st ed. 2020)
Philipp Cimiano, Christian Chiarcos, John P. Mccrae, Jorge Gracia
R4,521 Discovery Miles 45 210 Ships in 10 - 15 working days

This is the first monograph on the emerging area of linguistic linked data. Presenting a combination of background information on linguistic linked data and concrete implementation advice, it introduces and discusses the main benefits of applying linked data (LD) principles to the representation and publication of linguistic resources, arguing that LD does not look at a single resource in isolation but seeks to create a large network of resources that can be used together and uniformly, thus making more of each single resource. The book describes how the LD principles can be applied to modelling language resources.

The first part provides the foundation for understanding the remainder of the book, introducing the data models, ontology and query languages used as the basis of the Semantic Web and LD and offering a more detailed overview of the Linguistic Linked Data Cloud. The second part of the book focuses on modelling language resources using LD principles, describing how to model lexical resources using Ontolex-lemon, the lexicon model for ontologies, and how to annotate and address elements of text represented in RDF. It also demonstrates how to model annotations and how to capture the metadata of language resources. Further, it includes a chapter on representing linguistic categories.

In the third part of the book, the authors describe how language resources can be transformed into LD and how links can be inferred and added to the data to increase connectivity and linking between different datasets. They also discuss using LD resources for natural language processing. The last part describes concrete applications of the technologies: representing and linking multilingual wordnets, applications in digital humanities, and the discovery of language resources.

Given its scope, the book is relevant for researchers and graduate students interested in topics at the crossroads of natural language processing / computational linguistics and the Semantic Web / linked data. It appeals to Semantic Web experts who are not proficient in applying the Semantic Web and LD principles to linguistic data, as well as to computational linguists who are used to working with lexical and linguistic resources and want to learn about a new paradigm for modelling, publishing and exploiting linguistic resources.

Machine Learning - The Complete Step-By-Step Guide To Learning and Understanding Machine Learning From Beginners, Intermediate Advanced, To Expert Concepts and Techniques (Paperback)
Peter Bradley
R461 Discovery Miles 4 610 Ships in 10 - 15 working days
Neural Networks for Natural Language Processing (Paperback)
Sumathi S., Janani M
R5,566 Discovery Miles 55 660 Ships in 10 - 15 working days

Information in today's advancing world is rapidly expanding and becoming widely available. This eruption of data has made handling it a daunting and time-consuming task. Natural language processing (NLP) is a method that applies linguistics and algorithms to large amounts of this data to make it more valuable. NLP improves the interaction between humans and computers, yet there remains a lack of research that focuses on the practical implementations of this trending approach. Neural Networks for Natural Language Processing is a collection of innovative research on the methods and applications of linguistic information processing and its computational properties. This publication will support readers in performing sentence classification and language generation using neural networks, applying deep learning models to solve machine translation and conversation problems, and applying deep structured semantic models to information retrieval and natural language applications. While highlighting topics including deep learning, query entity recognition, and information retrieval, this book is ideally designed for research and development professionals, IT specialists, industrialists, technology developers, data analysts, data scientists, academics, researchers, and students seeking current research on the fundamental concepts and techniques of natural language processing.

Hands-On Deep Learning with Go - A practical guide to building and implementing neural network models using Go (Paperback)
Gareth Seneque, Darrell Chua
R1,269 Discovery Miles 12 690 Ships in 10 - 15 working days

Apply modern deep learning techniques to build and train deep neural networks using Gorgonia.

Key Features
  • Gain a practical understanding of deep learning using Golang
  • Build complex neural network models using Go libraries and Gorgonia
  • Take your deep learning model from design to deployment with this handy guide

Book Description
Go is an open source programming language designed by Google for handling large-scale projects efficiently. The Go ecosystem comprises some really powerful deep learning tools such as DQN and CUDA. With this book, you'll be able to use these tools to train and deploy scalable deep learning models from scratch. This deep learning book begins by introducing you to a variety of tools and libraries available in Go. It then takes you through building neural networks, including activation functions and the learning algorithms that make neural networks tick. In addition to this, you'll learn how to build advanced architectures such as autoencoders, restricted Boltzmann machines (RBMs), convolutional neural networks (CNNs), recurrent neural networks (RNNs), and more. You'll also understand how you can scale model deployments on the AWS cloud infrastructure for training and inference. By the end of this book, you'll have mastered the art of building, training, and deploying deep learning models in Go to solve real-world problems.

What you will learn
  • Explore the Go ecosystem of libraries and communities for deep learning
  • Get to grips with neural networks, their history, and how they work
  • Design and implement deep neural networks in Go
  • Get a strong foundation of concepts such as backpropagation and momentum
  • Build variational autoencoders and restricted Boltzmann machines using Go
  • Build models with CUDA and benchmark CPU and GPU models

Who this book is for
This book is for data scientists, machine learning engineers, and AI developers who want to build state-of-the-art deep learning models using Go. Familiarity with basic machine learning concepts and Go programming is required to get the best out of this book.

Deep Learning with TensorFlow 2 and Keras - Regression, ConvNets, GANs, RNNs, NLP, and more with TensorFlow 2 and the Keras API, 2nd Edition (Paperback, 2nd Revised edition)
Antonio Gulli, Amita Kapoor, Sujit Pal
R1,142 Discovery Miles 11 420 Ships in 10 - 15 working days

Build machine and deep learning systems with the newly released TensorFlow 2 and Keras for the lab, production, and mobile devices.

Key Features
  • Introduces and then uses TensorFlow 2 and Keras right from the start
  • Teaches key machine and deep learning techniques
  • Understand the fundamentals of deep learning and machine learning through clear explanations and extensive code samples

Book Description
Deep Learning with TensorFlow 2 and Keras, Second Edition teaches neural networks and deep learning techniques alongside TensorFlow (TF) and Keras. You'll learn how to write deep learning applications in the most powerful, popular, and scalable machine learning stack available. TensorFlow is the machine learning library of choice for professional applications, while Keras offers a simple and powerful Python API for accessing TensorFlow. TensorFlow 2 provides full Keras integration, making advanced machine learning easier and more convenient than ever before. This book also introduces neural networks with TensorFlow, runs through the main applications (regression, ConvNets (CNNs), GANs, RNNs, NLP), covers two working example apps, and then dives into TF in production, TF mobile, and using TensorFlow with AutoML.

What you will learn
  • Build machine learning and deep learning systems with TensorFlow 2 and the Keras API
  • Use regression analysis, the most popular approach to machine learning
  • Understand ConvNets (convolutional neural networks) and how they are essential for deep learning systems such as image classifiers
  • Use GANs (generative adversarial networks) to create new data that fits with existing patterns
  • Discover RNNs (recurrent neural networks) that can process sequences of input intelligently, using one part of a sequence to correctly interpret another
  • Apply deep learning to natural human language and interpret natural language texts to produce an appropriate response
  • Train your models on the cloud and put TF to work in real environments
  • Explore how Google tools can automate simple ML workflows without the need for complex modeling

Who this book is for
This book is for Python developers and data scientists who want to build machine learning and deep learning systems with TensorFlow. This book gives you the theory and practice required to use Keras, TensorFlow 2, and AutoML to build machine learning systems. Some knowledge of machine learning is expected.
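
As a rough, illustrative sketch of the TensorFlow 2 / Keras workflow this description refers to, the snippet below defines, compiles, and fits a tiny binary classifier on synthetic data; the layer sizes and the random dataset are assumptions made for illustration, not material from the book.

    # Minimal TensorFlow 2 / Keras sketch: a tiny binary classifier on synthetic data.
    import numpy as np
    import tensorflow as tf

    # Synthetic data: 200 samples with 10 features and random 0/1 labels (illustrative only).
    x = np.random.rand(200, 10).astype("float32")
    y = np.random.randint(0, 2, size=(200, 1)).astype("float32")

    model = tf.keras.Sequential([
        tf.keras.Input(shape=(10,)),
        tf.keras.layers.Dense(16, activation="relu"),
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    model.fit(x, y, epochs=5, batch_size=32, verbose=0)

    # Evaluate on the same synthetic data, just to show the API shape.
    print(model.evaluate(x, y, verbose=0))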

Prominent Feature Extraction for Sentiment Analysis (Paperback, Softcover reprint of the original 1st ed. 2016)
Basant Agarwal, Namita Mittal
R2,957 Discovery Miles 29 570 Ships in 10 - 15 working days

The objective of this monograph is to improve the performance of sentiment analysis models by incorporating semantic, syntactic and common-sense knowledge. The book proposes a novel semantic concept extraction approach that uses dependency relations between words to extract features from the text. The proposed approach combines semantic and common-sense knowledge for a better understanding of the text. In addition, the book aims to extract prominent features from unstructured text by eliminating noisy, irrelevant and redundant features. Readers will also discover a proposed method for efficient dimensionality reduction to alleviate the data sparseness problem faced by the machine learning model. The authors highlight the four main findings of the book:
  • The performance of sentiment analysis can be improved by reducing redundancy among the features. Experimental results show that the minimum Redundancy Maximum Relevance (mRMR) feature selection technique improves the performance of sentiment analysis by eliminating redundant features.
  • The Boolean Multinomial Naive Bayes (BMNB) machine learning algorithm with the mRMR feature selection technique performs better than the Support Vector Machine (SVM) classifier for sentiment analysis.
  • The problem of data sparseness is alleviated by semantic clustering of features, which in turn improves the performance of sentiment analysis.
  • Semantic relations among the words in the text provide useful cues for sentiment analysis. Common-sense knowledge in the form of the ConceptNet ontology provides a better understanding of the text, which improves the performance of sentiment analysis.

Computer Aided Writing (Hardcover, 1st ed. 2020)
Andre Klahold, Madjid Fathi
R4,485 Discovery Miles 44 850 Ships in 10 - 15 working days

This book deals with "Computer Aided Writing", CAW for short, a field within knowledge-based techniques and Knowledge Management. The role of Knowledge Management in social media, education and Industry 4.0 is beyond question. More important is the prospect of combining Knowledge Management and Cognitive Technology, which calls for continual innovation in this field to address current problems in social and technological areas. The book is intended to provide an overview of the state of research in this field, show the extent to which computer assistance in writing is already being used, and present current research contributions. After a brief introduction to the history of writing and the tools that were created for it, current developments are examined on the basis of a formal writing model. Tools such as word processing and content management systems are discussed in detail. Journalism, as a special form of writing, is used to examine the effects of Computer Aided Writing. We dedicate a separate chapter to the topic of research, since it is of essential importance in the writing process. With Knowledge Discovery from Text (KDT) and recommendation systems we enter the field of Knowledge Management in the context of Computer Aided Writing. Finally, we look at methods for automated text generation before giving an outlook on future developments.

Deep Learning for Natural Language Processing - Solve your natural language processing problems with smart deep neural networks (Paperback)
Karthiek Reddy Bokka, Shubhangi Hora, Tanuj Jain, Monicah Wambugu
R985 Discovery Miles 9 850 Ships in 10 - 15 working days

Gain the knowledge of various deep neural network architectures and their application areas to conquer your NLP issues.

Key Features
  • Gain insights into the basic building blocks of natural language processing
  • Learn how to select the best deep neural network to solve your NLP problems
  • Explore convolutional and recurrent neural networks and long short-term memory networks

Book Description
Applying deep learning approaches to various NLP tasks can take your computational algorithms to a completely new level in terms of speed and accuracy. Deep Learning for Natural Language Processing starts off by highlighting the basic building blocks of the natural language processing domain. The book goes on to introduce the problems that you can solve using state-of-the-art neural network models. After this, delving into the various neural network architectures and their specific areas of application will help you to understand how to select the best model to suit your needs. As you advance through this deep learning book, you'll study convolutional, recurrent, and recursive neural networks, in addition to covering long short-term memory networks (LSTM). Understanding these networks will help you to implement their models using Keras. In the later chapters, you will be able to develop a trigger word detection application using NLP techniques such as attention model and beam search. By the end of this book, you will not only have sound knowledge of natural language processing but also be able to select the best text pre-processing and neural network models to solve a number of NLP issues.

What you will learn
  • Understand various pre-processing techniques for deep learning problems
  • Build a vector representation of text using word2vec and GloVe
  • Create a named entity recognizer and parts-of-speech tagger with Apache OpenNLP
  • Build a machine translation model in Keras
  • Develop a text generation application using LSTM
  • Build a trigger word detection application using an attention model

Who this book is for
If you're an aspiring data scientist looking for an introduction to deep learning in the NLP domain, this is just the book for you. Strong working knowledge of Python, linear algebra, and machine learning is a must.
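
As a loose illustration of the "vector representation of text using word2vec" mentioned above, here is a minimal sketch using the gensim library rather than the Keras-based approach the book itself follows; the toy corpus and all parameter values are assumptions for illustration only.

    # Minimal word2vec sketch with gensim on a toy corpus (illustrative, not from the book).
    from gensim.models import Word2Vec

    corpus = [
        ["deep", "learning", "for", "natural", "language", "processing"],
        ["neural", "networks", "learn", "word", "vectors"],
        ["word", "vectors", "capture", "semantic", "similarity"],
    ]

    # vector_size, window, min_count, and epochs are arbitrary small values for the toy corpus.
    model = Word2Vec(corpus, vector_size=32, window=2, min_count=1, epochs=50, seed=1)

    print(model.wv["word"].shape)          # the 32-dimensional embedding for the token "word"
    print(model.wv.most_similar("word"))   # nearest neighbours in the toy vector space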

Hands-On Neural Networks - Learn how to build and train your first neural network model using Python (Paperback)
Leonardo De Marchi, Laura Mitchell
R1,057 Discovery Miles 10 570 Ships in 10 - 15 working days

Design and create neural networks with deep learning and artificial intelligence principles using OpenAI Gym, TensorFlow, and Keras.

Key Features
  • Explore neural network architecture and understand how it functions
  • Learn algorithms to solve common problems using back propagation and perceptrons
  • Understand how to apply neural networks to applications with the help of useful illustrations

Book Description
Neural networks play a very important role in deep learning and artificial intelligence (AI), with applications in a wide variety of domains, right from medical diagnosis, to financial forecasting, and even machine diagnostics. Hands-On Neural Networks is designed to guide you through learning about neural networks in a practical way. The book will get you started by giving you a brief introduction to perceptron networks. You will then gain insights into machine learning and also understand what the future of AI could look like. Next, you will study how embeddings can be used to process textual data and the role of long short-term memory networks (LSTMs) in helping you solve common natural language processing (NLP) problems. The later chapters will demonstrate how you can implement advanced concepts including transfer learning, generative adversarial networks (GANs), autoencoders, and reinforcement learning. Finally, you can look forward to further content on the latest advancements in the field of neural networks. By the end of this book, you will have the skills you need to build, train, and optimize your own neural network model that can be used to provide predictable solutions.

What you will learn
  • Learn how to train a network by using backpropagation
  • Discover how to load and transform images for use in neural networks
  • Study how neural networks can be applied to a varied set of applications
  • Solve common challenges faced in neural network development
  • Understand the transfer learning concept to solve tasks using Keras and Visual Geometry Group (VGG) network
  • Get up to speed with advanced and complex deep learning concepts like LSTMs and NLP
  • Explore innovative algorithms like GANs and deep reinforcement learning

Who this book is for
If you are interested in artificial intelligence and deep learning and want to further your skills, then this intermediate-level book is for you. Some knowledge of statistics will help you get the most out of this book.

Automatic Syntactic Analysis Based on Selectional Preferences (Paperback, Softcover reprint of the original 1st ed. 2018)
Alexander Gelbukh, Hiram Calvo
R2,957 Discovery Miles 29 570 Ships in 10 - 15 working days

This book describes effective methods for automatically analyzing a sentence, based on the syntactic and semantic characteristics of the elements that form it. To tackle ambiguities, the authors use selectional preferences (SP), which measure how well two words fit together semantically in a sentence. Today, many disciplines require automatic text analysis based on the syntactic and semantic characteristics of language and as such several techniques for parsing sentences have been proposed. Which is better? In this book the authors begin with simple heuristics before moving on to more complex methods that identify nouns and verbs and then aggregate modifiers, and lastly discuss methods that can handle complex subordinate and relative clauses. During this process, several ambiguities arise. SP are commonly determined on the basis of the association between a pair of words. However, in many cases, SP depend on more words. For example, something (such as grass) may be edible, depending on who is eating it (a cow?). Moreover, things such as popcorn are usually eaten at the movies, and not in a restaurant. The authors deal with these phenomena from different points of view.

Court Interpreters and Fair Trials (Paperback, Softcover reprint of the original 1st ed. 2018)
John Henry Dingfelder Stone
R3,214 Discovery Miles 32 140 Ships in 10 - 15 working days

Globalization has increased the number of individuals in criminal proceedings who are unable to understand the language of the courtroom, and as a result the number of court interpreters has also increased. But unsupervised interpreters can severely undermine the fairness of a criminal proceeding. In this innovative and methodological new study, Dingfelder Stone comprehensively examines the multitudes of mistakes made by interpreters, and explores the resultant legal and practical implications. Whilst scholars of interpreting studies have researched the prevalence of interpreter error for decades, the effect of these mistakes on criminal proceedings has largely gone unanalyzed by legal scholars. Drawing upon both interpreting studies research and legal scholarship alike, this engaging and timely study analyzes the impact of court interpreters on the right to a fair trial under international law, which forms the minimum baseline standard for national systems.

Turkish Natural Language Processing (Paperback, Softcover reprint of the original 1st ed. 2018)
Kemal Oflazer, Murat Saraclar
R2,985 Discovery Miles 29 850 Ships in 10 - 15 working days

This book brings together work on Turkish natural language and speech processing over the last 25 years, covering numerous fundamental tasks ranging from morphological processing and language modeling, to full-fledged deep parsing and machine translation, as well as computational resources developed along the way to enable most of this work. Owing to its complex morphology and free constituent order, Turkish has proved to be a fascinating language for natural language and speech processing research and applications. After an overview of the aspects of Turkish that make it challenging for natural language and speech processing tasks, this book discusses in detail the main tasks and applications of Turkish natural language and speech processing. A compendium of the work on Turkish natural language and speech processing, it is a valuable reference for new researchers considering computational work on Turkish, as well as a one-stop resource for commercial and research institutions planning to develop applications for Turkish. It also serves as a blueprint for similar work on other Turkic languages such as Azeri, Turkmen and Uzbek.

Natural Language Processing with Java Cookbook - Over 70 recipes to create linguistic and language translation applications using Java libraries (Paperback)
Richard M Reese
R1,207 Discovery Miles 12 070 Ships in 10 - 15 working days

A problem-solution guide to tackling various NLP tasks using Java open source libraries and cloud-based solutions.

Key Features
  • Perform simple-to-complex NLP text processing tasks using modern Java libraries
  • Extract relationships between different text complexities using a problem-solution approach
  • Utilize cloud-based APIs to perform machine translation operations

Book Description
Natural Language Processing (NLP) has become one of the prime technologies for processing very large amounts of unstructured data from disparate information sources. This book includes a wide set of recipes and quick methods that solve challenges in text syntax, semantics, and speech tasks. At the beginning of the book, you'll learn important NLP techniques, such as identifying parts of speech, tagging words, and analyzing word semantics. You will learn how to perform lexical analysis and use machine learning techniques to speed up NLP operations. With independent recipes, you will explore techniques for customizing your existing NLP engines/models using Java libraries such as OpenNLP and the Stanford NLP library. You will also learn how to use NLP processing features from cloud-based sources, including Google and Amazon's AWS. You will master core tasks, such as stemming, lemmatization, part-of-speech tagging, and named entity recognition. You will also learn about sentiment analysis, semantic text similarity, language identification, machine translation, and text summarization. By the end of this book, you will be ready to become a professional NLP expert using a problem-solution approach to analyze any sort of text, sentences, or semantic words.

What you will learn
  • Explore how to use tokenizers in NLP processing
  • Implement NLP techniques in machine learning and deep learning applications
  • Identify sentences within the text and learn how to train specialized NER models
  • Learn how to classify documents and perform sentiment analysis
  • Find semantic similarities between text elements and extract text from a variety of sources
  • Preprocess text from a variety of data sources
  • Learn how to identify and translate languages

Who this book is for
This book is for data scientists, NLP engineers, and machine learning developers who want to perform their work on linguistic applications faster with the use of popular libraries on JVM machines. This book will help you build real-world NLP applications using a recipe-based approach. Prior knowledge of Natural Language Processing basics and Java programming is expected.
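
The recipes described above rely on Java libraries such as OpenNLP and the Stanford NLP library; purely as a loose, language-agnostic illustration of the tokenization task they start from, here is a minimal dependency-free sketch in Python (the sample sentence and the regular expression are assumptions for illustration, not recipes from the book).

    # Minimal, dependency-free tokenization sketch. Real recipes would use trained
    # OpenNLP or Stanford NLP tokenizer models rather than this simple regular expression.
    import re

    def tokenize(text: str) -> list[str]:
        # Match words (with optional internal apostrophes), numbers, or single punctuation marks.
        return re.findall(r"[A-Za-z]+(?:'[A-Za-z]+)?|\d+|[^\w\s]", text)

    sentence = "Natural language processing turns raw text into structured data."
    print(tokenize(sentence))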

PyTorch Deep Learning Hands-On - Build CNNs, RNNs, GANs, reinforcement learning, and more, quickly and easily (Paperback)
Sherin Thomas, Sudhanshu Passi
R1,159 Discovery Miles 11 590 Ships in 10 - 15 working days

Hands-on projects cover all the key deep learning methods built step-by-step in PyTorch.

Key Features
  • Internals and principles of PyTorch
  • Implement key deep learning methods in PyTorch: CNNs, GANs, RNNs, reinforcement learning, and more
  • Build deep learning workflows and take deep learning models from prototyping to production

Book Description
PyTorch Deep Learning Hands-On is a book for engineers who want a fast-paced guide to doing deep learning work with PyTorch. It is not an academic textbook and does not try to teach deep learning principles. The book will help you most if you want to get your hands dirty and put PyTorch to work quickly. PyTorch Deep Learning Hands-On shows how to implement the major deep learning architectures in PyTorch. It covers neural networks, computer vision, CNNs, natural language processing (RNN), GANs, and reinforcement learning. You will also build deep learning workflows with the PyTorch framework, migrate models built in Python to highly efficient TorchScript, and deploy to production using the most sophisticated available tools. Each chapter focuses on a different area of deep learning. Chapters start with a refresher on how the model works, before sharing the code you need to implement them in PyTorch. This book is ideal if you want to rapidly add PyTorch to your deep learning toolset.

What you will learn
Use PyTorch to build:
  • Simple Neural Networks - build neural networks the PyTorch way, with high-level functions, optimizers, and more
  • Convolutional Neural Networks - create advanced computer vision systems
  • Recurrent Neural Networks - work with sequential data such as natural language and audio
  • Generative Adversarial Networks - create new content with models including SimpleGAN and CycleGAN
  • Reinforcement Learning - develop systems that can solve complex problems such as driving or game playing
  • Deep Learning workflows - move effectively from ideation to production with proper deep learning workflow using PyTorch and its utility packages
  • Production-ready models - package your models for high-performance production environments

Who this book is for
Machine learning engineers who want to put PyTorch to work.
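
As a rough illustration of the PyTorch idiom of defining a model and writing an explicit training loop, which the description above alludes to, here is a minimal sketch that fits a small feed-forward network to random data; the architecture, hyperparameters, and synthetic data are assumptions for illustration, not code from the book.

    # Minimal PyTorch sketch: a small feed-forward network trained on random data.
    import torch
    from torch import nn

    model = nn.Sequential(
        nn.Linear(10, 16),
        nn.ReLU(),
        nn.Linear(16, 1),
    )
    loss_fn = nn.BCEWithLogitsLoss()
    optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

    x = torch.randn(64, 10)                    # 64 samples, 10 features (synthetic)
    y = torch.randint(0, 2, (64, 1)).float()   # random 0/1 targets (synthetic)

    for step in range(100):
        optimizer.zero_grad()              # reset gradients from the previous step
        loss = loss_fn(model(x), y)        # forward pass and loss computation
        loss.backward()                    # backpropagate
        optimizer.step()                   # update parameters

    print(loss.item())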

Julia High Performance - Optimizations, distributed computing, multithreading, and GPU programming with Julia 1.0 and beyond, 2nd Edition (Paperback, 2nd Revised edition)
Avik Sengupta; Foreword by Alan Edelman
R895 Discovery Miles 8 950 Ships in 10 - 15 working days

Design and develop high-performance programs in Julia 1.0.

Key Features
  • Learn the characteristics of high-performance Julia code
  • Use the power of the GPU to write efficient numerical code
  • Speed up your computation with the help of newly introduced shared memory multi-threading in Julia 1.0

Book Description
Julia is a high-level, high-performance dynamic programming language for numerical computing. If you want to understand how to avoid bottlenecks and design your programs for the highest possible performance, then this book is for you. The book starts with how Julia uses type information to achieve its performance goals, and how to use multiple dispatch to help the compiler emit high-performance machine code. After that, you will learn how to analyze Julia programs and identify issues with time and memory consumption. We teach you how to use Julia's typing facilities accurately to write high-performance code and describe how the Julia compiler uses type information to create fast machine code. Moving ahead, you'll master design constraints and learn how to use the power of the GPU in your Julia code and compile Julia code directly to the GPU. Then, you'll learn how tasks and asynchronous IO help you create responsive programs and how to use shared memory multithreading in Julia. Toward the end, you will get a flavor of Julia's distributed computing capabilities and how to run Julia programs on a large distributed cluster. By the end of this book, you will have the ability to build large-scale, high-performance Julia applications, design systems with a focus on speed, and improve the performance of existing programs.

What you will learn
  • Understand how Julia code is transformed into machine code
  • Measure the time and memory taken by Julia programs
  • Create fast machine code using Julia's type information
  • Define and call functions without compromising Julia's performance
  • Accelerate your code via the GPU
  • Use tasks and asynchronous IO for responsive programs
  • Run Julia programs on large distributed clusters

Who this book is for
This book is for beginners and intermediate Julia programmers who are interested in high-performance technical programming. A basic knowledge of Julia programming is assumed.

JavaScript - JavaScript Programming Made Easy for Beginners & Intermediates (Step By Step With Hands On Projects) (Paperback)
Berg Craig
R482 R393 Discovery Miles 3 930 Save R89 (18%) Ships in 10 - 15 working days
Formalizing Natural Languages with NooJ 2018 and Its Natural Language Processing Applications - 12th International Conference, NooJ 2018, Palermo, Italy, June 20-22, 2018, Revised Selected Papers (Paperback, 1st ed. 2019)
Ignazio Mauro Mirto, Mario Monteleone, Max Silberztein
R1,557 Discovery Miles 15 570 Ships in 10 - 15 working days

This book constitutes the refereed proceedings of the 12th International Conference, NooJ 2018, held in Palermo, Italy, in June 2018. The 17 revised full papers and 3 short papers presented in this volume were carefully reviewed and selected from 48 submissions. NooJ is a linguistic development environment that provides tools for linguists to construct linguistic resources that formalize a large gamut of linguistic phenomena: typography, orthography, lexicons for simple words, multiword units and discontinuous expressions, inflectional and derivational morphology, local, structural and transformational syntax, and semantics. The papers in this volume are organized in topical sections on vocabulary and morphology; syntax and semantics; and natural language processing applications.

You may like...
Graph Learning and Network Science for…
Muskan Garg, Amit Kumar Gupta, … Hardcover R3,253 Discovery Miles 32 530
Metalanguages for Dissecting Translation…
Rei Miyata, Masaru Yamada, … Hardcover R4,149 Discovery Miles 41 490
Text-based intelligent Systems - Current…
Paul S. Jacobs Hardcover R2,012 R1,154 Discovery Miles 11 540
Semantic Structures (RLE Linguistics B…
David L. Waltz Hardcover R4,147 Discovery Miles 41 470
Morphological Aspects of Language…
Laurie Beth Feldman Hardcover R4,024 Discovery Miles 40 240
Natural Language Understanding and…
Masao Yokota Paperback R1,461 Discovery Miles 14 610
New Methods In Language Processing
D.B. Jones, H. Somers Hardcover R3,998 Discovery Miles 39 980
How to Speak Whale - A Voyage into the…
Tom Mustill Hardcover R467 Discovery Miles 4 670
Natural Language Processing - Semantic…
Epaminondas Kapetanios, Doina Tatar, … Hardcover R4,461 Discovery Miles 44 610
Natural Language Processing - A Machine…
Yue Zhang, Zhiyang Teng Hardcover R1,839 Discovery Miles 18 390

 

Partners