Implement neural network models in R 3.5 using TensorFlow, Keras, and MXNet.

Key Features
* Use R 3.5 for building deep learning models for computer vision and text
* Apply deep learning techniques in the cloud for large-scale processing
* Build, train, and optimize neural network models on a range of datasets

Book Description
Deep learning is a powerful subset of machine learning that is very successful in domains such as computer vision and natural language processing (NLP). This second edition of R Deep Learning Essentials will open the gates for you to enter the world of neural networks by building powerful deep learning models using the R ecosystem. This book will introduce you to the basic principles of deep learning and teach you to build a neural network model from scratch. As you make your way through the book, you will explore deep learning libraries such as Keras, MXNet, and TensorFlow, and create interesting deep learning models for a variety of tasks and problems, including structured data, computer vision, text data, anomaly detection, and recommendation systems. You'll cover advanced topics such as generative adversarial networks (GANs), transfer learning, and large-scale deep learning in the cloud. In the concluding chapters, you will learn about key practical concerns in deep learning projects, such as model optimization, overfitting, and data augmentation, together with other advanced topics. By the end of this book, you will be fully prepared to implement deep learning concepts in your research work or projects.

What you will learn
* Build shallow neural network prediction models
* Prevent models from overfitting the data to improve generalizability
* Explore techniques for finding the best hyperparameters for deep learning models
* Create NLP models using Keras and TensorFlow in R
* Use deep learning for computer vision tasks
* Implement deep learning tasks such as NLP, recommendation systems, and autoencoders

Who this book is for
This second edition of R Deep Learning Essentials is for aspiring data scientists, data analysts, machine learning developers, and deep learning enthusiasts who are well versed in machine learning concepts and are looking to explore the deep learning paradigm using R. A fundamental understanding of the R language is necessary to get the most out of this book.

Incorporate intelligence into your data-driven business insights and high-accuracy business solutions.

Key Features
* Explore IBM Watson capabilities such as Natural Language Processing (NLP) and machine learning
* Build projects to adopt IBM Watson across retail, banking, and healthcare
* Learn forecasting, anomaly detection, and pattern recognition with ML techniques

Book Description
IBM Watson provides fast, intelligent insight in ways that the human brain simply can't match. Through eight varied projects, this book will help you explore the computing and analytical capabilities of IBM Watson. The book begins by refreshing your knowledge of IBM Watson's basic data preparation capabilities, such as adding and exploring data to prepare it for being applied to models. The projects covered in this book can be developed for different industries, including banking, healthcare, media, and security. These projects will enable you to develop an AI mindset and guide you in developing smart data-driven projects, including automating supply chains, analyzing sentiment in social media datasets, and developing personalized recommendations. By the end of this book, you'll have learned how to develop solutions for process automation, and you'll be able to make better data-driven decisions to deliver an excellent customer experience.

What you will learn
* Build a smart dialog system with cognitive assistance solutions
* Design a text categorization model and perform sentiment analysis on social media datasets
* Develop a pattern recognition application and identify data irregularities smartly
* Analyze trip logs from a driving services company to determine profit
* Provide insights into an organization's supply chain data and processes
* Create personalized recommendations for retail chains and outlets
* Test forecasting effectiveness for better sales prediction strategies

Who this book is for
This book is for data scientists, AI engineers, NLP engineers, machine learning engineers, and data analysts who wish to build next-generation analytics applications. Basic familiarity with cognitive computing and sound knowledge of any programming language is all you need to understand the projects covered in this book.

With Hands-On Recommendation Systems with Python, learn the tools and techniques required to build various kinds of powerful recommendation systems (collaborative, knowledge-based, and content-based) and deploy them to the web.

Key Features
* Build industry-standard recommender systems
* Only familiarity with Python is required
* No need to wade through complicated machine learning theory to use this book

Book Description
Recommendation systems are at the heart of almost every internet business today, from Facebook to Netflix to Amazon. Providing good recommendations, whether it's friends, movies, or groceries, goes a long way in defining user experience and enticing your customers to use your platform. This book shows you how to do just that. You will learn about the different kinds of recommenders used in the industry and see how to build them from scratch using Python. There is no need to wade through tons of machine learning theory; you'll get started with building and learning about recommenders as quickly as possible. In this book, you will build an IMDB Top 250 clone and a content-based engine that works on movie metadata. You'll use collaborative filters to make use of customer behavior data, and build a hybrid recommender that incorporates content-based and collaborative filtering techniques. With this book, all you need to get started with building recommendation systems is a familiarity with Python, and by the time you're finished, you will have a great grasp of how recommenders work and be in a strong position to apply the techniques you learn to your own problem domains.

What you will learn
* Get to grips with the different kinds of recommender systems
* Master data-wrangling techniques using the pandas library
* Build an IMDB Top 250 clone
* Build a content-based engine to recommend movies based on movie metadata
* Employ data-mining techniques used in building recommenders
* Build industry-standard collaborative filters using powerful algorithms
* Build hybrid recommenders that incorporate content-based and collaborative filtering

Who this book is for
If you are a Python developer and want to develop applications for social networking, news personalization, or smart advertising, this is the book for you. Basic knowledge of machine learning techniques will be helpful, but is not mandatory.

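As a taste of the simplest project mentioned above, here is a minimal sketch in Python of an IMDB Top 250-style chart, ranking movies by IMDB's published weighted-rating formula WR = (v/(v+m))R + (m/(v+m))C. The column names and toy data are illustrative assumptions, not the book's own dataset.

```python
# A minimal sketch (not the book's own code) of a simple recommender like
# the IMDB Top 250 clone: rank movies by IMDB's weighted rating
# WR = (v/(v+m))*R + (m/(v+m))*C instead of the raw average score.
# Column names and the toy data below are illustrative assumptions.
import pandas as pd

def top_chart(df, quantile=0.90):
    C = df["vote_average"].mean()            # mean rating over all movies
    m = df["vote_count"].quantile(quantile)  # minimum votes to qualify
    qualified = df[df["vote_count"] >= m].copy()
    v, R = qualified["vote_count"], qualified["vote_average"]
    qualified["score"] = (v / (v + m)) * R + (m / (v + m)) * C
    return qualified.sort_values("score", ascending=False)

movies = pd.DataFrame({
    "title": ["A", "B", "C"],
    "vote_count": [12000, 300, 9000],
    "vote_average": [8.6, 9.2, 7.9],
})
print(top_chart(movies, quantile=0.5))
```

The weighting pulls sparsely rated movies toward the global mean C, so a title with few but enthusiastic votes cannot crowd out consistently well-rated ones.
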
Explore various approaches to organizing and extracting useful text from unstructured data using Java.

Key Features
* Use deep learning and NLP techniques in Java to discover hidden insights in text
* Work with popular Java libraries such as CoreNLP, OpenNLP, and Mallet
* Explore machine translation, identifying parts of speech, and topic modeling

Book Description
Natural Language Processing (NLP) allows you to take any sentence and identify patterns, special names, company names, and more. The second edition of Natural Language Processing with Java teaches you how to perform language analysis with the help of Java libraries, while constantly gaining insights from the outcomes. You'll start by understanding how NLP and its various concepts work. Having got to grips with the basics, you'll explore important tools and libraries in Java for NLP, such as CoreNLP, OpenNLP, Neuroph, and Mallet. You'll then start performing NLP on different inputs and tasks, such as tokenization, model training, parts-of-speech tagging, and parse trees. You'll learn about statistical machine translation, summarization, dialog systems, complex searches, supervised and unsupervised NLP, and more. By the end of this book, you'll have learned more about NLP, neural networks, and various other trained models in Java for enhancing the performance of NLP applications.

What you will learn
* Understand basic NLP tasks and how they relate to one another
* Discover and use the available tokenization engines
* Apply search techniques to find people, as well as things, within a document
* Construct solutions to identify parts of speech within sentences
* Use parsers to extract relationships between elements of a document
* Identify topics in a set of documents
* Explore topic modeling from a document

Who this book is for
Natural Language Processing with Java is for you if you are a data analyst, data scientist, or machine learning engineer who wants to extract information from language using Java. Knowledge of Java programming is needed, while a basic understanding of statistics will be useful but not mandatory.

Build smart applications by implementing real-world artificial intelligence projects.

Key Features
* Explore a variety of AI projects with Python
* Get well-versed with different types of neural networks and popular deep learning algorithms
* Leverage popular Python deep learning libraries for your AI projects

Book Description
Artificial Intelligence (AI) is the newest technology that's being employed among varied businesses, industries, and sectors. Python Artificial Intelligence Projects for Beginners demonstrates AI projects in Python, covering modern techniques that make up the world of Artificial Intelligence. This book begins by helping you build your first prediction model using the popular Python library scikit-learn. You will understand how to build a classifier using effective machine learning techniques: random forests and decision trees. With exciting projects on predicting bird species, analyzing student performance data, song genre identification, and spam detection, you will learn the fundamentals and the various algorithms and techniques that foster the development of these smart applications. In the concluding chapters, you will also understand deep learning and neural network mechanisms through these projects with the help of the Keras library. By the end of this book, you will be confident in building your own AI projects with Python and be ready to take on more advanced projects as you progress.

What you will learn
* Build a prediction model using decision trees and random forests
* Use neural networks, decision trees, and random forests for classification
* Detect YouTube comment spam with a bag-of-words model and random forests
* Identify handwritten mathematical symbols with convolutional neural networks
* Revise the bird species identifier to use images
* Learn to detect positive and negative sentiment in user reviews

Who this book is for
Python Artificial Intelligence Projects for Beginners is for Python developers who want to take their first step into the world of Artificial Intelligence using easy-to-follow projects. Basic working knowledge of Python programming is expected so that you're able to play around with the code.

Build and deploy powerful neural network models using the latest Java deep learning libraries.

Key Features
* Understand DL with Java by implementing real-world projects
* Master implementations of various ANN models and build your own DL systems
* Develop applications using NLP, image classification, RL, and GPU processing

Book Description
Java is one of the most widely used programming languages. With the rise of deep learning, it has become a popular choice of tool among data scientists and machine learning experts. Java Deep Learning Projects starts with an overview of deep learning concepts and then delves into advanced projects. You will see how to build several projects using different deep neural network architectures such as multilayer perceptrons, Deep Belief Networks, CNNs, LSTMs, and Factorization Machines. You will get acquainted with popular deep and machine learning libraries for Java such as Deeplearning4j, Spark ML, and RankSys, and you'll be able to use their features to build and deploy projects on distributed computing environments. You will then explore advanced domains such as transfer learning and deep reinforcement learning using the Java ecosystem, covering various real-world domains such as healthcare, NLP, image classification, and multimedia analytics with an easy-to-follow approach. Expert reviews and tips follow every project to give you insights and hacks. By the end of this book, you will have stepped up your expertise in deep learning with Java, taking it beyond theory so that you can build your own advanced deep learning systems.

What you will learn
* Master deep learning and neural network architectures
* Build real-life applications covering image classification, object detection, online trading, transfer learning, and multimedia analytics using DL4J and open-source APIs
* Train ML agents to learn from data using deep reinforcement learning
* Use factorization machines for advanced movie recommendations
* Train DL models on distributed GPUs for faster deep learning with Spark and DL4J
* Ease your learning experience through 69 FAQs

Who this book is for
If you are a data scientist, machine learning professional, or deep learning practitioner keen to expand your knowledge by delving into the practical aspects of deep learning with Java, then this book is what you need! Get ready to build advanced deep learning models to carry out complex numerical computations. Some basic understanding of machine learning concepts and a working knowledge of Java are required.

Build powerful smart applications that use deep learning algorithms to dominate numerical computing, deep learning, and functional programming.

Key Features
* Explore machine learning techniques with prominent open source Scala libraries such as Spark ML, H2O, MXNet, Zeppelin, and DeepLearning4j
* Solve real-world machine learning problems by delving into complex numerical computing with Scala functional programming in a scalable and faster way
* Cover all key aspects, including collection, storing, processing, analyzing, and evaluation, required to build and deploy machine learning models on computing clusters using the Scala Play framework

Book Description
Machine learning has had a huge impact on academia and industry by turning data into actionable information. Scala has seen a steady rise in adoption over the past few years, especially in the fields of data science and analytics. This book is for data scientists, data engineers, and deep learning enthusiasts who have a background in complex numerical computing and want more hands-on machine learning application development. If you're well versed in machine learning concepts and want to expand your knowledge by delving into the practical implementation of these concepts using the power of Scala, then this book is what you need! Through 11 end-to-end projects, you will become acquainted with popular machine learning libraries such as Spark ML, H2O, DeepLearning4j, and MXNet. By the end, you will be able to use numerical computing and functional programming to carry out complex numerical tasks and to develop, build, and deploy research or commercial projects in a production-ready environment.

What you will learn
* Apply advanced regression techniques to boost the performance of predictive models
* Use different classification algorithms for business analytics
* Generate trading strategies for Bitcoin and stock trading using ensemble techniques
* Train Deep Neural Networks (DNNs) using H2O and Spark ML
* Utilize NLP to build scalable machine learning models
* Apply reinforcement learning algorithms such as Q-learning to develop ML applications
* Use autoencoders to develop a fraud detection application
* Implement LSTM and CNN models using DeepLearning4j and MXNet

Who this book is for
If you want to leverage the power of both Scala and Spark to make sense of Big Data, then this book is for you. If you are well versed in machine learning concepts and want to expand your knowledge by delving into their practical implementation using the power of Scala, then this book is what you need! A strong understanding of the Scala programming language is recommended, and basic familiarity with machine learning techniques will be helpful.

This book addresses the issue of how the user's level of domain
knowledge affects interaction with a computer system. It
demonstrates the feasibility of incorporating a model of the user's domain knowledge into a natural language generation system.
Learn to build expert NLP and machine learning projects using NLTK and other Python libraries.

About This Book
* Break text down into its component parts for spelling correction, feature extraction, and phrase transformation
* Work through NLP concepts with simple and easy-to-follow programming recipes
* Gain insights into current and budding research topics in NLP

Who This Book Is For
If you are an NLP or machine learning enthusiast and an intermediate Python programmer who wants to quickly master NLTK for natural language processing, then this Learning Path will do you a lot of good. Students of linguistics and semantic/sentiment analysis professionals will find it invaluable.

What You Will Learn
* The scope of natural language complexity and how it is processed by machines
* Clean and wrangle text using tokenization and chunking to help you process data better
* Tokenize text into sentences and sentences into words
* Classify text and perform sentiment analysis
* Implement string matching algorithms and normalization techniques
* Understand and implement the concepts of information retrieval and text summarization
* Find out how to implement various NLP tasks in Python

In Detail
Natural Language Processing is a field of computational linguistics and artificial intelligence that deals with human-computer interaction. It provides a seamless interaction between computers and human beings and gives computers the ability to understand human speech with the help of machine learning. The number of human-computer interaction instances is increasing, so it's becoming imperative that computers comprehend all major natural languages. The first module, NLTK Essentials, is an introduction to building systems around NLP, with a focus on how to create a customized tokenizer and parser from scratch. You will learn essential concepts of NLP, be given practical insight into open source tools and libraries available in Python, be shown how to analyze social media sites, and be given tools to deal with large-scale text. This module also provides a workaround using some of the amazing capabilities of Python libraries such as NLTK, scikit-learn, pandas, and NumPy. The second module, Python 3 Text Processing with NLTK 3 Cookbook, teaches you the essential techniques of text and language processing with simple, straightforward examples. This includes organizing text corpora, creating your own custom corpus, text classification with a focus on sentiment analysis, and distributed text processing methods. The third module, Mastering Natural Language Processing with Python, will help you become an expert and assist you in creating your own NLP projects using NLTK. You will be guided through model development with machine learning tools, shown how to create training data, and given insight into the best practices for designing and building NLP-based applications using Python. This Learning Path combines some of the best that Packt has to offer in one complete, curated package and is designed to help you quickly learn text processing with Python and NLTK. It includes content from the following Packt products:
* NLTK Essentials by Nitin Hardeniya
* Python 3 Text Processing with NLTK 3 Cookbook by Jacob Perkins
* Mastering Natural Language Processing with Python by Deepti Chopra, Nisheeth Joshi, and Iti Mathur

Style and approach
This comprehensive course creates a smooth learning path that teaches you how to get started with Natural Language Processing using Python and NLTK. You'll learn to create effective NLP and machine learning projects using Python and NLTK.

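As a flavor of the NLTK basics this Learning Path opens with, here is a minimal sketch of sentence and word tokenization followed by part-of-speech tagging. The sample text is an illustrative assumption, and the downloaded resource names match classic NLTK releases (they can differ in newer versions).

```python
# A minimal sketch of the NLTK basics covered early in this Learning Path:
# split raw text into sentences, sentences into words, then tag parts of
# speech. Resource names below match classic NLTK releases and may differ
# in newer versions.
import nltk

nltk.download("punkt", quiet=True)
nltk.download("averaged_perceptron_tagger", quiet=True)

text = "Natural Language Processing is fun. NLTK makes it approachable."
for sentence in nltk.sent_tokenize(text):
    tokens = nltk.word_tokenize(sentence)  # sentence -> word tokens
    print(nltk.pos_tag(tokens))            # tokens -> (word, POS tag) pairs
```
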
Rothkegel argues that text production is the result of interaction
between text knowledge and object knowledge - the conventional
ordering and presentation of knowledge for communicative purposes
and the conceptual organisation of world knowledge.
Spoken Dialogue Systems Technology and Design covers key topics in
the field of spoken language dialogue interaction from a variety of
leading researchers. It brings together several perspectives in the
areas of corpus annotation and analysis, dialogue system
construction, as well as theoretical perspectives on communicative
intention, context-based generation, and modelling of discourse
structure. These topics are all part of the general research and development within the area of discourse and dialogue, with an emphasis on dialogue systems; corpora and corpus tools; and semantic and pragmatic modelling of discourse and dialogue.
"Advances in Non-Linear Modeling for Speech Processing" includes
advanced topics in non-linear estimation and modeling techniques
along with their applications to speaker recognition.
A non-linear aeroacoustic modeling approach is used to estimate the important fine-structure speech events that are not revealed by the short-time Fourier transform (STFT). This aeroacoustic modeling approach provides the impetus for the high-resolution Teager energy operator (TEO). This operator is characterized by a time resolution that can track rapid signal energy changes within a glottal cycle.
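For concreteness, the discrete-time TEO is conventionally written Psi[x(n)] = x(n)^2 - x(n-1)*x(n+1); the sketch below is a minimal NumPy rendering of that operator on a toy tone, not code from the book.

```python
# A minimal NumPy sketch (not code from the book) of the discrete Teager
# energy operator: Psi[x(n)] = x(n)^2 - x(n-1)*x(n+1). Its three-sample
# support is what lets it track energy changes within a glottal cycle.
import numpy as np

def teager_energy(x):
    x = np.asarray(x, dtype=float)
    return x[1:-1] ** 2 - x[:-2] * x[2:]  # defined for samples 1..N-2

fs = 8000                                  # toy sampling rate (Hz)
t = np.arange(0, 0.02, 1 / fs)
tone = np.sin(2 * np.pi * 200 * t)         # toy 200 Hz tone
print(teager_energy(tone)[:5])             # roughly constant for a pure tone
```
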
Cepstral features such as linear prediction cepstral coefficients (LPCC) and mel frequency cepstral coefficients (MFCC) are computed from the magnitude spectrum of the speech frame, while the phase spectrum is neglected. To overcome the problem of neglecting the phase spectrum, the speech production system can be represented as an amplitude modulation-frequency modulation (AM-FM) model. To demodulate the speech signal and estimate the amplitude envelope and instantaneous frequency components, the energy separation algorithm (ESA) and the Hilbert transform demodulation (HTD) algorithm are discussed.
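Here is a minimal sketch of the HTD idea on a toy AM-FM signal, assuming SciPy's hilbert for the analytic signal; the signal parameters are illustrative, not the book's.

```python
# A minimal sketch of Hilbert transform demodulation (HTD) on a toy AM-FM
# signal, assuming SciPy's hilbert(). The analytic signal's magnitude gives
# the amplitude envelope; its unwrapped phase gives instantaneous frequency.
import numpy as np
from scipy.signal import hilbert

fs = 8000
t = np.arange(0, 0.05, 1 / fs)
# Toy AM-FM signal: 500 Hz carrier, 20 Hz amplitude and 15 Hz frequency modulation.
x = (1 + 0.5 * np.sin(2 * np.pi * 20 * t)) * np.sin(
    2 * np.pi * 500 * t + 3 * np.sin(2 * np.pi * 15 * t))

analytic = hilbert(x)                          # x(t) + j*H{x(t)}
envelope = np.abs(analytic)                    # amplitude envelope a(t)
phase = np.unwrap(np.angle(analytic))
inst_freq = np.diff(phase) * fs / (2 * np.pi)  # instantaneous frequency (Hz)
```
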
Different features derived using the above non-linear modeling techniques are used to develop a speaker identification system. Finally, it is shown that the fusion of speech production and speech perception mechanisms can lead to a robust feature set.
What are mental concepts? Why do they work the way they do? How can
they be captured in language? How can they be captured in a
computer? The authors describe the development of, and clearly
explain, the underlying linguistic theory and the working software
they have developed over 40 years to store declarative knowledge in
a computer fully, to the same level as language: knowledge accessible via ordinary conversation. During this 40-year project
there was no epiphany, no "Eureka moment," except perhaps for the
day that their parser program successfully parsed a long sentence
for the first time, taking into account the contribution of every
word and punctuation mark. Their parser software can now parse a
whole paragraph of long sentences each comprising multiple
subordinate clauses with punctuation, to determine the paragraph's
global meaning. Among many practical applications for their
technology is precision communication with the Internet. The
authors show that knowledge stored in language is not unstructured
as is generally assumed. Rather, they show that language expressions
are highly structured once the rules of syntax are understood.
Lexical words, grammaticals, punctuation marks, paragraphs and poetry, single elimination tournaments, "grandmother cells," and calculator algorithms are just a few of the topics explored in this smart, witty, and eclectic tour through natural language understanding by a computer. Illustrated with flow-of-meaning trees and easily followed Mensa tables, this essay outlines a wide-ranging theory of language and thought and its transition to computers.
John W. Gorman, who holds a Master's in Engineering from the University of Auckland, joined his father, John G. Gorman, the Lasker Award-winning medical researcher, in their enterprise twenty years ago to solve the hitherto intractable problem of computer understanding of thought and language. An Essay Concerning Computer Understanding
will provoke linguists, neuroscientists, software designers,
advertisers, poets, and the just plain curious. The book suggests
many opportunities for future research in linguistic theory and
cognitive science, employing hands-on experiments with computer
models of knowledge and the brain. Discover the theory and practice
of computer understanding that has computational linguists
everywhere taking notice.
For 50 years the natural language interface has tempted and
challenged researchers and the public in equal measure. As advanced
domains such as robotic systems mature over the next ten years, the
need for effective language interfaces will become more significant
as the disparity between physical and language ability becomes more
evident. Natural language conversation with robots and other
situated systems will not only require a clear understanding of
theories of language use, models of spatial representation and
reasoning, and theories of intentional action and agency - but will
also require that all of these models be made accessible within
tractable dialogue processing frameworks. While such issues pose
research questions which are significant, particularly when we
consider them in the light of the many other challenges in language
processing and spatial theory, the benefits of competence in
situated dialogue to the fields of robotics, geographic information
systems, game design, and applied artificial intelligence cannot be overstated. This book examines the burgeoning field of Situated
Dialogue Systems and describes for the first time a complete
computational model of situated dialogue competence for practical
dialogue systems. The book can be broadly broken down into two
parts. The first three chapters examine, on the one hand, the issues which complicate the computational modelling of situated dialogue, namely issues of agency and spatial language competence, and, on the other, theories of dialogue modelling and management with respect to the needs of the situated domain. The second part
of the book then details a situated dialogue processing
architecture. Novel features of this architecture include the
modular integration of an intentionality model alongside an
exchange-structure based organization of discourse, plus the use of
a functional contextualization process that operates over both
implicit and explicit content in user contributions. The
architecture is described at a coarse level, but in sufficient
detail for others to use as a starting point in their own
explorations of situated language intelligence.
Data mining is a mature technology. The prediction problem, looking for predictive patterns in data, has been widely studied. Strong methods are available to the practitioner. These methods process structured numerical information, where uniform measurements are taken over a sample of data. Text is often described as unstructured information. So, it would seem, text and numerical data are different, requiring different methods. Or are they? In our view, a prediction problem can be solved by the same methods, whether the data are structured numerical measurements or unstructured text. Text and documents can be transformed into measured values, such as the presence or absence of words, and the same methods that have proven successful for predictive data mining can be applied to text. Yet, there are key differences. Evaluation techniques must be adapted to the chronological order of publication and to alternative measures of error. Because the data are documents, more specialized analytical methods may be preferred for text. Moreover, the methods must be modified to accommodate very high dimensions: tens of thousands of words and documents. Still, the central themes are similar.
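As a concrete illustration of that transformation, here is a minimal sketch, assuming scikit-learn and a toy corpus, that encodes documents as word presence/absence and applies an ordinary classifier to the result:

```python
# A minimal sketch, assuming scikit-learn and a toy corpus, of the
# transformation described above: documents become measured values
# (word presence/absence), and a standard predictive method is applied.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

docs = ["stocks fell sharply today", "the team won the match",
        "markets rallied on strong earnings", "a thrilling final game"]
labels = [0, 1, 0, 1]                      # 0 = finance, 1 = sports (toy labels)

vectorizer = CountVectorizer(binary=True)  # 1/0: presence or absence of a word
X = vectorizer.fit_transform(docs)         # documents -> numeric matrix
clf = LogisticRegression().fit(X, labels)  # same method as for numeric data
print(clf.predict(vectorizer.transform(["the final match score"])))
```

Once text is in this matrix form, the high dimensionality the passage mentions (tens of thousands of word columns) is the main remaining difference from ordinary numerical prediction.
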
Research and Advanced Technology for Digital Libraries: 14th European Conference, ECDL 2010, Glasgow, UK, September 6-10, 2010, Proceedings. Edited by Mounia Lalmas, Joemon Jose, Andreas Rauber, Roberto Sebastiani, and Ingo Frommholz.
In the 14 years since its first edition back in 1997, the European Conference on Research and Advanced Technology for Digital Libraries (ECDL) has become the reference meeting for an interdisciplinary community of researchers and practitioners whose professional activities revolve around the theme of digital libraries. This volume contains the proceedings of ECDL 2010, the 14th conference in this series, which, following Pisa (1997), Heraklion (1998), Paris (1999), Lisbon (2000), Darmstadt (2001), Rome (2002), Trondheim (2003), Bath (2004), Vienna (2005), Alicante (2006), Budapest (2007), Aarhus (2008), and Corfu (2009), was held in Glasgow, UK, during September 6-10, 2010. Aside from being the 14th edition of ECDL, this was also the last, at least under this name, since starting with 2011, ECDL will be renamed (so as to avoid acronym conflicts with the European Computer Driving Licence) to TPDL, standing for the Conference on Theory and Practice of Digital Libraries. We hope you all will join us for TPDL 2011 in Berlin! For ECDL 2010 separate calls for papers, posters and demos were issued, resulting in the submission to the conference of 102 full papers, 40 posters and 13 demos. This year, for the full papers, ECDL experimented with a novel, two-tier reviewing model, with the aim of further improving the quality of the resulting program. A first-tier Program Committee of 87 members was formed, and a further Senior Program Committee composed of 15 senior members of the DL community was set up.
This book is aimed at providing an overview of several aspects of
semantic role labeling. Chapter 1 begins with linguistic background
on the definition of semantic roles and the controversies
surrounding them. Chapter 2 describes how the theories have led to
structured lexicons such as FrameNet, VerbNet and the PropBank
Frame Files that in turn provide the basis for large scale semantic
annotation of corpora. This data has facilitated the development of
automatic semantic role labeling systems based on supervised
machine learning techniques. Chapter 3 presents the general
principles of applying both supervised and unsupervised machine
learning to this task, with a description of the standard stages
and feature choices, as well as giving details of several specific
systems. Recent advances include the use of joint inference to take
advantage of context sensitivities, and attempts to improve
performance by closer integration of the syntactic parsing task
with semantic role labeling. Chapter 3 also discusses the impact
the granularity of the semantic roles has on system performance.
Having outlined the basic approach with respect to English, Chapter
4 goes on to discuss applying the same techniques to other
languages, using Chinese as the primary example. Although
substantial training data is available for Chinese, this is not the
case for many other languages, and techniques for projecting
English role labels onto parallel corpora are also presented. Table
of Contents: Preface / Semantic Roles / Available Lexical Resources
/ Machine Learning for Semantic Role Labeling / A Cross-Lingual
Perspective / Summary
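To make the "feature choices" mentioned above concrete, here is a minimal sketch of the kind of classic features (predicate, phrase type, position, voice, path, head word) a supervised semantic role labeler extracts for each candidate argument; the dictionary format and example values are illustrative assumptions, not any specific system's API.

```python
# A sketch of classic SRL features for one candidate argument (hypothetical
# input format). In a real system these would feed a supervised classifier.
def srl_features(candidate):
    return {
        "predicate": candidate["predicate_lemma"],  # e.g. "give"
        "phrase_type": candidate["phrase_type"],    # constituent label, e.g. "NP"
        "position": candidate["position"],          # before/after the predicate
        "voice": candidate["voice"],                # active or passive
        "path": candidate["path"],                  # syntactic path to the predicate
        "head_word": candidate["head_word"],        # lexical head of the phrase
    }

example = {"predicate_lemma": "give", "phrase_type": "NP", "position": "before",
           "voice": "active", "path": "NP>S<VP<VBD", "head_word": "teacher"}
print(srl_features(example))
```
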
Natural Language Processing as a Foundation of the Semantic Web
argues that Natural Language Processing (NLP) does, and will
continue to, underlie the Semantic Web (SW), including its initial
construction from unstructured sources like the World Wide Web, in
several different ways, and whether its advocates realise this or
not. Chiefly, it argues, such NLP activity is the only way up to a
defensible notion of meaning at conceptual levels based on lower
level empirical computations over usage. The claim being made is
definitely not "logic bad, NLP good" in any simple-minded way, but
that the SW will be a fascinating interaction of these two
methodologies, like the WWW (which, as the authors explain, has
been a fruitful field for statistical NLP research) but with deeper
content. Only NLP technologies (and chiefly information extraction)
will be able to provide the requisite resource description
framework (RDF) knowledge stores for the SW from existing WWW
(unstructured) text databases, and in the vast quantities needed.
There is no alternative at this point, since a wholly or mostly
hand-crafted SW is also unthinkable, as is a SW built from scratch
and without reference to the WWW. It is also assumed here that,
whatever the limitations on current SW representational power drawn
attention to here, the SW will continue to grow in a distributed
manner so as to serve the needs of scientists, even if it is not
perfect. The WWW has already shown how an imperfect artefact can
become indispensable. Natural Language Processing as a Foundation
of the Semantic Web will appeal to researchers, practitioners and
anyone with an interest in NLP, the philosophy of language,
cognitive science, the Semantic Web and Web Science generally, as
well as providing a magisterial and controversial overview of the
history of artificial intelligence.