Thanks to the availability of texts on the Web in recent years,
increased knowledge and information have been made available to
broader audiences. However, the way in which a text is written, its
vocabulary and syntax, can make it difficult to read and understand for
many people, especially those with poor literacy, cognitive or
linguistic impairments, or limited knowledge of the language of the
text. Texts containing uncommon words or long, complicated sentences
can be difficult for people to read and understand, and difficult for
machines to analyze. Automatic text
simplification is the process of transforming a text into another
text that ideally conveys the same message but is easier for a broader
audience to read and understand. The process usually
involves the replacement of difficult or unknown phrases with
simpler equivalents and the transformation of long and
syntactically complex sentences into shorter and less complex ones.
Automatic text simplification, a research topic that began some twenty
years ago, has now taken on a central role in natural language
processing research, not only because of the interesting challenges
it poses but also because of its social implications. This book
presents past and current research in text simplification,
exploring key issues including automatic readability assessment,
lexical simplification, and syntactic simplification. It also
provides a detailed account of machine learning techniques
currently used in simplification, describes full systems designed
for specific languages and target audiences, and offers available
resources for research and development together with text
simplification evaluation techniques.
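The lexical side of the process described above can be illustrated with a minimal sketch of dictionary-based word substitution, the simplest form of lexical simplification. The substitution table below is a hypothetical toy example, not drawn from the book:

```python
import re

# Hypothetical toy table mapping difficult words to simpler equivalents.
SIMPLER = {
    "utilize": "use",
    "commence": "begin",
    "approximately": "about",
}

def simplify(text: str) -> str:
    """Replace each known difficult word with a simpler equivalent."""
    def sub(match: re.Match) -> str:
        word = match.group(0)
        simple = SIMPLER.get(word.lower(), word)
        # Preserve the capitalization of the original word.
        return simple.capitalize() if word[0].isupper() else simple
    return re.sub(r"[A-Za-z]+", sub, text)

print(simplify("We will commence the project and utilize approximately ten machines."))
# → We will begin the project and use about ten machines.
```

Real systems, as the book discusses, must also rank candidate substitutes in context and verify that a replacement preserves the sentence's meaning, which a fixed lookup table cannot do.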
Information extraction (IE) and text summarization (TS) are
powerful technologies for finding relevant pieces of information in
text and presenting them to the user in condensed form. The ongoing
information explosion makes IE and TS critical for successful
functioning within the information society. These technologies face
particular challenges due to the inherent multi-source nature of
the information explosion. The technologies must now handle not
isolated texts or individual narratives, but rather large-scale
repositories and streams---in general, in multiple
languages---containing a multiplicity of perspectives, opinions, or
commentaries on particular topics, entities or events. There is
thus a need to adapt existing techniques and develop new ones to
deal with these challenges. This volume contains a selection of
papers that present a variety of methodologies for content
identification and extraction, as well as for content fusion and
regeneration. The chapters cover various aspects of these challenges,
depending on the nature of the information sought (names vs. events)
and the nature of the sources (news streams, image captions,
scientific research papers, etc.). This volume aims to
offer a broad and representative sample of studies from this very
active research field.
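The condensation task described above can be sketched in its most basic form: frequency-based extractive summarization, which scores sentences by how often their words occur in the document and keeps the top scorers. This scoring scheme is illustrative only and is not a method taken from the volume:

```python
import re
from collections import Counter

def summarize(text: str, n_sentences: int = 1) -> str:
    """Select the n highest-scoring sentences, preserving their order."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    # Document-wide word frequencies serve as a crude relevance signal.
    freq = Counter(re.findall(r"[a-z]+", text.lower()))
    def score(s: str) -> int:
        return sum(freq[w] for w in re.findall(r"[a-z]+", s.lower()))
    top = set(sorted(sentences, key=score, reverse=True)[:n_sentences])
    return " ".join(s for s in sentences if s in top)

doc = ("Text summarization condenses documents. "
       "Summarization systems select the most informative sentences. "
       "The weather was pleasant.")
print(summarize(doc, 1))
# → Summarization systems select the most informative sentences.
```

The multi-source, multilingual settings this volume addresses require far more than this single-document heuristic: content must be fused across documents and perspectives, and redundancy across sources must be detected rather than rewarded.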