Showing 1 - 8 of 8 matches in All Departments
This book comprises a set of articles that specify the methodology of text mining, describe the creation of lexical resources within a text mining framework, and apply text mining to various tasks in natural language processing (NLP). The analysis of large amounts of textual data is a prerequisite for building lexical resources such as dictionaries and ontologies, and it also has direct applications in automated text processing in fields such as history, healthcare and mobile applications, to name a few. The volume surveys recent advances in text mining methods and reflects the latest achievements in the automatic construction of large lexical resources. It addresses researchers who already perform text mining and those who want to enrich their battery of methods. Selected articles can be used to support graduate-level teaching. The book is suitable for all readers who have completed undergraduate studies in computational linguistics, quantitative linguistics, computer science or computational humanities. It assumes basic knowledge of computer science, corpus processing and statistics.
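As a purely illustrative sketch of the kind of analysis the blurb alludes to (not the methodology of any article in the volume), the Python snippet below mines a toy corpus for a frequency lexicon and sentence-level PMI word associations; the corpus, the tokeniser and the thresholds are assumptions made here for demonstration.

```python
# Illustrative toy example only: mining a small corpus for a frequency lexicon
# and PMI-based word associations. Not the methodology of any article in the
# volume; corpus, tokeniser and thresholds are assumptions for demonstration.
import math
import re
from collections import Counter
from itertools import combinations

def tokenize(text):
    """Lowercase word tokens; a real pipeline would use a proper tokeniser."""
    return re.findall(r"[a-z]+", text.lower())

def mine_lexicon(sentences, min_pair_count=2):
    """Return word document frequencies and sentence-level PMI for word pairs."""
    n_sents = len(sentences)
    word_df = Counter()   # number of sentences in which a word occurs
    pair_df = Counter()   # number of sentences in which a word pair co-occurs
    for sentence in sentences:
        vocab = set(tokenize(sentence))
        word_df.update(vocab)
        pair_df.update(combinations(sorted(vocab), 2))
    pmi = {}
    for (x, y), n_xy in pair_df.items():
        if n_xy < min_pair_count:
            continue
        p_xy = n_xy / n_sents
        p_x, p_y = word_df[x] / n_sents, word_df[y] / n_sents
        pmi[(x, y)] = math.log2(p_xy / (p_x * p_y))
    return word_df, pmi

corpus = [
    "Text mining builds lexical resources from raw text.",
    "Lexical resources such as dictionaries support automated text processing.",
    "Raw text is analysed to build dictionaries and ontologies.",
]
frequencies, associations = mine_lexicon(corpus)
print(frequencies.most_common(5))
print(sorted(associations.items(), key=lambda kv: -kv[1])[:5])
```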
Current language technology is dominated by approaches that either enumerate a large set of rules or rely on large amounts of manually labelled data. The creation of both is time-consuming and expensive, which is commonly thought to be the reason why automated natural language understanding has still not made its way into "real-life" applications. This book sets an ambitious goal: to shift the development of language processing systems to a far more automated setting than in previous work. It defines a new approach: what if computers analysed large samples of language data on their own, identifying structural regularities that perform the necessary abstractions and generalisations in order to better understand language in the process? After defining the framework of Structure Discovery and shedding light on the nature and the graph structure of natural language data, several procedures are described that do exactly this: they let the computer discover structures without supervision in order to boost the performance of language technology applications. In this way, multilingual documents are sorted by language, word classes are identified, and semantic ambiguities are discovered and resolved without a dictionary or other explicit human input. The book concludes with an outlook on the possibilities implied by this paradigm and puts the methods in perspective with respect to human-computer interaction. The target audience is academics at all levels (undergraduate and graduate students, lecturers and professors) working in natural language processing and computational linguistics, as well as natural language engineers seeking to improve their systems.
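As a minimal, purely illustrative sketch of the knowledge-free idea described above (not the book's actual Structure Discovery procedures), the snippet below clusters a handful of English and German sentences by language using only character n-gram statistics, with no dictionary or labels; scikit-learn, the toy corpus and the fixed cluster count of two are assumptions made here.

```python
# Minimal illustrative sketch: group documents by language without supervision,
# using only character n-gram statistics. This is NOT the book's Structure
# Discovery procedure; scikit-learn, the toy corpus and the cluster count are
# assumptions made for this demonstration.
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

documents = [
    "the cat sat on the mat and looked at the dog",
    "language technology needs large amounts of text",
    "die katze sitzt auf der matte und schaut den hund an",
    "sprachtechnologie braucht grosse mengen an text",
]

# Character n-grams capture language-specific letter statistics.
vectorizer = TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 3))
features = vectorizer.fit_transform(documents)

# Two clusters are assumed here; in a fully knowledge-free setting the number
# of languages would itself have to be induced from the data.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)

for doc, label in zip(documents, labels):
    print(label, doc[:40])
```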
The two-volume set LNCS 8218 and 8219 constitutes the refereed proceedings of the 12th International Semantic Web Conference, ISWC 2013, held in Sydney, Australia, in October 2013. The International Semantic Web Conference is the premier forum for Semantic Web research, where cutting-edge scientific results and technological innovations are presented, where problems and solutions are discussed, and where the future of this vision is being developed. It brings together specialists in fields such as artificial intelligence, databases, social networks, distributed computing, Web engineering, information systems, human-computer interaction, natural language processing, and the social sciences. Part 1 (LNCS 8218) contains the 45 papers presented in the research track; they were carefully reviewed and selected from 210 submissions. Part 2 (LNCS 8219) contains the 16 papers from the in-use track, which were accepted from 90 submissions. In addition, it presents 10 contributions to the evaluations and experiments track and 5 papers from the doctoral consortium.
This book constitutes the refereed proceedings of the 25th International Conference on Language Processing and Knowledge in the Web, GSCL 2013, held in Darmstadt, Germany, in September 2013. The 20 revised full papers were carefully selected from numerous submissions and cover several important dimensions of language processing and knowledge in the Web, such as computational linguistics, language technology, and the processing of unstructured textual content on the Web.
This book constitutes the refereed proceedings of the 20th International Conference on Applications of Natural Language to Information Systems, NLDB 2015, held in Passau, Germany, in June 2015. The 18 full papers, 15 short papers, and 14 poster and demonstration papers presented were carefully reviewed and selected from 100 submissions. The papers cover the following topics: information extraction, distributional semantics, querying and question answering systems, context-aware NLP, cognitive and semantic computing, sentiment and opinion analysis, information extraction and social media, NLP and usability, text classification and extraction, and posters and demonstrations.