Foundation Models for Natural Language Processing - Pre-trained Language Models Integrating Media (Hardcover, 1st ed. 2023)
Gerhard Paaß, Sven Giesselbach
R1,377 Discovery Miles 13 770 Ships in 12 - 17 working days

This open access book provides a comprehensive overview of the state of the art in research and applications of Foundation Models and is intended for readers familiar with basic Natural Language Processing (NLP) concepts.

In recent years, a revolutionary new paradigm has been developed for training NLP models. These models are first pre-trained on large collections of text documents to acquire general syntactic knowledge and semantic information. They are then fine-tuned for specific tasks, which they can often solve with superhuman accuracy. When the models are large enough, they can be instructed by prompts to solve new tasks without any fine-tuning. Moreover, they can be applied to a wide range of media and problem domains, ranging from image and video processing to robot control learning. Because they provide a blueprint for solving many tasks in artificial intelligence, they have been called Foundation Models.

After a brief introduction to basic NLP models, the main pre-trained language models (BERT, GPT, and the sequence-to-sequence transformer) are described, along with the concepts of self-attention and context-sensitive embeddings. Different approaches to improving these models are then discussed, such as expanding the pre-training criteria, increasing the length of input texts, or including extra knowledge. An overview of the best-performing models for about twenty application areas is then presented, e.g. question answering, translation, story generation, dialog systems, and generating images from text. For each application area, the strengths and weaknesses of current models are discussed, and an outlook on further developments is given. In addition, links are provided to freely available program code. A concluding chapter summarizes the economic opportunities, mitigation of risks, and potential developments of AI.
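The pre-train, fine-tune, and prompt workflow that the description refers to can be illustrated with a short, hedged sketch. The snippet below assumes the Hugging Face transformers library and the public gpt2 checkpoint, neither of which is named in this listing; it only contrasts using a model that is already fine-tuned for one task with steering a pre-trained model by a prompt alone.

    # Sketch only: assumes the Hugging Face "transformers" package and the
    # public "gpt2" checkpoint, which are not specified by the book listing.
    from transformers import pipeline

    # Fine-tuned use: a model that has already been adapted to one task.
    classifier = pipeline("sentiment-analysis")
    print(classifier("Foundation models solve many NLP tasks surprisingly well."))

    # Prompt-based use: a pre-trained model is steered by an instruction alone.
    # (Small models such as GPT-2 follow prompts only loosely; the very large
    # models discussed in the book do so far more reliably.)
    generator = pipeline("text-generation", model="gpt2")
    prompt = "Translate English to French: cheese =>"
    print(generator(prompt, max_new_tokens=5)[0]["generated_text"])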

Foundation Models for Natural Language Processing - Pre-trained Language Models Integrating Media (Paperback, 1st ed. 2023)
Gerhard Paaß, Sven Giesselbach
R1,481 Discovery Miles 14 810 Ships in 10 - 15 working days

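The self-attention and context-sensitive embeddings mentioned in the description above can likewise be summarized with a minimal sketch. The NumPy function below implements scaled dot-product self-attention; the variable names and toy input are illustrative only and are not taken from the book.

    import numpy as np

    def scaled_dot_product_attention(Q, K, V):
        # Q, K, V: (seq_len, d_k) arrays of query, key, and value vectors.
        d_k = Q.shape[-1]
        scores = Q @ K.T / np.sqrt(d_k)                 # token-to-token similarities
        scores -= scores.max(axis=-1, keepdims=True)    # numerical stability
        weights = np.exp(scores)
        weights /= weights.sum(axis=-1, keepdims=True)  # softmax over the keys
        return weights @ V                              # context-sensitive embeddings

    # Toy self-attention: three tokens with four-dimensional embeddings (Q = K = V).
    rng = np.random.default_rng(0)
    X = rng.normal(size=(3, 4))
    print(scaled_dot_product_attention(X, X, X).shape)  # (3, 4)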

You may like...
The Garbage Collection Handbook - The…
Richard Jones, Antony Hosking, … Paperback R1,477 Discovery Miles 14 770
Smart Cards, Tokens, Security and…
Keith Mayes, Konstantinos Markantonakis Paperback R2,534 Discovery Miles 25 340
Beyond Algorithms - Delivering AI for…
James Luke, David Porter, … Paperback R1,481 R1,362 Discovery Miles 13 620
Information and Communication Technology…
Linawati, Made Sudiana Mahendra, … Paperback R3,092 Discovery Miles 30 920
Critical Information Infrastructures…
Grigore Havarneanu, Roberto Setola, … Paperback R2,616 Discovery Miles 26 160
Security, Privacy, and Applied…
Andrey Bogdanov, Somitra Sanadhya Paperback R1,990 Discovery Miles 19 900
Decision and Game Theory for Security…
Jens Grossklags, Jean Walrand Paperback R1,566 Discovery Miles 15 660
Distributed Computing and Internet…
Raja Natarajan, Gautam Barua, … Paperback R2,987 Discovery Miles 29 870
Algorithm Design: A Methodological…
Patrick Bosc, Marc Guyomard, … Paperback R1,617 Discovery Miles 16 170
Data: A Guide to Humans
Phil Harvey, Noelia Jimenez Martinez Hardcover R355 Discovery Miles 3 550