The dream of automatic language translation is now closer to reality thanks to recent advances in the techniques that underpin statistical machine translation. This class-tested textbook from an active researcher in the field provides a clear and careful introduction to the latest methods and explains how to build machine translation systems for any two languages. It introduces the subject's building blocks from linguistics and probability, then covers the major models for machine translation: word-based, phrase-based, and tree-based, as well as machine translation evaluation, language modeling, discriminative training, and advanced methods for integrating linguistic annotation. The book also reports the latest research, presents the major outstanding challenges, and enables novices as well as experienced researchers to make novel contributions to this exciting area. It is ideal for students at undergraduate and graduate level, and for anyone interested in the latest developments in machine translation.
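The word-based models this textbook covers can be illustrated with IBM Model 1, the classic starting point of statistical machine translation, whose word-translation probabilities are estimated by expectation-maximization over a parallel corpus. The following minimal sketch (the toy German-English corpus and function name are illustrative, not taken from the book) shows the core EM loop:

```python
from collections import defaultdict

def ibm_model1(corpus, iterations=10):
    """Estimate word-translation probabilities t(f|e) with EM (IBM Model 1)."""
    # Vocabulary of foreign words, used for uniform initialization.
    f_vocab = {f for fs, _ in corpus for f in fs}
    t = defaultdict(lambda: 1.0 / len(f_vocab))  # t[(f, e)] = P(f | e)
    for _ in range(iterations):
        count = defaultdict(float)   # expected counts c(f, e)
        total = defaultdict(float)   # expected counts c(e)
        for fs, es in corpus:        # E-step: collect fractional counts
            for f in fs:
                z = sum(t[(f, e)] for e in es)   # normalization over the sentence
                for e in es:
                    delta = t[(f, e)] / z
                    count[(f, e)] += delta
                    total[e] += delta
        for f, e in count:           # M-step: re-normalize
            t[(f, e)] = count[(f, e)] / total[e]
    return t

corpus = [
    (["das", "haus"], ["the", "house"]),
    (["das", "buch"], ["the", "book"]),
    (["ein", "buch"], ["a", "book"]),
]
t = ibm_model1(corpus)
# After a few EM iterations, "das" aligns most strongly with "the".
best = max(["the", "house", "book", "a"], key=lambda e: t[("das", e)])
```

Even on this three-sentence corpus, the co-occurrence statistics are enough for EM to pull apart the correct word alignments.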
Unraveling the Voynich Codex reviews the historical, botanical, zoological, and iconographic evidence related to the Voynich Codex, one of the most enigmatic historic texts of all time. The bizarre Voynich Codex has often been referred to as the most mysterious book in the world. Discovered in an Italian Catholic college in 1912 by the Polish book dealer Wilfrid Voynich, it was eventually bequeathed to the Beinecke Rare Book and Manuscript Library of Yale University. It contains symbolic language that has defied translation by eminent cryptologists. The codex is encyclopedic in scope and contains sections known as herbal, pharmaceutical, balneological (nude nymphs bathing in pools), astrological, and cosmological, plus a final section of text that may be prescriptions but could be poetry or incantations. Because the vellum has been carbon-dated to the early 15th century and the manuscript was known to be in the collection of Emperor Rudolf II of the Holy Roman Empire sometime between 1607 and 1622, prevailing opinion had assumed it to be a European manuscript of the 15th century. However, based on the identification of New World plants, animals, and a mineral, as well as cities and volcanoes of Central Mexico, the authors of this book reveal that the codex is clearly a document of colonial New Spain. Furthermore, the illustrator and author are identified as natives of Mesoamerica based on a name and ligated initials in the first botanical illustration. This breakthrough in Voynich studies indicates that the failure to decipher the manuscript has been the result of a basic misinterpretation of its origin in time and place. Tentative assignment of the Voynichese symbols also provides a key to decipherment based on Mesoamerican languages. A document from this time, free from filtering or censorship by either Spanish or Inquisitorial authorities, has major importance for our understanding of life in 16th-century Mexico.
Publisher's Note: For the eBook editions, Voynichese symbols are only rendered properly in the PDF format.
This book presents a detailed description of Spoken Language Translator (SLT), one of the first major projects in the area of automatic speech translation. The SLT system can translate between English, French, and Swedish in the domain of air travel planning, using a vocabulary of about 1500 words, and with an accuracy of about 75 per cent. The greater part of the book describes the language processing components, which are largely built on top of the SRI Core Language Engine, using a combination of general grammars and techniques that allow them to be rapidly customized to specific domains. Speech recognition is based on Hidden Markov Model technology, and uses versions of the SRI DECIPHER system. This account of the Spoken Language Translator should be an essential resource both for those who wish to know what is achievable in spoken-language translation today, and for those who wish to understand how to achieve it.
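At the heart of the Hidden Markov Model recognition technology mentioned above is the Viterbi algorithm, which recovers the most likely hidden state sequence given a sequence of observations. As a toy illustration (this is not the SRI DECIPHER system; the classic weather/activity HMM below is a standard textbook example), a minimal Viterbi decoder can be sketched as:

```python
def viterbi(obs, states, start_p, trans_p, emit_p):
    """Return the most likely hidden state sequence for an observation sequence."""
    # V[t][s] = (best probability of any path ending in state s at time t, predecessor)
    V = [{s: (start_p[s] * emit_p[s][obs[0]], None) for s in states}]
    for t in range(1, len(obs)):
        V.append({})
        for s in states:
            prob, prev = max(
                (V[t - 1][p][0] * trans_p[p][s] * emit_p[s][obs[t]], p)
                for p in states
            )
            V[t][s] = (prob, prev)
    # Backtrack from the best final state.
    last = max(states, key=lambda s: V[-1][s][0])
    path = [last]
    for t in range(len(obs) - 1, 0, -1):
        path.append(V[t][path[-1]][1])
    return list(reversed(path))

states = ("Rainy", "Sunny")
start_p = {"Rainy": 0.6, "Sunny": 0.4}
trans_p = {"Rainy": {"Rainy": 0.7, "Sunny": 0.3},
           "Sunny": {"Rainy": 0.4, "Sunny": 0.6}}
emit_p = {"Rainy": {"walk": 0.1, "shop": 0.4, "clean": 0.5},
          "Sunny": {"walk": 0.6, "shop": 0.3, "clean": 0.1}}
decoded = viterbi(["walk", "shop", "clean"], states, start_p, trans_p, emit_p)
```

In real speech recognition the hidden states are sub-phone units and the observations are acoustic feature vectors, but the dynamic-programming recursion is the same.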
This book constitutes the refereed proceedings of the 4th International Symposium on Information Management in a Changing World, IMCW 2013, held in Limerick, Ireland, in September 2013. The 12 revised full papers presented together with three keynotes were carefully reviewed and selected from 31 submissions. The papers deal with the following topics: Cloud Architectures and Cultural Memory; Cloud Computing Beyond the Obvious: An Approach for Innovation; Cloud Computing: A New Generation of Technology Enables Deeper Collaboration; Evaluation of Conditions Regarding Cloud Computing Applications in Turkey, EU and the USA; Trustworthy Digital Images and the Cloud: Early Findings of the Records in the Cloud Project; Cloud Computing and Copyright: New Challenges in Legal Protection? Clouding Big Data: Information Privacy Considerations; The Influence of Recent Court Cases Relating to Copyright Changes in Cloud Computing Services in Japan; Government Participation in Digital Copyright Licensing in the Cloud Computing Environment; Evaluation of Information Security Approaches: A Defense Industry Organization Case; Information-Seeking Behavior of Undergraduate, Graduate, and Doctoral Students: A Survey of Istanbul University, Turkey; Students Readiness for E-Learning: An Assessment on Hacettepe University Department of Information Management; Evaluation of Scientific Disciplines in Turkey: A Citation Analysis Study.
This major new textbook provides a clearly-written, concise and accessible introduction to speech and language processing. Assuming knowledge of only the very basics of linguistics and written specifically for students with no technical background, it is the perfect starting point for anyone beginning to study the discipline. Students are introduced to topics such as digital signal processing, speech analysis and synthesis, finite-state machines, automatic speech recognition, parsing and probabilistic grammars, and are shown from a very elementary level how to work with two programming languages, C and Prolog. The accompanying CD-ROM contains all the software described in the book, along with a C compiler, Prolog interpreter and sound file editor, thus providing a self-contained, one-stop resource for the learner. Setting a firm grounding in speech and language processing and an invaluable foundation for further study, Introducing Speech and Language Processing is set to become the leading introduction to the field.
Learn how to solve practical NLP problems with the Flair Python framework, train sequence labeling models, work with text classifiers and word embeddings, and much more through hands-on practical exercises.
Key Features: backed by the community and written by an NLP expert; get an understanding of basic NLP problems and terminology; solve real-world NLP problems with Flair with the help of practical hands-on exercises.
Book Description: Flair is an easy-to-understand natural language processing (NLP) framework designed to facilitate training and distribution of state-of-the-art NLP models for named entity recognition, part-of-speech tagging, and text classification. Flair is also a text embedding library for combining different types of embeddings, such as document embeddings, Transformer embeddings, and the proposed Flair embeddings. Natural Language Processing with Flair takes a hands-on approach to explaining and solving real-world NLP problems. You'll begin by installing Flair and learning about the basic NLP concepts and terminology. You will explore Flair's extensive features, such as sequence tagging, text classification, and word embeddings, through practical exercises. As you advance, you will train your own sequence labeling and text classification models and learn how to use hyperparameter tuning in order to choose the right training parameters. You will learn about the idea behind one-shot and few-shot learning through a novel text classification technique, TARS. Finally, you will solve several real-world NLP problems through hands-on exercises, as well as learn how to deploy Flair models to production. By the end of this Flair book, you'll have developed a thorough understanding of typical NLP problems and you'll be able to solve them with Flair.
What you will learn: gain an understanding of core NLP terminology and concepts; get to grips with the capabilities of the Flair NLP framework; find out how to use Flair's state-of-the-art pre-built models; build custom sequence labeling models, embeddings, and classifiers; learn about a novel text classification technique called TARS; discover how to build applications with Flair and how to deploy them to production.
Who this book is for: This Flair NLP book is for anyone who wants to learn about NLP through one of the most beginner-friendly, yet powerful Python NLP libraries out there. Software engineering students, developers, data scientists, and anyone who is transitioning into NLP and is interested in learning about practical approaches to solving problems with Flair will find this book useful. The book, however, is not recommended for readers aiming to get an in-depth theoretical understanding of the mathematics behind NLP. Beginner-level knowledge of Python programming is required to get the most out of this book.
Learning with electronic documents is becoming ever more important. The decisive advantage of the computer as a medium is its ability to generate dynamic documents. This dynamism can lie in the individual contents themselves (animations, simulations) or in how the documents are assembled (adaptive tailoring to individual users). To this end, the learning material is stored as modules rather than as one large document. To exploit these advantages, the modules must be described. The book presents a description schema with which a well-readable, web-based document, adapted to the needs of the individual reader, can be generated from a knowledge base of unconnected modules.
Social media platforms are one of the main generators of textual data where people around the world share their daily life experiences and information with online society. The social, personal, and professional lives of people on these social networking sites generate not only a huge amount of data but also open doors for researchers and academicians with numerous research opportunities. This ample amount of data needs advanced machine learning, deep learning, and intelligent tools and techniques to receive, process, and interpret the information to resolve real-life challenges and improve the online social lives of people. Advanced Applications of NLP and Deep Learning in Social Media Data bridges the gap between natural language processing (NLP), advanced machine learning, deep learning, and online social media. It hopes to build a better and safer social media space by making human language available on different social media platforms intelligible for machines with the blessings of AI. Covering topics such as machine learning-based prediction, emotion recognition, and high-dimensional text clustering, this premier reference source is an essential resource for OSN service providers, psychiatrists, psychologists, clinicians, sociologists, students and educators of higher education, librarians, researchers, and academicians.
This white paper is part of a series that promotes knowledge about language technology and its potential. It addresses educators, journalists, politicians, language communities and others. The availability and use of language technology in Europe varies between languages. Consequently, the actions that are required to further support research and development of language technologies also differ for each language. The required actions depend on many factors, such as the complexity of a given language and the size of its community. META-NET, a Network of Excellence funded by the European Commission, has conducted an analysis of current language resources and technologies. This analysis focused on the 23 official European languages as well as other important national and regional languages in Europe. The results of this analysis suggest that there are many significant research gaps for each language. A more detailed expert analysis and assessment of the current situation will help maximise the impact of additional research and minimize any risks. META-NET consists of 54 research centres from 33 countries that are working with stakeholders from commercial businesses, government agencies, industry, research organisations, software companies, technology providers and European universities. Together, they are creating a common technology vision while developing a strategic research agenda that shows how language technology applications can address any research gaps by 2020.
The central task of a forward-looking computational linguistics is the development of cognitive machines with which people can converse freely in their own language. In the long term, this goal encompasses a functionally oriented theory, an objective verification method, and a wealth of practical applications.
The current knowledge of the world as mirrored in a world exposition: how does one present it, and how does one make it accessible to interested visitors, in the exhibition itself, in publications, on radio, and over the Internet? What can be seen and experienced at a world exposition on the threshold of the third millennium exceeds, in abundance and variety, any scope an individual can grasp. Schmitz-Esser shows in his book how visitors can experience the world exposition in a choice of four languages and take its quintessence home with them. This is made possible by the concept of virtual "knowledge in a capsule," prepared so that it can be deployed in all common media forms and for the most diverse modes of appropriation. The solution is not only a matter of informatics and information technology but equally a challenge for information science and computational linguistics. The book sets out the goal, the approach, the components, and the prerequisites for this.
Anyone developing professional multimedia applications must take into account diverse organizational, design, technical, and legal aspects. This guide provides the necessary know-how step by step, from conception to realization. Practical tips, checklists, tables, costing aids, and production schedules support efficient project management.
This year the DAGM Symposium takes place in Stuttgart for the first time. The twentieth anniversary prompted the DAGM board to depart from its previous practice of entrusting the organization and running of the conference only to university hosts, and instead to hand the conference chairmanship to three scientists, one of whom works at a university and two in industry. This close link between science and industry is deepened further by the fact that the successful industrial trade fair VISION'98 is held at the same time as the symposium, under the same roof on the Stuttgart exhibition grounds (Killesberg), with each event open to the other's audience. In recent years, pattern recognition methods have clearly overcome their initial shortcomings and limited robustness, and have thereby found their way into industrial applications along many paths. Image processing is now regarded as one of the key technologies, and the industries concerned expect double-digit growth rates well into the next millennium.
This book constitutes the refereed proceedings of the International Conference on Information Systems for Indian Languages, ICISIL 2011, held in Patiala, India, in March 2011. The 63 revised papers presented were carefully reviewed and selected from 126 paper submissions (full papers as well as poster papers) and 25 demo submissions. The papers address all current aspects on localization, e-governance, Web content accessibility, search engine and information retrieval systems, online and offline OCR, handwriting recognition, machine translation and transliteration, and text-to-speech and speech recognition - all with a particular focus on Indic scripts and languages.
The proceedings of the 28th annual conference of the Gesellschaft für Informatik survey those trends in image and speech processing that play a key role in the further development of computer science. The contributions present results of cutting-edge research, describe industrial applications, and examine the societal relevance of the topics considered.
The book offers a compact introduction to the foundations and techniques of compiler construction. Compilers transform texts of a source language, whose structure is described by a formal grammar, into a target language; the translation of imperative programming languages into machine language is only one special case. This textbook emphasizes the broad applicability of compiler-construction techniques: in particular, syntax-analysis methods can be used to identify structures in texts, files, or byte streams. A further emphasis lies in connecting theory with practice and in practicing the use of tools such as Lex and Yacc. Among other things, the complete implementation of a translator from a simple document description language to LaTeX is demonstrated. The implementation of imperative and functional languages is also given due attention. The didactically appealing book contains exercises with solutions and is also suitable for self-study.
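The syntax-analysis techniques such a book covers can be previewed with a tiny hand-written example. The following sketch (my own illustration, not code from the book) tokenizes and parses integer arithmetic with a recursive-descent parser, one function per grammar rule, evaluating as it parses:

```python
import re

def tokenize(src):
    """Split input into number and operator tokens (the lexer's job, cf. Lex)."""
    return re.findall(r"\d+|[-+*/()]", src)

class Parser:
    # Grammar: expr := term (('+'|'-') term)*
    #          term := factor (('*'|'/') factor)*
    #          factor := NUMBER | '(' expr ')'
    def __init__(self, tokens):
        self.tokens = tokens
        self.pos = 0

    def peek(self):
        return self.tokens[self.pos] if self.pos < len(self.tokens) else None

    def eat(self):
        tok = self.tokens[self.pos]
        self.pos += 1
        return tok

    def expr(self):
        val = self.term()
        while self.peek() in ("+", "-"):
            val = val + self.term() if self.eat() == "+" else val - self.term()
        return val

    def term(self):
        val = self.factor()
        while self.peek() in ("*", "/"):
            # Integer division, for simplicity of the toy language.
            val = val * self.factor() if self.eat() == "*" else val // self.factor()
        return val

    def factor(self):
        if self.peek() == "(":
            self.eat()              # '('
            val = self.expr()
            self.eat()              # ')'
            return val
        return int(self.eat())

result = Parser(tokenize("1+2*3")).expr()
```

Splitting `expr` and `term` into separate rules is what encodes operator precedence; a parser generator such as Yacc derives the equivalent tables from precedence declarations instead.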
The application programs of the Office suite are an important working tool in business practice today, and the use of the Internet as a source of information and a means of communication has become indispensable in daily work. This textbook offers students and practitioners a compact introduction to the Office programs, the fundamentals of computing, and the Internet: hardware, software, and networks; Word, Access, Excel, and PowerPoint; and the possibilities of Internet use. The book is distinguished by its clear presentation and its restriction to the essentials. It goes beyond mere operation of the programs by providing further information on using them effectively. A case study running throughout makes it particularly vivid.
'A must-read' New Scientist 'Fascinating' Greta Thunberg 'Enthralling' George Monbiot 'Brilliant' Philip Hoare A thrilling investigation into the pioneering world of animal communication, where big data and artificial intelligence are changing our relationship with animals forever In 2015, wildlife filmmaker Tom Mustill was whale watching when a humpback breached onto his kayak and nearly killed him. After a video clip of the event went viral, Tom found himself inundated with theories about what happened. He became obsessed with trying to find out what the whale had been thinking and sometimes wished he could just ask it. In the process of making a film about his experience, he discovered that might not be such a crazy idea. This is a story about the pioneers in a new age of discovery, whose cutting-edge developments in natural science and technology are taking us to the brink of decoding animal communication - and whales, with their giant mammalian brains and sophisticated vocalisations, offer one of the most realistic opportunities for us to do so. Using 'underwater ears,' robotic fish, big data and machine intelligence, leading scientists and tech-entrepreneurs across the world are working to turn the fantasy of Dr Dolittle into a reality, upending much of what we know about these mysterious creatures. But what would it mean if we were to make contact? And with climate change threatening ever more species with extinction, would doing so alter our approach to the natural world? Enormously original and hugely entertaining, How to Speak Whale is an unforgettable look at how close we truly are to communicating with another species - and how doing so might change our world beyond recognition.
Starting from the theory of fuzzy sets and fuzzy logic, new methods for the analysis of imprecise data are developed. To this end, the theory of Formal Concept Analysis is extended in a series of methods and procedures, thereby meeting practitioners' demand for ways to capture imprecise data concept-analytically. The necessary theoretical foundations are provided in an introductory manner, and the mathematical presentation is illustrated with easily followed practical examples. The book thus addresses computer scientists, mathematicians from fields such as data analysis and fuzzy logic, and practitioners from science and industry alike.