Books > Computing & IT > Applications of computing > Artificial intelligence > Natural language & machine translation
This book constitutes the refereed proceedings of the 25th Brazilian Symposium on Formal Methods, SBMF 2022, which was held virtually in December 2022. The 8 regular papers presented in this book were carefully reviewed and selected from 15 submissions. The symposium focuses on the development, dissemination, and use of formal methods for the construction of high-quality computational systems, aiming to promote opportunities for researchers and practitioners with an interest in formal methods to discuss the recent advances in this area.
This book constitutes revised selected papers from the thoroughly refereed proceedings of the 10th International Conference on Analysis of Images, Social Networks and Texts, AIST 2021, held in Tbilisi, Georgia, during December 16-18, 2021. The 20 full papers and 5 short papers included in this book were carefully reviewed and selected from 118 submissions. They were organized in topical sections as follows: Invited papers; natural language processing; computer vision; data analysis and machine learning; social network analysis; and theoretical machine learning and optimization.
This open access book provides an in-depth description of the EU project European Language Grid (ELG). Its motivation lies in the fact that Europe is a multilingual society with 24 official European Union Member State languages and dozens of additional languages including regional and minority languages. The only meaningful way to enable multilingualism and to benefit from this rich linguistic heritage is through Language Technologies (LT) including Natural Language Processing (NLP), Natural Language Understanding (NLU), Speech Technologies and language-centric Artificial Intelligence (AI) applications. The European Language Grid provides a single umbrella platform for the European LT community, including research and industry, effectively functioning as a virtual home, marketplace, showroom, and deployment centre for all services, tools, resources, products and organisations active in the field. Today the ELG cloud platform already offers access to more than 13,000 language processing tools and language resources. It enables all stakeholders to deposit, upload and deploy their technologies and datasets. The platform also supports the long-term objective of establishing digital language equality in Europe by 2030 - to create a situation in which all European languages enjoy equal technological support. This is the very first book dedicated to Language Technology and NLP platforms. Cloud technology has only recently matured enough to make the development of a platform like ELG feasible on a larger scale. The book comprehensively describes the results of the ELG project. Following an introduction, the content is divided into four main parts: (I) ELG Cloud Platform; (II) ELG Inventory of Technologies and Resources; (III) ELG Community and Initiative; and (IV) ELG Open Calls and Pilot Projects.
The 39-volume set, comprising LNCS volumes 13661 through 13699, constitutes the refereed proceedings of the 17th European Conference on Computer Vision, ECCV 2022, held in Tel Aviv, Israel, during October 23-27, 2022. The 1645 papers presented in these proceedings were carefully reviewed and selected from a total of 5804 submissions. The papers deal with topics such as computer vision; machine learning; deep neural networks; reinforcement learning; object recognition; image classification; image processing; object detection; semantic segmentation; human pose estimation; 3D reconstruction; stereo vision; computational photography; neural networks; image coding; image reconstruction; and motion estimation.
This book constitutes the thoroughly revised selected papers from the 18th International Symposium on Formal Aspects of Component Software, FACS 2022, which was held online in November 2022. The 12 full papers and 1 short paper were carefully reviewed and selected from 25 submissions. FACS 2022 focuses on the areas of component software and formal methods in order to promote a deeper understanding of how formal methods can or should be used to make component-based software development succeed.
This book constitutes the refereed proceedings of the 18th International Conference on Frontiers in Handwriting Recognition, ICFHR 2022, which took place in Hyderabad, India, during December 4-7, 2022. The 36 full papers and 1 short paper presented in this volume were carefully reviewed and selected from 61 submissions. The contributions were organized in topical sections as follows: Historical Document Processing; Signature Verification and Writer Identification; Symbol and Graphics Recognition; Handwriting Recognition and Understanding; Handwriting Datasets and Synthetic Handwriting Generation; Document Analysis and Processing.
This book constitutes the proceedings of the 21st China National Conference on Computational Linguistics, CCL 2022, held in Nanchang, China, in October 2022. The 22 full English-language papers in this volume were carefully reviewed and selected from 293 Chinese and English submissions. The conference papers are categorized into the following topical sub-headings: Linguistics and Cognitive Science; Fundamental Theory and Methods of Computational Linguistics; Information Retrieval, Dialogue and Question Answering; Text Generation and Summarization; Knowledge Graph and Information Extraction; Machine Translation and Multilingual Information Processing; Minority Language Information Processing; Language Resource and Evaluation; NLP Applications.
Get a hands-on introduction to the Transformer architecture using the Hugging Face library. This book explains how Transformers are changing the AI domain, particularly in the area of natural language processing. It covers the Transformer architecture and its relevance in natural language processing (NLP). It starts with an introduction to NLP and a progression of language models from n-grams to a Transformer-based architecture. Next, it offers some basic Transformer examples using the Google Colab engine. Then, it introduces the Hugging Face ecosystem and the different libraries and models it provides. Moving forward, it explains language models such as Google BERT with some examples before providing a deep dive into the Hugging Face API, using different language models to address tasks such as sentence classification, sentiment analysis, summarization, and text generation. After completing Introduction to Transformers for NLP, you will understand Transformer concepts and be able to solve problems using the Hugging Face library. What You Will Learn: understand language models and their importance in NLP and NLU (Natural Language Understanding); master the Transformer architecture through practical examples; use the Hugging Face library in Transformer-based language models; create a simple code generator in Python based on the Transformer architecture. Who This Book Is For: data scientists and software developers interested in developing their skills in NLP and NLU (Natural Language Understanding).
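The blurb above mentions using the Hugging Face API for tasks such as sentiment analysis. As a minimal sketch (not taken from the book; the example text and the reliance on the library's default pretrained model are assumptions), such a call might look like the following:

    # Minimal sketch of the Hugging Face pipeline API applied to
    # sentiment analysis, one of the tasks the blurb mentions.
    from transformers import pipeline

    # Loads a default pretrained sentiment model chosen by the library;
    # no specific model name is assumed here.
    classifier = pipeline("sentiment-analysis")

    result = classifier("Transformers make many NLP tasks surprisingly accessible.")
    print(result)  # e.g. [{'label': 'POSITIVE', 'score': 0.99...}]

The same pipeline interface covers other tasks the blurb lists, such as summarization and text generation, by passing a different task name.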
This book constitutes the refereed proceedings of the 4th International Workshop on Explainable and Transparent AI and Multi-Agent Systems, EXTRAAMAS 2022, held virtually during May 9-10, 2022. The 14 full papers included in this book were carefully reviewed and selected from 25 submissions. They were organized in topical sections as follows: explainable machine learning; explainable neuro-symbolic AI; explainable agents; XAI measures and metrics; and AI & law.
This work presents a discourse-aware Text Simplification approach that splits and rephrases complex English sentences within the semantic context in which they occur. Based on a linguistically grounded transformation stage, complex sentences are transformed into shorter utterances with a simple canonical structure that can be easily analyzed by downstream applications. To avoid breaking down the input into a disjointed sequence of statements that is difficult to interpret, the author incorporates the semantic context between the split propositions in the form of hierarchical structures and semantic relationships, thus generating a novel representation of complex assertions that puts a semantic layer on top of the simplified sentences. In a second step, she leverages the semantic hierarchy of minimal propositions to improve the performance of Open IE frameworks. She shows that such systems benefit in two dimensions. First, the canonical structure of the simplified sentences facilitates the extraction of relational tuples, leading to an improved precision and recall of the extracted relations. Second, the semantic hierarchy can be leveraged to enrich the output of existing Open IE approaches with additional meta-information, resulting in a novel lightweight semantic representation for complex text data in the form of normalized and context-preserving relational tuples.
This book constitutes the refereed proceedings of the 20th International Conference on Formal Modeling and Analysis of Timed Systems, FORMATS 2022, held in Warsaw, Poland, in September 2022. The 12 full papers and 2 short papers presented in this volume were carefully reviewed and selected from 30 submissions, and are accompanied by 3 full-length papers associated with invited and anniversary talks. The papers focus on topics such as the modelling, design and analysis of timed computational systems. The conference addresses real-time issues in hardware design, performance analysis, real-time software, scheduling, and the semantics and verification of real-time, hybrid and probabilistic systems.
This book constitutes the refereed proceedings of the 13th International Conference of the CLEF Association, CLEF 2022, held in Bologna, Italy, in September 2022. The conference has a clear focus on experimental information retrieval, with special attention to the challenges of multimodality, multilinguality, and interactive search, ranging from unstructured to semi-structured and structured data. The 7 full papers presented together with 3 short papers in this volume were carefully reviewed and selected from 14 submissions. This year, the contributions addressed the following challenges: authorship attribution, fake news detection and news tracking, noise detection in automatically transferred relevance judgments, the impact of online education on children's conversational search behavior, analysis of multi-modal social media content, knowledge graphs for sensitivity identification, a fusion of deep learning and logic rules for sentiment analysis, medical concept normalization, and domain-specific information extraction. In addition, the volume presents 7 "best of the labs" papers, which were reviewed as full paper submissions with the same review criteria. 14 lab overview papers were accepted; they represent scientific challenges based on new datasets and real-world problems in multimodal and multilingual information access.
This book constitutes the proceedings of the 21st EPIA Conference on Artificial Intelligence, EPIA 2022, which took place in Lisbon, Portugal, in August/September 2022. The 64 papers presented in this volume were carefully reviewed and selected from 85 submissions. They were organized in topical sections as follows: AI4IS - Artificial Intelligence for Industry and Societies; AIL - Artificial Intelligence and Law; AIM - Artificial Intelligence in Medicine; AIPES - Artificial Intelligence in Power and Energy Systems; AITS - Artificial Intelligence in Transportation Systems; AmIA - Ambient Intelligence and Affective Environments; GAI - General AI; IROBOT - Intelligent Robotics; KDBI - Knowledge Discovery and Business Intelligence; KRR - Knowledge Representation and Reasoning; MASTA - Multi-Agent Systems: Theory and Applications; TeMA - Text Mining and Applications.
This book constitutes the proceedings of the 26th International Conference on Theory and Practice of Digital Libraries, TPDL 2022, which took place in Padua, Italy, in September 2022. The 18 full papers, 27 short papers and 15 accelerating innovation papers included in these proceedings were carefully reviewed and selected from 107 submissions. They focus on digital libraries and associated technical, practical, and social issues.
When viewed through a political lens, the act of defining terms in natural language arguably transforms knowledge into values. This unique volume explores how corporate, military, academic, and professional values shaped efforts to define computer terminology and establish an information engineering profession as a precursor to what would become computer science. As the Cold War heated up, U.S. federal agencies increasingly funded university researchers and labs to develop technologies, like the computer, that would ensure that the U.S. maintained economic prosperity and military dominance over the Soviet Union. At the same time, private corporations saw opportunities for partnering with university labs and military agencies to generate profits as they strengthened their business positions in civilian sectors. They needed a common vocabulary and principles of streamlined communication to underpin the technology development that would ensure national prosperity and military dominance. The book investigates how language standardization contributed to the professionalization of computer science as separate from mathematics, electrical engineering, and physics; examines traditions of language standardization in earlier eras of rapid technology development around electricity and radio; highlights the importance of the analogy of "the computer is like a human" to early explanations of computer design and logic; traces the design and development of electronic computers within political and economic contexts; and foregrounds the importance of human relationships in decisions about computer design. This in-depth humanistic study argues for the importance of natural language in shaping what people come to think of as possible and impossible relationships between computers and humans. The work is a key reference in the history of technology and serves as a source textbook on the human-level history of computing. In addition, it addresses those with interests in sociolinguistic questions around technology studies, as well as technology development at the nexus of politics, business, and human relations.
Every day we interact with machine learning systems offering individualized predictions for our entertainment, social connections, purchases, or health. These involve several modalities of data, from sequences of clicks to text, images, and social interactions. This book introduces common principles and methods that underpin the design of personalized predictive models for a variety of settings and modalities. The book begins by revising 'traditional' machine learning models, focusing on adapting them to settings involving user data, then presents techniques based on advanced principles such as matrix factorization, deep learning, and generative modeling, and concludes with a detailed study of the consequences and risks of deploying personalized predictive systems. A series of case studies in domains ranging from e-commerce to health plus hands-on projects and code examples will give readers understanding and experience with large-scale real-world datasets and the ability to design models and systems for a wide range of applications.
This book constitutes the thoroughly refereed post-workshop proceedings of the 22nd Chinese Lexical Semantics Workshop, CLSW 2021, held in Nanjing, China in May 2021. The 68 full papers and 4 short papers included in this volume were carefully reviewed and selected from 261 submissions. They are organized in the following topical sections: Lexical Semantics and General Linguistics; Natural Language Processing and Language Computing; Cognitive Science and Experimental Studies; Lexical Resources and Corpus Linguistics.
The two-volume proceedings, LNCS 13249 and 13250, constitutes the thoroughly refereed post-workshop proceedings of the 22nd Chinese Lexical Semantics Workshop, CLSW 2021, held in Nanjing, China in May 2021. The 68 full papers and 4 short papers were carefully reviewed and selected from 261 submissions. They are organized in the following topical sections: Lexical Semantics and General Linguistics; Natural Language Processing and Language Computing; Cognitive Science and Experimental Studies; Lexical Resources and Corpus Linguistics.
This book constitutes the refereed proceedings of the 9th Language and Technology Conference: Challenges for Computer Science and Linguistics, LTC 2019, held in Poznan, Poland, in May 2019. The 24 revised papers presented in this volume were carefully reviewed and selected from 67 submissions. The papers are categorized into the following topical sub-headings: Speech Processing; Language Resources and Tools; Computational Semantics; Emotions, Decisions and Opinions; Digital Humanities; Evaluation; and Legal Aspects.
This book constitutes the proceedings of the 26th International Conference on Developments in Language Theory, DLT 2022, which was held in Tampa, FL, USA, in May 2022. The conference took place in a hybrid format with both in-person and online participation. The 21 full papers included in these proceedings were carefully reviewed and selected from 32 submissions. The DLT conference series provides a forum for presenting current developments in formal languages and automata.
This book provides a new multi-method, process-oriented approach towards speech quality assessment, which allows readers to examine the influence of speech transmission quality on a variety of perceptual and cognitive processes in human listeners. Fundamental concepts and methodologies surrounding the topic of process-oriented quality assessment are introduced and discussed. The book further describes a functional process model of human quality perception, which theoretically integrates results obtained in three experimental studies. This book's conceptual ideas, empirical findings, and theoretical interpretations should be of particular interest to researchers working in the fields of Quality and Usability Engineering, Audio Engineering, Psychoacoustics, Audiology, and Psychophysiology.
This book constitutes the proceedings of the 26th International Conference on Implementation and Application of Automata, CIAA 2022, held in Rouen, France, in June/July 2022. The 16 regular papers presented together with 3 invited lectures in this book were carefully reviewed and selected from 26 submissions. The topics of the papers cover various fields in the application, implementation, and theory of automata and related structures.
This book covers theoretical work, applications, approaches, and techniques for computational models of information and its presentation by language (artificial, human, or natural in other ways). Computational and technological developments that incorporate natural language are proliferating, yet adequate coverage encounters difficult problems related to ambiguity and to dependency on context and agents (humans or computational systems). The goal is to promote computational systems of intelligent natural language processing and related models of computation, language, thought, mental states, reasoning, and other cognitive processes.
Automating Linguistics offers an in-depth study of the history of the mathematisation and automation of the sciences of language. In the wake of the first mathematisation of the 1930s, two waves followed: machine translation in the 1950s and the development of computational linguistics and natural language processing in the 1960s. These waves proved pivotal for the work on large computerised corpora in the 1990s and for the unprecedented technological development of computers and software. Early machine translation was devised as a war technology originating in the sciences of war, amidst an amalgam of mathematics, physics, logic, neurosciences, acoustics, and emerging sciences such as cybernetics and information theory. Machine translation was intended to provide mass translations for strategic purposes during the Cold War. Linguistics, in turn, did not belong to the sciences of war and played a minor role in the pioneering projects of machine translation. Comparing the two trends, the present book reveals how the sciences of language gradually integrated the technologies of computing and software, resulting in the second-wave mathematisation of the study of language, which may be called mathematisation-automation. The integration took on various shapes contingent upon cultural and linguistic traditions (USA, ex-USSR, Great Britain and France). By contrast, working with large corpora in the 1990s, though enabled by the unprecedented development of computing and software, was primarily a continuation of traditional approaches in the sciences of language, such as the study of spoken and written texts, lexicography, and statistical studies of vocabulary.
This book gathers high-quality papers presented at the Academia-Industry Consortium for Data Science (AICDS 2020), held in Wenzhou, China, during 19-20 December 2020. The book presents the views of academicians as well as how companies are approaching these challenges organizationally. The topics covered in the book are data science and analytics, natural language processing, predictive analytics, artificial intelligence, machine learning, deep learning, big data computing, cognitive computing, data visualization, image processing, and optimization techniques.