This book constitutes the refereed proceedings of the 16th International Conference on Text, Speech and Dialogue, TSD 2013, held in Pilsen, Czech Republic, in September 2013. The 65 papers presented together with 5 invited talks were carefully reviewed and selected from 148 submissions. The main topics of this year's conference were corpora, texts and transcription; speech analysis, recognition and synthesis; and their intertwining within NL dialogue systems. The topics also included speech recognition, corpora and language resources, speech and spoken language generation, tagging, classification and parsing of text and speech, semantic processing of text and speech, integrating applications of text and speech processing, as well as automatic dialogue systems and multimodal techniques and modelling.
This book focuses on speech processing in the presence of low-bit-rate coding and varying background environments. The methods presented in the book exploit speech events which are robust in noisy environments. Accurate estimation of these crucial events is useful for carrying out various speech tasks such as speech recognition, speaker recognition, and speech rate modification in mobile environments. The authors provide insights into designing and developing robust methods to process speech in mobile environments, covering temporal and spectral enhancement methods that minimize the effect of noise, and examining methods and models for speech and speaker recognition applications in mobile environments.
"Emotion Recognition Using Speech Features" provides coverage of emotion-specific features present in speech. The author also discusses suitable models for capturing emotion-specific information for distinguishing different emotions. The content of this book is important for designing and developing natural and sophisticated speech systems. In this Brief, Drs. Rao and Koolagudi lead a discussion of how emotion-specific information is embedded in speech and how to acquire emotion-specific knowledge using appropriate statistical models. Additionally, the authors provide information about exploiting multiple evidences derived from various features and models. The acquired emotion-specific knowledge is useful for synthesizing emotions. Features include discussion of: * Global and local prosodic features at syllable, word and phrase levels, helpful for capturing emotion-discriminative information; * Exploiting complementary evidences obtained from excitation sources, vocal tract systems and prosodic features in order to enhance emotion recognition performance; * Proposed multi-stage and hybrid models for improving emotion recognition performance. This Brief is for researchers working in areas related to speech-based products, such as mobile phone manufacturing companies, automobile companies, and entertainment products, as well as researchers involved in basic and applied speech processing research.
This volume constitutes the refereed proceedings of the Spanish Conference, IberSPEECH 2012: Joint VII "Jornadas en Tecnologia del Habla" and III Iberian SLTech Workshop, held in Madrid, Spain, during November 21-23, 2012. The 29 revised papers were carefully reviewed and selected from 80 submissions. The papers are organized in topical sections on speaker characterization and recognition; audio and speech segmentation; pathology detection and speech characterization; dialogue and multimodal systems; robustness in automatic speech recognition; and applications of speech and language technologies.
Current speech recognition systems are based on speaker-independent speech models and suffer from inter-speaker variations in speech signal characteristics. This work develops an integrated approach to speech and speaker recognition in order to create opportunities for self-learning by the system. It introduces reliable speaker identification, which enables the speech recognizer to create robust speaker-dependent models. In addition, this book presents a new approach to the reverse problem: how to improve speech recognition when speakers can be recognized. Speaker identification enables speaker adaptation across different speakers, resulting in optimal long-term adaptation.
"Automatic Speech Signal Analysis for Clinical Diagnosis and Assessment of Speech Disorders" provides a survey of methods designed to aid clinicians in the diagnosis and monitoring of speech disorders such as dysarthria and dyspraxia, with an emphasis on the signal processing techniques, the statistical validity of the results presented in the literature, and the appropriateness of methods that do not require specialized equipment, rigorously controlled recording procedures, or highly skilled personnel to interpret results.
Modern communication devices, such as mobile phones, teleconferencing systems, and VoIP clients, are often used in noisy and reverberant environments. Signals picked up by the microphones of telecommunication devices therefore contain not only the desired near-end speech signal, but also interferences such as background noise, far-end echoes produced by the loudspeaker, and reverberation of the desired source. These interferences degrade the fidelity and intelligibility of the near-end speech in human-to-human telecommunication and decrease the performance of human-to-machine interfaces (i.e., automatic speech recognition systems). This book deals with the fundamental challenges of speech processing in modern communication, including speech enhancement, interference suppression, acoustic echo cancellation, relative transfer function identification, source localization, dereverberation, and beamforming in reverberant environments. Enhancement of speech signals is necessary whenever the source signal is corrupted by noise; in highly non-stationary noise environments, noise transients and interferences can be extremely annoying. Acoustic echo cancellation eliminates the acoustic coupling between the loudspeaker and the microphone of a communication device. Identifying the relative transfer function between sensors in response to a desired speech signal makes it possible to derive a reference noise signal for suppressing directional or coherent noise sources. Source localization, dereverberation, and beamforming in reverberant environments further increase the intelligibility of the near-end speech signal.
This book constitutes the proceedings of the First Indo-Japanese conference on Perception and Machine Intelligence, PerMIn 2012, held in Kolkata, India, in January 2012. The 41 papers, presented together with 1 keynote paper and 3 plenary papers, were carefully reviewed and selected for inclusion in the book. The papers are organized in topical sections named perception; human-computer interaction; e-nose and e-tongue; machine intelligence and application; image and video processing; and speech and signal processing.
The author covers the fundamentals of both information and communication security, including current developments in some of the most critical areas of automatic speech recognition. Topics include speech watermarking, speech encryption, steganography, multilevel security systems comprising speaker identification, real transmission of watermarked or encrypted speech signals, and more. The book is especially useful for information security specialists, government security analysts, speech development professionals, and individuals involved in the study and research of speech recognition at advanced levels.
Automated Speaking Assessment: Using Language Technologies to Score Spontaneous Speech provides a thorough overview of state-of-the-art automated speech scoring technology as it is currently used at Educational Testing Service (ETS). Its main focus is the automated scoring of spontaneous speech elicited by TOEFL iBT Speaking section items, but other applications of speech scoring, such as for more predictable spoken responses or responses provided in a dialogic setting, are also discussed. The book begins with an in-depth overview of the nascent field of automated speech scoring (its history, applications, and challenges), followed by a discussion of psychometric considerations for automated speech scoring. The second and third parts discuss the main components of an automated speech scoring system as well as the different types of automatically generated measures extracted by the system, features related to the speaking construct of communicative competence as defined by the TOEFL iBT Speaking assessment. Finally, the last part of the book touches on more recent developments, such as providing more detailed feedback on test takers' spoken responses using speech features, and the scoring of dialogic speech. It concludes with a discussion, summary, and outlook on future developments in this area. Written with minimal technical details for the benefit of non-experts, this book is an ideal resource for graduate students in courses on Language Testing and Assessment, as well as for teachers and researchers in applied linguistics.
The advances in computing and networking have sparked an enormous interest in deploying automatic speech recognition on mobile devices and over communication networks. This book brings together academic researchers and industrial practitioners to address the issues in this emerging realm and presents the reader with a comprehensive introduction to the subject of speech recognition in devices and networks. It covers network, distributed and embedded speech recognition systems.
This volume contains the proceedings of NOLISP 2009, an ISCA Tutorial and Workshop on Non-Linear Speech Processing held at the University of Vic (Catalonia, Spain) during June 25-27, 2009. NOLISP 2009 was preceded by three editions of this biannual event, held in 2003 in Le Croisic (France), in 2005 in Barcelona, and in 2007 in Paris. The main idea of NOLISP workshops is to present and discuss new ideas, techniques and results related to alternative approaches in speech processing that may depart from the mainstream. In order to work at the front-end of the subject area, the following domains of interest were defined for NOLISP 2009: 1. Non-linear approximation and estimation; 2. Non-linear oscillators and predictors; 3. Higher-order statistics; 4. Independent component analysis; 5. Nearest neighbors; 6. Neural networks; 7. Decision trees; 8. Non-parametric models; 9. Dynamics for non-linear systems; 10. Fractal methods; 11. Chaos modeling; 12. Non-linear differential equations. The initiative to organize NOLISP 2009 at the University of Vic (UVic) came from the UVic Research Group on Signal Processing and was supported by the Hardware-Software Research Group. We would like to acknowledge the financial support obtained from the Ministry of Science and Innovation of Spain (MICINN), the University of Vic, ISCA, and EURASIP. All contributions to this volume are original. They were subject to a double-blind refereeing procedure before their acceptance for the workshop and were revised after being presented at NOLISP 2009.
Voice user interfaces (VUIs) are becoming all the rage today. But how do you build one that people can actually converse with? Whether you're designing a mobile app, a toy, or a device such as a home assistant, this practical book guides you through basic VUI design principles, helps you choose the right speech recognition engine, and shows you how to measure your VUI's performance and improve upon it. Author Cathy Pearl also takes product managers, UX designers, and VUI designers into advanced design topics that will help make your VUI not just functional, but great. Readers will: * Understand key VUI design concepts, including command-and-control and conversational systems; * Decide whether to use an avatar or other visual representation with their VUI; * Explore speech recognition technology and its impact on design; * Take a VUI above and beyond the basic exchange of information; * Learn practical ways to test a VUI application with users; * Monitor an app and learn how to quickly improve performance; * Get real-world examples of VUIs for home assistants, smartwatches, and car systems.
Design and implement voice user interfaces. This guide to VUI helps you make decisions as you deal with the challenges of moving from a GUI world to mixed-modal interactions combining GUI and VUI. The way we interact with devices is changing rapidly, and this book gives you a close view across major companies via real-world applications and case studies. Voice User Interface Design explains the principles of VUI design. The book covers the design phase, with clear explanations and demonstrations of each design principle through examples of multi-modal interactions (GUI plus VUI) and how they differ from pure VUI. The book also differentiates the principles of VUI related to chat-based bot interaction models. By the end of the book you will have a vision of the future, imagining new user-oriented scenarios and new avenues which until now were untouched. What You'll Learn: * Implement and adhere to each design principle; * Understand how VUI differs from other interaction models; * Work in the current VUI landscape. Who This Book Is For: Interaction designers, entrepreneurs, tech enthusiasts, thought leaders, and AI enthusiasts interested in the future of user experience/interaction, designing high-quality VUI, and product decision making.
Voice Applications for Alexa and Google Assistant guides readers through the exciting world of designing, building, and implementing voice-based applications for Amazon Alexa or Google Assistant. To create their own voice "skills" (the voice-app term for actions the device can perform), users need to learn some new device toolkits, the basics of voice UI design, and some emerging best practices for building and deploying on these diverse platforms; this book teaches readers how to build such skills from scratch. Key Features: * Designing a voice interaction model; * Fulfilling skills via a serverless platform such as AWS Lambda; * Connecting a skill to a database. Audience: Written for JavaScript developers interested in building voice-enabled applications. No prior experience required! Author Bio: Dustin A. Coates is a web developer and web development instructor. He has taught hundreds of students online and offline at General Assembly. Dustin also developed popular courses for OneMonth.com and the European non-profit Konexio, which teaches refugees how to code.
This volume constitutes selected papers presented at the Third International Conference on Artificial Intelligence and Speech Technology, AIST 2021, held in Delhi, India, in November 2021. The 36 full papers and 18 short papers presented were thoroughly reviewed and selected from the 178 submissions. They provide a discussion on application of Artificial Intelligence tools in speech analysis, representation and models, spoken language recognition and understanding, affective speech recognition, interpretation and synthesis, speech interface design and human factors engineering, speech emotion recognition technologies, audio-visual speech processing and several others.
This book constitutes the refereed proceedings of the 4th International Conference on Statistical Language and Speech Processing, SLSP 2016, held in Pilsen, Czech Republic, in October 2016. The 11 full papers presented together with two invited talks were carefully reviewed and selected from 38 submissions. The papers cover topics such as anaphora and coreference resolution; authorship identification, plagiarism and spam filtering; computer-aided translation; corpora and language resources; data mining and semantic web; information extraction; information retrieval; knowledge representation and ontologies; lexicons and dictionaries; machine translation; multimodal technologies; natural language understanding; neural representation of speech and language; opinion mining and sentiment analysis; parsing; part-of-speech tagging; question answering systems; semantic role labeling; speaker identification and verification; speech and language generation; speech recognition; speech synthesis; speech transcription; speech correction; spoken dialogue systems; term extraction; text categorization; text summarization; and user modeling.
This book constitutes the refereed proceedings of the 17th International Conference on Speech and Computer, SPECOM 2015, held in Athens, Greece, in September 2015. The 59 revised full papers presented together with 2 invited talks were carefully reviewed and selected from 104 initial submissions. The papers cover a wide range of topics in the area of computer speech processing such as recognition, synthesis, and understanding and related domains including signal processing, language and text processing, multi-modal speech processing or human-computer interaction.
This book constitutes the refereed proceedings of the 18th International Conference on Text, Speech and Dialogue, TSD 2015, held in Pilsen, Czech Republic, in September 2015. The 67 papers presented together with 3 invited papers were carefully reviewed and selected from 138 submissions. They focus on topics such as corpora and language resources; speech recognition; tagging, classification and parsing of text and speech; speech and spoken language generation; semantic processing of text and speech; integrating applications of text and speech processing; automatic dialogue systems; as well as multimodal techniques and modelling.
This book constitutes the refereed proceedings of the 15th International Conference on Speech and Computer, SPECOM 2013, held in Pilsen, Czech Republic. The 48 revised full papers presented were carefully reviewed and selected from 90 initial submissions. The papers are organized in topical sections on speech recognition and understanding, spoken language processing, spoken dialogue systems, speaker identification and diarization, speech forensics and security, language identification, text-to-speech systems, speech perception and speech disorders, multimodal analysis and synthesis, understanding of speech and text, and audio-visual speech processing.
Design and build innovative, custom, data-driven Alexa skills for home or business. Working through several projects, this book teaches you how to build Alexa skills and integrate them with online APIs. If you have basic Python skills, this book will show you how to build data-driven Alexa skills. You will learn to use data to give your Alexa skills dynamic intelligence, in-depth knowledge, and the ability to remember. Data-Driven Alexa Skills takes a step-by-step approach to skill development. You will begin by configuring simple skills in the Alexa Skill Builder Console. Then you will develop advanced custom skills that use several Alexa Skills Kit features to integrate with Lambda functions, Amazon Web Services (AWS), and Internet data feeds. These advanced skills enable you to link user accounts, query and store data using a NoSQL database, and access real estate listings and stock prices via web APIs. What You Will Learn: * Set up and configure your development environment properly the first time; * Build Alexa skills quickly and efficiently using Agile tools and techniques; * Create a variety of data-driven Alexa skills for home and business; * Access data from web applications and Internet data sources via their APIs; * Test with unit-testing frameworks throughout the development life cycle; * Manage and query your data using the DynamoDB NoSQL database engine. Who This Book Is For: Developers who wish to go beyond Hello World and build complex, data-driven applications on Amazon's Alexa platform; developers who want to learn how to use Lambda functions, the Alexa Skills Kit, the Alexa Presentation Language, and Alexa Conversations; and developers interested in integrating with public APIs such as real estate listings and stock market prices. Readers will need basic Python skills.
Cross-Word Modeling for Arabic Speech Recognition utilizes phonological rules to model the cross-word problem, the merging of adjacent words caused by continuous speech, in order to enhance the performance of continuous speech recognition systems. The author aims to provide an understanding of the cross-word problem and how it can be avoided, specifically focusing on Arabic phonology using an HMM-based classifier.