Showing 1 - 2 of 2 matches in All Departments

Computational Auditory Scene Analysis - Proceedings of the IJCAI-95 Workshop (Hardcover)
David F. Rosenthal, Hiroshi G. Okuno
R5,407 Discovery Miles 54 070 Ships in 12 - 19 working days

The interest of AI in problems related to understanding sounds has a rich history dating back to the ARPA Speech Understanding Project in the 1970s. While a great deal has been learned from this and subsequent speech understanding research, the goal of building systems that can understand general acoustic signals--continuous speech and/or non-speech sounds--from unconstrained environments is still unrealized. Instead, there are now systems that understand "clean" speech well in relatively noiseless laboratory environments, but that break down in more realistic, noisier environments. As seen in the "cocktail-party effect," humans and other mammals have the ability to selectively attend to sound from a particular source, even when it is mixed with other sounds. Computers also need to be able to decide which parts of a mixed acoustic signal are relevant to a particular purpose--which part should be interpreted as speech, and which should be interpreted as a door closing, an air conditioner humming, or another person interrupting.
Observations such as these have led a number of researchers to conclude that research on speech understanding and on nonspeech understanding need to be united within a more general framework. Researchers have also begun trying to understand computational auditory frameworks as parts of larger perception systems whose purpose is to give a computer integrated information about the real world. Inspiration for this work ranges from research on how different sensors can be integrated to models of how humans' auditory apparatus works in concert with vision, proprioception, etc. Representing some of the most advanced work on computers understanding speech, this collection of papers covers the work being done to integrate speech and nonspeech understanding in computer systems.

Computational Auditory Scene Analysis - Proceedings of the IJCAI-95 Workshop (Paperback)
David F. Rosenthal, Hiroshi G. Okuno
R2,032 Discovery Miles 20 320 Ships in 12 - 19 working days


You may like...
  • Total Quality Management: an Internal… by David L. Goetsch, Rigard Steenkamp (Paperback) R1,346 Discovery Miles 13 460
  • Quitting Smoking & Vaping For Dummies… by Ch Elliott (Paperback) R498 R459 Discovery Miles 4 590
  • Lamenting Racism Leader's Guide - A… by Rob Muthiah (Paperback) R381 R346 Discovery Miles 3 460
  • Dinosaurs, Diamonds And Democracy - A… by Francis Wilson (Paperback) R248 Discovery Miles 2 480
  • Emigreer Of Bly - Is Die Gras Werklik… by Stephan Joubert (Paperback) R220 R206 Discovery Miles 2 060
  • The High Treason Club - The Boeremag On… by Karin Mitchell (Paperback) R340 R279 Discovery Miles 2 790
  • The Interpretation of Cultures by Clifford Geertz (Paperback) R766 Discovery Miles 7 660
  • Greys' Ghosts - Men of the Scots Greys… by Stuart Mellor (Hardcover) R1,078 Discovery Miles 10 780
  • Evangeline a Tale of Acadie by Henry Wadsworth Longfellow (Paperback) R400 Discovery Miles 4 000
  • History of the Counties of Ayr and… by James Paterson (Paperback) R562 Discovery Miles 5 620