
Showing 1 - 2 of 2 matches in All Departments

Computational Auditory Scene Analysis - Proceedings of the IJCAI-95 Workshop (Hardcover)
David F. Rosenthal, Hiroshi G. Okuno
R5,087. Ships in 10 - 15 working days.

The interest of AI in problems related to understanding sounds has a rich history dating back to the ARPA Speech Understanding Project in the 1970s. While a great deal has been learned from this and subsequent speech understanding research, the goal of building systems that can understand general acoustic signals--continuous speech and/or non-speech sounds--from unconstrained environments is still unrealized. Instead, there are now systems that understand "clean" speech well in relatively noiseless laboratory environments, but that break down in more realistic, noisier environments. As seen in the "cocktail-party effect," humans and other mammals have the ability to selectively attend to sound from a particular source, even when it is mixed with other sounds. Computers also need to be able to decide which parts of a mixed acoustic signal are relevant to a particular purpose--which part should be interpreted as speech, and which should be interpreted as a door closing, an air conditioner humming, or another person interrupting.
Observations such as these have led a number of researchers to conclude that research on speech understanding and on nonspeech understanding needs to be united within a more general framework. Researchers have also begun trying to understand computational auditory frameworks as parts of larger perception systems whose purpose is to give a computer integrated information about the real world. Inspiration for this work ranges from research on how different sensors can be integrated to models of how the human auditory apparatus works in concert with vision, proprioception, etc. Representing some of the most advanced work on computers understanding speech, this collection of papers covers the work being done to integrate speech and nonspeech understanding in computer systems.

Computational Auditory Scene Analysis - Proceedings of the IJCAI-95 Workshop (Paperback)
David F. Rosenthal, Hiroshi G. Okuno
R1,915. Ships in 10 - 15 working days.

(Same description as the hardcover edition above.)

You may like...
Systems Analysis And Design In A…
John Satzinger, Robert Jackson, … Hardcover R1,284 R1,198
Memory Architecture Exploration for…
Peter Grun, Nikil D. Dutt, … Hardcover R2,730
Sleeper
Mike Nicol Paperback R300 R277
The ABC-Clio World History Companion to…
Daniel W Hollis Hardcover R2,208
Kringloop
Bets Smith Paperback R270 R253
Thomas Aquinas's Summa Contra Gentiles…
Brian Davies Hardcover R3,779
Dragon Soul - 30 Years of Dragon Ball…
Derek Padula Hardcover R1,932
Jacobean Embroidery - Its Forms and…
Ada Wentworth Fitzwilliam, A. F. Morris Hands Hardcover R473
Dragon Ball Culture Volume 3 - Battle
Derek Padula Hardcover R636
The Hand-Stitched Flower Garden - Over…
Yuki Sugashima Paperback R433 R394
