Human-Computer Interaction: Gesture Control System Uses Sound Alone
When Alipay's face-to-face payment feature started using ultrasound for short-range, low-bandwidth communication, I marveled at what a great and admirable invention it was, because it requires no extra hardware at all!
Later, WeChat applied ultrasonic communication in its "Friend Radar" feature; Tencent has always been good at learning and discovering, which is also admirable.
And today, Microsoft has even applied ultrasound to gesture recognition! The link below describes it in detail:
http://article.yeeyan.org/compare/286069
Using Sound Waves for Gesture Recognition
From the Doppler effect in high-school physics, we know that when a wave source is moving, the frequency an observer perceives changes; the siren of a passing ambulance is a familiar example. But you probably never thought of using the Doppler effect to control a computer.
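How large is the shift in practice? The textbook Doppler formula gives a direct answer. The sketch below (plain Python, assuming a speed of sound of about 343 m/s at room temperature) works through the case of sound reflected off a moving hand, which SoundWave relies on:

```python
# Doppler shift for sound reflected off a moving object.
# The object acts first as a moving observer and then as a moving
# source re-emitting the wave, so the shift is applied twice:
# f_reflected = f_emitted * (c + v) / (c - v).

def reflected_frequency(f_emitted, v_object, c=343.0):
    """Frequency heard back at the source after reflection.

    f_emitted : emitted frequency in Hz
    v_object  : object speed in m/s (positive = moving toward source)
    c         : speed of sound in air, ~343 m/s at room temperature
    """
    return f_emitted * (c + v_object) / (c - v_object)

# A hand moving toward the laptop at 1 m/s, with a 20 kHz pilot tone:
shift = reflected_frequency(20_000, 1.0) - 20_000
print(round(shift, 1))  # about 117 Hz -- small, but easy to see in a spectrum
```

A hand moving away simply flips the sign, lowering the received frequency below the emitted tone.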
Control a computer with the Doppler effect? You heard right: Microsoft Research, the research arm of the Redmond, Washington-based software giant, is doing exactly that. Gesture control is becoming increasingly common and has already appeared in some televisions. But while other motion-sensing technologies, such as Microsoft's Kinect, rely on cameras to detect movement, SoundWave detects motion using the Doppler effect, some clever software, and the built-in speakers and microphone of an ordinary laptop.
Desney Tan, a principal researcher at Microsoft Research and a member of the SoundWave team, says the technology can already sense a number of simple gestures, and as smartphones and laptops begin to include multiple speakers and microphones, it could become even more sensitive. SoundWave, a collaboration between Microsoft Research and the University of Washington, will be presented in a paper at the 2012 ACM SIGCHI Conference on Human Factors in Computing in Austin, Texas.
The idea for SoundWave emerged last summer, when Desney Tan and others were working on a project that used ultrasonic transducers (which can both emit and receive ultrasound) to create haptic effects. One researcher noticed the sound wave fluctuating in a surprising way as he moved around: the ultrasound emitted by the transducers was bouncing off his body, and his movements changed the frequency of the reflected wave, which showed up on the oscilloscope.
The researchers quickly realized this could be useful for motion sensing. Since many devices already come with speakers and microphones, they ran experiments to see whether those existing sensors could be used to detect movement. "Standard computer speakers and microphones can operate in the ultrasonic band, beyond what humans can hear," Desney Tan says, "which means all you need is a laptop or smartphone loaded with the SoundWave software."
Chris Harrison of Carnegie Mellon University, who studies sensing for user interfaces, calls SoundWave's ability to work with existing hardware and software a huge win.
"I think it has some interesting potential," he says.
The speakers on a computer running SoundWave emit a constant ultrasonic tone between 20 kHz and 22 kHz. If nothing in the immediate environment is moving, the frequency the microphone picks up should also be constant. But if something moves toward the computer, the received frequency shifts higher; if it moves away, the frequency shifts lower.
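The toward/away decision can be sketched as a spectral-peak comparison against the pilot tone. The toy example below assumes a 44.1 kHz sound card and a single clean echo; the real SoundWave system analyzes the shape of the sidebands around the pilot tone rather than just the strongest bin, so this is only an illustration of the principle:

```python
import numpy as np

FS = 44_100        # assumed sound-card sample rate
PILOT_HZ = 20_000  # inaudible pilot tone, as described in the article

def motion_direction(mic_samples, threshold_hz=50):
    """Classify one window of microphone samples as toward/away/still.

    Toy sketch: finds the strongest spectral component within 1 kHz of
    the pilot tone and checks which side of the pilot it falls on.
    """
    window = mic_samples * np.hanning(len(mic_samples))
    spectrum = np.abs(np.fft.rfft(window))
    freqs = np.fft.rfftfreq(len(mic_samples), d=1.0 / FS)
    band = (freqs > PILOT_HZ - 1_000) & (freqs < PILOT_HZ + 1_000)
    peak_hz = freqs[band][np.argmax(spectrum[band])]
    shift = peak_hz - PILOT_HZ
    if shift > threshold_hz:
        return "toward"
    if shift < -threshold_hz:
        return "away"
    return "still"

# Synthetic check: an echo shifted +120 Hz should read as approaching.
t = np.arange(4096) / FS
echo = np.sin(2 * np.pi * (PILOT_HZ + 120) * t)
print(motion_direction(echo))  # toward
```

A real implementation would also gate on echo energy (so that silence is not misread as motion) and average over several windows to suppress noise.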
The relevant mathematical and physical models already exist, Tan says, so the frequencies can be analyzed to determine how big the moving object is, how fast it is moving, and which direction it is going. From those quantities, SoundWave can infer gestures.
SoundWave's accuracy hovers around 90 percent, Tan says, and there is no noticeable delay between a user making a gesture and the computer responding. SoundWave also keeps working while the speakers are being used for other things.
So far, the SoundWave team has come up with a set of motions its software can understand, including swiping a hand up or down, moving it toward or away from the computer, flexing your limbs, and moving your whole body closer to or farther from the machine. With these gestures, the researchers can scroll through pages and perform simple web navigation; sensing when a user approaches or walks away could be used to automatically wake the computer or put it to sleep, Tan says.
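Once gestures are recognized, mapping them to desktop actions is a simple dispatch problem. The gesture and action names below are illustrative placeholders, not part of any real SoundWave API:

```python
# Hypothetical mapping from recognized gestures to desktop actions.
# All names here are made up for illustration.

def handle(gesture):
    """Dispatch a recognized gesture; returns a label for the action taken."""
    actions = {
        "swipe_up":   lambda: "scroll up",
        "swipe_down": lambda: "scroll down",
        "approach":   lambda: "wake display",   # user walks toward the machine
        "depart":     lambda: "sleep display",  # user walks away
    }
    action = actions.get(gesture)
    return action() if action else "ignored"

print(handle("approach"))  # wake display
```

Unrecognized motions fall through to "ignored", which matters in practice: an always-on sensor sees far more incidental movement than deliberate gestures.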
Harrison thinks keeping the number of gestures small is a good idea, since users would otherwise struggle to remember them all. The SoundWave team has also built a gesture set for playing Tetris, which, besides being fun, is a good test of the system's accuracy and speed.
Tan envisions SoundWave working alongside other gesture-recognition technologies. Unlike vision-based approaches, he says, SoundWave is unaffected by lighting conditions, but it is not as good at sensing fine gestures such as a finger pinch. "Ideally there are lots of sensors around the world, and users neither know nor care what the sensors are; they're just interacting with their tasks," Tan says.