I’ve covered various methods of interacting with the system earlier, but what if you want to improve the existing methods of interaction? That’s where TapSense comes into play. It’s essentially a technology that enhances how a finger interacts with a touchscreen.
TapSense, developed by Chris Harrison of Carnegie Mellon University, essentially enables touchscreens to differentiate between touch inputs, i.e. it can tell a normal human finger apart from a stylus. And that too with 99% accuracy!
Although interesting, that’s not even the exciting part. The more fun part is that TapSense can also differentiate between various parts of the finger! With 95% accuracy, it can detect which part of the finger has touched the screen. The four parts it can detect are the tip, nail, pad, and knuckle.
Apart from the touchscreen, TapSense requires a microphone to operate. Combined with the touch event, it uses the sound of the impact to classify what touched the screen. Since it works from sound, it can even distinguish between users holding pen-type inputs, based on the pressure applied. What seems like a simple, modest innovation at first actually opens up huge possibilities for how we interact with systems: apps could perform a different set of actions depending on which surface touched the screen. Now you may get the idea.
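To make that idea concrete, here’s a minimal sketch of how an app might act on such classified touches. This is purely illustrative and not TapSense’s actual API: all the names and the type-to-action mapping below are hypothetical, and we assume the acoustic classification has already happened upstream.

```python
# Hypothetical sketch: assumes a classifier (like TapSense) has already
# labelled the touch; we only show per-touch-type action dispatch.

ACTIONS = {
    "tip": "draw",         # fingertip draws
    "pad": "scroll",       # finger pad scrolls
    "nail": "erase",       # fingernail erases
    "knuckle": "menu",     # knuckle opens a context menu
    "stylus": "annotate",  # stylus annotates
}

def handle_touch(touch_type: str) -> str:
    """Map a classified touch type to an app action (illustrative only)."""
    return ACTIONS.get(touch_type, "ignore")  # unknown inputs are ignored

print(handle_touch("knuckle"))  # -> menu
print(handle_touch("nail"))    # -> erase
```

So a drawing app, for instance, could let you sketch with the fingertip and erase with the nail, all without any mode-switching buttons.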
Though still pretty much in its proof-of-concept phase, it surely has huge potential, and perhaps a new opportunity for application developers to create immersive applications.
Just watch the video showing its ability, and prepare to be impressed.[youtube=http://www.youtube.com/watch?v=-oN96cucBr4]
You can read the detailed paper here.