Why do we do it – ’cause we can!

Hadley (via Antony) pointed me to this nice video of work by Robert Kosara.

Emerging technologies – and multi-touch must be counted as such – offer new possibilities for interacting with graphics. Robert's implementation is certainly clean and straightforward, but it still raises the question of whether these operations are really things we need during a data analysis.

What I have always found very distracting when selecting data dynamically is the amount of coordination needed for the selection itself, which ultimately draws attention away from the highlighting the selection triggers. Often enough, this highlighting is most interesting in a different plot, and is thus hard to watch while trying to get the dynamic selection right.
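To make that linked-selection setup concrete, here is a minimal sketch of brushing in one scatterplot while the highlighting appears in a second, linked plot. This is not Robert's multi-touch implementation; it assumes plain mouse input, uses matplotlib's RectangleSelector, and runs on made-up toy data – the variable names and styling are purely illustrative.

```python
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.widgets import RectangleSelector

# Toy data: three variables, purely made up for this sketch.
rng = np.random.default_rng(1)
x, y, z = rng.normal(size=(3, 200))

fig, (ax_sel, ax_link) = plt.subplots(1, 2, figsize=(9, 4))
ax_sel.scatter(x, y, s=12)                                 # the plot you brush in
linked = ax_link.scatter(x, z, s=12, color="lightgray")    # the linked plot showing the highlighting
ax_sel.set_title("drag a rectangle here")
ax_link.set_title("highlighting appears here")

def on_select(press, release):
    # Rectangle corners in data coordinates of the selection plot.
    x0, x1 = sorted((press.xdata, release.xdata))
    y0, y1 = sorted((press.ydata, release.ydata))
    inside = (x >= x0) & (x <= x1) & (y >= y0) & (y <= y1)
    # Recolour the same cases in the linked plot.
    linked.set_color(np.where(inside, "crimson", "lightgray"))
    fig.canvas.draw_idle()

# Keep a reference to the selector so it is not garbage-collected.
selector = RectangleSelector(ax_sel, on_select, useblit=True)
plt.show()
```

Even in this mouse-driven sketch, getting the rectangle right keeps your eyes on the left panel, while the interesting recolouring happens on the right – exactly the coordination problem described above.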

I wonder how much this is the case for Robert’s prototype, but I am afraid I can’t tell until I get my hands on the software and a new MacBook Pro.

The final question for me, though, is whether it will help people get their data analysis jobs done more easily or not!

3 Comments

  1. The video is meant to support a paper, because the paper alone doesn’t begin to do this idea justice. But even the video doesn’t really demonstrate the technique sufficiently, I’m afraid. It takes a minute or so to get used to, but it’s really easy to use and much faster than any other brushing I’ve seen.

    The idea here is also quick exploration, not meticulous analysis. In a real program, you might want to have both this and the old-fashioned mouse interface available, which is slower but more precise and stable.

    For those who want to play with the program themselves (and who have a recent MacBook or MacBook Pro), I’ve published the source code and you can download an executable: http://github.com/eagereyes/ParVisMT

  2. martin says:

    Hi Robert,

    Thanks for sharing the executable. Do you know of any literature that looks into the interface issues of multi-touch? It is not really how we naturally interact with things outside the computer world – the way I multi-touch on my harpsichord is different 😉

    Martin

  3. There are quite a few papers on multi-touch, mostly at UIST and CHI. Most of them do the typical things that work well (pinch to zoom, flick photos or documents around), but there are also a few that do visualization. I find touch interaction much more natural than anything else, but it’s clearly not equally well suited for everything.

    If you want to see how natural touch interaction is, go to an Apple Store (or somewhere else that has iPads) and watch 3-year-old kids pick them up and play within a minute or so. You’d be surprised how natural they find it 😉
