Thursday, April 21, 2011

Paper Reading #25: Usage Patterns and Latent Semantic Analyses for Task Goal Inference of Multimodal User Interactions

Comment 1: http://stuartjchi.blogspot.com/2011/04/paper-reading-23-aspect-level-news.html
Comment 2: http://jip-tamuchi-spring2011.blogspot.com/2011/04/paper-reading-24-outline-wizard.html

Usage Patterns and Latent Semantic Analyses for Task Goal Inference of Multimodal User Interactions
Pui-Yu Hui, Wai-Kit Lo and Helen M. Meng
Human-Computer Communications Laboratory
The Chinese University of Hong Kong
Intelligent User Interfaces

This paper discusses multimodal pen-and-voice applications and the problem of inferring a user's task goal from those interactions. The system is designed to be a bridge between unimodal and multimodal interfaces for pen and voice devices. The authors want the system to combine commands from both pen and voice and to be robust enough to infer task goals from the user's interactions. There has been heavy research into semantic prediction algorithms for each input modality individually, but no one had yet tried to combine them in a usable way.

Their test consisted of users performing 66 different tasks with multimodal interface devices to observe the outcomes and how the users responded to the feedback. The tasks involved looking up bus information such as locations, arrival times, fares, and routes. The system was for the most part able to narrow the user's input down to a single search relatively quickly, and the algorithm seemed to be efficient enough. I say "seemed" because the data became a little unclear, and the tests were conducted in Chinese, so I was unable to read the users' responses. The authors did seem quite happy with the results, and they plan to make the algorithms more robust in the future. They also want to feed data previously gathered from a user back into the system, making future searches more accurate, and to make the searches more consistent and faster. The goal is for users to be able to narrow down their list of next-step options for pen and voice more quickly, making the whole application feel smoother.
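To make the latent semantic analysis part a little more concrete, here is a minimal sketch of the general idea in Python. The vocabulary, goal names, and counts below are invented toy data, not the paper's actual data or implementation: build a term-by-goal matrix from past interactions, factor it with a truncated SVD, and match a new query against the goals in the reduced latent space.

```python
import numpy as np

# Toy term-by-goal count matrix: rows = words, columns = task goals.
# (Hypothetical vocabulary and goals for illustration only.)
terms = ["bus", "route", "fare", "arrive", "time", "cost"]
goals = ["route_finding", "arrival_time", "fare_inquiry"]
X = np.array([
    [2, 1, 1],   # bus
    [3, 0, 0],   # route
    [0, 0, 2],   # fare
    [0, 3, 0],   # arrive
    [0, 2, 0],   # time
    [0, 0, 3],   # cost
], dtype=float)

# LSA: truncated SVD projects terms and goals into a low-rank latent space.
U, s, Vt = np.linalg.svd(X, full_matrices=False)
k = 2                                             # keep top-2 latent dimensions
goal_vecs = (np.diag(s[:k]) @ Vt[:k]).T           # each goal as a k-dim vector

def infer_goal(query_words):
    """Fold a query into the latent space and return the nearest goal."""
    q = np.array([1.0 if t in query_words else 0.0 for t in terms])
    q_vec = q @ U[:, :k]                          # project the query
    sims = goal_vecs @ q_vec / (
        np.linalg.norm(goal_vecs, axis=1) * np.linalg.norm(q_vec) + 1e-9)
    return goals[int(np.argmax(sims))]

print(infer_goal({"bus", "route"}))   # → route_finding
```

In a real multimodal system the query vector would be built from both recognized speech terms and pen gestures, but the matching step against latent goal vectors would look roughly like this.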



Again, I had a really hard time understanding this article; despite the number of times I read over some parts of it, I could not follow the technical jargon or the complex mathematical steps used to gather and display the data. It also does not help that a lot of the user questionnaires and feedback were presented in Mandarin, so I could not see what the users were responding to or which features they thought worked well. The graphs were also a bit unclear and in some cases did not seem to match what the authors were referring to. Essentially, I limped through this paper and did the best I could with it. As for my actual feelings, I think combining these two input mediums is a really good idea, and any system that helps recognize task goals is always a good idea. Prediction algorithms that can be written to be more and more robust are great advancements and help people understand technology even better. This speaks directly to HCI, since people want to be able to use a new interface fluidly and without a steep learning curve associated with it. Despite the fact that I had a hard time interpreting the results, the tests they ran finding these different tasks in a bus-riding application seem like a good way to evaluate their interface. The authors seem to be onto a good piece of software, and I hope their research is allowed to continue.

4 comments:

  1. I agree, WAY too much effort to read about a system that doesn't seem particularly innovative, although useful.

    ReplyDelete
  2. I'm with you and Alyssa, journal articles like this shouldn't use so much technical jargon if they want their research to be available to general audiences.

    ReplyDelete
  3. I agree with the other comments, there should be a balance between technical jargon and broader audience terms.

    ReplyDelete
  4. I couldn't understand anything this paper was talking about so I am in the same boat as you.

    ReplyDelete