The group that I was going to visit had to cancel because a few of the members had schedule conflicts with the usual meeting time. Again, this speaks to the fact that while these people are very friendly, they are at times rather disorganized, and things do get in the way of meetings. However, I received a nice email from the group letting me know ahead of time, so I didn't show up, sit there for a while, and then realize no one was coming.
So, I was talking with my roommates; I knew they played DnD but didn't know they had a group going. I decided I would actually participate in a small session that they run on Saturday nights. I am becoming more familiar with how the game works and was able to learn how to "play" fairly easily, though with much assistance. They started me with a pre-generated character and told me about its past, motivation, skills, and other important information. They also explained that this system is, yet again, different: it takes place not in a world of magic but in the year 3000, where we assume the human race has explored the galaxy and space flight is commonplace. The system is called GURPS, and thankfully there was only one book, so it was rather easy to do some quick research, learn the basics of the system, and figure out what everything on the character sheet meant.
My character was from some sort of cat-race: a humanoid-tiger character able to speak (in Common) and interact much like a human. I guess we also assumed that after a thousand years of space travel the human race is rather used to different-looking 'alien' races, and they are accepted like any other. I was a former military pilot looking to be transferred to a base to teach advanced combat maneuvers.
The system is also a bit different. In DnD 3.5 (described in my earlier articles), players use their different "ability scores" and roll a d20 plus their modifier to try to perform whatever action they want. In GURPS, your skill level is derived from your character's base stats, and you want to roll at or below that level using three six-sided dice, so essentially the best roll you can have is a 3 and the worst is an 18.
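The roll-under mechanic described above can be sketched in a few lines of Python. This is a minimal illustration of the basic rule only (I'm leaving out details like critical successes and failures, which the full rules handle differently); the function name is my own invention, not anything from a rulebook.

```python
import random

def gurps_success_roll(effective_skill, rng=random):
    """Roll 3d6; the attempt succeeds if the total is at or below the skill level.

    Unlike d20 systems, where higher rolls are better, here low rolls win:
    the possible totals run from 3 (best) to 18 (worst).
    """
    total = sum(rng.randint(1, 6) for _ in range(3))
    return total <= effective_skill, total

# Example: a pilot with an effective skill of 14 succeeds on any total of 14 or less.
succeeded, rolled = gurps_success_roll(14)
```

Note how the odds scale: a skill of 10 succeeds on exactly half the 3d6 outcomes, while a skill of 14 succeeds over 90% of the time, which is why even a modest skill feels reliable in play.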
We started the scenario and I joined in. The callous disregard the other players had for me was rather unusual: their characters would have acted disinterested toward me, so the players reacted the same way. That changed once we had to steal a spaceship and they learned that I was quite the pilot and could navigate quite easily to the nearest "jump gate" to travel between galaxies.
The DM for this had no notes whatsoever and did a rather good job of keeping the group organized. While he had no apparent system of organization, he had a way of keeping all the players involved and was good at telling meta-conversations apart from gameplay questions. He also had a very well-defined story with lots of detail, which let me understand the areas we were encountering and the intimate details of scenario pieces and their significance to the campaign. It might have helped that our group was only five people, but it was rather fun and we had a lot of good laughs. The DM was also great about keeping the story going and presenting challenges that we were not only able to figure out but that let us use our characters' skills to the fullest and really think about what we could do in certain situations. The most interesting part was that each character has 'flaws', and even in social situations the DM was really good about remembering our flaws and playing to them to make the situation more difficult for us (or in this case, more interesting and fun).
The most interesting part of this system is that there isn't *really* an ordering scheme for when players do things until combat starts; then we carry out the combat in turn-based fashion. The other really nice thing is that combat usually lasted only four to five minutes, because we were simulating guns and each "round" represented only one second (i.e., the amount of time it takes to fire a gun).
The other interesting part is that, while you would think you would be heavily involved in the campaign, each player really determines his or her own level of participation in meta-conversations and gameplay activities. If there was an obvious avenue for me to use one of my skills (whether the players knew it or not), I was able to pick and choose my level of involvement, and in some cases get away with things because the DM forgot to ask me about a flaw or some other skill. It goes to show that while these sessions look rather organized, it is a lot for the DM to remember, and sometimes things are skipped. I asked about this, and it was explained that the rules are sometimes more like "guidelines" and that, for the most part, if one or two things are missed it won't affect the overall outcome of the game. I was also able to participate in whatever meta-conversations I wanted and was quickly accepted as one of the group when I was active and laughing along with everyone. The interesting thing was that, regardless of my opinion on something, it was accepted; if it was a point of contention, the meta-argument could sometimes turn into a brief discussion, but no one ever got mad at another player for their beliefs (again speaking to how nice they are). In fact, the only time the players did get somewhat upset was when I deliberated over an action before doing it. Essentially, when anyone slows the gameplay down with a silly action or a thought that takes an excessive amount of time, the players show frustration. I have been told stories (because each player has them) of other players who did ridiculously silly things, and the group ended up having to sort of "send them away" because their personality didn't fit the group's play style. They don't mean this in a mean fashion, but sometimes one player will ruin the gameplay for the others with pointless actions.
All in all, my first experience playing was rather enjoyable. I felt like I was part of the community and learned that watching a session is not that much different from playing in one, except that when I am playing there is a sense of companionship from everyone else: I should just jump in 100%, really participate, and play my character rather than be the silent observer. Even though as an observer I was sometimes incorporated into the meta-conversation, I didn't feel that I should be participating and really enjoying my experience; as a player I did. I am not sure whether I will participate or observe next week, but more to come, and maybe another new system!
Monday, February 28, 2011
Paper Reading #12: TeslaTouch: Electrovibration for Touch Surfaces
Comment 1: http://chi-jacob.blogspot.com/2011/02/paper-reading-12-teslatouch.html
Comment 2: http://chiblog.sjmorrow.com/2011/02/paper-reading-12-madgets-actuating.html
TeslaTouch: Electrovibration for Touch Surfaces
Olivier Bau, Ivan Poupyrev, Ali Israr, Chris Harrison
User Interface Software and Technology
This article was about a new system for multi-touch surfaces that enhances the user's ability to perceive and feel when they have pressed a button on the screen. The system is called TeslaTouch, and it provides feedback in the form of electrovibration so that the user "senses" that they have touched the item in question. The system uses no mechanical parts and is not *technically* shocking the user, although they do feel the sensation of current moving. The current state of the art includes feedback systems that make the whole device vibrate when a button is pushed, and some even provide a mechanical 'click' sound when the button is pressed. Electrovibration has been used before, but the ability to transport the technology was very limited when it was first discovered in 1954. Essentially, the project plans to equip a multi-touch computer surface with their system, let users perform standard operations, and have them report what the electro-feedback feels like to them. The users are then given a short survey in which they answer questions to help the researchers refine the device. Each session takes about 20 to 35 minutes and has a guide to help anyone who is stuck or is feeling uncomfortable with the reactions of the device. The authors also talk extensively about how the device and the idea are totally safe and cannot harm the user. Again, the device isn't shocking the user; the user is just feeling the current moving across their skin. They also point out that the friction produced by the system is no more than that of the average multi-touch phone held to the user's ear. They discussed the use of a grounding mechanism, but the human body is a large enough ground in most cases that it will likely go unused for most of the device's lifetime. Users said the sensation felt like "wood or bumpy leather" and, at increased voltage or amplitude, like a "rough painted wall".
Overall the users were skeptical but happy with the device, and further prototypes are being developed.
I think this is a great project, and having positive feedback systems in multi-touch is a great idea. I would also be concerned about whether the device would be shocking me, and I am still skeptical as to whether I would see a bit of skin irritation after lengthy use. I do like that they considered and discussed at length the physical properties involved and the idea of having a backup grounding mechanism, despite the fact that the human body is the best ground they have. I have been shocked by my multi-touch phone before, and it was a rather large shock, but it was not because of a system like this; I was just very statically charged (apparently). My other concern is how much capacitance the device will have. In my home we have some devices based on low-amplitude radio or television waves, and they commonly get interference from our phones. I wonder if this addition to a phone would increase its capacitance further and make this more of a problem. Eventually we will not even use these devices (many don't), but for the time being they are still around. This would actually be a study I would volunteer for, simply because I think it is a good idea that might not get enough testing due to user skepticism. Overall I think the system is well designed and well implemented, and I am looking forward to seeing whether companies pick this up for phones and how they market the idea.
Thursday, February 24, 2011
Book Reading #5: Emotional Design
Emotional Design
Donald Norman
Chapter 1: Attractive Things Work Better
In this chapter Norman starts by talking about a few of the main concepts in emotional design, which he labels visceral, behavioral, and reflective. He talks through a few examples of how the different kinds of design work together, as well as the different cues that give a user positive and negative impressions. He makes a few references to his other two books and ends the chapter quite briefly.
In this chapter I was reminded a lot of Design of Future Things and I felt like I was re-reading one of the chapters that we read for Capstone. I really didn't get much new information out of the chapter because of this but I did think that it was necessary for anyone who has not read his other works. I don't really know what more to say because I think I have talked a lot about the ideas presented in this chapter in my first two blogs on DoFT in my capstone blog.
Chapter 2: The Multiple Faces of Emotion and Design
In this chapter Norman talks about how there are multiple levels on which we can appreciate a device. He talks about how an old device, while bringing about visceral reactions of nostalgia and fondness, can also bring about reflective feelings of how frustrating it was to use and how much newer models have improved usability. He actually talks about video games and how they have become another form of entertainment thanks to the multiple levels of design and the emotion they can bring out in people. Beyond nostalgia, they can also bring high levels of enjoyment, and people can use them as forms of learning and stress relief.
I think, aside from being a bit long-winded, this was one of the more enjoyable chapters Norman has written in any of his books. Maybe this book was written a bit later, or maybe he just uses examples that are more relevant to me: instead of harping on the idea of visceral, behavioral, and reflective, he just talks about products, his views on them, and what they represent to society. I actually enjoyed the discussion of how things should be designed with the user's enjoyment in mind, not just with a focus on making them overly convenient to use.
Chapter 3: Three Levels of Design
In this chapter Norman focuses on the three levels of design he referred to over and over in the first two chapters and discusses them in depth. Visceral design, which he describes as the look and beauty of something, is the first thing people notice. He relates this back to the idea that beautiful things work better, contrasting the image that very beautiful, extravagant, elegant things give off against very utilitarian, ordinary designs, and how we react to each. Behavioral design he describes as the idea that we judge items by how they work: even if an item is advertised, if we have no idea how it would work or have not seen others using it, it is intimidating. He uses the example of a shower and talks about how showing its use and enjoyment appeals to this level, conveying that the item is easy and fun to use. The last is reflective design, which he says is the image the item gives off when used. He claims that people have reasons for why they buy things or choose different items, and these are based on the person's internal feelings and the image they want to give off; he refers to these as reflective decisions. He gives examples of clocks, watches, and football headsets and the image they give off.
These ideas ring true, but I don't think a lot of people consider all three levels. For me, most things I buy are for their efficiency. If I don't think I will use something, even if it is shiny and pretty, I would never consider buying it. Again, this is a behavioral decision, and I really never consider the visceral appeal. However, when buying clothes I have become more reflective, wanting things that are comfortable and worrying less about style or what I am wearing. I suppose functional clothing would be nice, but I don't know what it would do other than carry stuff for me or help me if I get lost. Perhaps in a few years we will have clothes that tell us about our bodies and call the police if we injure ourselves, but until then I will stick to comfort. Norman is very intelligent and understands products and people better than I ever will, but sometimes his ideas are very hard for me to understand because they involve products I have never used and likely never will.
Tuesday, February 22, 2011
Paper Reading #11: Combining Multiple Depth Cameras and Projectors for Interactions On, Above, and Between Surfaces
Comment 1: http://bjm-csce436.blogspot.com/2011/02/paper-reading-11-combining-multiple.html
Comment 2: http://introductionblogassignment.blogspot.com/2011/02/paper-reading-11-contact-area.html
Combining Multiple Depth Cameras and Projectors for Interactions On, Above, and Between Surfaces
Andrew D. Wilson, Hrvoje Benko
User Interface Software and Technology
In this paper we are introduced to a design project by a couple of Microsoft researchers: their new system, which they call LightSpace. Essentially, this is a setup where the whole room becomes a computer; through the use of cameras and projectors, every surface in the room becomes a projection surface that can be used as a screen. The user is able to grab, move, and otherwise manipulate different items in the room, as well as transfer items from one surface to another. The description of the system is similar to how their Kinect works: tracking the person, producing a simplified visual of them, and determining how and where they are moving in three-dimensional space. The system can also detect flat surfaces and tell when the user is interacting directly with them. This enables the user to touch an item on one surface, touch a second surface, and move the item between the two. The person is also able to grab an item and hold it in their hand: to the camera, the item looks like a simple red ball and the user like a colored cross, and the user can move the item to any surface in the room. The room also lets the user operate each program's individual menu system; the menu items are displayed as spinning light displays projected onto the floor, and the user activates them by holding a hand underneath for a period of time, after which a new menu appears or an action is performed. The system can perform all of these actions not just for one user but seamlessly for multiple users in the room, so it could be used in a partner setting or by two people giving a presentation. The creators set up a demo during the conference and were able to get feedback about usability.
The users were able to learn the system relatively easily but found that the maximum number of people who could be in the room at once was about six. There was also a little bit of delay when users picked up multiple objects from the display, and it would lag slightly behind their movements. Overall it was called a riveting success, and the researchers walked away with valuable insight and free testing.
I think this is another really cool article, with some really neat concepts for HCI and computing in general. I would assume that eventually our entire homes might be large computers that adapt to us, and this is the kind of technology people would pay money for to enhance their homes and store memories such as photos and notes. I love the idea that any surface can be used as a screen and that each person can move the objects in a variety of ways. The most interesting part is that the system is not only designed with the user in mind, making the interactions very intuitive and easy to learn, but is also designed so that there is very little chance for the user to misuse it. There is something so familiar about this system: I could see people instantly knowing how it works, grabbing an item off the table, throwing it onto the wall, adjusting it, playing with it, and ultimately putting it where they want. You could also connect to the internet and display a recipe web page on your table while you cook, without having to mess with paper or a large, clumsy book. I don't know if it would take a while to get used to having projecting cameras everywhere, but I think that would be an easy trade-off for the fun and adaptability of this kind of system. I think this is a great system, and I am very curious how much it costs and when I can get one installed.
Sunday, February 20, 2011
Ethnography Results Week 3:
This week I joined a different group that was suggested to me by one of the other members of our CHI class. He was telling me that they played yet another system of DND and that they would love for me to come, so I said, why not? The session is held at a McDonalds in College Station on Saturday nights, and they do a variety of things, starting with DND and eventually culminating in board games or other humorous card games. The group has been together for about two years, and all were very familiar with each other. I arrived at about six and was the first one there, which gave me a good opportunity to check the place out and get my bearings. The restaurant had two large areas for people to sit: one in the front where customers walk in, mostly with just booths, and then a back room which had smaller tables, chairs, a few booths, and a large bar area with stools and a big open table. I knew instinctively that they were going to use this area and simply waited for people to arrive before stealing a stool.
The group started to arrive at about 6:09, and I was introduced to the person who was going to be their DM and got to talk to the people who would be the PCs about how this version was different from the others. They told me this was the basic set of rules of DND (which is called 3.5), and the system they play turns it into what they referred to as DND 3.75: all the same as the base rules, but with certain changes to make certain game mechanics less powerful. I asked them about the past group and about Hackmaster, and their take was that Hackmaster is really a spoof on DND: it is the base system but with a lot of cutesy things thrown in, making it more adversarial between the DM and the PCs and less about having a strong storyline.
The others arrived, and the general setup for each player was to come in, stand for a bit and talk to the people, survey the area, claim their spot, set up their computer (yes, they all had computers) and then go and get food. The group finally started at 7:09, and the beginning consisted mainly of a review of the past session and then figuring out what equipment each character had, as well as each character's status, mental health, and any other vital information the players needed to know about their character or the area they were in. This brought about some interesting conversation, as apparently the characters were in a zone that was normally 500 degrees, and there was some banter about whether "True Ice" (ice that is always cold, with no molecular motion) would melt in this place.
The group finally started their actual play at 7:23, and the interesting part was that not everyone was yet sitting at the table. In fact the play almost worked as if the players were all family at a barbecue. Each player was allowed to sit by themselves and eat their meal, and when they were done they eventually migrated closer to the table and joined in. Another curious observation was that during this intro time, each player's amount of involvement was directly proportional to their distance from the DM. If a person was not at the table, they talked little if at all; this might have only been because they were eating, but they did seem to be listening and thinking critically about what was happening. In fact, if it weren't for the people to the DM's right and far right, the game could have stopped, as these two players really pushed the story forward. As the story progressed it was interesting how more and more the players took on their characters' personalities, and more interesting yet was how each character's personality (while well defined) was greatly reflected by the person playing them. Each person, as they talked to me and introduced themselves, asked me about the CHI class, and then their questions in game were asked in very similar manners. It almost brought each character to life, as I could picture a fighter or a monk who looked like the individual embodying them asking questions to dragons and people in the world they were envisioning.
The DM was the center of attention for most everything, as he was not only a very strong presence in the campaign but also had a very quiet yet profound way of controlling the group. He made sure that the players understood everything that was happening, giving very extravagant details about each area and making sure that the players understood the structures, terrain, rocks, plants, temperature and other factors of each location. It really helped me to paint a picture as well of what he wanted me to see. I almost felt like a Tolkien novel was being read and had to double-check he wasn't reading this off a card. He also had an interesting way of dealing with the players' conversation. Essentially I gathered that there were three types of conversation amongst the group. The first was what the players called "meta game": questions outside the game, about the game. When a person talks, one would assume that they are speaking for their character, but this was not always the case. A lot of times a player would talk in questions or hypotheticals and see the DM's reaction, and sometimes they would just need a clarification: "so what color is this dragon again?" The second kind was what I would call banter or side conversation; these ranged from humorous comments that would spark a laugh from the person next to them or the whole group, to conversations about a random thought. The third kind was, of course, the in-game talking of the character, which was distinctly different from the player's normal kind of speech and in most cases could be distinguished from meta conversations. I also noticed that the DM, while it seemed like he was letting the conversation run rather untamed, had a very prepared structure for controlling the flow of conversation. He essentially broke the group into pairs and took them in a rotation: first talking to one pair, then another, then another, then back to the first.
Even if one of the other pairs wanted to talk out of turn, he would check with the other two groups and then go to that one. The only time he did break this rhythm was if he had something to say about the story, or if one of the NPCs (non-player characters) in game had something to say or do that specifically affected another player. This was not a formal system; at no time did the DM tell anyone they were interrupting or tell a group to wait their turn, but he definitely had a system in his head that he planned, and he stuck by it and made sure each group was covered. I am curious to know if this is simply how he is, how all DMs are, or if he is simply exceptionally good at what he does (if his descriptions are any indication, he's just that good).
The other interesting difference from the last group was that there was no formal board that the characters used, at first. Figurines were used, though, and instead of various random objects, each player had a figurine that was painted and somewhat looked like what their character (might) look like.
At first these were just placed on the table and had no real bearing on the game. Eventually, as the story developed and location became more vital to the game, green felt was pulled out and the figures were put in relative locations that again seemed very insignificant. They were used but largely ignored; in fact the players didn't quite care where their figures were, and only really moved them when the DM essentially said they should.
However, eventually this all made sense, as boards were added that did finally depict an area, and they were to scale: it was explained that each square on the board represented a 5-foot by 5-foot area.
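The grid arithmetic is simple: in DND 3.5 each square stands for 5 feet, so distances and movement speeds convert directly into squares. A toy sketch (the function names are mine):

```python
# In D&D 3.5, one board square represents a 5-foot by 5-foot area,
# so grid positions map directly to in-game distances.
FEET_PER_SQUARE = 5

def squares_to_feet(squares):
    # distance covered by moving a given number of squares
    return squares * FEET_PER_SQUARE

def squares_of_movement(speed_feet):
    # a character with a 30-foot speed can cross 6 squares per move
    return speed_feet // FEET_PER_SQUARE

print(squares_to_feet(6))       # 30
print(squares_of_movement(30))  # 6
```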
This spoke to the idea that while all these players know what is going on, there is a strategic aspect to this game that the players can "see," and in this case it seemed much like a video game that allowed the players to step away from the serious storytelling and simply prepare for combat and shoot lightning bolts. It was also curious how the board was not really set up until combat started. I don't know if this is normally done or if it just happened organically this way. It seems to me that if the DM wanted to keep his ideas private, he wouldn't pull any of this out until combat had (basically) started. In this case, these actions were almost a telegram telling the players that combat was coming, and the more I considered their actions, the more it seemed they were prepared for combat beyond what they should have been. I am curious to see another session from the beginning and see how much of this is true. I also plan on interviewing the players about these ideas and seeing if they notice the pattern or not.
I needed to leave shortly after they had started "combat," but I am going next week for the end of a session, so I will report more on combat and other player actions then. I want to see how they end a session and the things involved with it. I am also curious to try to get short interviews after the session and ask questions.
Thanks again to the group because I believe they will be reading this and I am looking forward to next week!
Friday, February 18, 2011
Paper Reading #10: Enabling Beyond-Surface Interactions for Interactive Surface with An Invisible Projection
Comment 1: http://csce436-nabors.blogspot.com/2011/02/reading-10-soylent-word-processor-with.html
Comment 2: http://chiblog.sjmorrow.com/2011/02/paper-reading-10-soylent-word-processor.html
Enabling Beyond-Surface Interactions for Interactive Surface with An Invisible Projection
Li-Wei Chan, Hsiang-Tao Wu, Hui-Shan Kao, Ju-Chun Ko, Home-Ru Lin, Mike Y. Chen, Jane Hsu, Yi-Ping Hung
23rd annual ACM symposium on User Interface Software and Technology
In this paper the authors explore the idea of a multi-touch, multi-display 3D viewing surface computer for general use. The system is intended for viewing different materials on a large surface, and it allows the user to use not only multi-touch but also other forms of viewing and display to enhance the experience. The state-of-the-art work has shown similar concepts working through the use of tools or a mouse, but no one had yet designed a system that uses touch as well as other tools to enhance the user experience. The system is built as a large tabletop with a glass pane as the display surface. The table has two different projections from the bottom, one displaying the color image and the other an infrared projection for use by tools and the multi-touch sensing. It also includes mirrors underneath to keep the image from refracting through the bottom too much, as well as additional IR cameras on the top and bottom to help facilitate the use of additional tools. The system essentially works by projecting an image and an infrared grid, running the touch or camera view through a series of filters ending with a Kalman filter, and then translating this into (essentially) a mouse click that interacts with the application. The system can sense objects that are placed on the table and can view these with a "region of interest" projection, enabling the use of various pointer devices. The system is primarily used through an i-m view that allows the user to see an area of interest in a 3D display. Essentially the device is the size of an iPad and allows the user to view the 2D surface, with the image calibrated to display in a 3D fashion. The newest idea they are also showing is the use of an i-m camera as well as an i-m lamp.
These devices are similar to what they sound like, allowing the user to have a fixed (lamp) or mobile (camera) view of the surface that displays a more detailed 3D image of the area they are viewing. They let the user get a smaller, more detailed view of a certain area and allow for exploring a smaller 3D region. When developing a prototype for this, they found that users were easily able to use the i-m view and i-m lamp for their intended purposes, but found the i-m camera rather difficult to use: due to the frequency and velocity of movement, the camera was never able to "focus" and was in some cases unable to provide detailed images. They plan to add a "jitter reduction" so that smaller movements by the user are largely ignored and the camera adjusts its focus to adapt. They also found users looking at the table in a variety of orientations and finding it more difficult to view at certain angles. Sensors that will orient views from each side will be added to aid in this. They considered the first version a success and were able to refine their technology with the resulting user studies.
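The Kalman filter step in the pipeline is essentially a smoother for noisy touch or camera coordinates. Here is a minimal 1-D sketch with made-up noise parameters; the paper's actual filter is surely more involved:

```python
# Minimal 1-D Kalman filter for smoothing noisy touch coordinates,
# assuming a constant-position model. The noise variances below are
# illustrative assumptions, not the paper's actual parameters.

def kalman_smooth(measurements, process_var=1e-3, measurement_var=0.5):
    x, p = measurements[0], 1.0  # initial state estimate and its variance
    smoothed = [x]
    for z in measurements[1:]:
        p += process_var               # predict: uncertainty grows over time
        k = p / (p + measurement_var)  # Kalman gain: how much to trust z
        x += k * (z - x)               # update estimate toward the measurement
        p *= (1 - k)                   # uncertainty shrinks after the update
        smoothed.append(x)
    return smoothed

noisy_touch_x = [10.0, 10.4, 9.7, 10.1, 10.3]
print(kalman_smooth(noisy_touch_x))
```

Each smoothed value is a weighted blend of the previous estimate and the new reading, which is why the filtered track lags slightly but jitters far less than the raw input.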
I think this is one of the most interesting articles I have read from a computer science standpoint. I am trying to visualize the amount of code that needs to go into something like this to let the user hold a screen over a 2D surface and have a 3D image of the area projected onto it. I think the use of IR technology greatly assists in this, and while the report went into very fine detail about the various devices and technology used, it was interesting to see the various modifications and updates the group implemented. I think the most intriguing ideas proposed are the lamp and camera, which allow the users to focus in on a particular area and get a close-up view with a more detailed image. For viewing maps, which seemed to be their primary use for this system, I would think this would be a large point of interest and would need to be a feature available in any similar system. I would also think they would have guessed that users might move the camera quite fast and been able to adapt to this. From reading the article, they made it sound like the users moved the device much faster than intended and that was the source of confusion, but I cannot be sure, because the idea is still listed as a source of contention in the user feedback. I don't think the group does a good enough job explaining what the users meant by having strange orientation problems; if they want to improve on this, they need to expand the idea more or give a more concise example of what the problem is. The paper is very well fleshed out, and the ideas and examples given are very well done. Despite this essentially being a professional paper on their work, it really makes me want to go and try this system and see how well it works. I wonder if the system could be expanded to 3D goggles, so that the user would not have to hold the i-m view display but rather just look with glasses and then use the lamp and camera for their intended purposes.
I also wonder if more research is going to be done on this kind of computing with the rise of 3D technologies in the market.
Thursday, February 17, 2011
Paper Reading #9: The IR Ring: Authenticating Users’ Touches on a Multi-Touch Display
Comment 1: http://gspotblogspotblogspotblogspotblogspot.blogspot.com/2011/02/paper-reading-10-tag-expression-tagging.html
Comment 2: http://chi-jacob.blogspot.com/2011/02/paper-reading-9-ir-ring-authenticating.html
The IR Ring: Authenticating Users’ Touches on a Multi-Touch Display
Volker Roth, Philipp Schmidt, Benjamin Guldenring
User Interface Software and Technology
This article discussed the ideas and implications of being able to identify multiple different users on a multi-touch display through the use of infrared rings. The idea here is that multiple users might use the same multi-touch surface, and instead of having a complicated log-in system, each user would simply wear a ring and use that as their way to "log in." This also allows for an easy way to deal with security, as users would only be able to modify files that are tied to their IR ring ID. They started by looking at the state of the art, which is the XWand, an infrared pointer that works with two cameras in the room to help determine the user's location. Essentially, with this, systems can be set up to let the user turn off the lights with the XWand. They then discuss more implications of the security their system can provide and the hardware components that would be needed to accomplish this. The infrared device itself works by sending a stream of pseudo-random bits to the screen, encoded with Manchester coding. The transmission not only conveys what the user is pointing at but also the user's ID, the location of where they are touching, and the different points of their action. This then gets sent to a switch, which determines whether the signal is an input or a form of identification. It sends this data to a decoder that recovers the sequence of bits using Manchester decoding, and then to a matcher, which matches the user's pattern to a series of known actions and determines what the user is trying to do. They then discuss the "poor man's" model of this that they have built and have been testing. While it is not as large a scale as they want, and they were not able to get the bit rate they wanted, they did consider it a rather good test.
They did notice some unintended constraints, such as that the ring must be pointed towards the table; otherwise the transmission rate would decrease greatly, or the ring would not be able to transmit at all.
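Manchester coding itself is simple enough to sketch: each data bit becomes a transition in the signal, which lets the receiver recover the clock from the IR stream. This is a generic illustration of the encoding (using one common bit convention), not the authors' exact implementation:

```python
# Generic Manchester coding sketch: each data bit is sent as a pair of
# signal levels, so every bit contains a transition the receiver can
# lock onto. Convention used here: 0 -> high-low, 1 -> low-high.

def manchester_encode(bits):
    out = []
    for b in bits:
        out.extend((0, 1) if b else (1, 0))
    return out

def manchester_decode(signal):
    bits = []
    for i in range(0, len(signal), 2):
        pair = (signal[i], signal[i + 1])
        bits.append(1 if pair == (0, 1) else 0)
    return bits

user_id = [1, 0, 1, 1]           # illustrative ID bits, not the paper's format
encoded = manchester_encode(user_id)
assert manchester_decode(encoded) == user_id
print(encoded)  # [0, 1, 1, 0, 0, 1, 0, 1]
```

The cost is that the channel carries two signal levels per data bit, which halves the effective bit rate; that trade-off is one reason the prototype's transmission rate mattered so much.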
I think this is a really neat system, but I don't understand its need right now, and the article was overly technical for what it needed to be. Nearly a third of the article was spent discussing the specs of the hardware, and it didn't really give a lot of ideas as to how it would be used, or practical examples of what this kind of technology would be for. I understand that this is likely a dissertation paper and needs to be written in a formal manner, but I had a rather hard time understanding a lot of it. The idea of having a ring that a user can wear to help with authentication and security on a multi-touch display is a really cool one. If this could be expanded, and we could simply have a user log into any computer system by having it scan their ring or a chip inside the person, it would make authentication a lot safer and much more efficient. I wonder what they plan to do to get the data transmission rate of the ring to go up. They compared it to the Wii remote and talked about how it was much faster and transmitted a lot more. I think that if this kind of system could take off, we could add really cool personalized features to multi-touch displays, and it might help to increase distribution of multi-touch devices on larger scales.
Tuesday, February 15, 2011
Ethnography Results Week 2
This week we went back to the same DnD group as before; however, the group was playing a different system than the week before. We got there early enough to talk to some of the regulars about how they perceived last week's game and the differences between this week's game and last week's.
The differences between the two are quite vast, one system being described as a continual back and forth between the leader (DM, or Dungeon Master) and the PCs (Player Characters). That system is designed so that the players are supposed to struggle, and there is a lot of interaction between the people: discussions, arguments, and plenty of time for joking; in general the game is not taken seriously. They also discussed how it was designed for larger groups and worked well with groups of 9 or more.
This week's game was going to be different: a more relaxed system with fewer overall interactions and more room for storytelling and "role playing" (an aspect of DnD where the players embody their character and talk the way they perceive their character would). The system is also less adversarial between the DM and the PCs; a lot of times the DM will do things specifically so that a player will not lose their character, or, if they want to try a new character, arrange the story so it kills the old one off.
The unfortunate part is that we sat and discussed these things for nearly an hour and fifteen minutes, and then the leader of the group decided that there were not enough participants and that, since he was not entirely prepared, he would call the game for the day and go to Denny's. This just goes to show the lack of organization in these groups: they are a very friendly community, very invested in their friends' lives, but for the most part things are not planned and simply happen or not. It also speaks to the culture that when they are not able to do their primary task, they simply do another group activity and everyone is invited. We were even invited to join in on breakfast!
The meeting did not go as planned, but we were able to get some good interviews and see how these people are outside the game. We will be coming back to this group later in the study, but next week we are going to a different group that plays yet another slightly different system.
Monday, February 14, 2011
Paper Reading #8: Thanatosensitively Designed Technologies for Bereavement Support
Comment 1: http://csce436-nabors.blogspot.com/2011/02/reading-8-thanatosensitively-designed.html
Comment 2: http://angel-at-chi.blogspot.com/2011/02/paper-reading-8.html
Thanatosensitively Designed Technologies for Bereavement Support
Michael Massimi
Conference on Human Factors in Computing Systems
In this paper the author asks whether there is a possible role for technology in the bereavement process of people who have lost loved ones. He discusses how technologies are not designed with the eventual death of their users in mind, and how simple design ideas can assist users in the bereavement process. His work focuses on what he calls "thanatosensitivity" in the design of technology and on the creation of "online memorials" to help users cope with the passing of others. He wants to help users preserve memories with psychological satisfaction, and has done research on digital artifacts and technological heirlooms. He says he can also look at cultural practices, incorporate these into his design, and offer variations of the program based on age, religion, origin, and other factors. His dissertation breaks down into three parts. First, online surveys of how the bereaved use technology, to get a better idea of what a support community would be looking for in thanatosensitive technology. Second, using this information to better articulate his challenges; this will involve interviews and case studies of participants to help him clearly measure and understand what users would be looking for that would meet their needs and uphold the honor and dignity of the individuals. Third, examining new technologies for the home that help in this process; for this he expects to create a novel in-home computer screen that will act as a "shrine" to the deceased. The system will have several kinds of functionality, such as a shared display that allows family members to discuss and support each other. It will also be able to take pictures of various items that represent the person and incorporate them into the display, helping users feel they are honoring the person's memory and preserving items that remind them of the deceased.
He will then use a known bereavement firm and other surveys to analyze the effects and usefulness of the system. He will have users test the system in their homes for 8-12 weeks and then use these two methodologies to see whether users were satisfied with the experience and what could be done to improve it.
This article, while interesting, and while it is very understandable why someone would be interested in a study like this, leaves me questioning its overall usefulness to the user. I think it is fair to assume users would want some way to honor their deceased friends and relatives, but I do not think a computer-generated shrine would necessarily help any more than an entire funeral would. I believe a system like this would be better suited not as a bereavement aid but sold as a digital picture frame that can be hung in the household to honor the memory of the fallen. It wouldn't be seen so much as a bereavement tool as a tribute to the person, one that could change with the season and preserve items, photos, videos, and other tributes to the deceased, serving as a constant reminder of the good aspects of the person's life. Trying to bring technology into this process is a novel idea, but I have a hard time believing people would turn to it as a suitable replacement for a standard family gathering or a support group where they can talk with others about how they feel. I also considered connecting this to a support website: the user would purchase one of these, display it on the web, and have forums and other support to talk with users who are going through the same things. They could form relationships, and their friends online could post items to their shrine to encourage them and honor their loved ones' memories as well. The system seems like a very good idea, but with this topic it needs to be considered very carefully, and the researcher needs to make sure that any form of technology is not forced on the user but suggested as a way to help remember the person.
Wednesday, February 9, 2011
Paper Reading #7: Real-time Interaction with Supervised Learning
Comment 1: http://chi-jacob.blogspot.com/2011/02/paper-reading-7-real-time-interaction.html
Comment 2: http://csce436-nabors.blogspot.com/2011/02/reading-7-real-time-interaction-with.html
Real-time Interaction with Supervised Learning
Rebecca Fiebrink
2010 Doctoral Consortium
In this paper the author talks about a new program she is working on that will assist in the machine learning process. As technology gets better, more and more programs will have stronger AI components, and this will require more sophisticated and easier-to-learn machine learning techniques. It also means that machine learning will not all have to be programmed by a computer scientist; any average user will be able to supply the data, have it processed, and have it ready for the program to use. Much of this work was inspired by the Weka system, which allowed users of all backgrounds to follow some very simple machine learning premises, enter a lot of information, and translate it directly into a usable form without programming. The author claims these kinds of systems will have a lot of practical implications, from sound identification to other forms of social media and learning. She has begun development of a system that allows the user not only to input data but also to record real-time data into simple training modules, and then on a step-by-step basis pinpoint specifics the system should look for, as well as change how the program sees something by adding more models or more training examples of different kinds. She was then able to take these systems to musical performers and directors and see how this kind of machine learning tool helped their performances. She collected data from them as well as program improvements that make the software smoother to use and easier to understand and change. One of the big things she wanted to focus on was making the algorithm able to "change its opinion" as the user enters more examples of an item.
In many cases, once the user has set up a learning mechanism they decide to move it in another direction, and fluid workflows that allow the algorithm to be "reprogrammed" make for a much more user-friendly experience. The author closes by talking about how more sophisticated machine learning software can make experts more comfortable with using these kinds of adaptive tools toward broader ends.
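The "change its opinion" behavior described above can be illustrated with a tiny Python sketch (my own toy, not Fiebrink's actual system): a nearest-neighbor classifier that retrains instantly, so each new example the user adds can immediately override an earlier prediction. The gesture names and feature values are invented:

```python
class IncrementalClassifier:
    """A 1-nearest-neighbor model that updates as examples arrive."""

    def __init__(self):
        self.examples = []                 # list of (feature_vector, label)

    def add_example(self, features, label):
        self.examples.append((features, label))

    def predict(self, features):
        # return the label of the closest stored example (squared distance)
        def dist(ex):
            return sum((a - b) ** 2 for a, b in zip(ex[0], features))
        return min(self.examples, key=dist)[1]

model = IncrementalClassifier()
model.add_example([0.1, 0.2], "quiet gesture")
print(model.predict([0.9, 0.8]))   # "quiet gesture" -- only example so far
model.add_example([0.9, 0.9], "loud gesture")
print(model.predict([0.9, 0.8]))   # now "loud gesture": the model changed its opinion
```

Because prediction always consults the current example set, there is no separate "retrain" step; adding an example is the retraining, which is what makes the interaction feel fluid.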
I think these kinds of systems are very cool, and this is a good field for more research. It is true that machine learning is a very hard task for computer programmers, but as we learn more about AI and have more people in the field who understand the basics, we will start to see new trends develop, and there will eventually be a call for more advanced machine learning systems to assist with tasks that humans might otherwise find very difficult. I have always envisioned systems like this making their way into crime labs, analyzing tape and sound to see what is happening in a particular crime or to point out important details that humans might otherwise miss. If this were the case, we might have a whole new way to establish justice, and putting the right people behind bars might be less about lawyers and more about who is actually in the wrong. It is an interesting idea to have algorithms that seem "willing" to change their mind and go in another direction given more test cases, but it makes sense that a user would want that, since the kinds of data being presented might be changing. It is possible that if we built up big enough banks of all kinds of input (audio, visual, touch, smell) we would be able to have a computer identify nearly anything, and there would be no need to try to figure out "what is that blue-ish fruit in the produce section". If Google can take a snapshot of the internet and the contents of the world every few weeks to months, I would think we could build up a supercomputer with information about everything and put it on the internet.
Sunday, February 6, 2011
Paper Reading #6: Studying and Tackling Temporal Challenges in Mobile HCI
Comment 1: http://angel-at-chi.blogspot.com/2011/02/paper-reading-6-studying-and-tackling.html
Comment 2: http://csce436-nabors.blogspot.com/2011/02/reading-6-studying-and-tackling.html
Studying and Tackling Temporal Challenges in Mobile HCI
Joel E. Fischer
2010 HCI Doctoral Consortium
In this article the author studies the psychology behind mobile interruptions and how they play a part in everyday life. The author states that as mobile networks become larger and mobile devices cover a wider variety of functions, people are becoming more aware of the interruptions they create, which also leads to being more bothered by them. He points out that, based on studies, much of the annoyance came in two forms: bad timing, or message content. If the person was expecting a reply and waiting for it, they were a lot less annoyed than if they were in the middle of a task requiring focus. They were also less bothered by good news or unexpected surprises ("It's a girl!!"). The studies also showed there is a good solution for finding times to interrupt people in concentration: there is a period of brain rest between tasks when people are much more receptive to incoming information. However, tracking something like this would require attaching electrodes to the subject's head. They decided to come up with new kinds of tests and a new supplementary message-sending system that would analyze two things and provide new utility. First, a test used to collect quantitative and qualitative data, where each interruption is documented and the user reports what they were doing and how much the interruption bothered them. Second, a new kind of text messaging screen that lets the sender select from five levels of message severity: from emergencies that deliver right away with an additional notification, down to regular messages that only deliver themselves after the user gets off a call or the phone has not been used in a certain amount of time. The author calls for a lot more testing and more psychological data before trying a more complex system.
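The five-level delivery scheme described above could be sketched roughly like this (a hypothetical Python toy, not the paper's system; the level names and the idle threshold are my own assumptions):

```python
import time

# severity levels, most urgent first (names and values are invented)
EMERGENCY, URGENT, NORMAL, LOW, WHENEVER = 1, 2, 3, 4, 5

class MessageQueue:
    def __init__(self, idle_threshold=300):
        self.idle_threshold = idle_threshold   # seconds the phone must sit idle
        self.pending = []                      # low-severity messages on hold

    def receive(self, text, severity, last_used, now=None):
        """Deliver immediately or hold, based on severity and idle time."""
        now = time.time() if now is None else now
        idle = now - last_used
        if severity <= URGENT or idle >= self.idle_threshold:
            return "deliver: " + text          # notify the user right away
        self.pending.append((severity, text))  # defer until a quieter moment
        return "held"

q = MessageQueue()
print(q.receive("It's a girl!!", EMERGENCY, last_used=0.0, now=0.0))   # delivered at once
print(q.receive("lunch?", WHENEVER, last_used=0.0, now=0.0))           # held: phone in use
print(q.receive("lunch?", WHENEVER, last_used=0.0, now=600.0))         # delivered: phone idle
```

The interesting design question, as the paper suggests, is where the thresholds come from; 300 seconds here is an arbitrary stand-in for whatever the psychological data would recommend.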
I think this article is interesting not only for its content and psychological implications but also because I can understand where the study comes from: I am being texted and emailed as I write this, and some messages are very distracting and some are not. More than anything, it is who is texting me rather than the actual content of the message. Obviously there are certain kinds of messages I always want to receive and some I never want to receive, but there are definitely emotions attached to who is sending a message and when. I also agree that for something like this to work, I would not want to have to attach electrodes to my head to determine when I am receptive to a text message. I do, however, think that if there was a good way to tell when I would be receptive, or if my phone had a pattern-matching system where I could tell it what kinds of messages I am willing to receive at a given time, it would have some good implications. My only other concern is that for the phone to collect this kind of qualitative data, it might require some kind of third-party device attached to the phone. If that were the case we would need to consider power management, because one of the biggest issues with mobile devices is their limited battery life. I think this is a great study and one of my favorites so far; I am interested to see results and wonder if I will be able to get some kind of device that does this in the near future.
Thursday, February 3, 2011
Special Assignment: Celine Latulipe
Reference Information:
ToneZone: Image Exploration with Spatial Memory Cues by Celine Latulipe, Michael Youngblood, Ian Bell, Carissa Orlando; ACM Creativity and Cognition Conference '09; October 27-30, 2009; Berkeley, CA, U.S.A.
The paper I read was about the ToneZone program, which takes advantage of a multi-touch interface to let users more accurately select and modify the tone of a picture. Adjusting the horizontal and vertical area of the picture lets the user determine the tone range of the picture in question, and the system makes doing so much more fluid and easy. It also keeps a log of all the changes the user makes, making the system much easier to manipulate and to change back later if they decide to go in a different direction.
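The change log described above amounts to a history of tone-range states the user can step back through. A minimal Python sketch (my own illustration; the field names and ranges are assumptions, not ToneZone's actual data model):

```python
class ToneHistory:
    """Record every tone-range adjustment so the user can step back."""

    def __init__(self, initial_range=(0.0, 1.0)):
        self.states = [initial_range]      # one (low, high) tone range per step

    def adjust(self, low, high):
        self.states.append((low, high))    # log the new range

    def undo(self):
        if len(self.states) > 1:           # never pop the initial state
            self.states.pop()
        return self.states[-1]             # the range now in effect

h = ToneHistory()
h.adjust(0.2, 0.8)
h.adjust(0.3, 0.6)
print(h.undo())    # (0.2, 0.8): back one step in the log
```

Keeping whole states rather than deltas is the simplest choice here; it makes "change back later" a single pop rather than an inverse computation.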
I am not that into photo editing, so this kind of system is rather foreign to me. I think it is a really good system; it looks like it uses some of the latest technology and makes things quite easy for the user to change and manipulate. I don't see myself ever getting to use a system like this, but if I were into photo editing and tone manipulation, I think this would be very easy to learn and understand.
Wednesday, February 2, 2011
Paper Reading #5: Creating Salient Summaries of Home Activity Lifelog Data
Comment 1: http://chiblog.sjmorrow.com/2011/02/paper-reading-5-creating-salient.html
Comment 2: http://csce436-nabors.blogspot.com/2011/01/reading-5-creating-salient-summaries-of.html
Creating Salient Summaries of Home Activity Lifelog Data
Matthew L. Lee
CHI 2010: Doctoral Consortium
There is a lot of research on declining cognitive function in elderly people; however, not much is done to help them quantify this data and let the person know what the data means or what purpose it serves. Lee talks about how elderly people lose cognitive function slowly at first, and then the decline progresses until they forget entire tasks or how to do multi-step actions. He wants to run tests such as Instrumental Activities of Daily Living: activities that test patients' abilities to do simple tasks such as taking medicine, making breakfast, and doing housework. These give indicators of how deeply set the decline is and allow for intervention. Lee proposes a more hands-on system that will watch the person in their home and collect what he calls "lifelog data". He wants to address questions such as how well people perform everyday activities, what kinds of information these people need, and what kinds of sensing systems are needed to collect further data. He has also surveyed people who have used similar equipment, asked what kinds of information would be important to them, and tried to set up the system to collect that kind of data. This data was then used to evaluate what these people need to change in order to live more independently. Patients claimed that the information collected was a lot more useful than what could be collected in a single visit with a doctor. He then talks about how he plans to refine the searches by collecting a lot of data and presenting visualizations and other models. He says that this could be a breakthrough for diagnosing and curing cognitive decline in the elderly.
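One way lifelog data could flag trouble with a multi-step activity like the IADL tasks above is simply to check which expected steps never appeared in the log. A hypothetical Python sketch (the step names and the task model are invented for illustration, not taken from Lee's system):

```python
# expected steps for one multi-step IADL task (invented example)
REQUIRED_STEPS = ["open cabinet", "take pill", "drink water", "close cabinet"]

def check_task(observed_events):
    """Return the expected steps that were never observed in the lifelog."""
    return [step for step in REQUIRED_STEPS if step not in observed_events]

log = ["open cabinet", "take pill", "close cabinet"]
print(check_task(log))   # ['drink water'] -- a step the sensors never saw
```

A real system would need far richer sensing and tolerance for reordered or ambiguous events, but even this crude missing-step report is the kind of summary a doctor could not get from a single office visit.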
I think this article is interesting in that it gathers a new kind of data that hasn't been used much in the medical field. Instead of just giving standardized tests, Lee is going to collect data about how elderly people perform different actions and present it to doctors as a supplement. The idea is inherently interesting: cameras recording someone doing different tasks, which are then analyzed and turned into a discussion about how far progressed one's memory loss is. It also happens that we just read about how tricky memory can be in The Design of Everyday Things. We learned that there are different kinds of memory and that we are not exactly sure how memory works. In this light, the information being studied here is knowledge in the world that the elderly should be gathering from their environment and that, for some reason, is not being kept (remembered). I am very curious to see whether this kind of experiment will be detailed enough to explain why this phenomenon happens, or whether the information will be simpler and only able to diagnose the problem and help find a solution. If there is a lot of detail in something like this, it really could redefine the way we look at cognitive decline and lead to new medical breakthroughs.