Tuesday, April 26, 2011

FINAL PAPER READING: Estimating User’s Engagement from Eye-gaze Behaviors in Human-Agent Conversations

Estimating User’s Engagement from Eye-gaze Behaviors in Human-Agent Conversations
Yukiko I. Nakano
Dept. of Computer and Information Science, Seikei University
Ryo Ishii
NTT Cyber Space Laboratories
Intelligent User Interfaces

In this paper the authors discuss their work on eye-gaze behaviors and the role they play in human-agent conversations. They essentially set up an entire room that allows the user to interact with a computer and perform actions using their eye movements. They also use this setup to measure how engaged in a conversation someone is, and they try different strategies to keep the user interested and their eye gaze fixed. They ran a few different kinds of experiments, starting with a Wizard of Oz experiment, to see what they could do to keep eye engagement up. They then used this data to generate probing questions designed to keep the user's engagement up and their attention focused. They found that this kind of responsive feedback did help users stay more focused, and their attention stayed more fixed the longer the conversation went. They even found that users asked more questions during the probing when they were more stimulated by the questions asked. In the future they want to work on the algorithms for selecting which questions the user is asked and make the algorithm for picking probing questions more robust. They also think other experiments will help track user eye engagement. Overall they found that their experiment was a success, and they plan to bring these kinds of methods to personal computers.
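To make the core idea concrete for myself, here is a tiny Python sketch of how I imagine the disengagement check could work. This is my own toy version, not the authors' actual model; the window size, threshold, and class names are all made up:

```python
# Minimal sketch (not the authors' actual model): flag disengagement when the
# fraction of recent gaze samples landing on the conversation target drops
# below a threshold, then trigger a probing question to re-engage the user.
from collections import deque

WINDOW = 60          # number of recent gaze samples to keep (hypothetical)
ENGAGED_RATIO = 0.5  # minimum on-target fraction to count as "engaged"

class GazeEngagementMonitor:
    def __init__(self):
        self.samples = deque(maxlen=WINDOW)  # True = gaze on target

    def add_sample(self, on_target: bool) -> None:
        self.samples.append(on_target)

    def is_engaged(self) -> bool:
        if not self.samples:
            return True  # no data yet; assume engaged
        return sum(self.samples) / len(self.samples) >= ENGAGED_RATIO

monitor = GazeEngagementMonitor()
for on_target in [True, True, False, False, False, False]:
    monitor.add_sample(on_target)
if not monitor.is_engaged():
    print("User looks disengaged -- ask a probing question")
```

The real system presumably uses a much richer gaze model, but the loop of "watch gaze, detect drop-off, respond with a question" is the part that stuck with me.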


This seems like a good experiment if they can scale it down. The image they give makes it seem like the whole room is the only way to run experiments like this, and if that's the case then they are going to have a hard time using it on a large scale. The experiment itself is a really good one: keeping people's attention by asking probing questions. I like the fact that they are trying to keep the user engaged and asking questions while tracking their eye movements to read biofeedback. The idea of the system actively working to keep the user engaged is also a really neat concept and actually sounds like something that could later translate into a video game. It would be like the system trying to see how long it can keep your attention before you need to walk away...or something like that. Anyway, despite the article I wanted to say that I had a really good time in class, and if you actually read this I deserve a 10 for this one because I haven't missed a single paper yet, but I digress. Thanks for a really good class and I hope that you all continue my capstone work and I get to see a paper on it soon.

Hooray for TAMU CHI.

Monday, April 25, 2011

Book Reading #9: Living with Complexity

Living With Complexity
Donald Norman
MIT Press, 2011

Chapter 1: Living With Complexity

This is a standard opening chapter for Donald Norman: it sets up the various things he is going to talk about in the book, in this case the various types of complexity we face each day. He talks about situations in which objects or events make us face something that is complex for us, and about the examples we see and read about every day that are complex for no reason but that we still use. He talks about sports, calculators, coffee makers, and various other devices that are quite simple on paper but, once built, are rather complex. He says it comes down to how familiar we are with something and how much technology changes the devices we are used to seeing.

This chapter was interesting in that Norman did provide A LOT of rather relevant examples. However, in some cases I don't think he explained them all the way. In the baseball example he talked about the infield fly rule and why it is complex, but his explanation of how it works and what it is used for was more complex than the rule actually is. He also didn't entirely explain why fielders would want to deliberately drop the ball and how this catches runners off base (couldn't they simply take one base and then pause?). The other examples were along a similar vein, and that is the strangest coffee maker that I have ever seen.

Chapter 2: Simplicity is in the Mind

In this chapter Norman suggests that we only live with complexity because we choose to and because it is built into features we find all throughout our lives. He suggests that things like the water cycle and light switches are only complex because of the way they are presented. If we can get a good mental model of how something works into our minds, then it is much easier to understand. He says that mental models are fundamental to being able to understand complexity, and if we can figure out how to build a better mental model of what we want a product to be, we can make it better for users. He also talks about how more buttons do not necessarily make something better or more complex, though sometimes they do; he gives a good example of how Apple found this out with their mice.

I actually liked this chapter a bit; there were some good examples and the chapter flowed quite nicely. I think, however, that he didn't really relate much of this back to computer science, and if he did I had a hard time finding it. I also like that in this book he is using a lot more pictures, so the reader can better understand things, and they give us a good relational model for what he's talking about and referring to. I think this chapter had some good information and made a lot of sense in terms of CHI and GUI creation.

Chapter 3: How Simple Things can Complicate Our Lives

In this chapter Norman talks about different things we use that actually make our lives more complicated. He talks about the idea of using signs to make things easier to understand when in reality they might simply make your life more complicated. He talks about how he has tried systems involving different colors of sticky tabs and various other things, but then after a while he starts to associate things that have nothing to do with his reminders with them. He argues for a few things that will make your life less complex, but really I don't see how many of these work.

In this chapter Norman seems to go back to the classical examples from all of his other books, trying to show the similarities between them and why they apply to CHI. The problem is that none of his examples seem to make a whole lot of sense. He talks a lot about security passwords and whether even security experts know how to make sure their passwords are secure. It did not surprise me, however, that they didn't do much better with their passwords than the average person.

Chapter 4: Social Signifiers

In this chapter we learn about signifiers, which are things that let us know how a device is supposed to be used. These are apparently in some way different from affordances; he talks about variations on objects that cannot be described using affordances. How he says they differ is that, unlike affordances, which just tell us how to do something, signifiers actually push us toward an action, constraining our choices and reducing complexity that way. His most memorable example is probably a salt and pepper shaker: there is really only one way to use them, and we are forced to use them in the way that is expected.

This chapter was a bit more interesting, but it did seem like another chapter on affordances more than anything. I don't see how these two things are so different that it requires its own chapter, though when he explains it you do get two distinct definitions. I think we could even argue that despite signifiers some things can still be quite complex. Just because we see something and have a decent mental model of how to use it doesn't mean it is any more or less complex.

Full Blog:

The book did cover some really good points about things that are complex and why we perceive them as such. We as engineers sometimes are really unable to see things the way they are intended to be, but as we understand and develop models we can see them more easily for what they are. We can also use this knowledge to reduce complexity by introducing signs and signifiers to make things more useful. There are a few pictures used, and the avid reader can see how they help the examples; there are also captions that go along with the pictures and provide other information.

I was trying to be objective about the pictures, but in Norman's books we don't get a lot of them, so they really were helpful, and I had been asking for them in my blogs on his other books. I like that he was able to give examples and then relate them to the pictures provided. In all, I thought these chapters were rather good, but I think the signifier chapter was a bit much considering he claims signifiers are so different from affordances. The book seems good, but I think it was a basic Norman book.

Thursday, April 21, 2011

Paper Reading #25: Usage Patterns and Latent Semantic Analyses for Task Goal Inference of Multimodal User Interactions

Comment 1: http://stuartjchi.blogspot.com/2011/04/paper-reading-23-aspect-level-news.html
Comment 2: http://jip-tamuchi-spring2011.blogspot.com/2011/04/paper-reading-24-outline-wizard.html

Usage Patterns and Latent Semantic Analyses for Task Goal Inference of Multimodal User Interactions
Pui-Yu Hui, Wai-Kit Lo and Helen M. Meng
Human-Computer Communications Laboratory
The Chinese University of Hong Kong
Intelligent User Interfaces

This paper talks about multimodal pen and voice applications and task goal inference between them. The system is developed to be a bridge between single-modal and multimodal interfaces for pen and touch devices. The creators want the system to be able to bridge commands from both pen and voice and be robust enough to infer task goals from user interactions. There has been heavy research on semantic prediction algorithms for each of the inputs individually, but no one has yet tried to combine them in a usable way. Their test consisted of users performing 66 different tasks using multimodal interface devices to see the outcomes and how users responded to the feedback. The system they tested involved the user looking up different bus-related tasks such as location, arrival times, cost, and route finding. They were for the most part able to narrow the user input down to one search relatively quickly, and their algorithm seemed to be efficient enough. The reason I say seemed is that the data became a little unclear, and the tests were conducted in Chinese, so I was unable to read what the users responded. It did seem that the authors were quite happy with the results, and they have plans to make the algorithms more robust in the future. They also want to feed data previously found by a user back into the system, making future searches more accurate, and they plan to make the searches more consistent and faster. They want the user to be able to narrow down their list of next-step options on pen and voice more quickly to make the whole application feel smoother.
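Since the title names latent semantic analysis, here is a rough Python sketch of how LSA-based task-goal matching could work in principle. This is my own toy reconstruction, not the authors' system; the term counts and task goals are invented:

```python
# Rough sketch of latent semantic analysis for task-goal inference: project a
# term-document matrix into a low-rank space with SVD, then match a new query
# against each task goal's vector by cosine similarity.
import numpy as np

# Rows = terms, columns = task-goal "documents" (term counts; all hypothetical)
A = np.array([
    [2, 0, 1],   # "arrival"
    [0, 3, 0],   # "fare"
    [1, 0, 2],   # "route"
], dtype=float)

U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 2                                     # keep the top-k latent dimensions
goal_vecs = (np.diag(s[:k]) @ Vt[:k]).T   # each row: a task goal in LSA space

def query_vec(counts):
    """Fold a query's raw term counts into the same latent space."""
    return np.array(counts, dtype=float) @ U[:, :k]

q = query_vec([1, 0, 1])   # a query mentioning "arrival" and "route"
sims = goal_vecs @ q / (np.linalg.norm(goal_vecs, axis=1) * np.linalg.norm(q))
print("Most likely task goal:", int(np.argmax(sims)))
```

The appeal of the low-rank projection is that queries can match a task goal even when they share no exact words with it, which seems to be exactly the kind of robustness the authors are after.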



Again I had a really hard time understanding this article; despite the number of times I read over some parts of it, I could not follow the technical jargon or the complex mathematical steps taken to gather and display the data. It also does not help that a lot of the user questionnaires and feedback were displayed in Mandarin, so I could not see what the users were responding or which features they thought did well. The graphs were also a bit unclear and in some cases did not make sense for what the authors were referring to. Essentially I limped through this paper the best I could. As to my actual feelings, I think combining these two mediums is a really good idea, and any system that helps to recognize task goals is always worthwhile. Prediction algorithms that can be written to be more and more robust are great advancements and help people understand technology even better. It also speaks directly to HCI, as people want to be able to use a new interface fluidly and without a steep learning curve associated with it. Despite the fact that I had a hard time interpreting the results, the tests they did finding these different tasks in a bus-riding application seem like a good way to test their interface. The authors seem to really be onto a good piece of software, and I hope their research is allowed to continue.

Tuesday, April 19, 2011

Paper Reading #24: Intelligent understanding of handwritten geometry theorem proving

Comment 1: http://gspotblogspotblogspotblogspotblogspot.blogspot.com/2011/04/paper-reading-24-usage-patterns-and.html
Comment 2: http://chi-jacob.blogspot.com/2011/04/paper-reading-23.html

Intelligent understanding of handwritten geometry theorem proving
Yingying Jiang, Feng Tian, Hongan Wang
Intelligence Engineering Lab,
Institute of Software,
Chinese Academy of Sciences
Xiaolong Zhang, Xugang Wang, Guozhong Dai
State Key Laboratory of Computer Science,
Institute of Software, Chinese Academy of Sciences
Intelligent User Interfaces

This paper focuses on a topic that is near to us at Texas A&M: sketch recognition. This group implements a system that provides dynamic and intelligent visual assistance in drawing and learning. The system, called PenProof, is meant to assist the user in drawing shapes on an interface and help them construct an answer to a given geometric question. It was originally designed to help users with geometric proofs and will assist in writing equations. There have been other computer-based learning systems for geometry before, but none have had all the features for identifying shapes and equations that this system has. It can also assist the user in identifying potential mistakes they have made while writing their answer. Essentially, the user draws the figure, which the system identifies as the original question, and then proceeds to write equivalencies and other facts working toward the proof. The system identifies mistakes and other faults with the proof and marks them with red text or lines. The user can then systematically work toward their answer and will eventually have a proof that is completely correct. The users who tested the system were asked about their feelings while using it, such as enjoyment, comfort, and whether the visual interaction was meaningful. The authors plan to extend their research, make a more robust algorithm for identifying mistakes, and build a more meaningful recognition system for users. They did say that their original system was a success and that they were happy with the testing and results.
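To illustrate the mistake-flagging idea, here is a toy Python sketch of one kind of check such a system could make. This is not PenProof's real algorithm; the points, tolerance, and function names are all hypothetical:

```python
# Toy sketch of the mistake-flagging idea: check a claimed equality between
# two drawn segments against the sketched coordinates, within a tolerance,
# and flag the proof step if it doesn't hold.
import math

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def check_equal_segments(a, b, c, d, tol=0.05):
    """Does |AB| = |CD| hold for the drawn points, within a relative tolerance?"""
    ab, cd = dist(a, b), dist(c, d)
    return abs(ab - cd) <= tol * max(ab, cd)

# Hypothetical recognized points from the user's sketch
A, B = (0, 0), (4, 0)
C, D = (1, 1), (1, 5.4)

if not check_equal_segments(A, B, C, D):
    print("Step 'AB = CD' conflicts with the drawn figure -- highlight in red")
```

The tolerance matters because hand-drawn figures are never exact; the system has to decide when a discrepancy is sloppy drawing versus an actual error in the proof.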



This paper was interesting enough, and I like the idea of using systems like this in schools. I think that if we could incorporate these kinds of systems into the classroom, it would make subjects that are more difficult to understand (like geometry, for some) easier to see and feel. The part I like best is that it shows you visually what is wrong with your picture or equation. I believe that math is largely about finding patterns and recognizing how to respond to a problem, so this system would help you learn how to visualize these kinds of things. I also like that the authors want to improve this algorithm and expand it to more kinds of equations. I think that if this kind of software is going to be released for public use, it needs to cover a very large range of situations so that educators can effectively teach through a sufficient number of examples. I would personally use this kind of system when studying for a test or trying to finish homework, and I can think of a ton of uses for this kind of work. I want there to be more research, and I would also like to know if I could be a tester if it is coming out soon.

Sunday, April 17, 2011

Paper Reading #23: Media Equation

Media Equation Papers:
Computers Are Social Actors
Clifford Nass, Jonathan Steuer, and Ellen R. Tauber
Department of Communication Stanford University
Can Computer Personalities be Human Personalities?
Clifford Nass, Youngme Moon, B.J. Fogg, Byron Reeves, D. Christopher Dryer
Department of Communication Stanford University

These two articles essentially deal with the same topic: can computers have human personalities and be social actors? A number of tests have been done to see whether people will respond to computers acting like people, interpreting natural language and then responding to the person. The authors are not trying to build a Turing test but simply seeing whether people will talk to computers as if they are people. They set up experiments where people interact with machines to see if they regard the machine as a person. They then ask the people to rate the machines on different aspects of personality such as intelligence, passion, knowledge, and so on. They use these ratings to determine whether people actually regard the machines as intelligent beings or simply as programmed responses to different prompts. They also tried experiments to see whether people regarded male or female voices better and whether interaction with a given gender of voice was more or less effective in convincing them it was a person. They tried to determine this by looking at what comprises a personality and what people are looking for when they refer to something's personality. Again, they analyze personality through a set of dominance factors (gives orders, talks to others, takes responsibility) and a set of submissiveness factors (easily led, lets others make decisions, diffuses responsibility). They also note that it is not enough to determine that a personality exists; there is also the question of whether that personality can get someone to do something. Both papers showed about the same results, with one paper seeming to contradict the other in only very limited spots. The overall effect was that there is some interaction we can get out of computers, and people did respond to them in some cases.



These articles were rather weird and in some cases misguided. While not directly saying the other was wrong, the two papers did talk about different aspects: one was focused more on the development of a personality while the other talked about the different aspects of interaction that are displayed. The thing I didn't understand about these papers is that they seem to have more to do with AI than with HCI. While this is a direct study of computer-human interaction, it is also really a study of natural language processing, which has been a topic in AI for many years. There have been many programs that can accept natural human language, and we have figured out ways for programs to learn and respond to different queries. The best recent example is Watson, IBM's new Jeopardy-playing computer, which can take in answers from the game and produce the question when asked for it by the host. These machines are becoming more and more complex, and though progress over the last 50 years has been slow, more people are getting involved and we are getting closer. HCI from this viewpoint, in my opinion, needs to look at how the interaction takes place and less at what aspects of personality are involved in it. We need to understand how the relationship develops and what makes a person essentially trust a computer actor. Once we do this we will better understand how people perceive computers, and we will be able to see them 'evolve' and become more user friendly. This directly involves how we create graphical user interfaces and how people perceive a screen on the computer, and a better understanding of this will help us design better programs.

Thursday, April 14, 2011

Paper Reading #22: Usability Guided Key-Target Resizing for Soft Keyboards

Comment 1: http://chi-jacob.blogspot.com/2011/04/paper-reading-22-usability-guided-key.html
Comment 2: http://chiblog.sjmorrow.com/2011/04/paper-reading-22-usability-guided-key.html

Usability Guided Key-Target Resizing for Soft Keyboards
Asela Gunawardana
Tim Paek
Microsoft Research
Redmond, WA 98052
Intelligent User Interfaces



In this paper we learn about a usability design by a group at Microsoft Research. With smartphones storming the market and being the computer most people keep closest to them at all times, many people are communicating by text and email sent from phones more often. This requires the user to type into the phone more and more, and phones are coming out with new on-screen keyboards, or "soft keyboards". While there have been many improvements to soft keyboards, none of them have considered enlarging keys as the user types to allow faster and more accurate typing. The researchers decided to implement this on Android and iPhone devices and conduct user studies of typing speed and accuracy. They implemented an algorithm called source-channel key-target resizing, which uses Bayes' law to predict what keys the user is near or likely to use and dynamically resizes them so the user can hit them more easily. It then uses anchored dynamic key targets to help the user more accurately select the location on the keyboard and enter the character of their choice. The user study showed that many people were at first rather taken aback by this, but after getting more comfortable they were actually rather satisfied with the dynamic resizing. The same set of users first ran tests on regular soft keyboards as a control, then typed the same messages on phones running the dynamic resizing algorithm after they had become more comfortable with it. The authors found that users increased their typing speed and accuracy by nearly 18%. They used a Gaussian touch distribution for resizing the keys and for touch accuracy, and in future work they plan to look at other touch models and anchor sizes. They also plan to look at what they call 'finger touch points' to see if the keyboard can adapt to the user's touch patterns.
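Here is a hedged Python sketch of the source-channel idea as I read it (my own toy version, not Microsoft's code): combine a language-model prior over keys with a Gaussian touch model via Bayes' rule, then grow each key's hit region in proportion to its posterior. The layout fragment, sigma, and prior values are all made up:

```python
# Sketch: P(key | touch) proportional to P(touch | key) * P(key | history),
# then resize each key's hit region in proportion to its posterior.
import math

def gaussian_touch_likelihood(touch, center, sigma=8.0):
    """P(touch | key): 2-D Gaussian around the key's center (pixels)."""
    dx, dy = touch[0] - center[0], touch[1] - center[1]
    return math.exp(-(dx * dx + dy * dy) / (2 * sigma * sigma))

def key_posteriors(touch, key_centers, lm_prior):
    """P(key | touch) over all keys, normalized to sum to 1."""
    scores = {k: lm_prior[k] * gaussian_touch_likelihood(touch, c)
              for k, c in key_centers.items()}
    total = sum(scores.values()) or 1.0
    return {k: s / total for k, s in scores.items()}

# Hypothetical layout fragment and language-model prior after typing "th"
centers = {"e": (60, 20), "w": (30, 20), "r": (90, 20)}
prior = {"e": 0.85, "w": 0.05, "r": 0.10}   # "the" is far more likely

post = key_posteriors((70, 25), centers, prior)
resized = {k: 20 * (0.5 + p) for k, p in post.items()}  # base 20px, grown by posterior
print(post, resized)
```

The "anchoring" the paper mentions would presumably cap how far a key's region can grow, so a confident but wrong prediction can't completely swallow its neighbors.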



I think this is a rather interesting approach to this kind of problem. We have been using dynamic resizing algorithms on our computers for a long time, to save desktop space or to help navigate buttons and panels. While I do think that, like any new typing method, this will take people a while to get used to, it has some good aspects that can make it very usable by the average person. Typing on a soft keyboard is a large source of frustration for many people, and a system like this is perfect to help them overcome it. The idea of having the keys you want to press dynamically resize themselves is a really good one; however, I don't know if users would like it when the phone tries to predict what they are going to type, gets it wrong, and makes it hard to hit the key they actually wanted. Despite this, I do think this would integrate well with Swype-style typing, where the user rolls over the key they want to press and the phone resizes the keys near their finger so they can more accurately hit the one in question. I think they did the user study well, and even in the short time the users had with the keyboard they saw a significant improvement; I would like to see if their speeds and accuracies go up even more if users were left with the system for a number of days or weeks. In all, I thought this was a good article and study, and if my phone came with this I would be interested in trying it.

Thursday, April 7, 2011

Paper Reading #20: Rush: Repeated Recommendations on Mobile Devices

Rush: Repeated Recommendations on Mobile Devices
Dominikus Baur
Sebastian Boring
Andreas Butz
Intelligent User Interfaces

Comment 1: http://chi-jacob.blogspot.com/2011/04/paper-reading-20-rush-repeated.html?showComment
Comment 2: http://jip-tamuchi-spring2011.blogspot.com/2011/04/paper-reading-20-lowering-barriers-to.html

In this paper we find another application written for mobile phones, this one allowing the user to browse different purchasing options and weigh and compare them to choose the right one. The application is known as Rush, and it allows the user to search through multiple items with a simple flick motion on the phone. Essentially you find a product on your phone and then search through similar items until you find one or more that you like, and then you can choose them. The app is designed to let the shopper select an item and then get more intuitive results for similar products, allowing the user to find out more about what they are shopping for. Essentially the user starts with one recommendation and can then swipe through other selections, three to five at a time, looking for more information or the product they want to find. Tests were done on usability and on the accuracy with which users were able to select the correct item. The tests were largely successful, but the authors feel the algorithm for selecting similar items needs to be more robust, needs to select more items, and needs to get the user to the item they are looking for faster. The participants were mostly younger people, with an average age of 27, and the authors are going to see if the application appeals to older people as well.
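Here is a loose Python sketch of the repeated-recommendation loop as I understand it. This is not the Rush implementation; the catalog, tag sets, and similarity measure are my own stand-ins:

```python
# Loose sketch: score candidates by similarity to the item the user just
# picked, show the top few for the next "flick", and repeat with whichever
# item they choose.
def jaccard(a, b):
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

catalog = {   # hypothetical items with tag sets
    "camera_a": {"camera", "compact", "zoom"},
    "camera_b": {"camera", "dslr", "zoom"},
    "phone_a":  {"phone", "camera", "touch"},
    "tripod_a": {"tripod", "camera"},
}

def recommend(current_item, k=3):
    others = [(name, jaccard(catalog[current_item], tags))
              for name, tags in catalog.items() if name != current_item]
    return [name for name, _ in sorted(others, key=lambda x: -x[1])[:k]]

# One "flick": user is on camera_a, swipes through the next batch
print(recommend("camera_a"))
```

The interesting design question is the loop itself: each choice the user makes becomes the seed for the next batch, so the recommendations drift toward whatever the user keeps flicking to.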

I think this paper was fairly interesting; it seems like a good enough application for people to use. The problem I have is that they didn't talk much about how the app works or the kinds of information it can give you, and mostly about the technical specs of the app. I thought it was interesting that they only used younger people in their initial tests, and I would have liked to see a wider range of people testing the application. I thought it was good at what it was trying to do, getting you to your item faster, but I wonder how many people really just browse for different items online anymore. It seems like a lot of people I know, if they want to go shopping in general, will typically go to the mall or some other kind of shopping establishment. That, or they do what I do: I know the item I want to get, and I go directly to it and order it. I really can't see a lot of people wanting to swipe through various items on their phone to find out more about them, or taking the time to guess and navigate to the item they want. I also sort of feel like this has already been done, as most shopping websites provide links to similar items that are a bit less interactive but have the same general idea. I think it is a good idea, but I question how many people would use something like this.

Tuesday, April 5, 2011

Paper Reading #19: Tell Me More, not just “More of the Same”

Tell Me More, not just “More of the Same”
Francisco Iacobelli, Larry Birnbaum, Kristian J. Hammond
Intelligent User Interfaces

In this article the authors are building a new internet news feed system called "Tell Me More", a system that looks at news articles and then scans for similar articles with more information. Essentially, when the user is reading a news article, the interface looks on the web for more information about the subject and then brings it in for the user. It does this through a series of internet searches that are formatted and then entered into the UI. The system finds other related articles and runs them through a series of difference metrics; it can then determine whether a candidate article (the article that will tell the user more) is different enough from the story while still being in the same vein, so as to provide more information. The candidates are run through text analytics using heuristic algorithms to determine the amount of difference between the two articles and whether the candidate should be displayed. When the system was tested, users found it very useful, but the authors felt it was not completely finished. They said they were not satisfied with the results and wanted to make the system more robust so they can search more kinds of articles and hopefully choose from a large list of similar articles on the web for each news story. They also talked about generating a better heuristic that will search articles more effectively. The system worked how they wanted it to, but they want to make sure it is more robust and has better searches in the future.
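The "different enough but still on topic" filter is the part I found most interesting, so here is a small Python sketch of one way it could work. This is my own approximation, not the authors' actual metrics; the similarity band thresholds are invented:

```python
# Sketch: keep a candidate article only if its cosine similarity to the
# source story falls inside a band -- high enough to be on topic, low
# enough to add new information rather than repeat the story.
import math
from collections import Counter

def cosine(text_a, text_b):
    va, vb = Counter(text_a.lower().split()), Counter(text_b.lower().split())
    dot = sum(va[w] * vb[w] for w in va)
    na = math.sqrt(sum(c * c for c in va.values()))
    nb = math.sqrt(sum(c * c for c in vb.values()))
    return dot / (na * nb) if na and nb else 0.0

def tells_me_more(story, candidate, low=0.2, high=0.8):
    sim = cosine(story, candidate)
    return low <= sim <= high   # on topic, but not just "more of the same"

story = "city council approves new light rail line downtown"
candidate = "funding details for the downtown light rail expansion project"
print(tells_me_more(story, candidate))
```

A candidate that scores above the high threshold is probably just a rewrite of the same story, which is exactly the "more of the same" the title warns against.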


I think this is actually one of the better systems I have gotten to read about. While it does not have a whole lot of fancy features, the ones they implemented seem very solid and simple enough that the interface does not overwhelm the user. I would actually be interested in trying it myself to see whether the articles it found were relevant or not along the vein of information I was looking for. My only problem with this interface is that in the article's picture of the UI, the related articles look like the advertisements placed on websites, and unless I specifically knew they were related articles I might just skip over them. The strange thing is that despite rather good user responses from their test, the authors still seemed to think the system did not work entirely as intended, whereas from my reading of the paper it seems like it works really well. I would actually be interested in being one of the test subjects and giving feedback. I hope there are more systems like this in the future and that we will be able to test them soon.

Book Reading #8: Things that Make Us Smart

Things that Make Us Smart
Donald Norman
Designed and Edited by Donald A. Norman

Chapter 1: Human-Centered Technology

In this chapter Norman discusses the idea that technology is essentially geared toward humans. In fact, most of the things we make are for our comfort or ease of use, but does this really make us smarter? Does it in fact just make us lazy, or leave us unaware of how things are changing? He talks about a few of these ideas and the different stories he has about how technology has made his colleagues smarter...or was it less smart? He describes two models of human cognition with regard to technology: experiential cognition and reflective cognition. Experiential cognition is how we perceive and react to the things around us, and reflective cognition is how we compare different items and think about objects.

I think this chapter was quite a bit different for Norman. While he did go back to his stories about different colleagues and such, he also talked about some examples that were a little outside the box for him. He discussed different models of cognition and the many different sides there are to technology. However, there was still that nagging tone Norman sometimes gets, where he is not only saying how he dislikes a technology but also how it is bothersome and really irks him.

Chapter 2: Experiencing the World

In this chapter Norman takes the ideas from the first chapter and shows how they can be applied. Again he follows his standard model of setting up what he is going to talk about and then going straight into examples and explaining why they apply. The idea is that the more situations a person is in, the better they understand how experiential cognition and reflective cognition play a role in our understanding of technology. Norman even offers examples that normal people would understand, such as the Sylvester and Tweety metaphor, where their relationship is seen one way at first and another way after reflection. The chapter goes further into more specific examples and more analysis of the kinds of cognition.


Again, this was a different kind of chapter for Norman. While he did follow his standard model, he talked about items that are not strictly technology, which made the chapter rather familiar. I also liked the example he gave about the pilot and flight crew who need to make a decision about how long the plane can stay in the air and what to do in the event of turbulence. The strange thing is that the way he talked about these actually reminded me of old Star Trek episodes, in which the crew has a problem and then has steps to go through to solve it, first experiential and then reflective.

Monday, April 4, 2011

Ethnography Results Week 8:

For my final week I decided to join my roommates and sit in on another session of the game.

It was a continuation of the game we had the week before and I played the same pre-generated character as before as well.

This time I really tried to use the things I had learned. Most importantly, I tried to get as involved as I could and really pay attention to what everyone was saying, as well as try to play as if I was unaware of the meta-conversations that my character would not have been a part of.

I tried role-playing with everyone (which is not my strong point), and I tried to honestly think in terms of actions and dialogue that would make sense for my character.

The session didn't last long, but I was able to note some things that I really felt were important to being in this kind of community.

1. Treat other people like people. While you are playing a game, you only get the true experience if you have fun with it. I tried to treat everyone like we were playing a game, I took them very seriously, and I considered every move they made in the context of the game. This helped to enrich the experience not only for me but for everyone.

2. Listen to the DM. For the most part this person is running the game and wants to make it fun for all, but you need to know him/her and have a relationship. This will make the game more fun for everyone.

3. Be willing to talk to others. Other players will help you with your gameplay, and if you have questions they are usually more than happy to help you out and make sure you do what you want to do. You can do just about anything, and if you have questions, ask.

4. Make sure that you participate in other conversations. If you only pay attention to the game, then you won't have as much fun; be ready to joke around as well. Again, these are some of the nicest people you will meet, and they love to keep everything comfortable for everyone, so participate. It's important.


5. Don't take things too seriously. Some DnD systems are designed to be challenging, but this will not stop you from getting a good story as well. Be ready to accept a challenge and go with the flow.

6. Bring your own dice. The game is essentially about rolling dice, so if you plan to play a lot, invest $10 and get some. If it's your first time, experienced players usually have extras.


I know this isn't exactly ethnographic style, but these are some of the things I will talk about in our final paper, and it is a basic breakdown of the new world we entered and some of the central cores of the culture surrounding it.