CS247: Human-Computer Interaction Design Studio

Milestone 5B: User Studies Results Writeup

Introduction

We have two primary goals for this first round of user studies. The first is to understand how intuitive our interface is -- whether users can interact well with it, and whether the design and flow of our application are easy to use. Our second goal is to gain deeper insight into the situations in which users will find our application useful. Our initial stated “use case” for the app was for friends to find one another at a prescheduled event or meeting, but we believe the app’s functionality can facilitate communication and coordination for many different types of events.

We will go into depth about each of our two stated goals below.

Driving Questions and Hypotheses

Goal One: Learning about the Usability of our Application

Q1: Will users only open the application to find each other during scheduled meetings at new locations?

Hypothesis 1: Although this was our initial use case, our new hypothesis going into our user studies is that users will also use the app for more casual meetups that they have not previously scheduled into their calendars. For example, Charles may have verbally agreed to meet Angela at the Embarcadero Train Station; even though the meetup was never formally scheduled in his calendar, he still has an incentive to open the application upon arrival and create an event on the spot to help him communicate with and find Angela.

Q2: What features define our application, and are there any that are superfluous?

Hypothesis 2: We would like to create an application that does one thing well (enabling users to find their friends), rather than dip our toes into every possible feature. Thus, we are curious to find out what users think is the most useful part of our application. We hypothesize that the answer will be the map showing friends’ real-time locations, combined with pinned photo data on the map.

Q3: Are features discoverable? Is the interface easy to use?

Hypothesis 3: Currently, we use a text-based approach for buttons, so we believe the labels themselves should be relatively straightforward. However, we are interested in what the user interaction flow is like. We hypothesize that by keeping the view stack relatively shallow, users should be able to accomplish what they want quickly.

Goal Two: Understanding Primary Use Cases for our Application

Q4: Are users willing (and likely to) interact with the app prior to the event? Is it too much overhead to ask users to remember to create a new event several hours or days prior?

Hypothesis 4: We hypothesize that people will not mind creating a new event in the app in order to use it. Many applications, such as Facebook Events, are not synced with users’ calendars, specifically because a user may want to be selective about which events show up in which application. If our hypothesis proves wrong, we will have additional incentive to incorporate syncing with Google/Apple calendars to better facilitate this particular use case. On the other hand, syncing could frustrate users by cluttering up the Events screen with events they don’t want to use the app for (e.g., meeting a friend at a very specific location, like the Claw).

Methodology

For this user study, we recruited users from among our friends and classmates. We feel this is a representative user base for our application, as college students often have meetings in new or unfamiliar places (visiting teammates in other dorms or buildings, going to large festivals and events, meeting friends at train stations or restaurants, etc.). Since our tester base comes from our friends, we simply sent emails and texts asking them to participate in this study. In exchange for their assistance, we also agreed to test other teams’ prototypes. Because our goals in this study revolve around how users interact with our app, we chose tasks that would specifically allow us to observe users in action. Users will participate in these tasks in pairs.

Task 1: We will ask our testers to go through the process of creating an event in the app and inviting friends to it. After users have completed this action, we will talk with them about their experience. Specifically, we want to learn about the overhead of event creation and ask users whether they would actually go through this process with their real-life events. As part of our data collection, we will prepare a set of survey questions for each user to answer; the answers will help us understand the potential use cases our testers see for our app. From initial pilot testing with another group, we have found so far that testers are, in theory, amenable to in-app event creation. One tester stated that it was no different from how many other event-driven apps, like Facebook Events, require separate event creation, but this may just be because people are more agreeable to doing additional work while using a prototype than they would be in real life.

Our survey questions will focus on the overhead of event creation and on the situations in which testers would actually use the app.

Task 2: We will place User A and User B in different parts of a building, have the two users join the same event on our app, and ask them to find each other using the map and the rich media options. One of us will observe each user during this process. We will record the sequence of actions each user takes, any roadblocks that come up (with either the app’s functionality or its interface), and how long the entire process takes. We will also run a control experiment to observe how people find one another without the app, and compare the two processes.

Results

Task 1: The goal of this task was to gain insight into how likely users were to pre-plan the use of our app by creating events beforehand and inviting their friends. To test this, we asked five users to go through the process of event creation, then interviewed them about the process afterward. Overall, our hypothesis that users would not mind creating events beforehand was disproved: users found the process tedious and expressed disbelief that they would put so much thought into finding friends at an event before they had even arrived.

One of our testers, Jason, when asked about his experience, said, “I can’t believe I have to enter this much information… a start time… and an end time!” This was despite the fact that he had previously cited start and end times as very important metadata for describing an event. We found that users in general would prefer a quicker process, one that didn’t require remembering to create a new event in-app several hours before the real-world event. When surveyed about the possibility of syncing with Google Calendar events, however, the response was also not entirely positive: some testers were concerned that too many events would be synced, many of which they don’t need location assistance with (e.g., meeting someone in a friend’s room or in front of their dorm).

Task 2: This task allowed us to gain deeper insight into how our app would be used “out in the wild.” We had users run through the process of finding a friend with the app, using the location, text chat, and photo functionalities. To contrast this with how people currently find their friends, we also had users attempt the same task using only the text chat on their phones.

Testing pair: Jason and Randy

We will dive into the process experienced by one of our tester pairs, Jason and Randy. We asked Jason and Randy to meet at Roble Hall, a dorm with a number of confusing corridors. To set the stage, we placed Jason in a third-floor meeting space, stationed Randy in a different part of the building, and asked Randy to find Jason.

Control: As a first task, we asked them to find one another using text chat. This process was extremely inefficient: the testers spent a great deal of time sending texts such as “Where are you?” and “I don’t see you”. After about 4 minutes of texting with little progress toward finding one another, we allowed the two of them to call each other for more specific directions. On the call, Jason gave Randy detailed step-by-step instructions on how to find him (on the third floor of the building).

After this process, we asked the two testers to attempt the same task using our app. We placed Jason in a new location (a kitchenette, still in Roble). This time, Jason immediately began taking photos of his surroundings, including decorations on the walls and signs on doors. However, the greatest challenge for Randy turned out to be getting to Jason’s general vicinity in the first place, so that he could recognize the scenes in the photos. Even though Randy had access to the map, detailed locations within the building were difficult to trace, especially across multiple floors of the same building. He generally ignored Jason’s pictures of hallways because he found them disorienting. Once Randy arrived in Jason’s general vicinity, however, he found the kitchenette immediately and without difficulty.

Based on these results, we conducted a further experiment in which we allowed users to take videos of their surroundings. Since we did not have this functionality built into our own app, we designed an experience-prototype test using Snapchat’s then-new video feature. We found that video does a much better job of capturing directionality and serves as a good set of instructions for other event attendees on how to arrive at a location from a particular entrance. One tester, Devney, remarked, however, that watching a 30-second video of someone walking was not terribly interesting and that it was too specific to the path taken by one person.

Testing pair: Timothy and Henry

We will also take a closer look at another tester pair, Timothy and Henry. We asked Timothy and Henry to meet at a study desk in Green Library. Green Library, while clearly labeled with room numbers and floor plans, is quite large and follows a somewhat confusing numbering scheme. To begin, Henry started in Meyer Library, looking for Timothy on the fifth floor of the West Stacks (W5).

Control: We ran the same text-chat control condition as with the previous pair. An interesting point was that Timothy preferred longer texts, so the time between texts was relatively long. While his messages were detailed, Henry occasionally found himself waiting for them. Nonetheless, it still did not take the pair very long to find each other: about 7 minutes.

Afterwards, we asked the pair to use the app to complete the same task. We moved to a new location within Green Library (the South Stacks) and retried the task. Henry turned to the map first to get to the relevant part of the building. Timothy was slightly confused about what the camera function did and initially fell back on text chat. Halfway through, however, he tried the camera function, taking a picture of the call-number sign on the bookshelves and posting it. Henry, by then already near the South Stacks, simply followed the picture to the right area. This run was shorter, at about 5 minutes.

Based on this result, it was apparently still not clear what the camera function does without guidance. We believe we need to modify our onboarding process to make the function of features like this clearer. That said, the map and the chat were discovered instantly. Since the search became much faster once the picture was posted, we believe the camera is still an important feature to retain - it may just need to be more discoverable.

Discussion

Task 1: The results of task 1 primarily informed us that, regardless of how useful the application may be, the overhead of event creation is currently too tedious for users to adopt it into their scheduling workflow. The findings were somewhat contradictory: some event metadata, such as start and end times, is important, yet entering it is tedious, and calendar syncing may be overkill. Together, these make it apparent that we need to rethink the balance between ease of use and richness of information. Of course, even though these results were corroborated by all our users, they were still the opinions of a small sample. Asking testers to experience event creation in isolation was also not entirely representative of how the process might feel in the context of a real situation, for example, if a group were rushing to meet up with one another and needed to create a new event immediately.

Task 2: One of our key learnings from Task 2 was that multiple photo angles are not necessarily sufficient for finding one another -- the directionality and specific angle at which a photo was taken, relative to where the viewer is standing, greatly affect whether the photos are informative at all rather than just confusing. In trying video as a supplement to help users find each other more easily, we learned that while photos are too ambiguous, videos may be too specific to one user’s path to be useful in a general context. The caveat, again, is that we have not yet been able to test our application at a full-scale event, where the venue may be somewhere open, like a field; there, people would be less likely to follow one particular path to a single location, and it is hard to find each other not because of building barriers but because of crowds, trees, or other physical obstructions.

Implications

Task 1: Our results for task 1 primarily told us that our current event creation paradigm needs to be simplified, but not how we might effectively simplify it. Since testers were conflicted about what their ideal event creation experience would look like, we will test a number of different creation interfaces during our next iteration: one that syncs with Google Calendar, another with less event metadata, and a third for immediate spontaneous use (sketched below). Our idea for immediate spontaneous usage was informed by Jason’s specific disdain for entering start and end times. Do users really care what time an event ends when they’re trying to find each other? Since the point of the app is not scheduling or calendaring, will users even need to enter events beforehand at all? Or would it make more sense to let them start using the app on the fly, adding friends to a map of a general location rather than to an event, without constraining them to the event level?
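To make the three variants concrete, here is a minimal TypeScript sketch of how their data models might differ. Every type and field name below is hypothetical, chosen purely for illustration; none of this comes from our actual prototype.

// Variant 1: full event metadata (our current design, plus calendar syncing).
interface FullEvent {
  title: string;
  location: string;
  startTime: Date;                 // the field Jason balked at
  endTime: Date;                   // likewise
  invitees: string[];
  syncWithGoogleCalendar: boolean; // hypothetical syncing option
}

// Variant 2: reduced metadata -- only what finding someone actually requires.
interface LightweightEvent {
  location: string;
  invitees: string[];
}

// Variant 3: a spontaneous, on-the-fly session around a general location,
// with no persistent event object at all.
interface SpontaneousSession {
  center: { lat: number; lng: number }; // the general meetup area
  members: string[];                    // friends added directly to a shared map
  expiresAt: Date;                      // sessions are short-lived by design
}

The point of Variant 3 is that it strips event creation down to a single action (start a session, add friends), which is the direction Jason’s feedback points toward.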

Task 2: We realized from task 2 that while we want to keep the same value proposition for our app, we need to do a partial pivot on how we deliver the functionality. Because the angles and directionality of photos are so important to how users understand them, we want to try incorporating video into our next prototype. But given the issue of video length, and the inconvenience for the videographer of having to hold a camera steady for an entire walk to a location, we want to adapt this idea to leverage the other rich data we currently have: map, location, and chat.

As our tester Devney noted, because attendees at a meetup will often arrive at a general location from different starting points and take different paths toward the final destination, it makes more sense to find a way to represent various paths than to show either just people’s general locations or the single path taken by one specific user. We therefore want to prototype adding tracked paths to the real-time locations and photos we currently show on our map. Imagine a set of footsteps or a dotted line following the path of each person who has already arrived close to the final location, annotated with photographs and short video clips. We feel this could be a much more useful way for users who have arrived to direct those who have yet to find them to the correct location. Properly annotated with chats, the map would let users tell each other things like “turn left at this hallway”, potentially marked on the map with a brief video clip, in order to find each other initially, and then similarly keep updating each other with chats and multimedia throughout the event to keep tabs on where people are. A rough sketch of how such tracked paths might be represented follows.
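As a design sketch only, here is one way the tracked-path data might be structured, again in TypeScript with entirely hypothetical names (GeoPoint, PathAnnotation, TrackedPath, shouldRecord); we have not built any of this yet.

// One sampled location along a user's walk.
interface GeoPoint {
  lat: number;
  lng: number;
  floor?: number;    // indoor venues like Roble or Green Library span floors
  timestamp: number; // milliseconds since epoch
}

// A photo, video clip, or chat message pinned to a point along the path.
interface PathAnnotation {
  point: GeoPoint;
  kind: 'photo' | 'video' | 'chat';
  mediaUri?: string; // set for photos and video clips
  text?: string;     // set for chats, e.g. "turn left at this hallway"
}

// The breadcrumb trail of one attendee who has already arrived.
interface TrackedPath {
  userId: string;
  eventId: string;
  breadcrumbs: GeoPoint[];       // rendered as footsteps or a dotted line
  annotations: PathAnnotation[];
}

// Record a new breadcrumb only after the user has moved a minimum distance,
// so the rendered path stays readable instead of becoming a dense scribble.
function shouldRecord(last: GeoPoint, next: GeoPoint, minMeters = 5): boolean {
  const metersPerDegree = 111320; // rough equirectangular approximation
  const dLat = (next.lat - last.lat) * metersPerDegree;
  const dLng =
    (next.lng - last.lng) * metersPerDegree * Math.cos((next.lat * Math.PI) / 180);
  return Math.hypot(dLat, dLng) >= minMeters;
}

One design choice worth noting: keeping annotations separate from breadcrumbs means the dotted line can be drawn cheaply, while photos, clips, and chats are loaded only when a user taps a marker.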

Presentation

Results presentation


© 2014 by Alex Wang, Angela Yeung, and Jessica Liu.