Kairavi Chahal

Mint.com | Usability Study

 

Jump To

01. Study Objectives & Research Questions
02. Methods & Planning
03. Testing
04. Analysis
05. Findings & Recommendations
06. What I Learned

Main Findings

  • Users were unsure of what the budgeting feature could and couldn’t do and how to use it

  • Menu options were ‘overwhelming’

  • Parts of UI were too hard to read

What I Learned

  1. Always pilot your study!

  2. Participants can have contradicting opinions.

  3. Working with a real website is tricky.

Project Specs

Role: Evaluation, study design, testing, analysis
Team: Kairavi Chahal, Bermet Jamankulova, Yoonbo Shim, Sandy Tsai
Timeline: January–March 2018 (10 weeks to research, test, and analyze)
Skills: Usability test planning, design, execution, analysis

Mint.com — Usability Study

Mint.com is a personal finance management app. It provides consolidated views across multiple bank accounts and helps track budgets and spending. In 2016, Mint.com had 20 million users.


01. Study Objectives & Research Questions

Each of us had tried to use Mint.com several times, and each of us ended up frustrated. So my group and I picked the budgeting task flow for our usability test, to get to the bottom of why we all had the same frustrating experience with it.

Using Jakob Nielsen's 10 Heuristics, we conducted a heuristic evaluation of Mint.com, focusing primarily on the "Budget" tab and a few related features. While there were many visual/UI problems, we found a deeper, underlying problem with the mental model and user expectations that we wanted to address. This is why issues such as "Not enough contrast" are rated as minor, while "What happens after you set a goal" is of major severity.

Summary of findings from heuristic evaluation.

 

Based on these findings, we decided to focus on the following research objectives:

 

02. Methods & Planning

Participants & Recruiting

Using a standard screening survey, we recruited 6 participants, with whom we conducted hour-long usability tests. The participants had varying levels of self-reported budgeting experience; all had used Mint.com before, but not recently (within the past 12 months).

Tasks

As per the study objectives, we wanted to test creating, adjusting, and tracking a budget. We first listed the tasks and defined the success criteria for each.

Usability Study Kit

After finalizing the tasks, we created scenarios around them, and listed what data we wanted to specifically observe during the task. This list of task scenarios and data was compiled into a study kit that would allow us to administer the study smoothly while noting user feedback.

View the Usability Study Kit


03. Testing

During each test session, we audio- and video-recorded the participant and their screen, and a dedicated note-taker captured observations. In addition to the moderator, the rest of the team observed each session for educational purposes; each of us moderated at least one study.

The data was collected in four main ways:

  1. A screening survey included demographic questions in addition to the participation criteria

  2. A pre-study questionnaire collected more detailed information about each participant's budgeting habits and motivations

  3. The usability study itself comprised 5 scenarios completed using the Think-Aloud Protocol, as well as post-task ratings

  4. A post-study questionnaire captured overall satisfaction and gave participants a chance to raise any additional or lingering concerns


04. Analysis

After all tests were conducted, we each transcribed our notes and participants' questionnaire answers into a shared spreadsheet, color-coding, counting, and highlighting as we went along. After attempting to analyze the data individually and then combine our findings, we ended up holding two in-person working sessions instead, using whiteboards and sticky notes to our advantage.

Parsing user comments and issues by task scenario.

Grouping participant comments into key issues, using affinity diagramming.
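The counting step above can be sketched in code. This is a minimal illustration, assuming a hypothetical coding scheme: the issue labels and comment data below are invented stand-ins for our actual spreadsheet, not the real dataset.

```python
from collections import Counter

# Hypothetical coded comments as (participant, issue) pairs,
# standing in for the color-coded cells of the shared spreadsheet.
coded_comments = [
    ("P1", "unclear-budget-scope"), ("P2", "unclear-budget-scope"),
    ("P3", "menu-overload"), ("P1", "menu-overload"),
    ("P4", "low-contrast"), ("P2", "unclear-budget-scope"),
]

# Total mentions per issue, most frequent first.
mentions = Counter(issue for _, issue in coded_comments)

# Number of distinct participants who raised each issue; breadth
# across participants matters more than raw comment counts.
participants_per_issue = {
    issue: len({p for p, i in coded_comments if i == issue})
    for issue in mentions
}

print(mentions.most_common())
print(participants_per_issue)
```

The same tallies fell out of our affinity diagram naturally: each sticky-note cluster became an issue, and the notes in it became its count.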


05. Findings & Recommendations

Besides surfacing usability issues, a study like this should also document what participants liked and what is working well, i.e., what to keep doing.

What Is Working (Keep Doing)

We saw that 5 out of 6 participants were able to quickly and easily locate and use the 'create a budget' feature. On average, participants rated this task 2 ('Somewhat Easy') on the 5-point post-task Likert scale.

We also observed that 5 out of 6 participants appreciated the notifications feature.

Room For Improvement

We identified 8 broad issues, which we ranked with low, medium or high severity depending on how frequently the issue occurred and how likely it was to prevent the user from completing a task. The issues are summarized in the chart below.
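The ranking rule described above can be sketched as a small scoring function. The thresholds here are illustrative assumptions for the sketch, not the exact cutoffs we used:

```python
def severity(participants_affected: int, total_participants: int,
             blocks_task: bool) -> str:
    """Rank an issue by how frequently it occurred and whether it
    was likely to prevent task completion. Thresholds are hypothetical."""
    frequency = participants_affected / total_participants
    if blocks_task and frequency >= 0.5:
        return "high"      # common and blocking
    if blocks_task or frequency >= 0.5:
        return "medium"    # either blocking or common, not both
    return "low"           # rare and non-blocking

# e.g. a blocking issue hit by 5 of 6 participants ranks "high"
print(severity(5, 6, blocks_task=True))
```

In practice we made this call by discussion rather than formula, but the two inputs (frequency and likelihood of blocking the task) were the same.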

Major Recommendations

Recommendations for the high severity issues are below.

Details about each issue (including quantitative data and screenshots) and our recommendations are outlined in the report and presentation below.

View the report | View the presentation


06. What I Learned

  • Always pilot your study! We were lucky to have recruited extra participants: after our first session we discovered some kinks in the phrasing of our scenarios, so we fixed those for the remaining sessions and treated the first session as a pilot.

  • Participants can have contradicting opinions. Two participants may say opposite things about the same feature or task. This is fine; it is up to the researchers to carefully present both sides of the story while making their own recommendations, so that stakeholders and designers can make informed decisions.

  • Working with a real website is tricky. Since this was a class project and not sponsored by Intuit, we could not fully control the product being tested. For example, the site was updated between the creation of our usability kit and the test sessions, and we had to go back and tweak a scenario to match the update. Additionally, since this was a financial application, we created a dummy account and populated it with fake transaction data; it would have been interesting to see how participants performed with their own transactions in their own Mint.com accounts.