MVA More Menu Tree Test
This study was an early milestone in my role as a Support portfolio researcher and stands out as one of the most memorable projects due to its impact on the MyVerizon App’s information architecture. The goal was to evaluate the organization and labeling of the More menu, focusing on how easily users could find support-related content. We tested three variations: the existing menu structure and two alternative designs that our team hypothesized would improve usability and discoverability. Insights from the study led to structural updates that reorganized the menu and clarified labeling, resulting in higher click-through rates to support content once users entered the More menu, along with measurable improvements in navigation and overall information architecture.
Support Portfolio at Verizon
The Support Portfolio plays a critical role in shaping the digital support experience for millions of Verizon customers. The work done in this portfolio helps ensure that support tools are intuitive, effective, and aligned with business goals.
The Support Portfolio covers a wide range of experiences that help users get the support they need when they have questions or are trying to solve a problem. Some of the experiences included in the portfolio:
- Self-Service Troubleshooting Tool – Empowering users to independently resolve common issues without needing to call or chat with an agent.
- Wayfinding to Support Tools – Streamlining navigation and discoverability of support features across the app and web experiences.
- General Support Pages – Designing and refining content-rich help pages that provide clear, actionable answers to user questions.
- Support Ticket Management – Helping users view, track, and manage their support requests with transparency and ease.
- Device Diagnostics – Surfacing diagnostic tools that help users identify and address device-specific problems.
- Network Status and Usage – Providing users with real-time visibility into network health, outages, and data usage.
- Generative AI in Search and Chat – Exploring how GenAI can enhance customer support by delivering faster, smarter, and more personalized assistance.
Project Overview
Business Situation/Problem
The “More” menu is a key access point for digital support within the MyVerizon App, yet its structure and labeling lacked validation. To improve navigation, increase support self-service, and reduce reliance on call centers, we aimed to evaluate how effectively users could find support-related content.
Our Hypothesis:
The existing menu may not reflect how users naturally organize or search for support options, leading to confusion, poor self-service rates, and increased support calls.
Research Objective
Evaluate the performance of 3 different menu structures to understand how users naturally categorize and organize the support-related options in the MVA More Menu.
Key Research Questions
- How do users categorize the support-related options in the More menu? If a user designates a particular category, can they explain their reasoning?
- Were there any missing topics or categories they would like to add?
- If users had to order and prioritize the categories, what would that look like?
- Were there any challenges in classifying the labels?
- For the categories defined, are there any options that users feel should be added to that category?
Methodology
Remote, quantitative tree test
Unmoderated individual sessions
Mobile device modality
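For context on the method: in a tree test, participants navigate a bare text hierarchy of the menu (no visual design) to complete find-it tasks, and success is judged against a predefined correct path. Below is a minimal toy sketch of how a stimulus and task can be represented; all labels, names, and the task wording are hypothetical, not the actual tested hierarchies.

```python
# Toy representation of a tree-test stimulus: the menu as a text-only
# hierarchy. Labels are illustrative, not the tested designs.
menu = {
    "Troubleshooting": ["Battery issues", "Dropped calls", "Slow data"],
    "Home Support": ["Check Network Status", "Router help"],
    "Feedback": ["Rate the app", "Report a problem"],
}

# A task pairs a scenario prompt with the correct destination path.
task = {
    "prompt": "Your phone battery drains quickly. Where would you look?",
    "correct_path": ("Troubleshooting", "Battery issues"),
}

def is_direct_success(clicked_path: tuple, task: dict) -> bool:
    """An attempt counts as a direct success when the participant's
    final clicked path matches the task's target exactly."""
    return clicked_path == task["correct_path"]

print(is_direct_success(("Troubleshooting", "Battery issues"), task))  # True
print(is_direct_success(("Feedback", "Report a problem"), task))       # False
```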
Participants
- 300 participants total
- 3 test groups viewed 3 different menu formats:
  - 100 users viewed Option A
  - 100 users viewed Option B
  - 100 users viewed Option C
- All current Verizon customers
- All were required to have used the MyVerizon App within a month of participating in the study.
- Mix of ages, incomes, ethnicities, and genders.
Tools
UserZoom - Utilized for recruitment, data collection, monitoring, and analysis.
Figma - Utilized to create the prototypes.
Google Office Suite - Utilized to write test plans and create findings presentations.
Miro - Utilized for note taking and synthesis.
Evaluated Stimuli

Task List

Timeline
Week 1
- Research brief submitted.
- Sent out invites for a research sync to walk through the brief and prototypes.
- Refined the brief and prototypes.
- Held project kickoff with the finalized brief and prototypes.
- Began writing the test plan after kickoff.
Week 2
- Finished writing the test plan.
- Sent the test plan to stakeholders for review, allotting 3 days for feedback.
- Edited and finalized the test plan based on feedback.
- Programmed the test in UserZoom.
Week 3
- Submitted a request for a test run to verify the programming was correct and the tasks were clear.
- Analyzed test run results and adjusted task wording for clarity.
- Submitted the finalized test and began data collection.
Week 4
- Data collection finished over the weekend.
- Began note taking and data synthesis.
Week 5
- Finished synthesis.
- Wrote the final presentation report.
- Presented findings to the team.
Analysis
My analysis focused on understanding both quantitative performance metrics and behavioral patterns to uncover where the information architecture aligned or conflicted with users’ mental models. I approached analysis across 3 main dimensions:
- Task Success Rate: For each task, we calculated the percentage of participants who reached the correct destination without backtracking. This provided a baseline for comparing the effectiveness of each menu variation.
- Navigation Path Analysis: Using clickstream data, I mapped the most common navigation paths participants took. Frequent detours, backtracking, or unexpected routes revealed where the hierarchy or labeling failed to align with user expectations. (A sketch of these first two calculations appears after this list.)
- Error Patterns and Misclassifications: I closely examined where participants consistently selected incorrect categories or abandoned tasks. These misclassifications highlighted moments where terminology or grouping did not match how users conceptualize support topics.
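To make the first two dimensions concrete, here is a minimal sketch of how this kind of analysis can be computed from an exported results table. The file name and column names (success, backtracked, click_path, menu_version, task_id) are illustrative assumptions, not UserZoom's actual export schema.

```python
from collections import Counter
import pandas as pd

# Hypothetical export of the tree-test results: one row per
# participant-task, with the click path serialized as "Level 1 > Level 2".
df = pd.read_csv("tree_test_results.csv")

# Task success rate: share of participants reaching the correct
# destination without backtracking, per menu version and task.
df["direct_success"] = df["success"] & ~df["backtracked"]
success_rates = df.groupby(["menu_version", "task_id"])["direct_success"].mean()
print(success_rates)

# Navigation path analysis: the most common click paths per version and
# task. Frequent unexpected routes point at labels or groupings that
# clash with users' mental models.
for (version, task_id), group in df.groupby(["menu_version", "task_id"]):
    top_paths = Counter(group["click_path"]).most_common(3)
    print(version, task_id, top_paths)
```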
Challenges
Participant Quality Challenges
During the study, about 87 participants were identified as disengaged, likely rushing to complete tasks for incentives. I re-recruited replacements and closely monitored participant quality, which added two days to the timeline but ensured reliable results.
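One common way to flag this kind of disengagement is screening for "speeders," participants who finish far faster than is plausible for careful reading. A minimal sketch of that heuristic, assuming a per-participant summary with a hypothetical completion_seconds column:

```python
import pandas as pd

# Hypothetical per-participant summary; the file and column names are
# assumptions for illustration.
sessions = pd.read_csv("participant_sessions.csv")

# Participants finishing in under a third of the median completion time
# are unlikely to have read the task scenarios; the cutoff is a starting
# point, tuned by manually reviewing the flagged sessions.
threshold = sessions["completion_seconds"].median() / 3
speeders = sessions[sessions["completion_seconds"] < threshold]
print(f"{len(speeders)} participants flagged for manual review")
```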
Key Takeaways
- For mobile tasks like battery drain or dropped calls, Versions A and B performed best. These versions featured “Troubleshooting” prominently in the top-level menu.
- For home internet tasks, users often navigated to the top-level menu items “Home Support” or “Manage 5G Home,” indicating that a clear internet-specific menu label helps users know where to go.
- Success rates for tasks involving contacting Verizon were consistently high (>85%).
- The task for finding signal issues had a success rate of 1% or less in Versions A and B, whereas Version C had a 62% success rate. The commonality between Versions A and B is that the correct option was located within the high-level option “Feedback.”
Impact
Internet-related tasks:
- None of the menu versions reached the 65% success benchmark.
- Many participants selected “Home Support” or “Manage 5G Home,” showing they associated these labels with internet-specific issues.
- Result: We introduced two new L1 categories, “Mobile Troubleshooting” for mobile-related support and “Home Internet Support” for home-specific issues, to make navigation clearer and better aligned with user expectations.
Signal issue task:
- Success rates for the task “check for signal issues in the area” were extremely low in Versions A and B (1% or less).
- Version C performed significantly better at 62%, highlighting how placement within the menu directly affects task success.
- Result: We added additional access points for the “Check Network Status” tool within both new L1 categories while keeping it available in “Feedback” as well.
Overall impact:
- These structural changes made the menu more intuitive and easier to navigate.
- Click-through rates to support-related content increased, confirming improvements in both navigation and usability.