How We Secured Our $15M Line of Revenue

Case Study 3

THE CHALLENGE

At my company, we fulfill mobile advertising campaigns. As digital media spend grows, so does the number of clients we have to support. One of our biggest problems was that 100% of campaign reports were created by hand. Putting together hundreds of Excel or PDF reports each month for all of our campaigns is slow and error-prone, especially as our client count increases rapidly.

OUR GOAL

Creating reports by hand each month is not a long-term solution; we needed to scale for the influx of future campaigns. Our mission was to provide real-time, automated, self-serve reporting for our users. This would free our team from creating reports by hand and also empower our users to review reporting whenever they wanted.

MY ROLE

I was in charge of identifying the real problem, defining exactly what we needed to build, leading the team that built it, and managing the strategic rollout of the product across the country. This process included data analysis, user research, prototyping, user testing, and coordinating with product marketing for the new product release.

WHAT WE DISCOVERED

We looked at three main things: our data, our internal users (who created the manual reports), and our external users (who received the reports).

The data helped us determine the most common campaign types that were submitted. There were nearly 100 ways a campaign could be submitted, but after analyzing the data we discovered that by supporting just 3 campaign types we could cover 85%+ of submitted campaigns. So, our data told us which campaign types to focus on to achieve the 80/20 of campaign reporting.
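The analysis above amounts to ranking campaign types by volume and stopping once cumulative coverage crosses a threshold. A minimal sketch of that idea, where the campaign-type names and counts are entirely hypothetical (the real data isn't in this write-up):

```python
from collections import Counter

# Hypothetical submission log: one campaign type per submitted campaign.
submissions = (
    ["display_banner"] * 50
    + ["video_preroll"] * 25
    + ["native"] * 10
    + ["rich_media"] * 8
    + ["audio"] * 7
)

counts = Counter(submissions)
total = sum(counts.values())

# Rank campaign types by volume and accumulate coverage until
# the threshold (85% here) is reached.
coverage = 0.0
n_types = 0
for ctype, count in counts.most_common():
    n_types += 1
    coverage += count / total
    print(f"Top {n_types} types cover {coverage:.0%} of submissions")
    if coverage >= 0.85:
        break
```

With these made-up counts, the top 3 types reach the 85% mark, mirroring the shape of the finding described above.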

We then looked internally to our Account Managers (who serviced our direct users). We uncovered the most common complaints, the most common requests, and the most common user inquiries.

By understanding those complaints, requests, and inquiries, we could design a user interface that proactively fulfilled the most common demands. The hope was that our users would be empowered to get what they needed, freeing our Account Managers to spend more time on impactful, meaningful work. From speaking with our internal users, we could determine exactly what this self-serve reporting needed to include.

We then reached out to our users and scheduled calls with them.

Before jumping on the calls, we needed to define a clear purpose and goal for these interviews. We determined that our goal was to understand why users needed campaign reporting and how they went about getting those reports.

With that goal in mind, we used a framework to script the phone interviews so that our questions would specifically validate our assumptions and give us insight into our unknowns.

I learned this framework elsewhere and wish I could give its creators proper credit in this case study.

From our user interviews, we determined that these reports were not necessarily for our direct users but for our users' clients, and our users wanted to feel good showing their clients these reports. We also discovered the actions users were most likely to take, such as changing date ranges or finding specific reports. This was a critical piece of insight we could use. From user research, we had more insight into exactly why our users needed these reports and how they would go about getting them.

PROTOTYPING AND TESTING

From the insights from data, account managers, and users we had an idea of what we needed to build. So, we created a high-fidelity interactive prototype to test. We tested these prototypes internally and externally.

After we passed these mocks by proxy users and implemented feedback from those, it was time for user testing with real users!


Before meeting with our users for user testing, we had to make sure we were accomplishing our goals. Based on the user research above, we scripted the user-testing tasks and goals to ensure we tested what our users needed.

  • Data informed us of the campaign types that needed to be supported.
  • User Research told us why they needed these reports and what data they needed from them.

So, our user testing focused on task success and learnability based on those insights. If users could resolve the most common complaints, questions, and requests on their own, and the product included the data they needed, we would be golden!

So, we flew to San Francisco to meet with our users in their office to perform contextual interviews and user testing. We wanted to see how they experienced everything in the environment where they worked.

THE RESULTS FROM USER TESTING

Fortunately, our research was sound: users completed 6 of the 7 most common tasks they would usually perform when getting campaign reports. As a result, we knew what we had to fix and what else to implement that we believed would beat their expectations.

STRATEGIC ROLLOUT

To guide our strategic rollout, we determined the segments of users that would benefit most from the first version. Specifically, we had to determine which users in which markets submitted the campaign types we supported (remember the 80/20 above?). From this, we released to a market with a small number of very active users to ensure the product would get used and that we would get feedback.

This let us gather feedback in a manageable way and implement changes before the product was exposed to the vast majority of users.

We then continued to roll out the product by larger segment groups until all users had access.

FOLLOW UP USER SURVEY

A product is always changing and can always be improved. So even though we did data analysis, user research, user-testing, and a strategic rollout, we decided to conduct user surveys. Below are the results.

29.4% Check reporting multiple times a week.
88.2% Agree or Strongly Agree that Mobile Reporting is useful.
94.2% Agree or Strongly Agree that Mobile Reporting is easy to use.
82.3% Disagree or are Neutral that Mobile Reporting helps sell more.
70.6% Disagree or are Neutral on Mobile Reporting’s suitability for end-of-month (EOM) reporting.
64.7% Agree or Strongly Agree they can use this for client inquiries.
88.2% are Satisfied or Very Satisfied with the Mobile Reporting Dashboard.
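Percentages like those above are typically produced by bucketing raw Likert responses into a "top-two-box" share. A minimal sketch of that computation, using entirely hypothetical responses for one question (the actual respondent count and individual answers are not part of this write-up):

```python
# Hypothetical raw Likert responses for a single survey question,
# e.g. "Mobile Reporting is easy to use".
responses = (
    ["Strongly Agree"] * 9
    + ["Agree"] * 7
    + ["Neutral"] * 1
)

# Top-two-box: the share of respondents who Agree or Strongly Agree.
favorable = {"Agree", "Strongly Agree"}
pct = sum(r in favorable for r in responses) / len(responses)
print(f"{pct:.1%} Agree or Strongly Agree")  # → 94.1% Agree or Strongly Agree
```

The same bucketing works for the "Disagree or Neutral" figures by swapping the set of answer categories counted.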

Ultimately, this survey told us a few things:

1. By doing research, building prototypes and testing we were able to build a valuable product that users actually use. Not only were we able to roll out reporting to support 80% of the campaigns, but users felt like this reporting was valuable.

2. A majority of users do not feel like this reporting helps them sell more campaigns.

3. This reporting will not suffice for end-of-month reporting.

With that, we now have a clear direction on how to improve the product. The next step is to figure out why users feel this way, then take that feedback and implement it to further meet our users’ needs.

USERS HAD NICE THINGS TO SAY