User Research on Hard Mode

So I work at UpGuard currently. 

We make a suite of products for IT professionals that allows them to achieve complete cyber resilience. 

Challenge 1 - We are a startup

Startups are notorious for not having enough resources to invest in design. They often rely on internal people being similar to the users, instinctively making decisions based on their own experiences. Or they work closely with a couple of customers to create a product that fits those customers' specific use cases. 

The problem with using the team's experiences as the basis for decisions is that those experiences can be biased or outdated. Not everyone has the same experience (even when they are doing the same job).

UpGuard made the second mistake during its first five years of existence - it would build one-off features just to land deals. These features were usually rushed, not designed, and too hard to maintain. The result was a monolithic product with a couple of valuable features used by most customers and a whole bunch of features used by no one. Work on the valuable features would break the other features, leading to a long, frustrating development cycle. 

Challenge 2 - We make B2B software

Unlike consumer apps, where the person who buys the software is usually the same person using it, B2B (business-to-business) software often has one person who makes the decision to buy and a completely different person who uses the product.

Even if many of the companies you work with are small and the user is the same as the buyer, you still have to research, understand, and build for larger organizations. 

So you have to build something that captures the attention of the buyer and something that is actually useful for the end user. This means doing double the research: some to understand who the buyer is and what their needs are, and some to understand who the user is and what their needs are. 

Challenge 3 - End users are different

In the case of UpGuard, the end users aren't even the same from company to company. Some of our users are analysts, some are developers, some are advisors. Each one of those users has a different need, goal, and (most importantly) level of technical knowledge. 

When we first started building UpGuard CyberRisk, we didn't even know who would be interested in a product that monitors your vendors' external security as well as your own and rolls it all up into an easy-to-understand score. So we just made something simple and let the market direct us. 

The more users we onboarded (about 100 in our first quarters of selling), the more trends we could see. 

After two quarters on the market, it was time to start looking at our users and buyers. 

  • Who is using the product?
  • Why are they using the product?
  • What are the workflows we are fitting into?
  • What are the limitations of the product that keep it from becoming more integral?

Obviously, UpGuard CyberRisk is filling some need for people (otherwise they wouldn't buy it) so the first step is to figure out who is using the product. 

I gathered a list of all the users and their job titles. Then I bucketed all the users into segments:

  1. Leaders 
  2. Developers / Architects / Admins 
  3. Analysts 
  4. Managers 
  5. Advisors
  6. Unknown 
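The bucketing step above can be sketched as a simple keyword match on job titles. This is an illustrative sketch only - the keyword lists and titles here are hypothetical examples, not UpGuard's actual mapping:

```python
# Hypothetical sketch: bucket users into segments by job title.
# Keyword lists are illustrative assumptions, not the real mapping.
SEGMENT_KEYWORDS = {
    "Leaders": ["ciso", "cto", "vp", "chief", "director"],
    "Developers / Architects / Admins": ["developer", "engineer", "architect", "admin"],
    "Analysts": ["analyst"],
    "Managers": ["manager"],
    "Advisors": ["advisor", "consultant"],
}

def bucket(title: str) -> str:
    """Return the first segment whose keywords appear in the title."""
    t = title.lower()
    for segment, keywords in SEGMENT_KEYWORDS.items():
        if any(k in t for k in keywords):
            return segment
    return "Unknown"  # titles that match nothing fall through

# Example titles (made up for illustration)
for title in ["Security Analyst", "DevOps Engineer", "CISO", "Office Coordinator"]:
    print(f"{title} -> {bucket(title)}")
```

In practice a pass like this only gets you a first cut; ambiguous or unfamiliar titles land in "Unknown" and need manual review.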

One of the key assumptions driving our development was that the people fixing the risks we detect are in the platform. What I discovered was that leaders are actually the majority of our users. However, the small number of technical people we do have in the platform are active. 

In doing internal interviews with sales and customer success, I started to lay out some key assumptions to validate with actual users. 

  1. People know what all our technical lingo means
  2. People who are in charge of fixing items are also in the system 
  3. Procurement / supply chain users are active in the platform 

Even though we haven't yet started recruiting customers for discovery interviews, we can already shine a light on these assumptions. For example, salespeople reported a trend in their discovery calls: prospects often don't have deep technical knowledge. This means our technical language might make using the product difficult or impossible for these users. People in charge of fixing items are not in the system; in fact, only 11 out of the 100 users are technical people. Finally, hardly any of our supply chain users are active. 

With this new light, we can see that our product has obvious UX vulnerabilities. 

With these assumptions, gathered just from our internal team, we can now start looking at the data and interviewing actual users to validate our thinking and shine a light on our own biases.