Can you create value by not building features?

In this blog post, I will share why it is important to remove underperforming features and how my team systematically removed such features to dramatically improve the product’s key metrics. This process helped us grow our Day 1 retention rate by over 20% while reducing the number of features we support. But it wasn’t always like that…

The Capped Potential Problem

At the beginning of 2021, I took ownership of a legacy mobile app that displays personalized news on the lockscreen of mobile phones. Over the years, the product developed a strong presence, mainly in Latin America, and currently has over 8M Daily Active Users receiving daily news updates from leading publishers directly on their lockscreen.

Until then, our product team had been consistently launching new features, but they weren’t having a significant impact on our KPIs. The idea that the product’s potential was capped had become “common wisdom” in the organization.

This is the ultimate frustration for every product team, but the problems didn’t stop there…

“We have a problem with Hungarian”

During our release process, our QA team found a bug that occurred when users switched their device language to Hungarian. I was surprised, as I didn’t realize the app supported Hungarian. I was even more surprised to learn we supported over 36 languages that were used by only 0.9% of our users. I didn’t think that fixing the problem would actually make a difference.

Instead of fixing the bug, I suggested we remove the redundant languages and see what happens. What “happened” amazed me – our guess was right: users did not complain. What surprised us was that our app size decreased and our product performance improved. But then it made me think…

“What will happen if we remove more features?”

At first, the mere thought of removing something the team had worked so hard to build was scary. In fact, when thinking about removing features, we usually have many excuses, such as the team’s investment or that the request came from a specific customer. So maybe we should just keep them?

Why keeping underperforming features is bad for your product

Obviously, our beloved features aren’t performing as we expected, but why should we get rid of them? We’ve already made this effort, and we can even see users using the feature in our analytics platform (“Finally, after 5 years, we have early adopters!”).

Deep in our hearts we know that this feature isn’t strategic in any way. We can’t justify investing more in it, because we know it won’t move the needle. This is exactly the point where, by not removing the feature, we’re damaging our product.

First of all, the 1% who actually use the feature are going to suffer, mostly from a lack of attention. But that 1% should, by definition, be the least of your problems; you should worry about the 99% who “just don’t get the feature”.

Having an underperforming feature will constantly hurt your core user segments. The damage derives from both the direct and indirect costs of keeping this feature alive. Let’s take a closer look.

The Direct & Indirect Costs of Underperforming Features

The direct costs of keeping an underperforming feature are usually obvious to the product team, but might not be measured. For example, if there’s a minor bug, we’ll fix it. But sometimes that time spent fixing can pile up into days and weeks that could have gone to more important features.

In my opinion, however, the direct costs are just a small tax to pay, compared to the indirect costs of having underperforming features. The first and most important cost is the lack of strategic focus.

The lack of strategic focus makes the product harder to sell, longer to implement, and slower to be adopted by users. You also need to account for the maintenance cost of keeping these features “future-compatible”: every improvement, refactor, or even external change you make to the product will require you to adjust these features as well.

Okay, but how do you know which features to remove?

Finding the Deadweight in My Product

By now you realize that keeping underperforming features comes at a (very high) cost. The hard part is choosing which features should go. In this section I’ll share our methodology for making the difficult decision to remove a feature.

The decision is difficult because it’s emotionally tied to the level of investment the team has poured into the feature – otherwise known as your Sunk Cost. To overcome this bias, we propose using the same prioritization framework you already use for new work to de-prioritize your underperforming features. For example, at Taboola we use the ICE methodology to prioritize new initiatives based on a 1-10 ranking for Impact, Confidence and Ease.

We compare the ICE scores of all features in the backlog to decide on the priorities.
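
To make this concrete, here’s a minimal sketch of ICE scoring in Python. Note that the combination formula is an assumption on my part – ICE is commonly computed by multiplying the three scores, though some teams average them – and the feature names and numbers are made up for illustration:

    backlog = [
        # (feature, impact, confidence, ease) – each ranked 1-10
        ("lockscreen weather widget", 8, 6, 4),
        ("dark mode", 5, 9, 7),
        ("share to WhatsApp", 7, 7, 8),
    ]

    def ice_score(impact: int, confidence: int, ease: int) -> int:
        # One common ICE variant: multiply the three 1-10 rankings.
        return impact * confidence * ease

    # Highest score first – these are the initiatives to build next.
    for feature, i, c, e in sorted(backlog, key=lambda f: -ice_score(*f[1:])):
        print(f"{feature}: {ice_score(i, c, e)}")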

Reverse Engineering ICE

The same framework can be applied in reverse to select underperforming features for removal. Since these features are already live, we usually have much more confidence in the impact they are (not) making. The R&D team can then estimate the complexity of removing them. It’s important to gather your stakeholders and enable them to share their views on your prioritization. By systematically applying this process, we end up with a prioritized list of features that we can start removing from the bottom up. But where do we start?
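
As an illustration of what “reverse engineering” could look like in practice – this is a sketch of one way to operationalize it, not the exact scoring we used, and the features listed are hypothetical – you can replace the Impact guess with a measured adoption score, keep Confidence high (you have real data now), and have R&D rank Ease as the ease of removing the feature:

    live_features = [
        # (feature, measured_impact, confidence, ease_of_removal) – 1-10 each
        ("Hungarian localization", 1, 9, 8),
        ("legacy RSS import", 2, 8, 5),
        ("lockscreen weather widget", 6, 9, 3),
    ]

    def removal_priority(impact: int, confidence: int, ease: int) -> int:
        # Inverting impact (11 - impact) turns "barely used" into a high score,
        # so low-impact, easy-to-remove features float to the top of the list.
        return (11 - impact) * confidence * ease

    for name, i, c, e in sorted(live_features,
                                key=lambda f: -removal_priority(*f[1:])):
        print(f"removal candidate: {name} (score {removal_priority(i, c, e)})")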

The Usual Suspects

The “usual suspects” are often found in areas outside your product’s “golden flow” – the path through which users receive value from your product.

  1. Settings: In this area we’re likely to find things we developed “just in case” or in response to a very specific use case. Our goal here is to determine whether these use cases are frequent enough to justify their existence. This can be done either through your product analytics platform or by looking at support tickets and feature requests in that area.
  2. Legacy Customers: Look at your customer list and see if you have built custom-made features for old customers. Are these customers active? Do they still represent your core user segment?
  3. Analytics Events: Go to your product’s analytics platform and extract the events that have had the least usage in the previous 3-6 months. Do these events originate from specific areas of the product? These areas could potentially host underperforming features (see the sketch after this list).
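
The sketch below shows what that analytics pull might look like. It assumes you can export raw events to a CSV with (hypothetical) event_name and timestamp columns – adapt the loading step to whatever your analytics platform actually exposes:

    from datetime import datetime, timedelta

    import pandas as pd

    # Load an export of raw events; the file name and columns are assumptions.
    events = pd.read_csv("events_export.csv", parse_dates=["timestamp"])

    # Keep only the last 6 months of data.
    cutoff = datetime.now() - timedelta(days=180)
    recent = events[events["timestamp"] >= cutoff]

    # Count how often each event fired; value_counts() sorts descending,
    # so the tail holds the least-used events – our usual suspects.
    usage = recent["event_name"].value_counts()
    print(usage.tail(10))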

When applying this approach, we identified many features that were just “sitting there” waiting to be removed.

When to Iterate and When to Kill

Even after we’d gone through the entire process and identified potentially underperforming features, we still had a difficult time deciding to remove them. This is because of the inner product voice that tells us:

“Maybe just another iteration could turn this around”.

The dilemma of when to iterate and when to kill a feature is something every PM comes across at least once in their product career. My suggestion for solving this dilemma is to use the same method we use for setting our goals – the OKR method. In a nutshell, we define ambitious and measurable goals every quarter and rank our performance against them using 3 colors.

When I think about whether to iterate on a feature (regardless of whether I defined it as underperforming), I ask myself what the best case scenario for this feature could have been. To be more precise: if the feature worked like magic (exactly as I planned), by how much would it drive my metrics up? 5%? 10%? More?

After doing this hypothetical exercise, which I usually do before building the feature, I go back and measure the impact the release has actually had.

Going back to the OKRs:

  • Red Zone (0%-40% of my goal) – additional incremental iterations won’t work
  • Yellow Zone (40%-70% of my goal) – good chance I’ll get there with further incremental iterations
  • Green Zone (70%-100% of my goal) – the concept is delivering as intended

For example, let’s say I build a new sign-up form and expect my conversion rate to grow by 10%.

  • Red Zone Scenario: After the release we see that we’ve only grown by 3.5%, which is 35% of my target of 10% – I would rather try out a new concept than iterate on the existing one
  • Yellow Zone Scenario: We managed to grow our conversion rate by 6.5%, which is 65% of my target of 10%. I am much closer to my intended goal and will decide to iterate on the existing concept
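
These two scenarios boil down to a simple attainment calculation. Here’s a tiny helper that encodes the traffic-light zones – the 40% and 70% thresholds come straight from the definitions above, while the function itself is just an illustration:

    def okr_zone(actual_lift: float, target_lift: float) -> str:
        # Attainment is the measured lift as a fraction of the planned lift.
        attainment = actual_lift / target_lift
        if attainment < 0.4:
            return "red – try a new concept instead of iterating"
        if attainment < 0.7:
            return "yellow – iterate on the existing concept"
        return "green – the concept delivered"

    print(okr_zone(3.5, 10))  # red zone scenario: 35% of target
    print(okr_zone(6.5, 10))  # yellow zone scenario: 65% of target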

The same concept can be applied to underperforming features as well. We can estimate the best case scenario for the feature’s adoption and then measure the actual results once the feature has been released. If we don’t see it moving the needle, it’s definitely not worth another iteration.

Wrapping Up

In this article we discussed why keeping underperforming features could damage your product. We shared where and how to find the usual suspects for such features and our method for solving the “Iterate or Kill” dilemma.

Since applying this process, our product team has carefully evaluated which features we can remove and which we must keep. This has led us to reduce the product’s scope by 25% and has subsequently improved our key metrics, such as our Day 1 retention, which grew by over 20%.

As Product Managers we strive to bring more value, but we sometimes confuse delivering more value with delivering more features. Or, to paraphrase a famous Bill Gates quote: measuring programming progress by lines of code is like measuring a plane’s quality by its weight.
