We hear a lot about the importance and value of outcomes in the Agile community. Yet, the Agile community remains silent on the subject of data.
Today, data is more readily available than ever before, making it possible to see cause and effect in product decisions in a way that was not possible 20 years ago, when the Agile Manifesto was published.
Let’s first distinguish between superficial and more meaningful outcomes. Customer success can be defined as a result of solving a genuine customer problem such as improving one’s credit score.
Let’s say a financial company wants to offer a service that helps customers improve their credit scores. The impact is not immediate as improving your credit score takes time.
One of the services might allow someone to consolidate their debt under one low-interest loan. This would pay off some of their credit cards, and reduce their monthly payment.
Another feature could optimize the interest earned on customer deposit accounts. These actions can improve a customer’s credit score over time, but not immediately.
Whether a customer clicks on a feature, sees it, or uses it, these are all potential indicators of customer success. The outcome indicator, however, is how much the customer's credit score has actually improved.
Although leading indicators are important in revealing the use and effectiveness of a service, they do not by themselves indicate a company's success. Success is determined by corporate outcomes such as profit and increased customer retention, and those show up in lagging indicators. That is why lagging indicators matter more.
Most Agile authors have completely missed the importance of data in assessing lagging indicators, and thus outcomes. Data is not mentioned anywhere in the Agile Manifesto.
Data is rarely mentioned in articles or books about Agile methods. There is a lot of talk around code. We hear about “refactoring”, testing, and software “developers” – the people who code.
However, the word “data” is rarely used. There is plenty of talk about hypothesis-driven approaches and experiments, yet hardly any about the data needed to run them.
It’s almost as if we’re traveling through Arizona in the US and making comments on the mountains but failing to notice the Grand Canyon. We tend to look in one direction and miss the immense chasm right next to us.
A/B testing is a technique for releasing two versions of a product feature in order to determine which version users prefer. DevOps practices make A/B tests easy to run, so they are common today.
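To make the mechanics concrete, here is a minimal sketch of an A/B comparison. All names and numbers are hypothetical: users are split deterministically into two variants, and a simulated click-through rate is computed per variant.

```python
import random

random.seed(42)  # make the simulated data reproducible

def assign_variant(user_id: int) -> str:
    # Deterministic 50/50 split (a hypothetical assignment scheme)
    return "A" if user_id % 2 == 0 else "B"

# Simulated click-through logs per variant (illustrative data, not real results)
views = {"A": 0, "B": 0}
clicks = {"A": 0, "B": 0}
for user_id in range(1000):
    variant = assign_variant(user_id)
    views[variant] += 1
    # Pretend variant B converts slightly better than A
    if random.random() < (0.10 if variant == "A" else 0.12):
        clicks[variant] += 1

for variant in ("A", "B"):
    print(variant, clicks[variant] / views[variant])  # click-through rate
```

A comparison like this only measures which version users engage with more, which is exactly the limitation discussed next: a click-through rate is a leading indicator, not a customer outcome.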
Yet A/B testing alone falls short here. We are not just asking which feature version users prefer; we are asking about actual customer outcomes. Simple A/B testing cannot assess the long-term outcomes of these features. The question is not whether a feature is used more; the company wants to know whether these features produce the desired outcome.
The question is: Does the credit score improvement service actually improve credit scores of our customers over time? If so, we can advertise it to help us retain and attract more customers.
The company must have data to prove that the service works. For example, does the credit score of its customers increase when they use it?
To answer this question, the company must combine a lot of disparate data, including data about customers' use of its services and data about those customers' credit scores.
It will likely also need data about customers' other behaviours, since customers may be doing other things that affect their scores. That noise must be removed from the data.
A simple Agile story like “I want to demonstrate that our services are improving customers' credit scores” does not begin to capture the data work this requires.
