In recent years, supervised machine learning has played a significant role across industries: optimizing advertisements on websites, serving relevant content to different user segments, and detecting anomalies in clients’ transactions. In most such applications, the decisions these models make directly affect the user experience. However, the range of actions taken as a result of these predictions is limited to a pre-determined set defined by the developers and the product owners. Given the power of automation and machine learning, one can imagine an information pipeline in which user behavior is captured in a constant feedback loop, fed back into the system, and experimented with continuously in a scientific fashion.
One of the most significant industrial applications of AI has been churn prediction. Machine learning models can be trained to predict ‘churn’ on specific products or services: a classifier fed a pipeline of historical customer behavioral data can learn to predict the probability that a user will churn in a given time period. Flagging likely churners is the output of the machine learning model, but in most companies the next level of action is usually pre-determined by a human in a traditional, non-data-driven way. Despite some recent work on ‘creative’ models and data-driven design approaches, most product owners do not yet leverage the full potential of predictive analytics pipelines.
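As a concrete illustration, the churn-prediction step described above can be sketched as a small classifier over behavioral features. This is a minimal sketch on synthetic data; the feature names (`logins_per_week`, `support_tickets`, `tenure_months`) and the model choice are assumptions for illustration, not a prescribed setup.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 1000

# Synthetic behavioral features (hypothetical): logins per week,
# support tickets filed, and account tenure in months.
X = np.column_stack([
    rng.poisson(5, n),       # logins_per_week
    rng.poisson(1, n),       # support_tickets
    rng.integers(1, 36, n),  # tenure_months
])

# Synthetic label: low engagement and many tickets raise churn odds.
logits = 0.5 - 0.2 * X[:, 0] + 0.8 * X[:, 1] - 0.05 * X[:, 2]
y = (rng.random(n) < 1 / (1 + np.exp(-logits))).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# Probability that each held-out user churns in the given time period.
churn_prob = clf.predict_proba(X_test)[:, 1]

# Flagged churners are the model's output; what happens next is, today,
# typically decided by a human rather than by the pipeline itself.
flagged = churn_prob > 0.5
```

In practice the threshold (0.5 here) would be tuned against the cost of a missed churner versus the cost of an unnecessary retention offer.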
The most common scenario in the churn example is that product developers write a general prescription for flagged users to receive a special offer of some sort, in the hope of putting a dent in churn and achieving healthier retention.
The concept of validated learning is discussed thoroughly in the book The Lean Startup by Eric Ries. In short, validated learnings are the results of experiments that companies run on specific products or services to measure changes in customer behavior. The most common approach to validated learning is A/B testing: a small portion of the population goes through a slightly different experience (just one manipulation or feature change at a time). If this small population reacts positively to the change, the feature is integrated into the main product; if the reaction is negative, the feature is modified and retested, or discarded altogether. The main caveat with narrow A/B testing and validated learning is that they can be misleading when looked at naively, from a binary include/not-include perspective.
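The include/not-include decision at the heart of an A/B test usually comes down to a significance check on the two groups. A minimal sketch, using a standard two-proportion z-test and made-up conversion counts (the numbers are illustrative, not from any real experiment):

```python
from math import erf, sqrt

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Return (z statistic, two-sided p-value) for H0: the conversion
    rates of groups A and B are equal."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical test: 520/5000 conversions in control (A)
# vs 580/5000 in the variant (B).
z, p = two_proportion_z_test(520, 5000, 580, 5000)

# The binary decision the text warns about: ship only if significant.
ship_variant = p < 0.05
```

The point of the caveat above is precisely that this yes/no outcome, however rigorous, discards most of what the experiment could have taught the organization about its customers.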
Universal Consumer Behavioral Metrics (UCBM)
Universal Consumer Behavioral Metrics are a set of terminologies and measures that can help organizations map iterative tests to a more generalized form of validated learning. Rather than viewing each A/B test through a binary lens of including or excluding a feature, each test can help the company infer longer-term, more useful information about its customers. This requires cross-functional teams to work together and formulate hypotheses and behavior predictions to associate with each test release. Once an initial UCBM has been established, predictive pipelines can leverage each experiment’s information to map every experiment’s attributes and characteristics to customers’ reactions. In a sense, a meta-predictive pipeline learns how new features and services would provoke user reactions. This, in turn, makes validated learning more achievable and A/B testing more meaningful.
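One way to picture such a meta-predictive pipeline is as a model over past experiment records: each record describes an experiment's attributes and the customer reaction it produced, and new, not-yet-run experiments are scored against that history. The schema below (`discount_pct`, `message_tone`, `segment_tenure`, `retention_lift`) and the nearest-neighbor predictor are hypothetical sketches, assuming an organization keeps a structured log of its UCBM-annotated experiments.

```python
from dataclasses import dataclass

@dataclass
class ExperimentRecord:
    discount_pct: float    # size of the offer tested
    message_tone: float    # e.g. 0 = neutral, 1 = urgent
    segment_tenure: float  # mean tenure (months) of the targeted segment
    retention_lift: float  # observed change in retention vs. control

# Made-up history of past experiments and their observed reactions.
history = [
    ExperimentRecord(10, 0.0, 6, 0.01),
    ExperimentRecord(20, 1.0, 6, 0.04),
    ExperimentRecord(20, 0.0, 24, 0.02),
    ExperimentRecord(30, 1.0, 24, 0.05),
]

def predict_lift(discount_pct, message_tone, segment_tenure, k=2):
    """Predict the reaction to a proposed experiment as the mean lift of
    the k most similar past experiments (plain nearest-neighbor sketch)."""
    def dist(r):
        return ((r.discount_pct - discount_pct) ** 2
                + (r.message_tone - message_tone) ** 2
                + (r.segment_tenure - segment_tenure) ** 2) ** 0.5
    nearest = sorted(history, key=dist)[:k]
    return sum(r.retention_lift for r in nearest) / k

# Estimate the reaction to a not-yet-run experiment before shipping it.
estimate = predict_lift(25, 1.0, 12)
```

A real pipeline would normalize the attribute scales and likely use a learned model rather than raw nearest neighbors, but the shape is the same: experiments become training data about customers, not just one-off ship/no-ship decisions.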