Lifetimes Part 1: Customer Analytics

What is Customer Analysis?  

Customer analysis is a vague phrase that can mean very different things to businesses, financial analysts, and everyday people.  The work I will be drawing upon comes from Peter Fader and Bruce Hardie, as well as Cameron Davidson-Pilon.  The research by Fader and Hardie matches the math with the customer behavioral story.  This body of work is often referred to as Customer Lifetime Value (CLV), Recency, Frequency, Monetary Value (RFM) or customer probability models.  These models focus exclusively on how customers make repeat purchases over their own lifetime relationship with the company.  Cameron Davidson-Pilon later turned their work into an easy-to-use Python package called lifetimes.

In this blog post, I base my analysis and process around the mechanics of lifetimes.  I apply the analysis to my parents' actual company (Zakka Canada) and carefully examine 22,408 registered customer orders from June 2007 to December 2015.  Zakka Canada is an online e-commerce jewelry display store that wholesales various display fixtures and models.   From the dataset, I extract ZC's best customers by forecasting their future purchases.  I then analyze their historical purchasing paths, infer their probability of leaving and generalize ZC's customers via RF matrices.  In part 2, I take the models presented one step further and perform a bottom-up financial valuation of Zakka Canada to determine how much it is worth today.

Modelling the Behavioral Story

The first step to customer analysis is to find a good mathematical model to describe customer repeat purchases.  This doesn't have to get complicated: Fader and Hardie consider only the timing of purchases as the primary factor.  Several canonical models have been proposed.  The first is the Pareto/NBD model by Schmittlein et al. (1987).  An alternative, easier-to-implement model called the BG/NBD model was later proposed by Fader and Hardie in 2005.  More recently, they proposed another model called the BG/BB model in 2010.  While I'm not too familiar with the BG/BB model, I will be introducing and actively using the BG/NBD model within our analysis.  Both the Pareto/NBD and BG/NBD models are supported within lifetimes.

Here's a rundown of the BG/NBD model (surprisingly simple, actually): customers come and buy at intervals that are randomly distributed within a reasonable time range.  After each purchase they have a certain probability of dying, or becoming inactive (never returning to buy again).  Each customer is different and has their own purchase interval and probability of going inactive.

Mathematical Box: Model Specification
 Customers buy stochastically according to a Poisson distribution with purchasing rate \lambda.  After each purchase the customer becomes inactive with probability p.  Therefore, the purchase count at which a customer becomes inactive follows a shifted geometric distribution.  The customer base is heterogeneous across those two parameters, and we assume a gamma and a beta distribution respectively.  Lastly, we assume that \lambda and p are independent across customers.  This makes some of the math later on much easier.

 \lambda\sim \mathrm{Gamma}(r,\alpha),\quad p\sim \mathrm{Beta}(a,b)
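
To make the behavioral story concrete, here is a minimal simulation sketch of these assumptions (my own illustration, not part of lifetimes; the parameter values are made up and not fitted to Zakka Canada):

import numpy as np

rng = np.random.default_rng(0)

def simulate_customer(r, alpha, a, b, T):
    lam = rng.gamma(shape=r, scale=1.0 / alpha)   # this customer's purchase rate
    p = rng.beta(a, b)                            # this customer's dropout probability
    t, purchase_times = 0.0, []
    while True:
        t += rng.exponential(1.0 / lam)           # exponential wait until the next purchase
        if t > T:                                 # observation window ends
            break
        purchase_times.append(t)
        if rng.random() < p:                      # customer becomes inactive after this purchase
            break
    return purchase_times

# Example: one simulated purchase path over a year, illustrative parameters only
print(simulate_customer(r=0.25, alpha=4.0, a=0.8, b=2.5, T=365))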

Miscellaneous Box: Family of Models
This class of models tries to quantify customer behavior in a non-contractual setting, where we don't know when customers become inactive but instead assign a percentage confidence that we believe they are dead.  In contrast, another class of models is designed for contractual settings and has been successfully applied to businesses such as the telecommunications industry, where a customer has to tell you that they're ending the relationship.

For example, a large number of subscriber-based firms like Netflix report and describe their churn rate or customer turnover.  CLV is much simpler to quantify in that setting, but for a normal product-selling firm this is much more difficult, as we're not sure when a customer has decided to terminate the relationship.

Having a model that can describe customer deaths and purchases over time is much more effective at inferring future purchases, and at aggregating them into expected total sales, than a naive "oh, I expect sales to grow at 2% next year".  This is shown in detail later on with some simple plots.

 

Example of customer purchasing patterns

Data to the Model as the Patty is to the Buns

The BG/NBD model only requires three primary components for each unique customer:

  1. Frequency: The number of repeat purchases that they have made.
  2. Recency: The time of their most recent purchase, measured from their first purchase.
  3. Customer Age: The end of our observation period minus the date of their first purchase.

There is one last component called Monetary Value that doesn't quite come into play until later.  These four components together are called an RFM matrix.  Below is an example of the Zakka Canada RFM matrix for the first few customers.  The untransformed data consists only of the customer ID, date of purchase, and monetary value of each order.  Note that each time period is equivalent to one day.

Customer ID  frequency  recency  T     monetary_value
34           0          0        3115  $86.00
38           0          0        3109  $38.40
47           0          0        3104  $53.50
61           0          0        3092  $7.00
78           0          0        3085  $55.50
Python Box: Data to RFM
A customer's frequency, recency and age can then be summarized as (x, t_{max(x)}, T) respectively.  Transformation from a plain transaction list can be done via lifetimes:

from pandas import read_csv
from lifetimes.utils import summary_data_from_transaction_data

trans_data = read_csv('orders.csv')
data = summary_data_from_transaction_data(trans_data, 'Customer ID', 'Date', monetary_value_col='Subtotal', observation_period_end='2015-12-31')
data.head() # What you see above
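
For readers who want to see the transformation explicitly, a rough pandas equivalent is sketched below (my own sketch with daily granularity; same-day orders collapse into one daily purchase, monetary value omitted for brevity, and the lifetimes helper handles more edge cases):

import pandas as pd

trans = pd.read_csv('orders.csv', parse_dates=['Date'])
obs_end = pd.Timestamp('2015-12-31')

# one row per customer per day, since repeat purchases are counted daily
daily = trans.groupby(['Customer ID', trans['Date'].dt.normalize()]).size().reset_index()

grouped = daily.groupby('Customer ID')['Date']
rfm = pd.DataFrame({
    'frequency': grouped.count() - 1,                    # repeat purchases
    'recency': (grouped.max() - grouped.min()).dt.days,  # last purchase minus first purchase
    'T': (obs_end - grouped.min()).dt.days,              # customer age at the observation end
})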

Preliminary Check-up

Before we dive right into the data, we can do a quick describe on our frequency and recency to get a basic idea of what an average customer is like.

Histogram of both metrics.
Python Box: Histogram Plots
import matplotlib.pyplot as plt  # plotting setup assumed throughout the post

data[data['frequency'] > 0]['frequency'].plot(kind='hist', bins=20)  # subplot 1
data[data['recency'] > 0]['recency'].plot(kind='hist', bins=20)  # subplot 2
print(data[data['frequency'] > 0]['frequency'].describe())  # descriptive statistics
print(data[data['recency'] > 0]['recency'].describe())  # descriptive statistics

As shown, both frequency and recency are distributed close to 0.  Most registered customers of Zakka Canada (12,829, or 72%) make zero repeat purchases, while the remaining 28% is split evenly: 14% of the customer base makes exactly one repeat purchase and the other 14% makes more than one.  Similarly for recency, most customers made their last purchase early in their lifetime and then became inactive.  Indeed, half of our customers make their last repeat purchase within less than a year of their first purchase (245 days to be precise), and within approximately two years (661 days) for the 75th percentile.  What does all this mean?  Not enough customers are re-purchasing... or maybe too many?  We don't really know; if only we knew what other similar businesses are bringing in.  However, these statistics aren't that surprising, for several reasons:

  1. Unsatisfied customers can always go to other online merchants; substitution and competition is easy.
  2. Jewellery fixtures can last a very long time if rarely moved; perhaps our analysis horizon isn't long enough.
  3. If (2) is true, then what is the probability that ZC's customers (often small businesses) go out of business before they actually realize a need for more display products?

Whatever the case is, our models can still accommodate the low number of repeat purchases and make useful inferences for us.

Let the Model do the Talking

We first need to fit the customer probability model to the data so that it picks up on customers' behaviors and patterns.  This is done by looking at each individual's frequency, recency and age and adjusting the model's parameters so that they better reflect the intervals at which our customer base purchases.  Referring to the image above of customers A to D making their purchases, we take that timeline approach when fitting the model on each individual customer, which feeds into the overall likelihood function.

Mathematical Box: Likelihood Derivation

The Poisson distribution describes the probability of the number of purchases that can occur within one time period.  There's a fundamental relationship we can establish: after a customer has just made a transaction, what is the probability that, some time t later, the next transaction still hasn't occurred?  Denote T_x as the random variable for the time between this transaction and the next, and X_t as the Poisson random variable; recall that the process is independent over time, so \lambda can be scaled by t to go beyond one time period.

 P(T_x \gt t) = P(X_t = 0) = \frac{(\lambda t)^0e^{-\lambda t}}{0!} = e^{-\lambda t}

 1-P(T_x \gt t)=P(T_x \le t) = 1 - e^{-\lambda t}

 \frac{dP(T_x \le t)}{dt}=P(T_x = t) = \lambda e^{-\lambda t} \equiv T_x\sim \mathrm{Exp}(\lambda)

We have now effectively transformed our random variable from the probability of a number of transactions within some time t to the probability of a time t between transactions; it follows an exponential distribution.  To estimate \lambda we can now develop our likelihood, which, for a given customer history, is the product of the probabilities of the observed times between the (x-1)th and xth purchases, \forall x.  It can be sectioned into three parts; let t_x equal the time of the xth purchase:

  1. For the 1st repeat purchase a customer makes, the time period is simply the difference between the time of that repeat purchase and the time of their first purchase ever. The likelihood is equal to \lambda e^{-\lambda t_1}
  2. For the xth repeat purchase, the time period is the difference between the time of the xth purchase and the (x-1)th purchase.  After the (x-1)th purchase, the customer has probability p of dropping out, so the likelihood is multiplied by the chance that they haven't become inactive after their previous purchase:  (1-p)\lambda e^{-\lambda(t_x-t_{x-1})}.  Note that we assume a customer cannot become inactive after their first ever purchase, so there's no p term in (1).
  3. Between a customer's last observed transaction time and their age (t_{max(x)}, T], they have made zero purchases, so the likelihood of that occurring is equal to the summation of
    • The probability that they have died right after their most recent purchase p
    • The probability that they are still alive but simply have yet to make another purchase.  This is equivalent to P(T_x \gt t) shown above, since we are calculating the probability that the customer's next purchase is beyond T-t_{max(x)}, which implies P(X_{T-t_{max(x)}} = 0).  The likelihood is thus (1-p)e^{-\lambda(T-t_{max(x)})}

The combined likelihood is then defined as

\mathcal{L}(\lambda,p\mid t_1,t_2,...,t_{max(x)})=\underbrace{\lambda e^{-\lambda(t_1)}}_{(1)}\underbrace{(p+(1-p)e^{-\lambda(T-t_{max(x)})})}_{(3)}\overbrace{\prod_{i=2}^{max(x)}{(1-p)\lambda e^{-\lambda(t_i-t_{i-1})}}}^{(2)}

=\lambda e^{-\lambda(t_1)}(p+(1-p)e^{-\lambda(T-t_{max(x)})})((1-p)^{x-1}\lambda^{x-1}e^{-\lambda(t_{max(x)}-t_1)})

=(p+(1-p)e^{-\lambda(T-t_{max(x)})})((1-p)^{x-1}\lambda^{x}e^{-\lambda(t_{max(x)})})

=p(1-p)^{x-1}\lambda^{x}e^{-\lambda(t_{max(x)})}+(1-p)^{x}\lambda^{x}e^{-\lambda(T)}

I have now effectively shown that the combined likelihood boils down to the RFM matrix components, with x equal to the customer's frequency, t_{max(x)} equal to the customer's recency and T equal to the customer's age.  There's one last part.  Recall that a portion of our customer base has yet to repurchase (frequency/recency = 0) and that we assume they're 100% alive.  This is equivalent to x,t_{max(x)}=0 within (0,T].  Using the same concept from (3.2), the probability that they have yet to purchase is equivalent to e^{-\lambda(T)}.  Fader and Hardie use a simple indicator trick such that when a customer's frequency equals zero, the likelihood reduces to the latter term only.  Our final combined likelihood is written as

\mathcal{L}(\lambda,p\mid x,t_{max(x)},T)=\delta_{x \gt 0}p(1-p)^{x-1}\lambda^{x}e^{-\lambda(t_{max(x)})}+(1-p)^{x}\lambda^{x}e^{-\lambda(T)}

Where \delta_{x\gt 0} is equal to 1 when x \gt 0 and 0 when x=0
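
To make the formula concrete, here is a minimal sketch (my own helper, not part of lifetimes) that evaluates this individual-level likelihood for a given \lambda, p and a customer's (x, t_{max(x)}, T):

import numpy as np

def individual_likelihood(lam, p, x, t_x, T):
    # L(lambda, p | x, t_x, T) from the box above
    alive_term = (1 - p) ** x * lam ** x * np.exp(-lam * T)
    if x == 0:
        return alive_term  # no repeat purchases: the indicator term vanishes
    dead_term = p * (1 - p) ** (x - 1) * lam ** x * np.exp(-lam * t_x)
    return dead_term + alive_term

# e.g. a customer with 3 repeat purchases, the last on day 200, and an age of 400 days
print(individual_likelihood(lam=0.01, p=0.2, x=3, t_x=200, T=400))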

The parameters also vary across customers, so the likelihood is averaged over two distributions for a more accurate and flexible fit of the data.   Mathematically, this is done by taking the expectation of our equation over both distributions (see below).  Fader and Hardie use this concept in many of their models, such as the Gamma-Gamma sub-model for monetary value, which is discussed in the next blog post.

Mathematical/Python Box: Incorporating Heterogeneity

A central theme of this family of probability models is to incorporate heterogeneity by assuming distributions over both parameters.  Intuitively, we say that \lambda and p vary over our customer base.  As previously mentioned, the two parameters follow a gamma and a beta distribution respectively, which is consistent with their possible ranges of values \lambda \geq 0, p \in [0,1].  We derive the heterogeneous likelihood as follows:

E[\mathcal{L}(\lambda,p\mid t_1,t_2,...,t_{max(x)})]=\int_{0}^{1}{\int_{0}^{\infty}{(\delta_{x \gt 0}p(1-p)^{x-1}\lambda^{x}e^{-\lambda(t_{max(x)})}+(1-p)^{x}\lambda^{x}e^{-\lambda(T)})g(\lambda\mid r,\alpha)d\lambda}b(p\mid a,b)dp}

where g(\lambda\mid r,\alpha) and b(p\mid a,b) are the gamma and beta pdf respectively.  Using our independence assumption and linearity property of integration, we are able to break this down to two parts.

=\underbrace{\int_{0}^{1}{\int_{0}^{\infty}{(\delta_{x \gt 0}p(1-p)^{x-1}\lambda^{x}e^{-\lambda(t_{max(x)})})g(\lambda\mid r,\alpha)b(p\mid a,b)d\lambda}dp}}_{(1)}+\underbrace{\int_{0}^{1}{\int_{0}^{\infty}{((1-p)^{x}\lambda^{x}e^{-\lambda(T)})g(\lambda\mid r,\alpha)b(p\mid a,b)d\lambda}dp}}_{(2)}

Focusing on (1)

\int_{0}^{1}{\int_{0}^{\infty}{(\delta_{x \gt 0}p(1-p)^{x-1}\lambda^{x}e^{-\lambda(t_{max(x)})})(\frac{\alpha^r\lambda^{r-1}e^{-\lambda\alpha}}{\Gamma(r)})(\frac{p^{a-1}(1-p)^{b-1}}{B(a,b)})d\lambda}dp}

=\int_{0}^{1}{\int_{0}^{\infty}{\delta_{x \gt 0}(\frac{\alpha^r\lambda^{r+x-1}e^{-\lambda(\alpha+t_x)}}{\Gamma(r)})(\frac{p^{a+1-1}(1-p)^{b+x-2}}{B(a,b)})d\lambda}dp}

Leveraging the Gamma function, we multiply the first product (gamma terms) by \frac{(\alpha+t_x)^{r+x-1}}{(\alpha+t_x)^{r+x-1}}.  Let's focus exclusively on that partial integral:

=\int_{0}^{\infty}{\frac{(\alpha+t_x)^{r+x-1}}{(\alpha+t_x)^{r+x-1}}(\frac{\alpha^r\lambda^{r+x-1}e^{-\lambda(\alpha+t_x)}}{\Gamma(r)})d\lambda}

=\frac{\alpha^r}{(\alpha+t_x)^{r+x-1}\Gamma(r)}\int_{0}^{\infty}{(\lambda(\alpha+t_x))^{r+x-1}e^{-\lambda(\alpha+t_x)}d\lambda}

Let  u=\lambda(\alpha+t_x)\rightarrow d\lambda = \frac{1}{\alpha+t_x}du

=\frac{\alpha^r}{(\alpha+t_x)^{r+x-1}\Gamma(r)}\int_{0}^{\infty}{(u)^{r+x-1}e^{-u}\frac{1}{\alpha+t_x}du}

=\frac{\alpha^r\Gamma(r+x)}{(\alpha+t_x)^{r+x}\Gamma(r)}

Leveraging the Beta function, we can simplify the second product (beta terms) similar to what we did above:

=\frac{1}{B(a,b)}\int_{0}^{1}{p^{a+1-1}(1-p)^{b+x-2}}dp

=\frac{B(a+1,b+x-1)}{B(a,b)}

Finally, part (1) combined becomes

\delta_{x\gt 0}\frac{\alpha^r\Gamma(r+x)}{(\alpha+t_x)^{r+x}\Gamma(r)}\frac{B(a+1,b+x-1)}{B(a,b)}

Focusing on (2) and applying the same methods used previously

\int_{0}^{1}{\int_{0}^{\infty}{((1-p)^{x}\lambda^{x}e^{-\lambda(T)})(\frac{\alpha^r\lambda^{r-1}e^{-\lambda\alpha}}{\Gamma(r)})(\frac{p^{a-1}(1-p)^{b-1}}{B(a,b)})d\lambda}dp}

=\int_{0}^{1}{\int_{0}^{\infty}{(\frac{\alpha^{r}\lambda^{r+x-1}e^{-\lambda(\alpha+T)}}{\Gamma(r)})(\frac{p^{a-1}(1-p)^{b+x-1}}{B(a,b)})d\lambda}dp}

=\frac{B(a, b+x)}{B(a,b)}\int_{0}^{\infty}{(\frac{\alpha^{r}(\lambda(\alpha+T))^{r+x-1}e^{-\lambda(\alpha+T)}}{(\alpha+T)^{r+x-1}\Gamma(r)})d\lambda}

=\frac{B(a, b+x)}{B(a,b)}\frac{\alpha^{r}\Gamma(r+x)}{(\alpha+T)^{r+x}\Gamma(r)}

The final combined and heterogeneous likelihood that we can maximize to fit our model is then

\mathcal{L}(r,\alpha,a,b\mid t_1,t_2,...,t_{max(x)})=\delta_{x\gt 0}\frac{\alpha^r\Gamma(r+x)}{(\alpha+t_x)^{r+x}\Gamma(r)}\frac{B(a+1,b+x-1)}{B(a,b)}+\frac{B(a, b+x)}{B(a,b)}\frac{\alpha^{r}\Gamma(r+x)}{(\alpha+T)^{r+x}\Gamma(r)}
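
In log form, which is what an optimizer would actually work with, the expression above can be sketched as follows.  This is my own illustration of the equation, not the exact lifetimes implementation:

import numpy as np
from scipy.special import gammaln, betaln

def bgnbd_loglik(r, alpha, a, b, x, t_x, T):
    # log of the heterogeneous likelihood derived above, for a single customer
    common = r * np.log(alpha) + gammaln(r + x) - gammaln(r) - betaln(a, b)
    term_alive = common + betaln(a, b + x) - (r + x) * np.log(alpha + T)
    if x == 0:
        return term_alive  # the indicator term vanishes when there are no repeat purchases
    term_dead = common + betaln(a + 1, b + x - 1) - (r + x) * np.log(alpha + t_x)
    return np.logaddexp(term_dead, term_alive)

# Summing this over all customers (and negating) gives the objective to minimize
print(bgnbd_loglik(r=0.25, alpha=4.0, a=0.8, b=2.5, x=3, t_x=200, T=400))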

Application in Python

I use the lifetimes BetaGeoFitter to fit the model to the Zakka Canada dataset.  I've also plotted the heterogeneity of both parameters for readers to visualize.  As we can see, the death rate centers around 30%-40%, but a large portion of the customer cohort still has a high chance of dying after each purchase.  The heterogeneity of \lambda is mostly distributed between 0 and 0.05, with a small tail.

from lifetimes import BetaGeoFitter
from scipy.stats import beta, gamma
import matplotlib.pyplot as plt

bgf = BetaGeoFitter(penalizer_coef=0.0)
bgf.fit(data['frequency'], data['recency'], data['T'])
print(bgf)

# Plot the fitted heterogeneity of p and lambda
gbd = beta.rvs(bgf.params_['a'], bgf.params_['b'], size=50000)
ggd = gamma.rvs(bgf.params_['r'], scale=1./bgf.params_['alpha'], size=50000)
plt.figure(figsize=(14,4))
plt.subplot(121)
plt.title('Heterogeneity of $p$')
temp = plt.hist(gbd, 20, facecolor='pink', alpha=0.75)
plt.subplot(122)
plt.title(r'Heterogeneity of $\lambda$')
temp = plt.hist(ggd, 20, facecolor='pink', alpha=0.75)

Heterogeneity BG/NBD Histogram

After fitting the model, we're first interested in seeing how well it relates to our data.  Peter Fader, in his YouTube talk, says that it fits almost every type of company really well and then proceeds to prove it with some very convincing plots.  Let's replicate them here for Zakka Canada as a fact check of our model.

Fact Check 1: Frequency Fitting
Python Box: Frequency of Repeat Transactions Plot
from lifetimes.plotting import plot_period_transactions

plot_period_transactions(bgf, max_frequency=10)

In this first figure, we plot the expected number of customers that are going to repeat purchase 0, 1, 2, 3 ... 10 times in the future. For each number of repeat purchases (x-axis), we plot both what the model predicted and what the actual numbers were. As we can see, there are little to no errors in the fit for up to 10 repeat purchases. Now we might think, "yeah, it's good because it's probably overfitting with all that complex modelling!", so let's move on to the next fact check.

Fact Check 2: Predicting Repeat Purchases Out of Sample
Python Box: Actual Purchases in Holdout Period vs Predicted Purchases Plot
from lifetimes.utils import calibration_and_holdout_data
from lifetimes.plotting import plot_calibration_purchases_vs_holdout_purchases

summary_cal_holdout = calibration_and_holdout_data(trans_data, 'Customer ID', 'Date',
                                                   calibration_period_end='2015-01-01',
                                                   observation_period_end='2015-12-19')  # separate the data into calibration/holdout
bgf.fit(summary_cal_holdout['frequency_cal'], summary_cal_holdout['recency_cal'], summary_cal_holdout['T_cal'])  # fit the model on calibration data
plot_calibration_purchases_vs_holdout_purchases(bgf, summary_cal_holdout, n=10)  # plot the above graph

In this plot, we separate the data into an in-sample (calibration) period and a validation (holdout) period.  The calibration period runs from 2007 (the beginning of the data) to January 1, 2015; the validation period spans the rest of 2015.  The plot groups all customers in the calibration period by their number of repeat purchases (x-axis) and then averages their repeat purchases in the holdout period (y-axis).  The green and blue lines present the model prediction and the actual result on the y-axis respectively.  As we can see, up until five repeat purchases, the model is able to very accurately predict the customer base's behavior out of sample.  After 5, the model produces a lot more error and over-estimates the average repeat purchases.  This is due to the lack of data for those large repeat purchasing customers.
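
If you prefer a single number to a plot, a rough sketch of the same check (reusing the summary_cal_holdout frame built above) compares predicted and actual holdout purchases directly:

import numpy as np

t_holdout = summary_cal_holdout['duration_holdout'].iloc[0]  # length of the holdout window in days
predicted = bgf.conditional_expected_number_of_purchases_up_to_time(
    t_holdout,
    summary_cal_holdout['frequency_cal'],
    summary_cal_holdout['recency_cal'],
    summary_cal_holdout['T_cal'])
actual = summary_cal_holdout['frequency_holdout']
print('mean absolute error:', np.mean(np.abs(predicted - actual)))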

Visualizing Repeat Sales from the Model's POV

Up to this point we've looked at how the purchasing model is able to accurately predict future repeat purchases of the customer base.  However, we haven't considered how our input data (frequency and recency) is interpreted by the model, nor how their interaction affects the model's output.  Below I've created two plots called Recency/Frequency (RF) plots.

Mathematical Box: Model Predictions/Inferences

The authors of customer probability models have consistently provided three key outputs that their models should be able to infer.  We've seen some already, and the plots below use them as well:

  1. P(X(t)=x\mid\lambda,p) - The probability that a customer has made x repeat purchases within t periods.
  2. E(X(t)\mid\lambda,p) - The expected number of repeat purchases for a customer within t periods.
  3. E(Y(t)\mid\lambda,p,x,t_{max(x)},T) - The expected number of repeat purchases for a customer within t periods given his/her prior purchasing history.  Essentially, it can be seen as a future forecast of the customer's purchases.

Derivations for these three components are provided in detail in the original paper.
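
For reference, these three quantities map roughly onto methods of the fitted BetaGeoFitter; the method names below are from the lifetimes version I used, so double-check them against your release:

p_two = bgf.probability_of_n_purchases_up_to_time(365, 2)            # (1) P(X(365) = 2) for a random customer
e_pop = bgf.expected_number_of_purchases_up_to_time(365)             # (2) E[X(365)] for a random customer
e_cond = bgf.conditional_expected_number_of_purchases_up_to_time(    # (3) E[Y(365)] given each customer's history
    365, data['frequency'], data['recency'], data['T'])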

 

RF Probability and Expected Transaction Plots
Python Box: RF Plots
from lifetimes.plotting import plot_frequency_recency_matrix, plot_probability_alive_matrix

plot_frequency_recency_matrix(bgf, T=365)
plot_probability_alive_matrix(bgf)

The RF plots map a customer's expected purchases over the next year, and the probability that they're alive, given their frequency and recency.  Intuitively, we can see that customers with high frequency and recency are expected to purchase more in the future and have a higher chance of being alive.  Customers in the white zone are of interest as well, since they are 50/50 on leaving the company but we can still expect them to purchase about 2 to 2.5 times during the next year.  These are the customers that may need a little customer servicing to come back and buy more.  It is interesting to note that for a fixed recency, customers with higher frequency are more likely to be considered dead.  This is a property of the model that illustrates a clear behavioral story: a customer making more frequent purchases is more likely to have died off if we observe a period of inactivity longer than their previous intervals.  For example, given two customers A and B that both last purchased 1.5 years ago (~2300 days) but purchased 10 and 30 times respectively, we believe it is far more likely that customer B has died (~0% chance of being alive) while customer A still has a fair chance of being alive and making purchases in the future (~40%).
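
The same comparison can be read straight off the fitted model; here is a small sketch with hypothetical values for A and B (the exact probabilities depend on the fit and on the recency and age you plug in):

# Hypothetical customers with the same recency and age but different frequency
p_alive_A = bgf.conditional_probability_alive(frequency=10, recency=2300, T=3100)
p_alive_B = bgf.conditional_probability_alive(frequency=30, recency=2300, T=3100)
print(p_alive_A, p_alive_B)  # B should come out far less likely to still be alive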

Projecting Future Sales from Current Customers

We can infer from our existing customers the expected number of purchases that they're going to make in the next X days.  Businesses looking to forecast their existing customer sales can leverage this calculation, as it inspects each observed customer and predicts how much that individual will buy over the forecast period.  In short, it uses the customer-driven metrics we've seen to forecast repeat sales.  This is also useful as it links directly back to the marketing and operations functions, both as a form of evaluation for previous projects and as a starting point for new changes.  For example, let's say we want to begin a rollout of a new rewards program awarded to our best customers.  We want to exclusively target six customers that we feel will revisit our store within the next three months.  Let the model pick for us.

             frequency  recency    T  monetary_value  predicted_purchases
Customer ID                                                              
10920               20      757  801       61.854762             1.798846
13527               11      318  348      123.129167             1.923234
14008                9      242  261      133.091000             2.041649
12780               14      445  461      116.653333             2.144764
13577               12      247  275       54.710000             2.436121
14263               17      198  219      310.246667             4.056201
Python Box: Top Six Customers

The component used from the BG/NBD model is the conditional_expected_number_of_purchases_up_to_time function from the lifetimes module.  This is equivalent to E[Y(t)\mid r,\alpha,a,b,x,t_{max(x)},T], where we feed in a customer's summary history and the model infers their future expected purchases.  This is done by first inferring the population's expected future purchases (E[X(t)\mid{r,\alpha,a,b}]) and multiplying it by the probability that the customer is still alive at T.  See the original paper for the detailed derivation.

t = 31*3  # roughly three months, in days
data['predicted_purchases'] = bgf.conditional_expected_number_of_purchases_up_to_time(t, data['frequency'], data['recency'], data['T'])
best_projected_cust = data.sort_values('predicted_purchases').tail(6)
print(best_projected_cust)

Listed above are the top six customers that the model expects to purchase in the next three months. The predicted_purchases column lists their expected number of transactions, while the other four columns list their current RFM metrics.  It is clear that our best customers are often relatively young as customers and have made roughly ten or more repeat orders, with the latest one placed very recently. The BG/NBD model believes these individuals will make more purchases in the near future, as they are our current best customers.
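
Summing the per-customer expectations also gives the aggregate repeat-transaction forecast mentioned earlier; a one-line sketch:

# Expected total number of repeat transactions from existing customers over the next ~3 months
print(data['predicted_purchases'].sum())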

Visualizing Historical Paths

One last feature that the lifetimes package includes is the ability to visualize a customer's historical purchase path along with the probability that they are still active at each time period.  This feature is useful for visualizing the patterns at which specific customers transact.  Some of the most common patterns are cyclical purchases (especially for businesses) as well as different group behaviors: purchasing a lot in a short period and then dying, in contrast to purchasing steadily over a longer period.  Let's take our top six customers and see how they have historically shopped (for the derivation of the probability of being alive over time, see here).

Historical Customer Path
Python Box: Historical Path Plots
import matplotlib.pyplot as plt
from lifetimes.plotting import plot_history_alive

fig = plt.figure(figsize=(12,12))
for ind, i in enumerate(best_projected_cust.index.tolist()):
    ax = plt.subplot(4, 2, ind+1)
    best_T = data.loc[i]['T'] + 31  # extend a month past the observation end
    best_trans = trans_data[trans_data['Customer ID'] == i]
    plot_history_alive(bgf, best_T, best_trans, 'Date', freq='D', ax=ax)
    ax.set_title('ID: ' + str(i))
    plt.xticks(rotation=25)
fig.tight_layout()

Despite the lack of a more general pattern among our best customers, I do notice a few interesting things:

  1. Customers tend to purchase around the October/November period.
  2. Customer \tt{13577} tends to buy at regular intervals, with the exception of the gap between September 2015 and November 2015.  They have now dropped off for a long time, with a roughly 50/50 chance that they're dead.
  3. Customers \tt{12780}, \tt{13527} and \tt{10920} tend to buy during the early stage of their life-cycle and then repeat at much slower intervals.
  4. Customer \tt{14008} first bought at a slower interval but increased their frequency over time. We can reasonably assume that they started buying more as their business expanded and became more successful.

Based on these graphs, businesses should come up with some rule-based system to identify important customers and cluster them into segments that can be explained economically.  Businesses can then further strengthen the relationship with these segments through different catered interactions.
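
As a starting point, a simple rule-based segmentation using the model's outputs might look like the sketch below; the thresholds are purely illustrative and should be tuned to the business:

data['p_alive'] = bgf.conditional_probability_alive(
    data['frequency'], data['recency'], data['T'])

def segment(row):
    # Illustrative cut-offs only
    if row['p_alive'] > 0.7 and row['predicted_purchases'] > 1:
        return 'loyal'
    if row['p_alive'] > 0.4:
        return 'at risk'
    return 'likely lost'

data['segment'] = data.apply(segment, axis=1)
print(data['segment'].value_counts())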

Conclusion

In this blog post, I've summarized the functions available in the lifetimes package for applying customer analysis to real-life datasets.  Additionally, I've described the intuition and math behind these functions in the hope that it helps anyone looking to dig deeper.  It's a very powerful model, and a lot of insight and further decisions can be gained from it.  I've only listed a few examples of real application; businesses looking to leverage the analytics should use this as a tool to separate and group specific customers for future interactions.  In the next blog part, I'll be applying dollar figures to our model and showing how to create a discounted cash flow valuation model from the bottom up.

 

Stay tuned!

5 thoughts on “Lifetimes Part 1: Customer Analytics”

  1. Hi Kevin,

    Thanks for this wonderful post.

    I have around 1 TB of customer's data on which I am planning to use the model defined above to get CLTV in python. Will it be scalable enough to handle that or I would need to use some other tool?

    Also, let me know how can I deploy this model in production?

    Thanks
    Gaurav

    1. Hi Gaurav, I'm not very knowledgeable when it comes to working with a lot of data as I've always applied models to more easy-going datasets. Nevertheless, here is my thought on dealing with 1TB of data: 1) the likelihood can be parallel computed as it is a sum of all individual likelihood. 2) the bottleneck may come from transforming transactions into summary items as it is O(N) with a groupby, the subsequent model fitting is a bit easier as it is just O(num of unique customers). 3) AFAIK, there's no version of online updating, you will have to occasionally update the model to handle new data.

      my suggestion is to first try it on a subset, see how fast it works. Furthermore, the model boils down to just the four parameters, what I suggest as a test is to randomly sample your 1TB of data (say 500MB or 1GB) 1000 times and fit the model each time, plot the distribution of the parameters and see if there's a large variance, then decide from there on whether you will need to fit either 1) more recent data or 2) the entire dataset.
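
      (A rough sketch of that resampling idea, purely illustrative, and sampling from the summary frame rather than the raw transactions for brevity:)

      import numpy as np
      from lifetimes import BetaGeoFitter

      param_draws = []
      for _ in range(1000):
          sample = data.sample(frac=0.01)  # a small random subset of customers each time
          m = BetaGeoFitter(penalizer_coef=0.0)
          m.fit(sample['frequency'], sample['recency'], sample['T'])
          param_draws.append([m.params_['r'], m.params_['alpha'], m.params_['a'], m.params_['b']])
      print(np.array(param_draws).std(axis=0))  # spread of (r, alpha, a, b) across refits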

  2. Thank you for putting this case study together. I've found it helpful to walk through, along with Cam's documentation.
    One thing I'll share from the work I've been doing is that there's some additional data prep needed for the BG Fitter to work properly.

    For example, if the customer makes two purchases on the same day and then never purchases again, the function doesn't know how to fit it and so assigns a ridiculously long time to purchase so the lambda coefficient ends up right-skewed. This is seen when alpha is calculated as zero - which doesn't make any sense.

    I've also found that the perspective of the calculation must be the day after the end period (in other words, recency cannot be zero).
