By Richard Boire
Given my longevity as a data scientist (30+ years), it is no surprise that much of my knowledge was acquired by applying it to direct marketing programs.
These programs are still relevant today under the auspices of a well-thought-out marketing strategy, which would also include the hugely important digital programs. Although many people consider the digital revolution to be completely different from direct marketing, nothing could be further from the truth.
Amazon vs. traditional direct marketing
Let’s look at arguably the technical leader of this digital revolution and see what it does. Amazon sends one-to-one messages to its customers using email or programmatic advertising on the web. In direct marketing, direct mail or outbound calling would have been used to send one-to-one messages to Amazon customers, but they would have been slower in terms of delivery.
Through email and web messaging the Amazon customer responds and the company’s many warehouses and trucks are put into action to fulfill the response; Amazon is currently looking at the use of drones as another form of delivery.
Delivery time is a real competitive advantage for Amazon: the customer has several options that vary in cost depending on speed, with some packages expected to arrive within 24 to 48 hours.
In the direct marketing world, the same fulfillment process occurs alongside partnerships with Canada Post and other long-standing shipping companies such as FedEx and UPS. But historically, fulfillment to responders has been much less timely, although I suspect this is improving given the Amazon competition.
The point of the Amazon example above is to illustrate that Amazon has essentially adopted the direct marketing business model, but on steroids.
Let’s think of the other unique features of direct marketing. Direct marketers were the first organizations, outside of credit card risk companies, to leverage advanced analytics and the use of predictive models. Why? Because we had the data and lots of it.
In the early 1980s, a team of regression analysts (before we were called data scientists) was building hundreds of models a year, trying to target the right customer at the right time with the right offer. This work often required a computer housed in a room that could seat 50 people.
In today’s environment, much of this work can now be done on a PC or laptop. With big data, companies like Amazon have exponentially more data than the data environments of traditional direct marketers had in the 1980s. But through servers and advances in data processing (i.e., big data and cloud technology), they too leverage advanced analytics through their use of recommender engines.
But how are these different than predictive models? Let’s examine this more closely.
Recommender engines vs. predictive models
The essence of predictive models is a singular focus: the model’s objective is to predict one given outcome using the variety of characteristics or features that best predict that outcome. Advanced statistics, drawing on variance analysis and matrix/linear algebra, are used to generate the best algorithm or equation.
For the actual practitioner, much of this technical work is now commoditized into modules or procedures, which are readily available both in commercial as well as open source software. The practitioner does not need to code or program the arcane mathematical equations but does need to understand the output and what it means when it is applied to the given business problem at hand.
But the predictive model’s singular focus, as described above, makes it a standalone tool for targeting some specific consumer behaviour. With recommender engines, there is no such singular focus: we are trying to predict the next most likely action or behaviour, not one specific action or behaviour. Accordingly, several statistical tools and options can be employed to create a recommender engine, some of which may be predictive models.
Yet much like the process used in building predictive models, the organization needs to understand the previous behaviour of the consumer, which is identical to what data scientists have been doing in the direct marketing environment, except that we are now using online data.
Here the end objective of trying to predict the next likely outcome (recommender engine), rather than the likelihood of a specific outcome (predictive model), can utilize several different approaches. Let’s look at some of these practices.
The different recommender engine approaches
One approach utilizes the principles of predictive models to determine, for example, the likely consumer rating of a certain movie given the consumer’s prior features and behaviours. Under this approach, the consumer will have multiple rating scores for different movies as several predictive models are built for each movie or movie genre.
The movie with the highest rating score would represent the one most highly recommended to the consumer. But this is just one option for determining the next best course of action. Another, which has been adopted by many eCommerce companies, is to base decisions not on what the customer says (a rating) but on what the consumer did: the target variable is built from actual clickstream behaviour.
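As a rough illustration of the rating-based approach, the sketch below fits one tiny least-squares model per movie and recommends the title with the highest predicted rating. Everything here is invented for illustration: the movie titles, the user features, and the ratings.

```python
import numpy as np

# Hypothetical training data: rows = users, columns = [bias, age/10, action-genre affinity].
X = np.array([
    [1.0, 2.5, 0.9],
    [1.0, 4.0, 0.2],
    [1.0, 3.1, 0.7],
    [1.0, 5.2, 0.1],
])

# Observed ratings (1-5) of each training user for two invented movies.
ratings = {
    "Movie A": np.array([4.5, 2.0, 4.0, 1.5]),  # favoured by action fans
    "Movie B": np.array([2.0, 4.5, 2.5, 5.0]),  # favoured by older users
}

# One least-squares model per movie (a stand-in for any predictive model).
models = {m: np.linalg.lstsq(X, y, rcond=None)[0] for m, y in ratings.items()}

def recommend(user_features):
    """Score every movie for one user; return the top title and all scores."""
    scores = {m: float(user_features @ w) for m, w in models.items()}
    return max(scores, key=scores.get), scores

best, scores = recommend(np.array([1.0, 2.8, 0.8]))  # young action fan
print(best)
```

In practice each per-movie model would be trained on far richer features, but the mechanic is the same: score every candidate, then surface the highest.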
The analytics approach here is different than building predictive models, but the actual mathematical techniques have been used by practitioners for many years. In this recommender engine approach, item-based or user-based solutions are the end objectives. With item-based solutions, correlation analysis is used extensively to identify items that are highly correlated with each other.
In our movie example above, item-based scores are output for movies given how much they correlate with other movies that all users have watched. But the key math here is correlation analysis, which has been used by data science practitioners for many years as it represents our initial statistical tool when conducting advanced analytics.
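A minimal sketch of the item-based idea: compute Pearson correlations between movies from a user-by-movie rating matrix, then surface the titles most correlated with one the user watched. The titles and ratings are illustrative only.

```python
import numpy as np

# Hypothetical rating matrix: rows = users, columns = movies.
movies = ["Alien", "Aliens", "Notting Hill", "Love Actually"]
R = np.array([
    [5, 4, 1, 0],
    [4, 5, 0, 1],
    [1, 0, 5, 4],
    [0, 1, 4, 5],
], dtype=float)

# Item-item Pearson correlations: each column is one movie's rating vector.
C = np.corrcoef(R, rowvar=False)

def similar_items(movie, k=1):
    """Return the k movies most correlated with `movie` (excluding itself)."""
    i = movies.index(movie)
    order = np.argsort(-C[i])            # highest correlation first
    return [movies[j] for j in order if j != i][:k]

print(similar_items("Alien"))
```

In this toy data the two sci-fi titles correlate strongly with each other and negatively with the romances, so the sci-fi title is recommended.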
Let’s look at the user-based approach. In it, a cluster-based approach is utilized to statistically group consumers or customers into segments, which again is similar to how practitioners segmented customers into distinct segments for various direct marketing initiatives.
In the movie example again, a consumer is bucketed into a segment of other Netflix consumers who share similar preferences for particular movie genres. Using the user-based approach, Netflix would recommend movies from the genres characteristic of that user’s cluster segment.
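The user-based approach can be sketched with a tiny k-means-style segmentation over invented genre-preference data. The genres, preference values and the two-cluster choice are all assumptions made purely for illustration.

```python
import numpy as np

# Hypothetical genre-preference vectors (columns: sci-fi, romance), one row per user.
genres = ["sci-fi", "romance"]
prefs = np.array([
    [0.9, 0.1],
    [0.8, 0.3],
    [0.2, 0.9],
    [0.1, 0.8],
])

# A minimal two-cluster k-means (a stand-in for any segmentation method).
centroids = prefs[[0, 2]].copy()          # seed with two dissimilar users
for _ in range(10):
    d = np.linalg.norm(prefs[:, None] - centroids[None], axis=2)
    labels = d.argmin(axis=1)             # assign each user to nearest centroid
    centroids = np.array([prefs[labels == k].mean(axis=0) for k in range(2)])

def recommend_genre(user_pref):
    """Assign the user to a segment and return that segment's dominant genre."""
    k = np.linalg.norm(centroids - user_pref, axis=1).argmin()
    return genres[centroids[k].argmax()]

print(recommend_genre(np.array([0.85, 0.2])))
```

A new user with strong sci-fi preferences lands in the sci-fi-leaning segment and is recommended that genre, mirroring how direct marketers matched offers to customer segments.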
There is also a hybrid approach that combines both the item-based and user-based approaches. Here an overall score or composite index is used to rank customers. This composite index creates separate index ranks where one rank is based on the user-based approach and another rank is based on the item-based approach, which are then combined into one overall composite ranking index.
Assuming ranks go from 1 to 100, with 1 being the highest and 100 the lowest, an item-based rank of 25 and a user-based rank of 11 would yield a composite rank of 18 [(25 + 11) / 2] for that specific movie or show. Each movie or show would carry such a composite rank, and movies with lower ranks would be more likely to be recommended to the user.
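The composite-rank arithmetic above can be written out directly. The movie titles and the second movie’s ranks are invented; the first movie uses the ranks from the example.

```python
# Average the item-based and user-based ranks (1 = best, 100 = worst).
item_rank = {"Movie X": 25, "Movie Y": 40}
user_rank = {"Movie X": 11, "Movie Y": 60}

composite = {m: (item_rank[m] + user_rank[m]) / 2 for m in item_rank}
best = min(composite, key=composite.get)   # lowest composite rank wins

print(composite["Movie X"], best)          # 18.0 Movie X
```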
Using predictive models
A variety of these approaches can be used; the key is to act on the historical data being captured in the online environment. In the digital world, it is all about what to offer the customer next, given their past behaviour or their path within the customer journey.
Recommender engines are the more common tool for using data within an eCommerce-type environment. Traditional predictive models are applied online at a more holistic level, such as predicting customer retention or customer migration (identifying those customers most likely to migrate to become higher-value customers). Other predictive models using digital data estimate the credit risk of a customer or whether a given transaction is fraudulent.
Given today’s advancements in technology, both recommender engines and predictive models would leverage the use of artificial intelligence (AI) if appropriate. Note how I use the word appropriate. Data scientists have access to a variety of machine learning techniques with one option being the use of AI or deep learning. The knowledge of the data science practitioner in assessing both the performance of the solution as well as its explainability to business stakeholders will dictate the type of machine learning solution that will be deployed.
Operationalizing these solutions
In building these advanced analytics digital solutions, one key consideration is how the models will be deployed. Operationalizing them in a production environment involves extensive collaboration with DevOps engineers. Data scientists work with these individuals to ensure that their solutions are implemented correctly within the digital environment. The solutions are often packaged in software containers; a container might host one or multiple algorithms being scored on the platform to produce multiple solutions for a given consumer.
Ensuring correct implementation and deployment involves a quality-control approach, no different from what practitioners used within the direct marketing environment. But a key difference is that this production environment in many cases operates in real time rather than batch, which means that predictive analytics or machine learning solutions must be able to use real-time data such as streaming data.
Given that these solutions must in some cases be delivered instantaneously, with extremely large volumes of data to process, big data technology and its parallel approach to data processing is now a given, as opposed to sequential processing.
But one element that can often be overlooked in these solutions is the adoption of an effective measurement framework that tracks and evaluates the performance of these advanced solutions. Digital marketers with direct marketing backgrounds understand this concept completely, which represents one of the pillars to success in any marketing program.
The growing need for practitioners
The need for advanced analytics in our digital ecosphere is even more pressing than what practitioners observed 20 to 30 years ago. Technology has been the key enabler. But the approaches and techniques used to develop solutions in today’s highly charged digital data environment are no different from those used decades ago. The difference is that we have much more data, with an expectation to deliver the solution immediately.
Even with advancements in technology and automation, including AI, which has replaced and will continue to replace jobs, many consultants’ reports (such as those from Gartner) indicate that analytics and advanced analytics represent high-growth areas for employment opportunities.
Think about it. With data and technology at our fingertips, we can now solve many more problems, but the human becomes even more critical. The person who can understand and identify the real critical business issue or problem, and align the right data with the right tools, is the core need for organizations across the business, government and not-for-profit sectors.
So, what does this individual look like? This will be the topic of our next column as we explore not only the key traits of this individual but what it takes to develop this type of person both academically and within the corporate sector.
Richard Boire is president of Boire Analytics. He can be reached at email@example.com.