The Future of Marketing Analytics: Highlights from MSI's 2023 Conference


By Professor Eric T. Bradlow (@ebradlow) (GBK Collective and the Wharton School)

As the Vice Dean of Analytics and Chair of the Marketing Department at the Wharton School, it was great to help host the Marketing Science Institute’s Analytics Conference on campus this year. I really look forward to MSI events as an opportunity to meet with other leading academics and practitioners to discuss the latest analytics methods and their applications. Below, I share some of my takeaways from the most discussed themes from this year’s MSI conference, along with a few areas I was surprised (from a “traditional statistics” perspective) didn’t come up more.

Universal Holdout / Field Experiments

Experimentation will always play a crucial role in marketing analytics. While big data and machine learning provide valuable insights, well-controlled experiments paired with sophisticated statistical methods can deliver real value to the firm.

A/B testing and randomized controlled trials are effective for many decisions, but other problems require a more specialized approach. For example, an A/B test may produce a statistically significant result yet estimate longer-term outcomes imprecisely. Experiments are also often run in isolation – making it difficult to estimate the combined impact of multiple variables.
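To see why significance alone can mislead, here is a minimal sketch of a two-proportion A/B test using hypothetical conversion counts: the lift clears the usual z > 1.96 bar, yet the confidence interval is wide relative to the effect itself, which matters when projecting longer-term outcomes.

```python
import math

def ab_test_summary(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test plus a 95% confidence interval for the lift.

    A significant z-score can still come with a wide interval --
    precision matters as much as significance for long-term planning.
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    lift = p_b - p_a
    # Pooled standard error for the z-test
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se_pool = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = lift / se_pool
    # Unpooled standard error for the confidence interval
    se = math.sqrt(p_a * (1 - p_a) / n_a + p_b * (1 - p_b) / n_b)
    return lift, z, lift - 1.96 * se, lift + 1.96 * se

# Hypothetical test: 2.0% vs 2.6% conversion on 10,000 users per arm
lift, z, lo, hi = ab_test_summary(conv_a=200, n_a=10_000, conv_b=260, n_b=10_000)
print(f"lift={lift:.4f}, z={z:.2f}, 95% CI=({lo:.4f}, {hi:.4f})")
```

Here the result is "significant," but the interval spans a range wider than the lift itself – exactly the precision gap described above.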

At the MSI conference, Xiao Liu, Assistant Professor of Marketing at New York University Stern School of Business, presented on the use of universal holdouts and field experiments to address these challenges. By conducting accelerated tests that assess longer-term outcomes within these experiments, brands can improve estimation accuracy and optimize decision-making over time.

Personalization at Scale

Finding the most impactful experiments to run starts with asking the right questions and maintaining a ‘test and learn’ mindset where you’re constantly iterating. But there are a number of factors to consider when it comes to optimizing decision-making or personalizing customer experiences at scale.

One of these challenges is the trade-off between batch and real-time personalization, which Prof. Liu also discussed at MSI. Real-time personalization enables immediate responses and adjustments based on up-to-date information, catering to individual preferences in the moment. On the other hand, batch approaches offer efficiency and scalability, processing data in larger batches for optimization across a broader customer segment or scenario set. 

The choice between these approaches depends on factors such as use case, data volume and velocity, and desired level of personalization. Finding the right balance between timeliness and efficiency is essential.

In the past, an important source of within-person heterogeneity was difficult for marketers to understand because we didn’t know where customers were physically located – only when, what, and how much they were buying. Now, by drawing on better data from a variety of sources, brands can more accurately predict specific customer outcomes based on location, timing, context, and other individual preferences by channel.

“Finding the most impactful experiments to run starts with asking the right questions and maintaining a ‘test and learn’ mindset”

The importance of model fit and out-of-sample validation

Elea Feit, Associate Professor of Marketing at Drexel University, and Alice Li, Associate Professor of Marketing at Ohio State University, presented an update on MSI’s initiative on Marketing Mix Models (MMM). These models have regained popularity in recent years due to increased privacy measures coupled with the increasing capabilities enabled by ML and AI. As part of their update, Prof. Feit and Prof. Li discussed the need for model fit and out-of-sample validation in marketing. Model fit refers to how well a statistical model represents the training data, while out-of-sample validation tests the model's performance on new data to assess its predictive capabilities. 
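The fit-versus-validation distinction can be made concrete with a small sketch. Assuming hypothetical weekly ad-spend and sales figures, the code below fits a simple least-squares line on an initial training window and then checks its error on a held-out period:

```python
def fit_ols(x, y):
    """Simple least-squares fit of y ~ a + b*x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    a = my - b * mx
    return a, b

def mse(x, y, a, b):
    """Mean squared error of the fitted line on (x, y)."""
    return sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y)) / len(x)

# Toy weekly data: ad spend (x) and sales (y) -- hypothetical numbers
x = [1, 2, 3, 4, 5, 6, 7, 8]
y = [2.1, 3.9, 6.2, 7.8, 10.1, 12.2, 13.8, 16.1]

# Hold out the last two weeks for out-of-sample validation
a, b = fit_ols(x[:6], y[:6])
print("in-sample MSE: ", mse(x[:6], y[:6], a, b))
print("out-of-sample MSE:", mse(x[6:], y[6:], a, b))
```

In-sample error measures model fit; the held-out error is the honest test of predictive capability that Prof. Feit and Prof. Li emphasized.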

Prof. Feit and Prof. Li also stressed the value of information and forecasting for exploring beyond existing data patterns, allowing marketers to uncover novel opportunities and make informed decisions as they adapt to changing market dynamics.

(Prof. Dominique “Mike” Hanssens from UCLA is also supporting MSI’s MMM initiative but was unable to attend the conference. Mike is also a colleague at GBK Collective, where we are helping leading brands successfully leverage these models to drive improved decision-making and results).

High-Dimensional Interactions

The challenge of high-dimensional interactions and feature selection was another key topic at the Analytics at Wharton-MSI conference. Traditional approaches to data analysis often struggle to handle the complexities of high-dimensional data, which includes interaction terms between variables. Additionally, determining the significance of selected features can be a daunting task. 

This can lead to overfitting, a phenomenon where a statistical model becomes too specialized to the training data, losing its ability to accurately predict real-world scenarios based on new or unseen data.
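One way to see why high-dimensional interactions strain traditional methods: even a modest feature set explodes once pairwise interaction terms are added, since p base features yield p + p(p-1)/2 candidate columns. A small sketch with hypothetical feature names:

```python
from itertools import combinations

def with_pairwise_interactions(features):
    """Return the original features plus all pairwise interaction terms."""
    interactions = [f"{a}*{b}" for a, b in combinations(features, 2)]
    return features + interactions

base = [f"x{i}" for i in range(1, 21)]   # 20 base features
expanded = with_pairwise_interactions(base)
print(len(base), "->", len(expanded))    # 20 base features become 210 columns
```

With hundreds of features instead of twenty, the expanded design easily exceeds the number of observations – which is why regularized feature-selection methods, and careful significance assessment for the selected terms, become essential.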

As an example, Xueming Luo, Charles Gilliland Distinguished Chair Professor of Marketing at Temple University’s Fox School of Business, brought up how weather conditions can influence people's emotional responses to music. For instance, sunny, clear skies might evoke feelings of happiness or excitement, whereas gloomy weather might evoke a more reflective mood.

Understanding these high-dimensional interactions, such as the weather × music-emotion interaction, is crucial for marketers. As individual customers provide signals about their needs and intentions through their activities, brands can capture that data and respond with relevant, timely content and experiences based on those triggers.

Bringing Theory into AI Models

Bringing theory into AI models was another area of focus at the conference. While it is important to incorporate theory into AI models, there is a risk of purely replicating past patterns. This can be addressed by considering whether these patterns are priors (which “nudge” the outcomes) or hard constraints, which restrict certain results (e.g. one could imagine enforcing downward-sloping demand so that higher prices yield lower demand). Hortense Fong, Assistant Professor in Quantitative Marketing at Columbia Business School, provided valuable insights on this topic. 

“While it is important to incorporate theory into AI models, there is a risk of purely replicating past patterns.”
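The prior-versus-hard-constraint distinction can be sketched with a toy demand model; the data, prior slope, and prior weight below are all hypothetical:

```python
def fit_slope(prices, demand):
    """Unconstrained least-squares slope of demand on price."""
    n = len(prices)
    mp, md = sum(prices) / n, sum(demand) / n
    return (sum((p - mp) * (d - md) for p, d in zip(prices, demand))
            / sum((p - mp) ** 2 for p in prices))

def slope_with_prior(prices, demand, prior=-1.0, weight=0.5):
    """Prior-style 'nudge': shrink the estimate toward a negative slope."""
    return (1 - weight) * fit_slope(prices, demand) + weight * prior

def slope_hard_constrained(prices, demand):
    """Hard constraint: demand is never allowed to slope upward in price."""
    return min(fit_slope(prices, demand), 0.0)

# Noisy toy data where the raw estimate comes out (wrongly) positive
prices = [10, 11, 12, 13]
demand = [100, 104, 99, 103]
print(fit_slope(prices, demand))              # raw estimate: positive
print(slope_with_prior(prices, demand))       # nudged below zero by the prior
print(slope_hard_constrained(prices, demand)) # clamped at zero
```

The prior merely pulls the estimate toward theory; the hard constraint forbids theory-violating results outright – the two options Prof. Fong's discussion distinguished.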

Responsible AI Use

The responsible use of AI was another hot topic of discussion at the event. In a panel moderated by Mary Purk (Executive Director of AI at Wharton), JoAnn Stonier, Chief Data Officer of Mastercard, and Miriam Vogel, CEO of EqualAI, highlighted the importance of addressing bias in AI systems and implementing responsible AI governance – ensuring that companies apply data in the right way and prioritize the principle of "do no harm" over "do more good".

Stonier and Vogel also emphasized the importance of reducing differences or reducing disadvantages through responsible AI use. This involves addressing biases and ensuring that AI systems are designed and implemented in a way that promotes fairness, equity, and inclusivity, ultimately working towards reducing societal disparities and empowering marginalized communities.

The Role of ChatGPT and Deep Learning Models

The rise of ChatGPT and generative AI was another big topic of discussion. As a language model, ChatGPT has already made a significant impact in the field of data science, changing how we code and analyze unstructured data. Deep learning models, such as the GPT family of models that powers ChatGPT, are also transforming the way we approach natural language processing (NLP), enabling highly accurate and efficient analysis of textual data.

The availability of deep learning models has opened up new possibilities for processing vast amounts of unstructured data from sources like social media, news articles, and customer feedback at scale. This has provided insights into consumer behavior, trends, and sentiment.

However, the use of deep learning models like ChatGPT raises important ethical and social considerations. These models rely on extensive datasets, and the quality and potential biases within the data can significantly impact the outcomes and potential risks associated with their use.

“The question of whether AI will be good enough to complete meaningful cognitive tasks or assist with traditional research has been answered. Now the question is how will it – and we – evolve and adapt?” - GBK President Jeremy Korst

In addition, Stonier discussed the distinctions between predictive AI and generative AI, highlighting that generative AI poses unique challenges compared to predictive AI. Exploring the potential applications and implications of generative AI is an ongoing endeavor, and understanding its capabilities and limitations will be crucial. 

My partner in my consulting business (Jeremy Korst, President of GBK Collective) recently published a piece on the potential impact of AI in the realm of marketing insights. As Korst notes, “the question of whether AI will be good enough to complete meaningful cognitive tasks or assist with traditional research has been answered, for now. Now the question is how will it – and we – evolve and adapt? While AI has incredible potential as a tool for consumer insights and research, it’s important to strike the right balance between AI and other research methods.”

The declining value of individualized data

Another topic we discussed at the conference is whether the cost of not having individualized data is declining. Stonier noted that as the ability to measure and collect data at scale continues to improve, the value of individualized data diminishes.

With advancements in analytics and the availability of large datasets, organizations can gain insight and make informed decisions even without highly granular individual-level data.

While individualized data can provide specific insights, the cost and effort of collecting and processing it may outweigh its incremental value. Analyzing aggregated and anonymized data from larger populations can still uncover valuable patterns and trends. Privacy concerns and data protection regulations have also contributed to the shift towards privacy-preserving approaches. 

While individual-level data remains valuable in certain contexts, the discussion emphasized the increasing recognition of the value derived from larger-scale data analysis and the need to consider trade-offs between benefits and costs.

A few areas I was surprised didn’t come up more

There were a few areas I was surprised received less attention. One is the exploration-versus-exploitation dilemma, a fundamental challenge in decision-making: striking a balance between exploiting known strategies that work and exploring new possibilities that could work even better. This is the so-called multi-armed bandit problem.
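A minimal sketch of one classic bandit strategy, epsilon-greedy, assuming three hypothetical ad variants with toy conversion rates:

```python
import random

def epsilon_greedy(true_rates, pulls=5000, epsilon=0.1, seed=7):
    """With probability epsilon, explore a random arm; otherwise
    exploit the arm with the best estimated reward so far."""
    rng = random.Random(seed)
    counts = [0] * len(true_rates)
    values = [0.0] * len(true_rates)
    for _ in range(pulls):
        if rng.random() < epsilon:
            arm = rng.randrange(len(true_rates))                    # explore
        else:
            arm = max(range(len(true_rates)), key=values.__getitem__)  # exploit
        reward = 1.0 if rng.random() < true_rates[arm] else 0.0
        counts[arm] += 1
        values[arm] += (reward - values[arm]) / counts[arm]  # running mean
    return counts, values

# Three hypothetical ad variants with conversion rates unknown to the algorithm
counts, values = epsilon_greedy([0.02, 0.05, 0.04])
print(counts)  # pulls allocated to each arm
```

The 10% exploration budget keeps learning about all arms while most traffic flows to whichever variant currently looks best – the balance the dilemma describes.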

Another aspect that didn't receive much discussion was the objective function being optimized. Every marketing strategy aims to achieve specific goals, and the choice of the objective function greatly influences the decision-making process. Defining the right objective function is essential for aligning marketing efforts with overall business objectives and ensuring that resources are allocated effectively.
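How much the objective function matters can be shown with a toy example; the campaign options and their numbers below are hypothetical. The winner under a pure click objective differs from the winner under an expected-profit objective:

```python
# Hypothetical campaign options: (name, click_rate, margin_per_click)
options = [("A", 0.10, 1.0), ("B", 0.04, 4.0), ("C", 0.07, 2.0)]

best_by_clicks = max(options, key=lambda o: o[1])          # maximize clicks
best_by_profit = max(options, key=lambda o: o[1] * o[2])   # maximize profit

print(best_by_clicks[0])  # prints "A": highest click rate
print(best_by_profit[0])  # prints "B": highest expected profit per impression
```

Same data, same model, different objective – and a different "optimal" decision, which is why the objective deserves explicit discussion.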

The cold start problem, which refers to the challenge of making accurate predictions or recommendations for new users or products with limited data, also wasn’t discussed much. This problem is particularly relevant in personalized marketing and requires innovative approaches to overcome.
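One common remedy for cold starts, sketched here with hypothetical numbers, is shrinkage: lean on a population average until enough individual-level data accumulates.

```python
def smoothed_rating(item_sum, item_count, global_mean, k=10):
    """Shrinkage estimate: with few observations, lean on the global mean;
    as item_count grows, the item's own data dominates. k sets how many
    observations it takes for the item's data to carry equal weight."""
    return (item_sum + k * global_mean) / (item_count + k)

print(smoothed_rating(0, 0, 4.0))      # brand-new item: falls back to 4.0
print(smoothed_rating(450, 100, 4.0))  # well-observed item: close to 450/100
```

The new item gets a sensible default instead of an undefined or wildly noisy estimate, while established items are barely affected.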

Other important topics that received less attention than I anticipated include causal inference, which allows marketers to establish causal relationships between marketing activities and outcomes, as well as considerations of multi-collinearity, aliasing, outliers/influential points/robustness, non-stationarity, and test-retest reliability. These topics play a crucial role in ensuring the validity and reliability of marketing analytics methodologies and findings.

However, it is worth noting that one participant did discuss the potential of an attribute vector-based approach using AI for addressing forecasting challenges related to radically new options. Exploring innovative techniques and leveraging AI capabilities can provide better insights and solutions in dynamic and evolving marketing environments.

In summary, the conversations at the Analytics at Wharton - MSI conference underscored the significance of data and analytics in driving marketing success. By applying advanced analytics techniques responsibly, brands can gain a competitive edge. As the marketing landscape continues to evolve, maintaining a balanced approach to AI and machine learning will be pivotal for brands seeking sustainable growth and meaningful connections with their target audience.

 
