In the rapidly evolving landscape of market research, staying ahead requires adapting and embracing change. This session focuses on equipping market researchers with the skills necessary to leverage AI effectively, transforming challenges into opportunities for growth and innovation.
Christina Nathanson, Director, Market Research, Quest Diagnostics, served as moderator of the panel. The panel included: Kajoli Tankha, Senior Director, Consumer Marketing Insights, Microsoft; Eli Moore, Director, Consumer Analytics, The Coca-Cola Company; and Peter Henstock, Lecturer, Graduate Advanced AI & Machine Learning, Software Engineering Capstone, Harvard.
Nathanson: I’m Christina Nathanson, and I lead market research at Quest Diagnostics. As a team, we employ rigorous research and analytics solutions, including primary, secondary, CX, and UX, to embed research via consultative partnerships across the commercial and consumer parts of our business.
A way I’d like us to introduce ourselves is to talk a little about how we use AI personally, because we all use it in some fun way, as well as how we use it in our work life. I’ll start. I love using AI to explore new places wherever I travel. I ask it (personally, we’ve named it George) because I’m very curious: give me an itinerary for this weekend in Washington DC. I’m going to be staying at this hotel. I’m going for three days. Give me all the restaurants and the best places to get cocktails in the Capitol Hill area. And I get my list, which I love. Usually, it’s pretty good. I may have to prompt again, but it’s great.
For work, we’ve used it extensively to synthesize insights from secondary research reports, articles and themes to start an insight paper. We’ve used it for open-ended questions, discussions, and transcripts from qualitative research to help us boil things down.
Moore: I lead consumer analytics globally at The Coca-Cola Company. My team is responsible for building analytical tools that leverage our consumer foundational trackers. We also lead several AI initiatives, one in particular around an AI analytical assistant that we’ve named Simon. Simon is an evolution of an AI that we had built before. Now it’s much more engaging with the power of ChatGPT and large language models. But the idea is to give users direct access to data, to be able to ask questions and learn a lot of things through a direct connection to our datasets.
How do I love using AI? Personally, I’m a huge nerd even outside of work, and so I use AI probably the most for writing scripts, such as Python scripts, and things like that. I love to try to find ways of automating and systematizing all the things that I love to do. But I also love to use it just to tell jokes to my kids and things like that as I’m sure many of you do. Most of them are really bad jokes, which fits with what I would probably tell anyway.
Tankha: I’m so happy to be here. I lead consumer insights for Microsoft. So that’s Windows, Surface, Xbox, 365, search, and AI. I’m a huge AI optimist. I could give you many personal examples, but the one that comes to mind is my son is autistic, and that means he often has a hard time understanding other people’s perspectives. I just pop into Copilot and ask it to act like a social-story expert and write a social story describing a situation he was in, where he doesn’t understand why it didn’t go the way he expected. It’s enormously helpful because it slows the situation down for him. It’s also a written story, so he can read it at his leisure. I also get Copilot to generate a few questions for him. He’s able to answer those questions, so he gets the positive feedback of answering them correctly. That’s my most personal way of using it, and it’s been incredibly helpful.
At work, we are the company which has Copilot, so we use it extensively. But the biggest game changer for me has been in two areas. The first is summarizing meetings and capturing action items. If you just record a meeting in Copilot, it can summarize all the action items. The second thing that is very helpful to me is my written language. Sometimes I feel it doesn’t sound quite professional enough. When I’m writing performance reviews or an email and I want to change the tone to sound a little more professional, I’ll have Copilot rewrite it for me, where the ideas are all mine, but the polish comes from Copilot.
Henstock: Interesting use cases, everyone. My name is Peter Henstock. I teach at the Harvard Extension School. I teach courses in machine learning and AI as well as software engineering. That’s my part-time work. My full-time work has been working at Pfizer, where I was a machine learning and AI lead, trying to evangelize AI across the company and have it picked up by the scientists and used extensively throughout different areas of drug discovery. I’m more of an AI evangelist in that sense.
Personally, I found I’m using it more for music. I’m a wannabe pianist, and I’ve been trying to figure out how I can use AI for music for 20 or so years, using it for understanding rhythms, patterns, accompaniments and other aspects. I find that AI isn’t just one thing. It’s a hundred different pieces that come together to solve different problems. We’ve been using it extensively to extract insights from data, to have better understanding that gives us something beyond what the statistics and organization would provide, aggregating literature data together to understand what science is doing and the patterns within it, the temporal trends, and which drugs are going to be the up-and-coming ones. There’s a lot of value for aggregating data together and generating insights from it.
Defining Human & AI Roles
Nathanson: I love the authenticity here, and I really appreciate that. Thank you so much for sharing. I learned something new about you, and we all did. I’m going to jump right in here and ask about how many fear that AI may replace market researchers? What do you think is the role of humans and the role of AI?
Moore: We get this question a lot at Coke. People in analytical roles are terrified that they are going to be replaced by machines. One of the common things you’ll hear people say is that most technologies that have come along have not completely replaced people. They’ve changed the kind of work that we do, and I think there’s a lot of truth to that with AI as well. But when I think about what humans innately bring versus what AI can do today, when we say AI right now, let’s talk about large language models. Large language models are incredible at answering questions that they’ve been trained on, mostly things on the Internet, because that’s the kind of information most of them have been built on. Ask something like ChatGPT a very specific question about your business, and it cannot answer. That’s because it doesn’t have the information. It’s never been trained on that. One day, it will be. One day, we’ll have a connection in, and we’re going to be doing some of this within our own environment.
But it can only answer things that a human has decided to load in and teach it how to answer. One of the roles of humans is actually going to be educating our AI: thinking about how we want to train it, how we want to teach it to answer, and what ethical and moral boundaries we want to create that we believe should govern how an AI answers and solves problems. That’s critical because it comes down to decisions. This is the thing that ultimately humans are going to do. No matter how good the technology is, humans are going to have to make a choice. Is our goal to drive revenue? Is our goal to drive profit? Is our goal to grow share? These don’t necessarily have a right or wrong answer. There is a choice that we must make. This goes across all sorts of analytical exercises as well. There are different answers, and we must choose which one we want.
Tankha: It’s not about a battle between AI and humans, though I think all of us grew up on Terminator, and that’s how we instinctively think about it. I may be speaking as an AI activist, but having a playful, curious attitude, just going in and trying to see what you can accomplish with AI, that is the mindset we should have. I don’t think we can really predict exactly how humans and AI will interact, but historically, new technology hasn’t reduced jobs; it has increased them. It has changed the nature of jobs, though.
We should try to learn from it so we are ready for a different kind of job, one that might be a lot more fun. When I started working as a focus group moderator, I think of the enormous drudgery I used to have, which ultimately made me leave a job I loved. I would literally spend hours and hours of the day, and it just stretched on forever, doing this drudgery work. While that kind of drudgery has been reduced, there’s still a lot of additional drudgery that makes very little sense for us to be focusing on. If you can’t be optimistic, you can be curious. People are having an emotional conversation, and we sometimes try to push them into a rational conversation, and that doesn’t work. I only suggest that you try to approach it with a sense of curiosity and interest. Just think about your workday and how many things you do that are pure drudgery, things you would love to not do, things you would love to hand off to an assistant. Think about that, and then think about how AI could potentially help you get rid of it.
Nathanson: I think you said in our first meeting that someone will never spend an hour to save a hundred hours. Right?
Tankha: Respondents said that. Sometimes what happens is we’re almost looking for AI to fail. I’ve seen this with my friends: I tried this once, it didn’t work, and I’m never trying it again. My answer to that is just try; it’s an emerging technology. Today’s AI is the worst AI you will ever see in your life. This is like the first iPhone, where you could make a call and it seemed miraculous, but it’s nothing compared to what you have now in your pocket. People are looking for it to fail, but spend a little time. Make that one-hour investment and save a hundred hours.
Henstock: There are three different places where AI really plays a role, and they break into different periods. One is the drudgery area that Kajoli was talking about. It’s how we can automate, do things better, extract all the information, do all these tasks, and make them faster for us. I think that’s where we’ve been for the past five years, and it’s getting even better now.
Where we’re headed is that we know that an AI can generate images, it can generate text, which means it’s probably going to be personalizing ads very soon, to very small groups and perhaps individuals. We have similar patterns going on in other fields, even medicine. And if we can do that though, what is the strategy for advertising? How do we work with that? How does that affect market research? Humans are involved in the right strategy. Given the strategic direction, how do we enable these systems through a combination of the AI plus humans to do things better?
The third phase is where we’re really headed, and that’s the inferencing aspect. We can ask AI what the capital of France is. It knows the answer because it can look it up. We can also ask it what the stock market price for Apple will be tomorrow. It can’t look it up, but it will still make a guess. It shouldn’t be doing those kinds of things, because it doesn’t really have a model that’s behind it that says, well, we’re looking at the trends of economics. We’re looking at Apple’s trends. We know what the different buy and sell signals are. It doesn’t do that now. This is an inferencing engine that needs to be combined with where we are now. The combination of ChatGPT or some kind of system like that plus this inferencing engine will really transform a lot of the things that we do. It’s really being able to use the tools effectively.
My answer to the question is really that the market researchers who use AI will be much better than those who don’t.
Is There a Need for Training & Upskilling?
Nathanson: That was such a great perspective, which leads me to ask: how do we approach training and upskilling our teams, where they are permitted to use these AI technologies, to ensure that they’re proficient with them, whatever the technology may be? What skills do market researchers need to effectively develop and leverage AI in their work?
Tankha: I think skill number one is just the willingness to try it. Honestly, the only thing I encourage my team to do is to just try something. Find some real problem. Don’t make up a use case that you don’t have. Find a real use case that is a problem for you and use AI for that. For example, I wanted to do research on very broad AI perceptions. I have a large team doing research on various ideas and messaging in AI. Each of their focus groups has a warmup section, which is just incredibly valuable for me. My use case was: how can I combine the warmup sections of ten different projects and turn them into my own focus group?
Moore: There’s nothing truer than just jumping into it, exploring it, and experiencing it. The reality is we are at the very beginning of everything that we’re going to see when it comes to AI. We don’t really know where it’s going. And I think that’s exciting. If you see that as an opportunity for yourself, think of it like any other technology that just came out. When the Internet was first invented, a lot of people were like, who’s ever going to use this? My dad used to tell me stories about the first computer that he purchased for his university, and everyone was like, you’re crazy. Why do we need a computer?
This is where we’re at with AI today. If you can recognize that, you can say, if I start playing with this, I might be able to find use cases that no one has thought of yet, because this is the very beginning. As you start to explore it, it’s going to be exciting to see what everyone comes up with, because people are going to use the technology in different ways and find interesting use cases that make their lives better and, in the process, make a lot of other people’s lives better. We can’t emphasize enough just experiencing it, being curious, and trying to discover ways to make your life better with it.
Henstock: I’d like to go one step beyond, and that is setting up experiments and figuring out what AI is capable of doing. It’s one of the most important things that we teach in the machine learning course: how to design an experiment. It starts with being curious. It continues with figuring out what the end result looks like, and how we prove that AI is actually capable of achieving it, or prove that it can’t. If we set up an experiment and say we’re going to have this many people, we have human classifiers, we have AI do it, can we prove that AI is adding value? This is something we do for almost all our experimental work. It’s how all the papers in AI are written: these are the benchmarks, these are the different datasets, and this is how well AI can perform. Understanding what that setup looks like, and what metrics you’re going to assess with, really helps guide what AI can do and will do in the future.
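The kind of human-versus-AI experiment Henstock describes can be sketched in a few lines. Everything below, the labels, the gold standard, and the `accuracy` helper, is invented purely for illustration; a real study would use many more responses and a significance test rather than raw accuracy.

```python
# Toy experiment: score human and AI classifications against a gold standard.
# All labels here are made up for illustration.

def accuracy(labels, gold):
    """Fraction of labels that match the gold-standard labels."""
    return sum(l == g for l, g in zip(labels, gold)) / len(gold)

gold         = ["pos", "neg", "pos", "neg", "pos", "pos"]  # trusted reference labels
human_labels = ["pos", "neg", "pos", "pos", "pos", "pos"]  # one disagreement
ai_labels    = ["pos", "neg", "neg", "pos", "pos", "pos"]  # two disagreements

print(f"human accuracy: {accuracy(human_labels, gold):.2f}")
print(f"ai accuracy:    {accuracy(ai_labels, gold):.2f}")
```

The point of the setup is the comparison itself: only by scoring both human and AI output against the same reference can you claim the AI adds (or fails to add) value.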
Perspectives on Compliance
Nathanson: Let’s touch on the elephant in the room: restrictions and compliance around AI. Yesterday, we heard a bit about synthetic data, which is fascinating to me. I work in the health care industry, and getting respondents, trying to get the right people in the room, is always a challenge. If I had the opportunity to have a panel of physicians or health care executives in my arsenal, that would be phenomenal.
But I think many of you may have experimented or have a perspective on this. So how does your organization ensure the ethical use of AI in market research or any kind of data analytics, even in handling consumer data? How can we all play nice in the sandbox?
Moore: I think we could talk about synthetic respondents, but I’m going to set that aside for a moment and just talk about synthetic data, because there are a lot of applications and benefits there. One benefit involves privacy. If you have a dataset today that has Eli Moore’s responses to a survey, one approach is masking, creating an ID and things like that, and we already do that. Another way is to create a synthetic dataset that has totally new responses, but is based off your original dataset. One of the ways I saw this presented once: they show a picture of two dogs, and a human can’t tell which one was created by AI and which one is a real picture of a dog. The same sort of thing can be done with data. That allows us to have the original dataset, which carries lots of privacy obligations we have to manage, and another one that is, in essence, completely made-up data but provides the same level of insight you would get from the first dataset. That’s one potential use case with synthetic data.
This is technology that existed long before large language models, but I think it’s incredibly powerful from a market research perspective. If you can fill in one missing week of responses in Houston after, say, a hurricane hit, what’s to say you don’t fill in the entire month of February every year, or every other month every year? Now you have significantly larger amounts of data, and you don’t have to spend so much to collect it. There are a lot of cool use cases like that around synthetic data that people are talking about. It can help us out a lot, especially from a Coke perspective. We’re always looking for ways to get the same level of insight with less cost and to drive efficiency.
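As a toy illustration of the synthetic-data idea Moore describes (not Coca-Cola's actual method), the sketch below generates made-up survey responses whose summary statistics track an original sample. Production generators model joint distributions and correlations across columns; this one only preserves a single column's mean and spread, using the standard library.

```python
# Sketch: synthesize new survey responses that mimic the statistical shape
# of an original dataset, so no real respondent's row is exposed.
import random
import statistics

random.seed(42)  # reproducible illustration

# Made-up "original" responses: purchase intent on a 1-10 scale.
original = [7, 8, 6, 9, 7, 5, 8, 7, 6, 8]

mu = statistics.mean(original)
sigma = statistics.stdev(original)

# Draw synthetic responses from a normal fit, clamped to the 1-10 scale.
synthetic = [min(10, max(1, round(random.gauss(mu, sigma)))) for _ in range(10)]

print("original mean: ", round(mu, 2))
print("synthetic mean:", round(statistics.mean(synthetic), 2))
```

The synthetic rows are entirely fabricated, yet aggregate analysis on them lands close to the original, which is the privacy trade-off being described: keep the insight, discard the identifiable records.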
Henstock: Those are great points. My perspective is that we need some kind of governance over AI policies, and that should be at a corporate level. In the health care sector, we have to prove that every treatment has no bias as a result of race, ethnicity, gender, age, and so on, and the same holds true for all our AI models. If we’re predicting that a particular medicine will work for a group of people, it has to work for all these different groups, and they all have to be tested. What’s at stake is the health care of those individuals, but also the company’s reputation. I think that will affect all companies without such a policy in place, particularly regarding the generative AI aspects, but others as well.
The other aspect of privacy, beyond anonymization, involves the new large language models. If you send data to them, in general they can use that data for other purposes, and it may even be possible to extract the data you sent, using various nefarious techniques. There also needs to be governance over what can be sent to a large language model, depending on where it is, whose it is, and whether or not it’s an internal dataset. There are different layers of privacy that have to be governed, both of which sit at the corporate level.
Curious about the evolution of chatbots? Check out the video for more of the “Empower Your Research: Upskilling with AI Tools and Techniques” panel at the Road to TMRE 2024, plus a question-and-answer session afterwards. Click here for more of the content during Road to TMRE 2024.
Contributor
Matthew Kramer is the Digital Editor for All Things Insights & All Things Innovation. He has over 20 years of experience working in publishing and media companies, on a variety of business-to-business publications, websites and trade shows.