Moderated by All Things Insights’ Seth Adler, the session was part of TMRE Continued, which featured themed sessions including the AI for Insights & Analytics Summit Continued on December 5.
While AI can automate data collection, analysis, and interpretation to uncover valuable insights at scale, that's easier said than done. Myriad issues stand in the way: corporate firewalls, and a dynamic yet disparate landscape of mostly unproven solutions that must be found, tested, and implemented by the right combination of teams before they fit your organization. The insights function of course stands to benefit from predictive modeling, sentiment analysis, and clustering for in-depth consumer understanding. The key is identifying opportunities for hybrid approaches that combine AI techniques with traditional research methods.
Speck notes, “Since TMRE, there actually has been a lot of activity. As you know, President Biden signed the executive order for responsible AI. Even in the EU, there has been a voluntary code that’s set to come out. That’s going to have broader implications. These codes are essentially about intended use, informed consent, collaboration tools, making sure that there’s transparency, and that you can show end to end how you’re using people’s data. The expectation is that there will be even more detailed, more defined regulations, and there’s more coming. If you work in AI, it’s an interesting time.”
“I think that people who are working in the market research and insights industry are trying to understand AI at present. They’re trying to understand what these tools can do for them. And they’re trying to understand how they can use it in a privacy-safe way,” says Whitely. “The government is interested in ensuring that systems are free of bias, for example, so they’re not making decisions that might favor one group over another. They’re making sure that consumer data is protected. It’s a far-reaching order that I think will shape a lot of efforts over the coming years as we see it come to fruition, taking it from these early stages to how these AI systems are being tested and evaluated.”
Let’s turn to the applicability to the particular businesses that we have, such as market research and health care. Where would you dive in?
Whitely observes, “A lot of these industries are still deciding where AI fits in the workflow. Some of the research companies that are utilizing it right now might use AI to do summaries of studies that they execute. I think we saw at the AI Congress that it’s necessary to include in these systems the evidence that supports the summary that the AI generates. If there’s a summary of what a certain consumer group might think, behind that, there might be some videotaped interviews that the research companies did with the consumers that support that recommendation. Research companies know that generative AI can hallucinate, so they want to make sure that there’s evidence behind every analytical summary and recommendation that the AI might generate.”
The key being not to step over the line. What about that delicate balance in the healthcare field?
“We talked a lot about empathy as well, how important that was,” says Speck. “And actually, there has been a very interesting study that came from JAMA, the Journal of the American Medical Association, where they tested a chatbot to deliver follow-up notes and conversations. They tested physician-driven notes, and they tested the chatbot notes. And guess what? The chatbot scored higher. Now I am shocked, because I will say I was the first one to be skeptical about whether the chatbot can deliver empathetic notes. But what they found was that where the physicians had no time and were buried in paperwork, the chatbot was able to do 200 words. This recent evidence is showing that there might be some early positive hope that AI can actually be empathetic.”
Does this prove to us that the more human we can be with AI, the more we can get out of it? We can make doctors more human by adding AI. It’s still the human element, though, that needs to oversee the process.
Whitely adds, “Testing out these tools of generative AI, we always keep in mind: do we have good data behind it? Right now, as we’ve seen, a generative AI can tell you maybe five or six different customer segments that might be appropriate for a new product without any research input going into the system, but that’s not really talking to consumers. It’s not really gathering data. We want to make sure that kind of output is also super relevant for the case at hand. And with the case that Christina brought up, we want to make sure that using all the different data that comes up will bring about the best potential empathetic recommendation for that case.”
Check out the full video from TMRE Continued as Christina Speck and Chris Whitely discuss hybrid approaches to insights and AI, and what the future might look like with this new capability.