"With great ability comes great responsibility." You don't need to be a Marvel fan to recognize this phrase, popularized by the Spider-Man series. And although its origin is related to superhuman powers, it is relevant to remember it when we talk about the rise of Generative AI.
Generative AI is not, strictly speaking, an emerging technology, but the launch of ChatGPT put it in the hands of 100 million users in just two months. For many, it has been like gaining a superpower. And as with superpowers, the key is how they are used: there are opportunities to make great progress and drive positive change, but unfortunately also to do harm.
Right now, large companies face the challenge of deciding how they will apply this technology, at a time of persistent economic uncertainty and rising inflation that leaves consumers cautious about how to spend their money.
Taking both factors into account, generative AI has the potential to give brands an edge in the fight for consumer attention. However, a balanced view is needed, one that weighs the opportunities as well as the risks and approaches both with an open mind.
The impact of generative AI on insights collection
The marketing research industry is no stranger to change. The tools and methodologies available to consumer insights specialists have experienced rapid development in recent decades.
At this point, we can only guess at the magnitude and speed of the changes that increasingly accessible generative AI will bring. However, it is essential to define certain fundamentals that will help decision makers know how to act quickly as more information becomes available.
Ultimately, it's all about asking the right questions.
What are the opportunities?
Currently, the main opportunity provided by generative AI is the optimization of productivity. It can dramatically speed up the processes of generating ideas, collecting data, and writing texts such as drafting emails, reports, or articles. Creating efficiencies in these areas frees up more time for tasks that require strong human expertise.
Acceleration in information collection
In the area of data specifically, one use case with great potential is summarization. For example, the Stravito platform already uses generative AI to create automatic summaries of individual market research reports, eliminating the need to manually write a description for each one.
Additionally, we see potential to take this use case further: summarizing large amounts of information to quickly answer business questions in an easy-to-read format. For example, this could mean simply typing a question into the search bar and getting a compact answer based on the company's internal knowledge.
For brands, this would mean getting simple answers more quickly, and it could also take some of the legwork out of digging into more complicated questions.
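To make the idea concrete, here is a minimal sketch of what such a question-answering flow might look like. It is illustrative only and assumes a generic chat-completion API (the OpenAI Python SDK is used as an example); the document snippets, model name, and helper function are hypothetical, not a description of how Stravito or any other platform actually implements this.

```python
# Illustrative sketch only: answering a business question from internal research
# snippets by passing them to a chat-completion model. The snippets, model name,
# and prompt are hypothetical examples, not any vendor's actual implementation.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# In practice these would be retrieved from the company's own research library.
internal_snippets = [
    "Report A (2023): 62% of surveyed US millennials rated the new concept favorably.",
    "Report B (2023): Price sensitivity was the main barrier cited in focus groups.",
]

def answer_from_internal_knowledge(question: str) -> str:
    """Return a compact answer grounded only in the supplied internal snippets."""
    context = "\n".join(internal_snippets)
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; any chat-capable model would do
        messages=[
            {"role": "system",
             "content": "Answer using ONLY the internal research excerpts provided. "
                        "If they do not contain the answer, say so."},
            {"role": "user",
             "content": f"Excerpts:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return response.choices[0].message.content

print(answer_from_internal_knowledge(
    "How did US millennials react to our latest concept test?"))
```

Constraining the model to the supplied excerpts is one common way to mitigate, though not eliminate, the reliability risks discussed later in this article.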
Democratization of insights through better self-service
Generative AI could also allow every stakeholder in the company to access information without needing to involve an insights manager directly each time. By removing barriers to entry, generative AI could help organizations that want to integrate consumer insights more deeply into their daily operations.
Additionally, it could help address recurring concerns about giving all stakeholders access to market research, such as the risk of asking the wrong questions. In this use case, generative AI can help business stakeholders without research experience ask better questions by suggesting relevant ones based on their search query.
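As a brief illustration of this idea, the sketch below asks a chat model to propose better-scoped research questions from a rough stakeholder query. The prompt wording, function name, and model are assumptions for the example, not a documented feature of any product.

```python
# Illustrative only: turning a rough stakeholder query into sharper research questions.
from openai import OpenAI

client = OpenAI()

def suggest_research_questions(raw_query: str, n: int = 3) -> str:
    """Ask the model to propose n well-formed research questions for a vague query."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model
        messages=[
            {"role": "system",
             "content": "You help non-researchers phrase better consumer-research questions."},
            {"role": "user",
             "content": f"Suggest {n} well-scoped research questions for: '{raw_query}'"},
        ],
    )
    return response.choices[0].message.content

print(suggest_research_questions("gen z snacking"))
```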
Personalized communication for internal and external audiences
Another opportunity that generative AI offers is the ability to adapt communication to both internal and external audiences.
In the context of insights, there are several potential applications. It could make knowledge sharing more impactful by allowing the communication of findings to be tailored to different business stakeholders within the organization. It could even be used to tailor reports for research agencies as a way to speed up the research process and cut unnecessary steps.
What are the risks?
Generative AI can be an efficient tool for insights teams, but it also carries several risks that organizations must consider before implementing it.
Prompt dependency
A foundational risk is prompt dependency. Generative AI is statistical, not analytical, meaning it works by predicting the most likely information to come next. If you give it the wrong prompt, you could get a response that, although very convincing, is incorrect.
Reliability
What becomes even more complex is how generative AI can mix correct and incorrect information. In low-risk situations, this could be entertaining. But in cases where high-value business decisions are made, the input underlying each decision must be reliable.
Furthermore, many of the issues related to consumer behavior are complex. While a question like “How did US millennials react to our latest concept test?” might yield a clear answer, deeper questions about human values or emotions often require a more nuanced perspective. Not all questions have a single correct answer, and when trying to summarize large bodies of research, crucial details can be ignored.
Transparency
Another primary risk to watch out for is the lack of transparency about how the algorithms are trained. For example, ChatGPT can't always tell you where its responses come from; and even when it can, those sources may be impossible to confirm or even locate.
Additionally, because AI algorithms, generative or not, are trained by humans on pre-existing information, they can be biased. This can lead to responses that are racist, sexist, or otherwise offensive. For organizations looking to challenge biases in their decision-making and create a better world for consumers, this is an area where generative AI falls short.
Security
Common use cases for ChatGPT include generating emails, meeting agendas, and reports. But entering the details needed to generate those texts can put confidential company information at risk.
In fact, an analysis by the security company Cyberhaven of 1.6 million knowledge workers across industries found that 5.6% had tried ChatGPT at least once at work and 2.3% had entered sensitive company data into ChatGPT.
Companies such as JP Morgan, Verizon, Accenture and Amazon have banned their staff from using ChatGPT at work for security reasons. And recently, Italy became the first Western country to ban ChatGPT while investigating privacy issues, which has drawn the attention of privacy regulators in other European countries.
For knowledge teams, or anyone working with sensitive information or proprietary research, it is essential to be aware of the risks of entering information into a tool like ChatGPT, and to stay up to date on both your organization's internal data security policies and the policies of providers such as OpenAI.
We firmly believe that the future of consumer understanding will continue to require combining human expertise with powerful technology. Even the most powerful technology in the world will be useless if no one really wants to use it.
Therefore, brands should focus on responsible experimentation, on finding the right problems to solve with the right resources, not simply on implementing technology for its own sake. "With great power comes great responsibility." Now is the time for brands to decide how they will use that power.