
From marketing to design, brands adopt AI tools despite risk


Even if you haven’t tried artificial intelligence tools that can compose essays and poems or create new images on command, chances are the companies that make your household products are already starting to do so.

Mattel has put the AI image generator DALL-E to work coming up with ideas for new Hot Wheels toy cars. Used vehicle seller CarMax is summarizing thousands of customer reviews with the same “generative” AI technology that powers the popular chatbot ChatGPT.

Meanwhile, Snapchat is bringing a chatbot to its messaging service. And grocery delivery company Instacart is integrating ChatGPT to answer customers’ food-related questions.

Coca-Cola plans to use generative AI to help create new marketing content. And while the company hasn’t detailed how it plans to deploy the technology, the move reflects the growing pressure on businesses to adopt tools that many of their employees and consumers are already trying on their own.

“We must accept the risks,” James Quincey, CEO of Coca-Cola, said in a recent video announcing the company’s partnership with startup OpenAI, the creator of both DALL-E and ChatGPT, through an alliance led by the consulting firm Bain. “We need to intelligently embrace those risks, experiment, build on those experiments, drive scale, but not taking those risks is a pessimistic outlook to begin with.”

Indeed, some AI experts warn that businesses should carefully consider the potential damage to customers, society, and their own reputations before adopting ChatGPT and similar products in the workplace.

“I want people to think deeply before implementing this technology,” said Claire Leibowicz of The Partnership on AI, a nonprofit group founded and sponsored by major technology providers that recently issued a set of recommendations for companies producing AI-generated synthetic imagery, audio and other media. “They should be able to play around and tinker, but we should also think about what purpose these tools are serving.”

Some companies have been experimenting with AI for some time. Mattel disclosed in October that it was using OpenAI’s image generator as a customer of Microsoft, which has a partnership with OpenAI that enables it to integrate the startup’s technology into Microsoft’s cloud computing platform.

But by the time OpenAI released ChatGPT, a free public tool, on November 30, widespread interest in generative AI tools had begun to seep into workplaces and executive suites.

“ChatGPT really brought it home how powerful they were,” said Eric Boyd, a Microsoft executive who leads its AI platform. “This has changed the conversation in a lot of people’s minds, where they really get it on a deeper level. My kids use it and my parents use it.”

However, there is reason for caution. While text generators like ChatGPT and Microsoft’s Bing chatbot can make the process of writing emails, presentations, and marketing pitches faster and easier, they also have a tendency to confidently present misinformation as fact. Image generators trained on vast repositories of digital art and photography have raised copyright concerns from the original creators of those works.

“For companies that are really in the creative industry, if they want to make sure that they have copyright protection for those models, it’s still an open question,” said Anna Gressel, a lawyer at the law firm Debevoise & Plimpton who advises businesses on how to best use AI.

One safe way to use the tools, Gressel said, is to think of them as a brainstorming “idea partner” that won’t produce the final product.

“It helps to create mock-ups that will then be transformed by a human into something that is more concrete,” she said.

That also helps ensure humans are not simply replaced by AI. Forrester analyst Rowan Curran said the tools should speed up some of the drudgery of office tasks, much like past innovations such as word processors and spell checkers, rather than putting people out of work as some fear.

“Ultimately it’s part of the workflow,” Curran said. “It’s not like we’re talking about having a large language model just build an entire marketing campaign and launch it without expert senior marketers and all kinds of other controls.”

Integrating consumer-facing chatbots into smartphone apps gets a bit trickier, Curran said, requiring guardrails around technology that can respond to users’ questions in unexpected ways.

Public awareness has fueled increased competition among the cloud computing providers Microsoft, Amazon and Google, which sell their services to large organizations that need massive computing power to train and operate AI models. Microsoft announced earlier this year that it was investing billions of dollars in its partnership with OpenAI, although it also competes with the startup as a direct provider of AI tools.

Google, which has pioneered advances in generative AI but has been cautious about introducing them to the public, is now racing to capitalize on its commercial potential, including with its upcoming Bard chatbot. Facebook parent Meta, another AI research leader, builds similar technology but doesn’t sell it to businesses the way its bigger tech peers do.

Amazon has taken a more muted tone, but makes its ambitions clear through its partnerships — most recently an expanded collaboration between its cloud computing division AWS and the startup Hugging Face, maker of a ChatGPT rival called Bloom.

Hugging Face decided to double down on its Amazon partnership after seeing an explosion in demand for generative AI products, said Clément Delangue, the startup’s co-founder and CEO. Delangue contrasted its open approach with that of competitors such as OpenAI, which does not disclose its code and datasets.

Hugging Face hosts a platform that allows developers to share open-source AI models for text, image and audio tools, which can form the foundation for building a variety of products. That transparency is “really important because that’s the way for regulators, for example, to be able to understand and regulate these models,” he said.

Delangue said it’s a way for “underrepresented people to understand where biases can occur (and) how the model has been trained,” in order to reduce bias.
