As the metaverse expands, companies will encounter a massive opportunity to deploy incredibly lifelike virtual influencers (VIs). As VIs proliferate, organizations will establish mascots and ambassadors that ‘live’ in the metaverse. These company-‘born’ VIs should take brands’ messaging and marketing to a new level.
Companies will have choices. No longer will brands be forced to compete for a human influencer’s time or attention. Nor will marketers lie awake at night hoping influencers convey talking points and CTAs accurately.
Presumably, in the near term, only companies with large budgets will create VIs in-house. Employing teams of AI, CGI and creative personnel, larger companies will ‘birth’ and ‘raise’ VIs in their image. Smaller companies’ VIs will likely be created and controlled by third-party creators.
More safety but…
Still, increased safety should emerge as a benefit of VIs for all companies that deploy them. With control of a VI, or of the people who create it, owner-companies should enjoy high levels of professionalism and safety.
Even so, it is critical to consider strategy and intentionality. As in cinema, where almost everything actors do and say on screen is scripted, VIs should work from careful plans that dictate every action and piece of content that emerges from them.
An ideal situation has PR pros in the writers’ room as VIs’ actions and words are created. At the very least, PR should review VI scripts in advance of their going ‘live.’
One set of rules
In terms of how brands work with VIs, the same rules that govern human influencers apply: there should be a morality clause, built-in basic protections and brand-safety checks.
In addition, a successful VI effort will include managed competitive scope, properly disclosed sponsorships and an orderly, intentional content creation process.
Although, as noted above, VIs should offer a higher degree of safety than human influencers, the potential for gaffes and impromptu crises exists. This will increase as real-time interaction becomes more common in the metaverse.
Such crises will likely be a direct result of too many variables and limited real-time control of VIs. To avoid gaffes, managers, ideally with PR input, should ensure VIs’ content feels and looks organic while still communicating brand narratives clearly.
Moreover, the brand’s CTA should address KPIs and goals while minimizing distractions in VI content. Such distractions include other logos or items that may shift focus to something outside the company message.
Another consideration is how quickly brands adopt AI. Initially, teams will manage VIs. As time passes, however, companies will want VIs that seem more convincing and can exist in multiple places simultaneously.
Similarly, audiences will demand VIs seem ‘real’ enough to be worthy of attention and interaction. This demand will push AI forward and accelerate the development of virtual-interaction capabilities.
Eventually, the metaverse could essentially become one big Turing test, constantly requiring that VIs ‘think.’
A final consideration in the growing metaverse is transparency. Should government regulation dictate that VIs disclose that they are not real?
It’s a tricky question that requires some consideration. Then again, does it matter?
First, what is the difference between a paid actor in a show and a cartoon character? Really, just the format.
In addition, how do you define real? Is a team of people behind a VI real? Is a single individual who manages the VI real, even though she operates under an alias?
Similarly, there are degrees of what ‘real’ means. If you run into Tony the Tiger in the metaverse, it is safe to say you know he is not real.
Perhaps, then, true disclosure should come down to identifying the individual or organization behind an account, the same way the App Store discloses the developer of every app.
Eric Dahan is co-founder and CEO, Open Influence