Mark Zuckerberg wants to build a metaverse voice assistant for Meta that can rival Alexa and Siri

Meta, the company formerly known as Facebook, has shifted its long-term strategy away from its social media apps to focus on the metaverse, a virtual world where people wearing virtual/augmented reality headsets can talk to each other’s avatars, play games, hold meetings, and participate in social activities.

That has raised a lot of questions: what this means for a company that has focused on social media for nearly two decades, whether Meta can actually achieve its new goal of building a metaverse future, and what that future will look like for the billions of people who use Meta’s products every day. On Wednesday, Meta CEO Mark Zuckerberg revealed some of the answers during a keynote about the company’s latest developments in AI.

One of Meta’s main goals is to develop advanced voice assistant AI technology – think Alexa or Siri, but smarter – that the company plans to use in its AR/VR products, like the Quest headset (formerly Oculus), the Portal smart display, and Ray-Ban smart glasses.

“The kind of experiences you’ll have in the metaverse are beyond what’s possible today,” Zuckerberg said. “That will require advancements in many areas, from new hardware devices to software to build and explore the world. And the key to unlocking a lot of these advancements is AI.”

The presentation took place during one of the most difficult times in the company’s history. Meta’s stock price has suffered one of the steepest declines in its history, its advertising model was shaken by Apple’s mobile privacy changes, and it faces the potential threat of government regulation.

So it makes sense that the company is looking toward a future built on sophisticated language-processing AI.


Mark Zuckerberg (left), shown as a virtual reality avatar, demonstrates how his company’s new AI tools let you create virtual environments by describing what you want to see.

According to a Meta spokesperson, this is the first time Meta has held an event solely to showcase its AI development. The company admits, however, that this AI is still under development and not yet widely deployed. The demonstrations were exploratory: Meta’s demo videos on Wednesday included a disclaimer at the bottom noting that many of the images and examples were purely illustrative and not actual products. (Also: avatars in the metaverse still have no legs.)

Still, if Meta puts its world-class computer science researchers behind these tools, it may well succeed. And if fully realized, these technologies could change the way we communicate, both in real life and in virtual reality. These developments also raise significant privacy concerns about how much of the personal data collected by AI-powered wearables is stored and shared.

Here are a few things to know about how Meta is building voice assistants using new AI approaches, as well as the privacy and ethical concerns raised by a powerful AI-driven metaverse.

Meta is building its own ambitious voice assistant for AR/VR

On Wednesday, it became clear that Meta sees voice assistants as an important part of the metaverse, and that it knows its voice assistants will need to be more conversational than those we have today. For example, most voice assistants can easily answer the question, “What’s the weather like today?” But if you ask a follow-up question, such as “Is it hotter than last week?”, most voice assistants struggle.
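The gap between the two questions comes down to conversation state. A toy sketch (not Meta’s implementation; the classes, the temperature values, and the matching logic are all invented for illustration) shows why the follow-up fails without stored context:

```python
# Toy sketch: why follow-up questions need conversation state.
# A stateless assistant forgets the prior turn; a stateful one can
# resolve "it" and "last week" against stored context.

WEATHER = {"today": 72, "last_week": 65}  # made-up temperatures (F)

class StatelessAssistant:
    def ask(self, question):
        if "weather" in question:
            return f"It's {WEATHER['today']} degrees today."
        return "Sorry, I don't understand."  # no memory of prior turns

class StatefulAssistant:
    def __init__(self):
        self.context = {}  # carries topics/entities across turns

    def ask(self, question):
        if "weather" in question:
            self.context["topic"] = "temperature"
            return f"It's {WEATHER['today']} degrees today."
        if "hotter than last week" in question and self.context.get("topic") == "temperature":
            diff = WEATHER["today"] - WEATHER["last_week"]
            return f"Yes, {diff} degrees hotter than last week." if diff > 0 else "No."
        return "Sorry, I don't understand."

bot = StatefulAssistant()
bot.ask("What's the weather like today?")
print(bot.ask("Is it hotter than last week?"))  # resolved via stored context
```

A real assistant resolves references with learned language models rather than string matching, but the underlying requirement is the same: the second turn is only answerable because the first turn left something behind.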

Meta wants its voice assistant to pick up better contextual clues in conversation, along with other data points it can collect about our bodies, like gaze, facial expressions, and hand gestures.

“To support true world creation and exploration, we need to go beyond the current level of smart assistant technology,” said Zuckerberg.

While Meta’s Big Tech rivals (Amazon, Apple, and Google) already have popular voice assistant products like Alexa on mobile or standalone hardware, Meta does not, aside from some limited voice commands on its Ray-Ban, Oculus, and Portal devices.

“When we put glasses on our faces, it will be the first time an AI system can actually see the world from our perspective: see what we see, hear what we hear, and more,” Zuckerberg said. “So the capabilities and expectations we have for AI systems will be much higher.”

To meet those expectations, the company says it’s developing a project called CAIRaoke, a neural AI model (a statistical model loosely inspired by the networks of neurons in the human brain) to power its voice assistant. The model uses “self-supervised learning,” which means that instead of being trained on large labeled data sets like many other AI models, it can essentially learn from raw data on its own.

“Previously, all the blocks were built separately, and then you would glue them together,” Joëlle Pineau, the managing director of Meta’s AI research lab, told Recode. “As we move to self-supervised learning, we have the potential to learn the whole conversation end to end.”

As an example of how this technology could be applied, Zuckerberg, appearing as a virtual reality avatar, introduced a tool the company is working on called “BuilderBot” that lets you say what you want to see in your virtual reality environment. For example, saying “I want to see a palm tree over there” can make an AI-generated palm tree pop up where you want it, based on what you say, your gaze, your controller or hand position, and a general sense of context, according to the company.
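The core idea of fusing a spoken command with where the user is looking can be illustrated with a minimal sketch (the function name, the regex-based parsing, and the scene representation are all assumptions made for this example, not Meta’s code):

```python
# Toy sketch of the BuilderBot idea: fuse a spoken command with a gaze
# direction to decide what to place and where in a virtual scene.

import re

def place_from_command(utterance, gaze_point, scene):
    """Parse 'I want to see a <thing> over there' and drop the thing
    at the 3D point the user is looking at."""
    match = re.search(r"see an? (\w+(?: \w+)?) over there", utterance)
    if not match:
        return scene  # command not understood; scene unchanged
    scene[match.group(1)] = gaze_point  # position resolved from gaze
    return scene

scene = {}
scene = place_from_command("I want to see a palm tree over there", (4.0, 0.0, 2.5), scene)
print(scene)  # {'palm tree': (4.0, 0.0, 2.5)}
```

In the real system, speech understanding and object generation would be handled by learned models; the sketch only shows why the spoken words alone are not enough — “over there” is meaningless without a second signal, like gaze, to ground it.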

Meta still needs to do more research to make this possible, and it’s working on so-called “egocentric perception,” that is, understanding the world from a first-person perspective, to build toward it. The company is currently testing the model’s technology in its Portal smart display.

Eventually, the company also hopes to capture inputs other than voice, like a user’s movements, location, and body language, to build even more intelligent virtual assistants that can anticipate what the user wants.

AI in the metaverse will present ethical challenges

Concerns about privacy have long haunted Meta and other big tech companies because their business models are built around collecting users’ data: our browsing history, preferences, personal contact information, and so on.

Those concerns are even greater with AR/VR, privacy experts say, because it can track more sensitive data, like eye movements, facial expressions, and body language.

Some AR/VR and AI ethicists worry about how personal these data inputs can become, what kinds of predictions AI can make with them, and how that data will be shared.

“Eye-tracking data, gaze data, can determine whether you’re feeling aroused, whether it’s sexual interest or a loving gaze,” says Kavya Pearlman, founder of the XR Safety Initiative, a nonprofit that advocates for the ethical development of technologies like VR. “Who has access to this data? What are they doing with this data?”

For now, the answers to those questions aren’t entirely clear, although Meta says it’s committed to addressing such concerns.

Zuckerberg said the company is working with human rights, civil rights and privacy experts to build a “system based on fairness, respect, and human dignity.”

But given the company’s track record of privacy violations, some tech ethicists are skeptical.

“From a purely scientific perspective, I’m really excited. But because it’s Meta, I’m scared,” Pearlman said.

In light of people’s concerns about privacy in the metaverse, Meta’s Pineau said that by giving users control over what data they share, the company can help ease people’s worries.

“People are willing to share information when there is value that they derive from it. And so the concepts of autonomy, control, and transparency are what really give users more say in how their data is used,” she said.

Aside from privacy concerns, some Meta AR/VR users worry that even if an AI-powered metaverse works, it might not be accessible and safe for everyone. Already, some women have complained of experiencing sexual harassment in the metaverse, such as a beta tester of Meta’s Horizon Worlds social VR app who reported being groped by another user. Meta has since established the equivalent of a 4-foot virtual safety bubble around avatars to help prevent “unwanted interactions.”

If Meta achieves its goal of using AI to make its AR/VR environments more vivid and seamless parts of our daily lives, many issues surrounding accessibility, safety, and discrimination will arise. And although the company says it has been thinking about these concerns from the start, its track record with its other products isn’t reassuring.
