
The challenges of a new profession: an undercover VR moderator

Metaverse platforms will need large teams of security guards to keep the peace and ensure a good experience (photo: CC0 Public Domain)

Social networking companies still haven’t found adequate mechanisms to protect users from dangerous content, but the metaverse is already gaining momentum. How will we protect ourselves from aggression in the new virtual worlds?

Ravi Yekanti puts on his headset to go to work. He does this every day, yet he never knows what the hours spent in virtual reality will bring. Who might he meet? Will a child’s voice hurl a racist threat at him? Will some cartoonish avatar try to hit him?

Yekanti adjusts the bulky headset as he sits at a desk in his office in Hyderabad, India, and prepares to immerse himself in an “office” full of animated avatars. His job is to make sure everyone in the metaverse is safe and having a good time. And he’s proud of it.

A new profession

The young man represents a new profession that arrived with the rise of new technologies: the metaverse VR moderator. Digital safety in this virtual zone is like a wild jungle: sexual assault, harassment, and attacks on children abound. The issue has become more pressing since Meta lowered the minimum age for its Horizon Worlds platform from 18 to 13. The platform does have a set of features and rules designed to protect younger users, but someone still has to enforce those rules and catch the aggressors who try to circumvent them. And there will always be individuals who try.

Yekanti has been working as a moderator and virtual reality training manager since 2020, having taken up this work after a “traditional” job moderating text and images on social networks. He was hired by WebPurify, a company that provides content moderation services to Internet companies such as Microsoft and Play Lab, and he works with a team based in India, mostly on mainstream metaverse platforms.

A longtime Internet enthusiast, Yekanti says he enjoys putting on a VR headset, meeting people from around the world, and advising metaverse creators on how to improve their games and virtual “worlds.” As a VR moderator, he acts as a kind of private security agent, interacting with the avatars of very real people to spot misconduct in virtual reality. He does not publicly disclose his moderator status; instead, he operates undercover, posing as an ordinary user so he can witness violations first-hand.

So far, traditional moderation tools, such as AI filters that flag certain words, don’t work well in virtual worlds. That makes moderators like Yekanti the main way to ensure safety in the digital world, and their work grows more important every day.
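To see why, consider a minimal sketch of the kind of keyword filter used on text platforms. The word list and function names here are invented for illustration; this is not any platform’s actual system. The filter works only because content on the 2D web arrives as stored text that can be scanned; live speech and avatar gestures in VR give it nothing to match against.

    import re

    # Hypothetical blocklist; real systems use far larger curated lists plus ML models.
    BLOCKED_TERMS = {"badword", "worseword"}

    def flag_text(message: str) -> bool:
        """Return True if a stored text message contains a blocked term."""
        tokens = re.findall(r"[a-z']+", message.lower())
        return any(token in BLOCKED_TERMS for token in tokens)

    print(flag_text("a post containing badword"))  # True: stored text can be scanned
    print(flag_text("a perfectly polite post"))    # False

    # In VR the same abuse arrives as transient voice audio or avatar motion,
    # not as scannable text, so a filter like this never even sees it.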

The safety of the metaverse

The issue of metaverse safety is complex. Journalists have reported cases of offensive comments, fraud, sexual assault, and even kidnapping. The biggest virtual platforms carefully hide their statistics about bad user behavior. But Yekanti says he encounters reportable violations every day.

Meta declined to comment on such matters, saying only that it has a list of safety tools and policies, and that it has trained safety specialists at Horizon Worlds. A Roblox spokesperson says the company has “a team of thousands of moderators who monitor for inappropriate content 24/7 and investigate reports submitted by the community.”

“Social media is the building block of the metaverse, and we need to treat the metaverse as an evolution, as the next step of social media, not as something completely separate,” said Juan Londono, a policy analyst at the Information Technology and Innovation Foundation, a think tank in Washington.

But given the immersive nature of the metaverse, many tools built to deal with the billions of potentially harmful words and images on the 2D web don’t work well in VR. Live human moderators are among the most important solutions.

Cases in which adults with predatory intent try to build relationships of trust with minors are among the biggest challenges. Technology cannot yet proactively filter out and prevent this kind of abuse; it is left to users to report the bad behavior.

“If a company relies on users to report potentially traumatic things that have happened to them or potentially dangerous situations, it’s already too late,” said Delara Derakshani, a privacy attorney who worked at Meta’s Reality Labs until October 2022. “The burden should not fall on the children, and by the time a report is made, the potential trauma or damage has already been done.”

First line of moderation

The immersive nature of the metaverse means that rule-breaking behavior is multidimensional and difficult to capture in real time.

WebPurify, which until recently focused on moderating online text and images, has been offering services to virtual-worlds companies since the beginning of last year. The company recently brought in Alex Popken, the former head of Twitter’s online safety division. “We’re thinking about how to control VR and AR, which is kind of new territory because you’re really looking at human behavior live,” says Popken.

WebPurify’s employees are on the front lines of these new forms of online abuse, with racist and sexist comments being the most common. Yekanti says a moderator on his team spoke with a user who, upon learning she was Indian, offered to marry her in exchange for a cow.

Other incidents are more serious. Another moderator on Yekanti’s team came across a user making highly sexualized slurs about genitalia. And once, a user approached a moderator and appeared to grab him by the genital area. (The user claimed he was going for a high-five.)

Moderators follow detailed safety policies that spell out how to catch and report violations. One game the moderation team works on has a policy defining protected categories of people, based on characteristics such as race, ethnicity, gender, political affiliation, religion, sexual orientation, and refugee status. Yekanti says that “any form of negative commentary towards these protected groups will be considered hateful.” Moderators are trained to respond proportionately, using their own judgment: that could mean muting users who break the rules, removing them from the game, or reporting them to the company.
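As a rough illustration of how such a graduated policy could be encoded, here is a hypothetical sketch. The category names follow Yekanti’s description, but the escalation thresholds and function are invented for this example and are not WebPurify’s actual tooling.

    from enum import Enum

    # Protected characteristics listed in the policy Yekanti describes.
    PROTECTED_CATEGORIES = {
        "race", "ethnicity", "gender", "political affiliation",
        "religion", "sexual orientation", "refugee status",
    }

    class Action(Enum):
        MUTE = "mute the user in the session"
        REMOVE = "remove the user from the game"
        REPORT = "report the user to the company"

    def choose_action(category: str, prior_offenses: int) -> Action:
        """Pick a proportionate response; repeat offenders are escalated."""
        if category not in PROTECTED_CATEGORIES:
            raise ValueError(f"{category!r} is not a protected category")
        if prior_offenses == 0:
            return Action.MUTE
        if prior_offenses == 1:
            return Action.REMOVE
        return Action.REPORT

    print(choose_action("religion", prior_offenses=2).value)

In practice, the judgment call about what counts as a violation stays with the human moderator; only the escalation ladder lends itself to being written down this mechanically.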

Beyond sound judgment, moderators also need high emotional intelligence and a broad awareness of other cultures. Expectations about personal space and physical greetings, for example, vary across cultures and users.

Privacy trade-offs

Moderators work undercover so that users don’t change their behavior when they know a moderator is present. But this kind of moderation also means something else: it runs counter to users’ expectations of privacy.

A key part of the job is “keeping track of everything,” Yekanti says. Moderators record everything that happens in a game from the moment they join to the moment they leave, including conversations between players. This means they often eavesdrop on conversations even when players have no idea they are being watched and listened to.

“If we want platforms to have a practical role in terms of user safety, that may lead to some privacy trade-offs that users may not be comfortable with,” Londono says.

Derakshani, the former Meta attorney, says we need more transparency about how companies handle safety in the metaverse.
