Facebook Builds AI Tool To Identify Harmful Behavior of People

The latest development can significantly improve software testing for complex environments, particularly in areas related to privacy, safety and security

Facebook has developed a machine learning tool that trains bots to realistically simulate the behavior of real people on social media platforms. The new development can significantly improve software testing for complex environments, particularly in areas related to privacy, safety and security.

Mark Harman, Research Scientist at Facebook AI, explained, "We are using a combination of online and offline simulation, training bots with anything from simple rules and supervised machine learning to more sophisticated reinforcement learning."
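Harman's description covers a spectrum of approaches, from hand-written rules to learned policies. The sketch below is purely illustrative and is not Facebook's code; the class names, action list and reward update are assumptions, included only to show the difference between a rule-driven bot and one whose behavior is shaped by a simple reinforcement-style value update.

```python
# Illustrative sketch only: the training approaches Harman describes (simple
# rules through reinforcement learning) are not public code. All names here
# (RuleBot, LearnedBot, ACTIONS, choose_action) are hypothetical.
import random

ACTIONS = ["send_message", "comment", "post", "friend_request", "idle"]

class RuleBot:
    """A bot driven by a simple hand-written rule."""
    def choose_action(self, num_friends: int) -> str:
        # Rule: bots with few friends prioritize making connections.
        return "friend_request" if num_friends < 5 else random.choice(ACTIONS)

class LearnedBot:
    """A bot driven by a learned policy (epsilon-greedy over action values)."""
    def __init__(self, epsilon: float = 0.1):
        self.epsilon = epsilon
        self.values = {a: 0.0 for a in ACTIONS}  # estimated reward per action

    def choose_action(self) -> str:
        if random.random() < self.epsilon:
            return random.choice(ACTIONS)             # explore
        return max(self.values, key=self.values.get)  # exploit best-known action

    def update(self, action: str, reward: float, lr: float = 0.05) -> None:
        # Toy value update standing in for a full reinforcement learning loop.
        self.values[action] += lr * (reward - self.values[action])
```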

Overcoming A Key Hurdle

For large-scale social networks, testing a proposed code update or new feature is a complex and challenging task. According to Harman, people's behavior evolves and adapts over time and is different from one geography to the next, which makes it difficult to anticipate all the ways an individual or an entire community might respond to even a small change in their environment.

Artificial Intelligence (Representational Picture: Pixabay)

Facebook researchers have now developed Web-Enabled Simulation (WES) to overcome this problem. "WES is a new method for building the first highly realistic, large-scale simulations of complex social networks," Harman wrote in a blog post.

Bots are trained to interact with each other using the same infrastructure as real users, so they can send messages to other bots, comment on bots' posts, publish posts of their own, and send friend requests to other bots. The bots cannot engage with real users, and their behavior has no impact on real users or their experience of the platform.
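To make the isolation idea concrete, here is a minimal, hypothetical sketch: simulated accounts share one message-passing platform, and any bot action that targets a non-bot account is blocked. The names (SimUser, IsolatedPlatform, send_message) are assumptions for illustration, not WES or Facebook APIs.

```python
# Minimal sketch of bot isolation: bots interact only with other bots, and any
# attempt to reach a real user is rejected. Hypothetical names, not WES code.
from dataclasses import dataclass, field

@dataclass
class SimUser:
    user_id: str
    is_bot: bool
    inbox: list = field(default_factory=list)

class IsolatedPlatform:
    def __init__(self):
        self.users = {}

    def register(self, user: SimUser) -> None:
        self.users[user.user_id] = user

    def send_message(self, sender_id: str, recipient_id: str, text: str) -> None:
        sender = self.users[sender_id]
        recipient = self.users[recipient_id]
        # Isolation guard: bot actions may never reach real users.
        if sender.is_bot and not recipient.is_bot:
            raise PermissionError("bots cannot interact with real users")
        recipient.inbox.append((sender_id, text))

platform = IsolatedPlatform()
platform.register(SimUser("bot_a", is_bot=True))
platform.register(SimUser("bot_b", is_bot=True))
platform.register(SimUser("real_user", is_bot=False))

platform.send_message("bot_a", "bot_b", "hello")     # allowed: bot-to-bot
# platform.send_message("bot_a", "real_user", "hi")  # would raise PermissionError
```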

Isolated from Real Users

WES can automate interactions among thousands, or even millions, of bots, deploying them on the platform's actual production codebase. The bots can interact with one another but are isolated from real users.

This real-infrastructure simulation ensures that the bots' actions are faithful to the effects that would be witnessed by real people on the platform. "With WES, we are also developing the ability to answer counterfactual and what-if questions with scalability, realism, and experimental control," said Facebook. The company has used WES to build WW, a simulated Facebook environment running on that same production codebase.

(With inputs from agencies)
