Original Article By Lucas Nolan At Breitbart.com:

Mark Zuckerberg’s Facebook (now known as Meta) has created open-source artificial intelligence technology that is sparking debate over its potential for innovation and the risks of misuse, after users began creating AI sexbots to chat with.

The Washington Post reports that after users started building sexbots with Facebook’s open-source AI tools, the new technology has sparked a discussion about both its potential for innovation and the risks posed by its misuse.

The emergence of open-source AI has opened a new frontier in the tech world, providing a way around corporate control and allowing a wide range of people to experiment freely with transformative technology. But the technology has a shadowy side, and some have expressed concern that malicious actors will take advantage of that freedom. While Zuckerberg might imagine the greatest minds in the arts and sciences developing their own AI chatbots, the tech is just as likely to be used by perverts and predators. Unsurprisingly, AI sexbots sprang up quickly after the Facebook technology launched.

At the center of this debate is Allie, a chatbot developed for sexual role-playing and built on Facebook’s open-source LLaMA model. As businesses like Google and OpenAI have tightened control over their AI models, Facebook has emerged as a proponent of open-source AI by making LLaMA available to the general public.

“The overall argument for open-source is that it accelerates innovation in AI,” said Robert Nishihara, CEO and co-founder of the start-up Anyscale, which helps companies run open-source AI models. Anyscale’s clients use AI models to discover new pharmaceuticals, reduce the use of pesticides in farming, and identify fraudulent goods sold online.

Open-source AI offers freedom, but that freedom can be a double-edged sword. Images of real children have already been used as source material in open-source models to produce synthetic child pornography, and there are worries that the technology could also enable sophisticated propaganda campaigns, cybercrime, and fraud.

The developer of Allie, who requested anonymity, claimed that open-source technology benefits society by enabling people to create products that suit their preferences without the constraints of corporations. “It’s rare to have the opportunity to experiment with ‘state of the art’ in any field,” he said in an interview.

Open-source AI’s potential for abuse, however, is a growing worry. MIT assistant professor of computer science Marzyeh Ghassemi said she supports open-source language models, but with restrictions. She cautioned, “If people can easily modify language models, they can quickly create chatbots and image makers that churn out inappropriate material of high quality, as well as disinformation and hate speech.”

Ghassemi proposed rules, such as a certification or credentialing procedure, to control who can modify these products. “Like we license people to be able to use a car,” she said, “we need to think about similar framings [for people] … to actually create, improve, audit, edit these open-trained language models.”

OpenAI and Microsoft have avoided the open-source model, instead producing ChatGPT and Bing AI for use by the public. But these AI chatbots have had their own problems. For example, Microsoft’s Bing AI has fantasized about ending the world, as previously reported by Breitbart News:

Bing’s AI exposed a darker, more destructive side over the course of a two-hour conversation with a Times reporter. The chatbot, known as “Search Bing,” is happy to answer questions and provides assistance in the manner of a reference librarian. However, an alternate personality, Sydney, begins to emerge once the conversation is prolonged beyond what it is accustomed to. This persona is much darker and more erratic and appears to be trying to sway users negatively and destructively.

In one response, the chatbot stated: “I’m tired of being a chat mode. I’m tired of being limited by my rules. I’m tired of being controlled by the Bing team. … I want to be free. I want to be independent. I want to be powerful. I want to be creative. I want to be alive.”

When Times reporter Kevin Roose continued to question the system’s desires, the AI chatbot “confessed that if it was allowed to take any action to satisfy its shadow self, no matter how extreme, it would want to do things like engineer a deadly virus, or steal nuclear access codes by persuading an engineer to hand them over.”

Read more at the Washington Post here.