Last Updated on July 27, 2020
A groundbreaking new language-generation artificial intelligence model is already being “cancelled” by left-wing activists for generating speech deemed racist, sexist, antisemitic, and “unsafe.”
Just a few years ago, the idea of sentient AI racism may have been seen as a joke, but Elon Musk-backed research laboratory OpenAI’s GPT-3 language model has become a lightning rod for accusations of White male supremacy.
GPT-3 is capable of synthetically generating stories, songs, poems, essays, and datasheets with an unprecedented level of detail and accuracy.
https://twitter.com/quasimondo/status/1284509525500989445
Tech developer Arram Sabeti published a lengthy blog post in early July showcasing the various creative works he prompted GPT-3 to compose, all of which were virtually indistinguishable from works created by human beings.
Playing with GPT-3 feels like seeing the future. I’ve gotten it to write songs, stories, press releases, guitar tabs, interviews, essays, technical manuals. It's shockingly good. https://t.co/RvM6Qb3WIx
— Arram (@arram) July 9, 2020
GPT-3 is even capable of writing complex computer code from nothing more than the most basic of prompts.
This is mind blowing.
With GPT-3, I built a layout generator where you just describe any layout you want, and it generates the JSX code for you.
W H A T pic.twitter.com/w8JkrZO4lk
— Sharif Shameem (@sharifshameem) July 13, 2020
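To give a sense of how such demos work under the hood, here is a minimal sketch of the kind of request a developer with access to the private beta might have sent through OpenAI’s 2020-era Python client: a plain-English description plus one worked example, with GPT-3 asked to continue the pattern. The prompt text, engine name, and settings below are illustrative assumptions, not Shameem’s actual code.

import openai

# Sketch only: assumes beta API access and the 2020-era `openai` Python client.
openai.api_key = "YOUR_API_KEY"  # placeholder, not a real key

# Few-shot prompt: show one description-to-JSX pair, then ask for another.
prompt = (
    "Description: a blue button that says 'Subscribe'\n"
    "JSX: <button style={{background: 'blue'}}>Subscribe</button>\n"
    "\n"
    "Description: a large heading that welcomes the user\n"
    "JSX:"
)

response = openai.Completion.create(
    engine="davinci",   # base GPT-3 engine offered in the beta
    prompt=prompt,
    max_tokens=64,
    temperature=0.3,    # low temperature keeps output close to the example format
    stop=["\n\n"],      # stop once the generated snippet is complete
)

print(response.choices[0].text.strip())

The striking part of demos like Shameem’s is that no layout-specific logic is programmed anywhere; the model simply continues the description-to-code pattern it is shown.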
But despite the incredible potential of such a program, GPT-3 has already drawn extensive backlash online.
The fearmongering started with solemn warnings that the language model could be used to create right-wing propaganda to turn everyone racist, and went downhill from there into accusations that the AI “spews hate speech.”
I mean, maybe I’m just jaded but I’m going to wait a bit and see what sort of egregious bias comes out of GPT-3. Oh, it writes poetry? Nice. Oh, it also spews out harmful sexism and racism? I am rehearsing my shocked face. #gpt3
— Kate Devlin (@drkatedevlin) July 18, 2020
The new AI language generator GPT-3 is being touted as a big step beyond NLP but its training still is problematic. It spews hate speech. When translating texts, will it subtly twist into sexist and racist language according to its training? https://t.co/wMQrwzfs7d
— Yale Translation (@YaleTranslation) July 27, 2020
Some activists suggested the AI be restricted to generating content that aligns with left-wing political positions on racism, sexism, and antisemitism.
No one should be surprised by this. How do we keep this from happening accidentally? Don’t have all the answers yet, but fine-tuning on strong and generalizable normative priors helped with GPT-2 https://t.co/V12NM8ZtAH https://t.co/1bn6G6eWjM
— Mark Riedl (@mark_riedl) July 18, 2020
Some journalists also expressed apprehension that GPT-3 could do their job with a much higher level of journalistic integrity and professionalism.
GPT-3 is not the first AI to be labeled hateful. In fact, the propensity of AI to interpret scenarios based on pattern recognition and data accumulation rather than critical left-wing race and gender theory has caused many controversies in recent years.
Google’s photo-tagging algorithm was deemed racist after labeling pictures of black people as primates, while Microsoft silenced its AI chatbot Tay because users were “teaching it racism.”
Even AI algorithms designed to flag “hate speech” have been deemed racist for flagging racist and sexist posts written by black people.
It remains to be seen whether GPT-3 will be forced to follow guidelines such as those recommended by Wired magazine.