ChatGPT and its male-dominated advisory and investor group

Since its public release in November 2022, everyone’s been talking about ChatGPT and its implications.

Schools, universities, corporations, media outlets and law firms – every industry is weighing up the value and use of this new tool.

But what do we really know about its mechanics? Who made it? Who funded it? Who are the people behind its development? And could there be potential for bias in what it delivers? Let’s take a quick dive into the background of ChatGPT.

How does ChatGPT work? 

ChatGPT is essentially a more sophisticated, refined and impressive form of Google or Siri. A chatbot, if you will, that can answer “open-ended complex questions in perfectly composed sentences,” as Steven Johnson explained in the New York Times Magazine last year. You type in a question or instruction, and it will give you text that presents the “illusion of cognition”.

For example, you type in: “A 19th century Byron-like poem in the style of Harry Styles” and it will give you:

“Oh, wandering soul, with eyes so bright / Your heart afire with wild delight /
With words so sweet, and melodies so true / You roam the earth, a bard like none before you.”

Another example, and one which is making teachers around the world very anxious (because now, students can simply get ChatGPT to write their essays):

Type in: “Discuss the similarities between auto fiction and memoir”

You will get:

“Auto fiction and memoir share some similarities in terms of their focus on the writer’s own life experiences and perspectives. Both genres draw on the author’s personal history and often include self-reflection and introspection. However, the main difference between the two is that auto fiction allows for a greater degree of creative license and fictional elements, whereas a memoir is a strictly non-fictional account of the writer’s life.”
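
For the curious, the same model can also be reached programmatically. Below is a minimal sketch in Python, assuming OpenAI’s openai package (v0.x) with an API key set in your environment; the model name and message format follow OpenAI’s chat API at the time of writing.

```python
# A minimal sketch, assuming the `openai` Python package (v0.x) and an
# API key stored in the OPENAI_API_KEY environment variable.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # the model family behind ChatGPT
    messages=[
        {
            "role": "user",
            "content": "Discuss the similarities between auto fiction and memoir",
        }
    ],
)

# The reply comes back as plain text, just as it does in the chat window.
print(response.choices[0].message.content)
```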

The software is powered by a program called Generative Pre-trained Transformer 3 (GPT-3), a text-generation algorithm based on a large language model that has been trained on 700 gigabytes of data from across the web, including Wikipedia, social news forums and digitised books.
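
Under the hood, a model like this writes one “token” (a word or word fragment) at a time, repeatedly predicting a plausible next token given everything written so far. The toy Python sketch below illustrates that loop only; the predict_next_token function is a made-up stand-in for the billions of learned parameters doing the real prediction.

```python
# Toy illustration of autoregressive text generation. The function
# predict_next_token is hypothetical: in a real large language model,
# this prediction comes from a neural network trained on hundreds of
# gigabytes of text.
import random


def predict_next_token(tokens: list[str]) -> str:
    # Stand-in: a real model scores every token in its vocabulary,
    # conditioned on the entire preceding sequence, and samples one.
    vocabulary = ["a", "bard", "like", "none", "before", "you", "."]
    return random.choice(vocabulary)


def generate(prompt: str, max_tokens: int = 20) -> str:
    tokens = prompt.split()
    for _ in range(max_tokens):
        next_token = predict_next_token(tokens)
        tokens.append(next_token)
        if next_token == ".":  # crude stopping condition
            break
    return " ".join(tokens)


print(generate("Oh, wandering soul"))
```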

As a new kind of “generative AI” – algorithms that generate text, images or other data – and currently the most publicly available of them, let’s not forget that the tool is also simply “a reflection of its inputs, both from the people who created it and the people prodding it into action,” as one journalist put it recently in Vice.

Clearly, ChatGPT aims to be a safe, clean, non-offensive platform, free of discriminatory and harmful content.

Which means that a group of human beings has had to sift through titanic amounts of explicit content, providing the labelled examples the software’s safety filters are trained on, so that users will not be exposed to toxic material.
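
That labelling work feeds classifiers of the kind OpenAI now exposes through its moderation endpoint. As a brief sketch, again assuming the openai Python package (v0.x) with an API key already configured, screening a piece of text looks something like this:

```python
# A brief sketch, assuming the `openai` Python package (v0.x) and an
# API key already configured via openai.api_key.
import openai

result = openai.Moderation.create(input="some user-submitted text to screen")

# Each category (hate, violence, sexual content and so on) receives a
# score and an overall flagged / not-flagged verdict.
print(result["results"][0]["flagged"])
print(result["results"][0]["categories"])
```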

A recent investigation by TIME revealed that this job was outsourced to data labellers at a Kenya-based company called Sama, where employees were subjected to violent content, hate speech and depictions of sexual abuse (including rape, bestiality and sexual slavery) for hours at a time. Sama had also previously held contracts with Facebook for content moderation.

The report found that the data labellers were being paid as little as US$2 an hour. Unfortunately, as enthusiastically as the world is embracing this new technology, there is a dark side to AI systems.

Who is behind the creation of ChatGPT?

ChatGPT was developed by OpenAI, a San Francisco-based artificial intelligence research firm founded in 2015 by a handful of Silicon Valley giants, including SpaceX CEO Elon Musk and then Y Combinator president Sam Altman, alongside a pool of young tech luminaries.

Former Stripe CTO Greg Brockman has also been listed as a founder. 

Sam Altman is currently OpenAI’s CEO and co-chair. At the time he started OpenAI in December 2015, he was president of the start-up incubator Y Combinator, whose co-founders include fellow OpenAI advisor Trevor Blackwell, a man known for his innovations in dynamic robot balancing.

Among the fourteen or so founding members and advisors in OpenAI’s initial stages, all of whom are world-class research engineers and scientists, two were women: Vicki Cheung, a computer scientist who was head of infrastructure and a founding engineer at OpenAI for two years before leaving in 2017 to become head of engineering at Lyft; and Pamela Vagata, a software engineer who had previously worked on AI at Facebook.

Other advisors include former director of AI at Tesla Andrej Karpathy, PayPal co-founder Peter Thiel, computer scientist Wojciech Zaremba and Dynabook creator Alan Kay.

According to its website, Greg Brockman is currently the chairman and president (Musk resigned from his role as co-chair in early 2018), and the company’s board includes entrepreneur and GeoSim Systems CEO Tasha McCauley; Helen Toner, Director of Strategy and Foundational Research Grants at the Center for Security and Emerging Technology; and 36-year-old Canadian venture capitalist Shivon Zilis.

Who are its investors?

In the early stages of its inception, OpenAI was funded by Y Combinator co-founder Jessica Livingston, Amazon Web Services, IT firm Infosys and YC Research (since renamed OpenResearch), as well as its own founders Altman, Brockman, Musk and Thiel, who together pledged a total of US$1 billion.

In 2016, Musk put in US$10 million.

Other investors include venture capitalist Reid Hoffman, private venture capital firm Andreessen Horowitz, Khosla Ventures (founded by clean energy advocate Vinod Khosla) and Microsoft, which invested US$1 billion in OpenAI in 2019 and has since upped that figure to a total of US$10 billion.

Recently, TIME reported that OpenAI is now hoping to raise funds at a US$29 billion valuation, which would elevate the company to the top of the AI food chain.

Is there any cultural or gender bias in the program?

We all know that machine learning systems are trained on existing online content that is inherently biased and often discriminatory against marginalised groups, including women and people from culturally and linguistically diverse backgrounds.

The team behind ChatGPT insists the tool is based on “years of research trying to mitigate bias,” and says it ensures the system blocks some outputs to prevent the perpetuation of harmful material.

However, filtering training data can often simply lead to a reinforcement of biases, as one tech journalist noted late last year. 

AI and machine learning is also an overwhelmingly male-dominated industry.

“Only 20 percent of employees in the technical roles at major machine learning companies are women,” Australian engineer Dr Muneera Bano said, citing April 2021 figures from the McKinsey Global Institute.

According to the latest UNESCO report, women make up just 12 per cent of AI researchers around the world, and only six per cent of professional software developers.

“Predominantly, then, you have a male workforce working on these algorithms, defining the rules,” Dr Bano, a Principal Research Scientist at CSIRO’s Data61, said.

“They may not be intentional, but it’s human nature. Even if you will have a dominant force of women designing something, they tend to be designing in a way that would suit them. Unconsciously you are adding your biases into the algorithms; you are creating those rules.”

In a statement, OpenAI said the company “works hard to build safe and useful AI systems that limit bias,” adding that “classifying and filtering harmful [text and images] is a necessary step in minimizing the amount of violent and sexual content included in training data and creating tools that can detect harmful content” – a reference to the explicit material reviewed by Sama’s employees in Kenya.

Another OpenAI tool called DALL-E 2, which generates images from natural language descriptions, has been found to produce images with gender and racial bias – something the company says it is trying to correct.

ChatGPT is still in its early public trial phase, which makes the need to consider its potential and the implications it will have for all of us even more pressing. Let’s hope those behind it continue to iterate their safety and ethical systems. 

Image above: An image generated through DALL-E 2 after the author, Jessie Tu, fed it the prompt: “a small female trapped in a large empty box in the style of Dali”.
