Will AI-powered dating apps be able to keep users safe?

Hannah Robertson

The online dating industry has long existed as a cog in a poorly regulated machine, one in which technology-facilitated abuse has become endemic.

In the coming months, Australian dating platforms will take ambitious steps to self-regulate. They will deliver a voluntary code aimed at an improved, streamlined domestic approach to crucial safety aspects such as user support, contact with law enforcement and platform responsibility to detect and deter harmful behaviour.

While the efficacy of a voluntary code has been debated, we needn’t look far to find evidence for both its timeliness and necessity.

In recent weeks the Australian Institute of Criminology (AIC) has released a series of follow-up reports to its initial study of the experiences of dating app users in Australia, which piqued government interest in early 2023. The first of these explored risk factors for receiving requests to facilitate child sexual exploitation or abuse (CSEA) via dating technologies. One in eight (12.4 per cent) of those surveyed reported receiving requests for facilitated CSEA in at least one of the following five ways:

  • Requests to provide photos of their children or other children they had access to;
  • Pressure to provide sexual images of these children;
  • Requests to meet these children sooner than the respondent considered appropriate;
  • Requests for information of a sexual nature about these children;
  • Offers to pay for photos, videos or livestreams of these children.

Though deeply concerning, these findings are perhaps unsurprising when viewed alongside Professor Michael Salter’s pioneering work on CSEA perpetration, which found that one in ten Australian men endorse behaviours that constitute sexual offending against a child. In the absence of regulation – voluntary or otherwise – dating apps appear an obvious choice for those looking to perpetrate harm of this nature.

In more positive news, a second AIC report released this week detailed user experiences of reporting victimisation to dating apps. Most respondents in the study of 1,555 users described positive experiences when reporting dating app-facilitated sexual violence (DAFSV) to platforms, and high levels of satisfaction with the outcomes of their reports.

Nevertheless, across both reports, the initial 2022 study, and other recent AIC work examining victim-survivor experiences of engaging police, clear patterns emerge: DAFSV, CSEA facilitation requests and poorer reporting outcomes are disproportionately experienced by the same populations – women, particularly those who identify as LGB+, Indigenous users, and users with health conditions and disabilities.

The rise of AI-powered dating apps

It is frustrating to learn that, just as Australian efforts to enhance user protections gain traction, Match Group, which owns and operates the largest portfolio of dating platforms worldwide, has announced a new partnership with ChatGPT maker OpenAI. It plans to increase the use of AI across its current products, and to design and build standalone AI-powered dating apps that it plans to pilot later this year.

The racism, sexism and ableism embedded in artificial intelligence are well established.

In the last fortnight alone, OpenAI has been taken to task for promulgating racial discrimination in HR recruitment, while a study from earlier this year found GPT-4 to produce diagnoses and treatment plans in health care settings that were biased by gender and ethnicity. Match Group CEO Bernard Kim has praised the OpenAI partnership, explaining that it will create “… an even safer environment for our users to connect in”. Notwithstanding the great promise of AI for more efficient handling of tasks such as responding to user reports of harm, it is hard to imagine a reality in which the same algorithm produces outcomes divergent from those evidenced in these HR and clinical care settings.

If nothing changes, it is entirely likely that, as early as this year, the cohorts already most at risk of experiencing dating app-facilitated abuse on existing apps will be encouraged to explore emerging platforms from which these harms are arguably inextricable.

Irrespective of this partnership, the value of the AI solutions dating apps currently offer has been inconsistent. In many ways, this technology is only as effective as a platform’s understanding of what constitutes harm or a violation of its terms of service. Indeed, there is potential for these tools to over-detect overtly offensive language while allowing more commonplace intrusive content and behaviour to go unchecked.

Under existing circumstances, where app design and development are human-led, ethical tech experts would emphasise a safety by design (SBD) approach, which encourages proactive anticipation of likely harms so that steps can be taken to eliminate them before they occur. It is hard to be convinced that a design and development context entirely beholden to AI could enact SBD meaningfully, and with the nuance required to capture the disproportionate harms to which vulnerable users are subject.

Responsible design beats regulation, but perhaps the horse has already bolted here. Although industry leadership and collaboration certainly hold power, has the time come for more formal regulation? This was one of the topics discussed by participants in the recent Women’s Agenda x Salesforce roundtable on the ethics of AI, many of whom emphasised the essential role governments play in fostering trust and holding companies accountable.

Though such guardrails are arguably necessary, Australia is not yet positioned to mandate them. Instead, it is working to establish a similarly voluntary AI Safety Standard in collaboration with AI experts, an effort led by Minister for Industry and Science Ed Husic.

The impacts of these codes, and the extent to which they will be complementary, remain to be seen – but if effective, they are set to deliver increased protections for all Australians. However, as Match Group looks to AI with tremendous enthusiasm, the global quest to improve dating app safety and shield vulnerable users from AI-facilitated harms remains an uphill battle.

Feature Image: ANU PhD candidate Hannah Robertson.
