Introduction

‘Your image belongs to you’: Young people, social media and image autonomy presents findings from research exploring what is known about the role of social media and recommender system algorithms (which we refer to throughout the report using the common shorthand of ‘algorithms’) (footnote 1) in norm building, online misogyny and gendered violence, including harmful sexual behaviours enacted by children and young people under the age of 18 towards a peer or younger child. The research uses insights from focus group discussions with Body Safety Australia respectful relationships educators (footnote 2) as well as academic and grey literature (footnote 3) to explore two interrelated questions:

  1. How can respectful relationships educators’ observations about how children and young people talk about social media, gaming, online safety and image sharing help us to understand how to prevent image-based harmful sexual behaviours, including AI-generated harmful sexual behaviours?
  2. In what ways might social media algorithms be understood as a contributing factor to gendered violence enacted by children and young people?

This research is exploratory and scoping in nature. It seeks to use the knowledge and extensive practice expertise of a small, specialised group of practitioners to rapidly identify current issues in prevention practice and present future directions for research.

The online world is often framed as a static setting where people spend time and then exit to participate in ‘real life’. This distinction is not borne out in practice: for many people, online platforms are closely intertwined with other aspects of their lives. This simplified frame risks obscuring the role of online technologies in actively driving the spread of misogynistic, homophobic and transphobic messages, and in building or reinforcing norms about gender equity and violence. This includes harmful sexual behaviours enacted in both online and offline settings (15).

To fully harness opportunities to prevent these harms from being perpetrated, we need to closely examine the many facets, intersections and applications of digital technologies and spaces, and their influence on social lives and dynamics. The purpose of this report is to contribute to collective efforts to better understand effective mechanisms for the prevention of peer-enacted image-based harms by children and young people.

This report introduces the concept of ‘image autonomy’ into the research literature. The term was coined by Body Safety Australia CEO Deanne Carson in 2018 and refers to the right of an individual to make informed decisions about participating in a photo or video, to give informed consent to how their image may be used and altered, and to decide how it may be shared. Image autonomy is a strengths-based approach to taking and sharing images, which recognises children’s agency, their right to participation, and respect for the rights of others. It is a central tenet of Body Safety Australia’s advocacy and work with children, parents and teachers. This is the first time this conceptual frame from practice experience has been translated into a research report. It is an important contribution to the prevention literature, offering a new protective factor to consider and further explore, with the potential to help prevent technology-assisted harmful sexual behaviours (TA-HSBs) enacted by children as well as adult-enacted image-based abuse.

Chapter 1 provides an overview of key concepts used throughout the report and the ways they are discussed in policy, practice and academic literature. We explain current concerns in the prevention of technology-facilitated gendered violence, with a particular focus on preventing harm to children and young people, and define the primary prevention frameworks for gendered violence used in Australia. Chapter 2 provides an overview of the research approach used in this study. Chapter 3 discusses findings from focus group discussions with respectful relationships educators about their observations of how the themes in Chapter 1 play out among children and young people in Australian schools. Chapter 4 uses these findings to present implications and considerations for future research, practice and policy effort in the prevention of image-based harms for children and young people, and highlights the need for carefully designed research that safely centres the voices of children and young people.

While ‘children’ and ‘young people’ are not formally delineated age categories, in this report we use both terms to refer to people under the age of 18. We recognise that gender is a spectrum and not a binary construct. We specify and use quoted language where research participants or the resources informing this report discuss the experiences of non-binary children and young people. Research participants did not discuss the different experiences of transgender and cisgender children and young people in great detail. As such, we use the words ‘girls’ and ‘boys’ to refer to binary cisgender and transgender children and young people. The need for further research and practice development that considers the different experiences of trans and gender-diverse young people in the context of preventing image-based harms is discussed in Chapter 4.

Context

This research was prompted by an increasing number of reports of boys who were students in Victorian schools using artificial intelligence (AI) technologies to create nude and sexually explicit images of girls who were their peers and circulating the images on social media (16, 17). Media reporting indicates this is a widespread global issue, with reports of the same abuses happening in the UK, the US and Spain (18-20). This abuse is termed ‘deepfake abuse’ or ‘AI-generated image-based abuse’ in the literature. The proliferation of this abuse sheds light on the rapidly evolving issue of technology-facilitated gendered violence enacted by young people towards other young people. This violence is enabled by ongoing technological innovations. As identified in Our Watch’s RRE blueprint, schools are being increasingly called on to prevent and respond to peer-enacted TA-HSBs, including those enacted using generative AI (21).  

TA-HSBs enacted by children and young people towards a peer or younger child are a growing area of concern for primary prevention (footnote 4), both with regard to understanding harm and identifying how to take effective action (22, 23). Image-based harms using technology encompass a range of behaviours including capturing, sharing or threatening to distribute an intimate image without consent. Recent research indicates that early adolescence (12–15 years old) is the most common age group for displaying harmful sexual behaviours, including TA-HSBs such as non-consensual sexual image sharing (15). As recommended in the literature, this report refers to these behaviours as TA-HSBs when enacted by children and young people. This terminology recognises children’s social, cognitive and sexual development; positions these behaviours as occurring in contexts that are distinctly different to adult-enacted harms; and identifies the need for child-focused prevention and response (15).

National frameworks for violence prevention

Prevention of gendered violence in Australia is underpinned by Our Watch’s Change the story: A shared framework for the primary prevention of violence against women in Australia (12). Change the story describes how four specific manifestations of gender inequality drive men’s and boys’ use of violence against women and girls, and gendered violence:

  • condoning of violence against women
  • men’s control of decision-making and limits to women’s independence in public and private life
  • rigid gender stereotyping and dominant forms of masculinity
  • male peer relations and cultures of masculinity that emphasise aggression, dominance and control (12).

Change the story also describes how other social and interpersonal factors influence the likelihood of men’s and boys’ use of violence (12). Termed ‘reinforcing factors’, these do not predict violence on their own but may increase the likelihood of men’s and boys’ use of violence and harm in different contexts where the gendered drivers are present. Reinforcing factors include resistance and backlash to prevention and gender equality efforts, and factors that weaken prosocial behaviour, including alcohol consumption, gambling and natural disasters (12).

Advancements in technology have enabled new ways to perpetrate gendered violence, including technology-facilitated stalking, sexual violence and harassment, digital dating abuse and image-based abuse (including deepfake abuse) (3). These forms of violence and abuse present new challenges for prevention, and practitioners have grappled with how to conceptualise and conduct prevention across online spaces. As part of the essential actions that Change the story calls for to address the social context that enables violence against women, it highlights the importance of increasing ‘critical media literacy among children, young people and adults, including building skills to engage respectfully in an online environment’ (12 p. 64).  

This is an important component of preventing gendered violence, and it is critical to think about online spaces and networks as places in which to conduct prevention activity. However, it is also useful to consider how digital platform infrastructure itself, such as algorithms, which have been shown to reflect and reinforce pre-existing social biases (24), might also be a contributing or reinforcing factor in the likelihood of violence being perpetrated. This complicates how we think about the online world: it is both a setting for prevention and a potential reinforcing factor of gendered violence, with an active role in establishing and reinforcing norms, attitudes and behaviours.

Additionally, this research supports the National Strategy to Prevent and Respond to Child Sexual Abuse (2021–2030) (25). The National Strategy highlights that there are significant gaps in policy, education and research in how to prevent, identify and respond to harmful sexual behaviours enacted by children and young people. This report explores new avenues for prevention to support workforce capability and community-level strategies to holistically address TA-HSBs.

Young people and social media

Young people can have rich online lives, engaging with a wide range of online platforms, activities and networks for entertainment, play, learning and socialising. Australian research suggests that, overall, children and young people report more positive than negative perceptions of the internet, and many have had positive experiences on social media, including finding support, connection and belonging (26, 27). For young people with disability, the internet can be a ‘great equaliser’ which can enable them to take part in activities without the structural barriers they may encounter in the physical world (28 p. 6). The online world is also a source of critical health information, a place for young people to be themselves and to seek emotional support and social connection, especially for young people who are lesbian, gay, bisexual, trans or gender diverse, with intersex variations, queer or questioning, or asexual (LGBTIQA+), and those with disability (28, 29).

Many young people are also aware that online spaces can be harmful, and report having had negative experiences themselves. These include being bullied or discriminated against; being exposed to negative, inappropriate or distressing content, including discussion or depictions of violence, drug use, self-harm or disordered eating; and finding themselves ‘doomscrolling’ (footnote 5) (26, 27). Research suggests that much of the time young people spend online is driven by ‘fear of missing out’ – ‘the desire to be online and a constant urge to check social media’ – which keeps many young people on social media even when they want to disconnect (30).

Young people consume content both actively, by searching out content that reflects their existing interests, and passively, through their social media algorithms (7). Research by anti-bullying social enterprise Project Rockit suggests that young people are broadly aware of how online and social media algorithms shape their online experiences, and how algorithms reinforce the distribution of racist, sexist, controversial and harmful content (27). Its survey of Australian young people found that the majority believe they have a strong understanding of how social media chooses to show them content, but they would like to learn more about how online algorithms work and how they filter triggering content (27). While many children who have had negative online experiences report being empowered and knowledgeable about how they could take action to address the behaviour or seek support, research by the eSafety Commissioner indicates this is typically limited to them telling their parents, blocking distressing content or blocking online bullies (26). Project Rockit’s research found that while 3 in 5 young people reported that they feel that they are in control of the content they see online, a similar proportion expressed a desire to ‘reset their algorithm’ and ‘start fresh’ (27 p. 16). This suggests that young people are interested in exercising agency and taking more control over their online experiences.

Understanding the ability of social media algorithms to influence norms

The design and broad reach of social media infrastructure, including algorithms, mean that it can have a powerful influence on shaping and reproducing gender and other norms. Algorithms are a critical component of many online services and platforms, including social media. They draw on large quantities of data collected from users – including demographic information, likes, comments and dwell time (how long a user hovers over an image or video) – and use machine learning techniques to present content that may be relevant and of interest to specific users (11). At their most fundamental, they are needed to organise vast and constant streams of data into usable information and content. Algorithms can be optimised for different purposes, such as maximising user engagement and time spent on the platform, presenting users with content tailored to their interests and needs, or diversifying the content shown to users (11). These algorithms can help people find new ideas, activities, products, services, artists and entertainment, and can help social media creators and online businesses to reach broader audiences (11).
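To make this mechanism more concrete, the sketch below shows a deliberately simplified, engagement-optimised ranking of candidate feed items. Every element of it – the signal names (likes, comments, dwell time, similarity to viewing history), the fixed weights and the scoring rule – is an illustrative assumption for explanatory purposes only; real recommender systems are proprietary and rely on machine learning models trained on user data rather than hand-picked weights.

```python
from dataclasses import dataclass

@dataclass
class EngagementSignals:
    """Illustrative per-item signals a platform might log.

    These field names are assumptions made for this sketch,
    not the data schema of any real platform.
    """
    likes: int                    # likes from similar users
    comments: int                 # comments from similar users
    dwell_seconds: float          # average time users linger on the item
    similarity_to_history: float  # 0-1 match with this user's viewing history

def engagement_score(s: EngagementSignals) -> float:
    """Toy engagement-optimised score: a weighted sum of signals.

    Real systems learn these relationships from data; fixed weights
    are used here only to show why high-engagement content tends to
    rise to the top of a feed.
    """
    return (
        1.0 * s.likes
        + 2.0 * s.comments            # comments treated as stronger engagement
        + 0.5 * s.dwell_seconds
        + 10.0 * s.similarity_to_history
    )

# Rank a small batch of hypothetical candidate items for one user's feed.
candidates = {
    "cooking_video": EngagementSignals(120, 10, 8.0, 0.4),
    "controversial_clip": EngagementSignals(300, 90, 15.0, 0.3),
}
feed = sorted(candidates, key=lambda k: engagement_score(candidates[k]), reverse=True)
print(feed)  # the item with more engagement signals is ranked first
```

The point of the sketch is simply that content attracting more engagement signals is surfaced first, which is the dynamic the literature discussed below is concerned with when engagement is the objective being optimised.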

However, several scholars argue that while algorithms may appear to users to be neutral technologies driven solely by user-generated data, they are in fact created by individuals and businesses that hold their own biases, prejudices and beliefs about how the world should be ordered, and they have been found to be trained on sexist and racially biased data (24, 31, 32). Gender inequalities and lack of diversity within organisations also influence decisions about content moderation, user experience and technological developments (33).

Economically, social media companies have business models that capitalise on the commodification of user data and social interactions, with the aim of maximising shareholder profits (33). These economic and organisational factors ultimately drive how different technologies, particularly algorithms, are developed. In practice, this means that algorithms are designed to push content that is likely to generate high engagement – and therefore high profits. This is often content that ‘embod[ies] dominant social values’ and reproduces and amplifies pre-existing gender norms and racial inequalities (24, 31, 33 p. 220).  

Online misogyny, the manosphere and algorithms

Misogynistic content has proliferated across social media platforms, and studies suggest that algorithms are more likely to amplify this content to boys than to girls (34, 35). This content forms a significant component of the ‘manosphere’, a network of online communities that promote anti-feminism, misogyny, and hatred of trans and non-binary people (6-9). Manosphere content typically defines success in terms of financial dominance, dominance over other men who are less stereotypically masculine, and most explicitly, dominance over women (36). Research suggests that manosphere content often appeals to boys’ and young men’s insecurities including body image, dating and mental health, and can then become a pathway to more extreme content (37). Boys and young men also report being interested in the motivational advice for achieving relationship and financial success, often reporting it to be entertaining, motivating and engaging (38).  

Other research shows that algorithms on the video platforms YouTube and TikTok actively push misogynistic, manosphere and violent content (such as videos of school shootings) onto young male users, sometimes in violation of the platforms’ own content policies (6, 8, 39). These studies use dummy social media accounts set up as male users of different ages to examine how quickly different types of content are pushed to children and young people. They also examine how this varies depending on whether the content is sought out and engaged with (via liking, commenting, following and/or subscribing), or whether the account only seeks out neutral expressions of masculinities (e.g. sports and gaming) or non-gendered content such as cooking or animal videos. These studies have consistently found that all accounts were fed content from ‘manfluencers’ (men who promote extreme, regressive masculine ideals, such as Andrew Tate), as well as anti-feminist and other extremist content, regardless of whether users sought it out (6, 8, 39) – sometimes within two minutes of viewing (39).

Survey data collected by the eSafety Commissioner indicates that 80% of 8–12-year-old children had used at least one social media service since the beginning of 2024 (including 68% of this age group having used YouTube and 31% having used TikTok) (40). This suggests that children and young people are likely to be exposed to manosphere content at increasingly younger ages. We discuss the Online Safety Amendment (Social Media Minimum Age) Act 2024 (41) and its possible implications for children and young people in the following section.  

Manosphere content is often presented as entertainment through humorous forms such as memes, parodies or inspirational content, an approach that masks and serves to normalise hateful and violent misogynistic ideologies (6). Analysis of how teen boys navigate Andrew Tate’s content suggests that Tate’s videos are often characterised by surreal wind-up or shock humour, which creates a competitive dynamic that differentiates boys who do or do not ‘get’ the joke, and creates hegemonic power structures where boys who can endure being teased are afforded social currency and power (9). Tate’s content also generates shock and anger through promoting sexism and misogyny, frequently making outlandish claims that deliberately incite controversy and outrage. Tate benefits from this utilisation of the ‘attention economies’ of social media algorithmic structures, whereby controversial, polarising, humorous and shocking content is more likely to receive engagement from people who disagree (9).

The significant increase in misogynistic online content has wide-reaching effects. Research suggests this ‘micro-dosing on highly toxic content’ has a ‘potent indoctrination effect’, with sexist and misogynistic ideologies ‘seeping into [boys’] everyday interactions’ (6 p. 4). A UK survey found that children exposed to misogynistic content online were five times more likely to see physically hurting another person as an acceptable behaviour (34). Other research suggests young men who are exposed to manfluencer content are more likely to display increased misogynistic attitudes, including being mistrustful of women’s reports of sexual violence (42).  

Even simply joining the manosphere by making a post or comment in misogynistic Reddit forums has been found to increase behaviours associated with extremist ideologies, including fixation on feminist discourse and anger towards women (43). Research suggests that while this online misogyny is emblematic of a wider cultural problem, it is exacerbated by social media algorithms and other online algorithms that amplify these beliefs to increasingly broader parts of the population (6). Internet Matters (37) suggests that online misogyny has a tangible impact on shaping and reinforcing norms in young people around non-consensual image sharing between peers.  

Young people, AI and deepfakes

Young people are often at the forefront of adoption of new online technologies, including AI. A 2024 survey of teenagers and their parents/carers in the US found widespread use of AI, with 7 in 10 teenagers reporting using at least one type of generative AI tool including search engines, chatbots, image generators and video generators (2). Children’s use of generative AI sits within a broader climate of increasing generative AI use, with Google survey data from 2024 indicating that half of Australians report having used generative AI in the last year (44). The most commonly reported reason for AI use by young people was to help with homework (2). However, young people also reported using AI to create content as a joke or to tease another person (2).

Generative AI is often used to create deepfakes – fabricated photos, videos or audio that depict a real person doing or saying something that they did not actually do or say (3). Deepfakes can be created for a range of reasons, such as to spread misinformation, for political stunts, or for entertainment; however, evidence shows their creation and the ways that they are shared and deployed may be gendered. Early research into deepfakes found they were most frequently created to depict sexually explicit images of women and girls (45, 46). One 2023 study found that deepfake sexual imagery made up 98% of all deepfake videos online at the time of the research, and that women and girls comprised the overwhelming majority (99%) of subjects (47). Some recent studies have found that younger teenage boys (45) and men (48, 49) report significantly higher rates of deepfake creation or threats to share deepfake images. Research into the prevalence of gendered violence and TA-HSBs enacted using generative AI is ongoing.

‘Nudifying’ apps – technologies that remove the clothes from people in uploaded photographs and videos – have proliferated since the first free AI bot was launched in 2020 (50). These apps are overwhelmingly trained on images of women and girls and often do not work on images of boys and men (51). A 2023 analysis by advocacy group My Image My Choice found that there were hundreds of readily accessible nudifying apps and AI chatbots (many accessible through the social media app Telegram), at least 40 dedicated deepfake sites, and over 300 mainstream websites incorporating deepfake abuse along with manuals for creating such content (46). Some nudify platforms are also used to create deepfake child sexual abuse material (52). These deepfake sites have organised communities, with users requesting and encouraging different types of images to be created. This has created a type of ‘deepfake economy’ that can facilitate social bonding and radicalisation of users and that generates significant income for some creators (46, 53). These websites are easily accessible through search engines, and Google search drives most traffic to them (46). Currently, there is very little empirical data on children and young people’s use of nudifying tools and other deepfake generators.

Legislative changes in Australia

In response to increases in the creation and distribution of sexually explicit deepfakes, the Australian Government passed the Criminal Code Amendment (Deepfake Sexual Material) Act 2024, which targets the creation and non-consensual dissemination of sexually explicit material created or altered using generative AI. This Act relates to material depicting adults; the creation, possession and sharing of child-related content such as artificially generated child sexual abuse material is already criminalised under the Crimes Act 1958. However, the creation of this material remains prevalent, and there are considerable challenges for those who have experienced this abuse to report it, and for police to investigate and prosecute offenders (54). These changes to the Criminal Code are intended to work in concert with new protections set out in civil legislation. The Online Safety Amendment (Digital Duty of Care) Bill 2024 places the onus on online platforms to proactively protect users from harm (55). This amendment to the Online Safety Act 2021 is intended to hold platforms to account by enforcing civil penalties for failing to undertake risk assessment and risk mitigation obligations that consider the best interests of children in decision-making. Platforms will be required to publicly provide annual transparency reports that include metrics about access to the service by children.

In addition, the Social Media Minimum Age Act seeks to delay exposure to social media harms for young people, by enforcing a legal requirement for users to be a minimum of 16 years of age to have a social media account (41). As with the Digital Duty of Care obligations, this regulation holds digital platforms and providers to account and formalises their obligation to protect end users from harm.  

The Social Media Minimum Age Act has been subject to greater public debate than the Digital Duty of Care or the amendments to the Criminal Code to address deepfake abuse. In consultations with the eSafety Commissioner, industry subject matter experts, parents and children voiced concern about the likely effectiveness, safety and implementation of age assurance technologies using biometric and personal data (56). Similarly, subject matter experts expressed concerns that there may be unintended consequences for children and young people as a result of implementing age restrictions on recognised social media platforms. These might include driving users under 16 to more underground and less regulated online spaces or causing distress for vulnerable young people when they lose access to established online communities on platforms mandated to enforce age-restricted access (56).

Nature, prevalence and harms of peer-enacted image-based harms and deepfake nudes

Estimates of the prevalence of peer-enacted TA-HSBs exhibited by children and young people vary. A survey by the eSafety Commissioner found that 15% of teenage girls aged 15–17 – around 1 in 6 – had had intimate or sexual photos or videos (nudes) shared online without their consent (57). The Australian Child Maltreatment Study (ACMS) found that, among the 7.6% of people who had experienced some form of image-based abuse under the age of 18, it was most likely to have been enacted by an adolescent they were in a romantic relationship with (23%) or another known adolescent (49%) (23). The experiences of pre-adolescent children are not well established in the literature, despite findings that, of those experiencing harmful sexual behaviours enacted by another child or young person under the age of 18, 45% were between the ages of 10 and 14 at the time (58). Girls and gender-diverse young people are more likely to report experiencing harmful sexual behaviours (15). More than 1 in 10 adolescents have reported experiencing sextortion (that is, blackmail involving threats to distribute intimate material) in their lifetime, with more than half being victimised before the age of 16 (59). Two in 5 of the adolescents who experienced sextortion reported that the material was digitally manipulated (59).

Within this context, use of deepfakes has emerged as a new and distinct TA-HSB. While data on the experiences of children and young people is emerging, research shows that deepfake technologies are increasingly pervasive across the population. Thorn found that 6% of adolescent respondents reported they were the target of a deepfake nude (45). A nationally representative UK study found that 13% of 13–17-year-olds had had an experience with a deepfake nude, either having sent or received an image or video, having encountered one online, or having used a nudifying app or knowing someone who had used a nudifying app (51). Boys (7%) were twice as likely as girls (3%) to have used a nudifying app or know someone who had used one (51).

Understanding the factors that motivate and drive peer-enacted forms of TA-HSBs in young people is a developing field of research, particularly in relation to AI-generated images. Thorn found that only 2% of young people self-reported having created deepfakes (45). Of these, 74% created images depicting a girl or woman, and 1 in 3 created material of another young person under the age of 18. Respondents cited several reasons for creating the deepfake imagery, including revenge, sexual curiosity, pleasure-seeking or influence from peers (45).

TA-HSBs enacted using images cause significant harm to the subjects of the images, including negative mental and physical health outcomes, reputational damage and negative impacts on relationships with others (22). Non-consensual image sharing has become normalised among teenagers, and the sharing of nudes of teenage girls acts as a form of homosocial currency, wherein boys can obtain power and status with their male peers by competitively proving their heterosexuality (60, 61). This research demonstrates that having their nudes shared non-consensually has a tangible impact on girls’ social lives at school, with many facing verbal harassment from their peers and social isolation. Having an offline relationship with the peer who enacted the harm affects some young people’s capacity to report – one study found that 15% of young people who experienced a form of image-based harassment did not report their experience because they knew the person (60).

As an emerging field of technology and research, the impacts of experiencing deepfake victimisation are currently not well established in empirical research literature. Anecdotal reporting and case study publications suggest that deepfake abuse can cause serious harms, with those who have experienced it reporting emotional, physiological, relational and professional impacts (50). This can include fear for their safety, reputational damage despite not having been involved in the activities depicted in the deepfake, mental health issues, suicidal ideation, feelings of violation and powerlessness, ongoing uncertainty around who has seen or might see the images (including friends, family and employers), and impacts on their relationships (50, 62). There are also consequences for women’s and girls’ online participation – women have reported withdrawing from social media and other online spaces due to humiliation and fear of ongoing abuse (62), while teenage girls have reported limiting their online activity to reduce the chance of nude images being created of them in the first place (63).  

The expanded uptake and availability of AI technologies, and the ways they are used to cause harm, have coincided with increasing concern about the proliferation of harmful misogynistic and discriminatory messages on social media platforms and other online forums. In particular, there are concerns about the ways that such content is spread to large audiences, including children and young people, via algorithms (8).

Interrupting TA-HSBs requires multifaceted approaches across all levels of society (12). Interventions with young people in education settings are widely recognised as a critical element of this collective work (12). The need for child-focused research, prevention and response towards TA-HSBs is urgent (15). However, conducting safe and ethical research with children can take substantially more time than conducting research with adults. In response, this research draws on the insights of respectful relationships educators who deliver incursions to children across Victorian schools and early childhood settings, to explore potential avenues for further prevention efforts with children and young people.

These themes form the context in which children and young people navigate their developing self-expression, gender identity, intimate relationships, social development and burgeoning digital identities in schools. They inform the behaviours and dynamics that Body Safety Australia educators observe when teaching respectful relationships and consent education in Australian classrooms, and these are explored in this report.  

Footnotes

Introduction Footnotes
  1. See glossary entry 9 for a definition of recommender systems (social media algorithms). 

  2. The findings discussed in this report do not directly represent the views or experiences of children and young people, but rather the observations and views of adults who work with these cohorts.  

  3. Grey literature is research published outside of commercial or academic publishing.

  4. See glossary entry 8 for a definition of primary prevention.

  5. ‘Doomscrolling’ refers to the act of spending large amounts of time passively scrolling through online content, in particular negative news and social media content.