Findings and discussion

Three key themes related to the prevention of TA-HSBs enacted by children and young people towards other children and young people emerged from the focus group discussions:

  1. Children, young people and adults do not appear to understand the concept of image autonomy.
  2. Exposure to gender norms, stereotypes and misogyny across online platforms may influence young people’s behaviours in the classroom.
  3. Young people’s attitudes about image sharing appear to be shaped by the combined influences of harmful gender norms and poor understanding of image autonomy, and the actions of parents and other adults around them.

This section discusses each of these exploratory themes in turn. We explore the role of image autonomy as a novel concept, including how social norms modelled to children, both in person and online, may influence how children perceive respect and consent for image taking and sharing. Through exploring educators’ observations of children’s and young people’s use of online platforms such as social media and gaming, we seek to understand how social media algorithms may be understood as a contributing factor to gendered violence enacted by children and young people. Lastly, we discuss how the understanding of image autonomy and the influence of gender norms and observed online behaviours intersect to shape children’s and young people’s image-sharing practices.  

Key finding 1: Children, young people and adults do not appear to understand the concept of image autonomy

The focus group participants shared that most children, young people and adults do not appear to understand the concept of image autonomy – the idea that every person, no matter their age, has the right to decide how their image is taken and shared. Educators in the focus groups reflected that many children do not recognise they have a right to determine when and how other people take their photo, as well as if and where they share it with others. This appeared to be influenced by two things: adults in their lives taking photos of them without consent, and the normalisation of sharing vast amounts of personal and identifiable images on social media.

Parents and carers influence how young people share their images

Educators in the focus groups spoke about how they regularly introduce children to the concept of image autonomy when teaching respectful relationships and consent programs. They reflected that they often teach image autonomy as an extension of bodily autonomy and as an example of asking for consent, but that primary-school-aged children were often challenged by, or resistant to, being told that they have a right to say no to having their photo taken or shared online. They observed that this was typically because the adults around them do not model or practise image-based consent.

If you say, ‘Your image belongs to you’, even from grade 3, they’re debating you and they’re saying, ‘No it doesn’t, because my mum posts photos of me all the time and that bath photo of me, I hate it, but … this person is sharing it.’  

Non-consensual image taking and sharing is highly normalised in children’s lives, with children growing up with their image constantly taken and shared (online and in private message threads) from the moment they are born (66). The focus group discussions highlighted how common it is for children to have their picture frequently taken and shared without their consent by the adults in their lives, including their parents, caregivers and teachers. Educators in the focus groups reflected that the normalisation of sharing images without consent makes it very challenging and complex for them to discuss image-based abuse or harm.  

And the world’s fine with it. So it’s hard to get them to care if no-one around them cares.

Technologies for taking and sharing images have evolved significantly over the last two decades; however, public awareness of the safety and wellbeing risks of sharing children’s images online has been slower to catch up. Many people have come to see social media as an extension of the traditional family photo album, as well as a way to connect with other parents and normalise aspects of parenthood (67). Research from 2018 estimated that parents share around 100 photos and videos of their children to social media every year (66), and most parents are motivated to share for positive, not malicious, reasons (68). Many parents are not conscious of the serious risks that sharing their child’s image online may pose, including sexual exploitation and identity theft, as well as future emotional distress and a non-consensual digital footprint (67).  

Beyond the home environment, educators observed that staff in early childhood services constantly take photos of children and visitors to the service, including the respectful relationships educators themselves, for documentation and communication with parents.

When you go into early learning centres and every second you’re doing your job, they’re taking photos of every kid doing every activity, including you, never asking you ... We don’t have even a 5% understanding that we own our own image on a wider scale.

The educators suggested that, in some cases, children are not resistant to strangers taking their photo because the practice has been so normalised by the adults in their lives.

They’re very much just like, ‘Adult is taking my photo, that’s safe, because at kindergarten adults take our photos, our parents take our photos’. So children will go, ‘That’s exactly how it works’. And even when you sometimes prompt, like, ‘Ohh, if we don’t know [the person taking the photo]?’ or, ‘Should they ask?’, it’s … kind of, ‘Eh, not so much’.

The focus group participants reflected on how challenging it is to connect the idea of consent to taking and sharing photos and videos, compared to talking about consent for touch.  

[There are] not a lot of conversations around consent and consenting practice of image taking and sharing … it’s surprising to them to hear when we say, ‘You should ask your friend’s consent before you take that photo or upload their photo’ … It’s like an entry point to that for them, because they’ve never even thought about having to ask.

The educators discussed how conversations with older children about image sharing were more often focused on the legality of the behaviour, rather than the ethical or moral implications.

And then similarly when you get to older year levels, and they’re really fixated on the law across the board, both when you’re talking about consent and the online space. There’s a lot of, like, ‘Well, it’s not illegal to take someone’s photo on the street without asking … [or] your friend’s photo without checking’, and that’s where their line is drawn for ethics, [it] is at the law.

Educators in the focus groups suggested that adults taking and sharing images of children without their consent can be disempowering for children. Further, adults not seeking and respecting children’s consent for image sharing is likely to influence children’s own attitudes towards sharing images of their friends and peers, including feelings of apathy with regard to obtaining or providing consent. They reflected that when young people did talk about consent for image sharing, they tended to focus on the idea of being a good friend and making sure that your friend does not look bad on social media, rather than on the risk of image-based abuse and associated harms.

In my first-hand experience [teaching] girls, the conversation of images is more about permission to upload images of your friends. Not that you’ve altered them but that you need permission to put them up, because what if your friend looks ‘fugly’.

Overall, the focus group discussions indicated that children were generally resistant to learning about the idea of consent for photos, because it was contrary to what they had seen modelled by the adults in their lives – that is, that obtaining consent for image sharing is optional, arduous or unnecessary. Disregarding the need to gain consent in these contexts feeds into the broader normalisation of violence, which is a critical target of prevention work.

Social media infrastructure incentivises high volumes of image sharing

The design and dynamics of social media platforms – in particular, how algorithms determine what types of content are popular or ‘go viral’ – likely contribute to young people’s lack of understanding of image autonomy. Many platforms build popularity, and therefore users and income, by normalising and incentivising the constant sharing of photos and videos of oneself and others. Educators in the focus groups discussed how family vlogging content, popular across YouTube, Instagram and TikTok, is shaping young children’s attitudes towards image sharing.

It’s just so popular [with kids]. That it’s adults who have a family [vlog] … that’s their entire form of income … filming their family, and so it really normalises the whole [idea that] as children… all family can do whatever with our image.

They spoke about how they frequently heard young people talking about viewing content from well-known social media accounts that feature children and their parents, citing examples such as vlogs documenting a child’s toilet training or trips to medical appointments. In many cases, these accounts have been monetised through brand partnerships and other advertising. Research analysing the monetisation strategies of such content found parents use their children as ‘concealed commodities’, both as props and to ‘embod[y] idealised notions of childhood for brand visibility’ (69 p. 1).  

The educators observed that these vlogs, along with other ‘day in the life’ and ‘kidfluencer’ (footnote 1) content, appeared to be very popular among children and young people they work with. They reflected that this content conveys an implicit message that anyone can commodify images of everyday life, for financial success or social cachet, and that some children seemed to aspire to this.

They’re increasingly online, but their online experiences are striving towards an audience, and public consumption, through their photo taking and video sharing. Which obviously creates risk.

The proliferation of this content communicates and normalises the idea that a person can film, upload and then create wealth from broadcasting their life, including family life. The educators discussed how it is common to come across primary-school-aged children whose families create content, and how the children with the highest numbers of followers typically shared their accounts with a sibling or parent.

Participant 1: I have worked with young children whose families do have social media accounts, who are 8 years old and have viral Instagram accounts etc. This is relevant in our workplace.

Participant 2: That happens to me so often, when we do the stand-up sit-down game. ‘If you have an account that has [less] than 200 followers, then sit down’, and you go up in numbers, and 70% of the children that are the last standing up are like, ‘Yeah, I share it with my older brother, I share it with my mum’.  

The educators observed that many girls had significant social media followings while still in primary school, particularly those already involved in creating mother–daughter social media content.

Mums and daughters making content together are some of the most popular [social media] accounts in the schools that we work [in] … the children who have their accounts with their parents who are doing it to churn and get more viral as a combined account.

The educators shared that young people of all genders often referred to ‘day in the life’ content by adult content creators (people who make intimate and/or sexual material on subscription sites such as OnlyFans) who have accounts on social media sites that are accessible to minors, such as Instagram and TikTok. The educators stated that they were not against adult content creators but expressed concern at how OnlyFans-related content contributed towards shaping young people’s perception of how success could be achieved through images posted online. Recent research examining the impact of such content on adolescents’ psychosocial development indicates that many adolescents exposed to ads and other promotional OnlyFans content on adjacent social media platforms such as Instagram and TikTok perceive the platform to be an ‘attractive employment alternative’ for women (71). Exposure to this content appears to influence beliefs and attitudes towards gender roles and sexuality, including what is desirable and profitable (71).

The broad range of online content that children consume appears to contribute to poor understanding of image autonomy by normalising the prolific sharing of day-to-day life and modelling the non-consensual sharing of a child’s image for profit, maintaining the idea that children do not have a right to say no to participating in photos or videos. Much of this family vlogging content is unregulated and goes against the principles of agency, right to participation and safety set out in the UN Convention on the Rights of the Child (72). The focus group participants described the gendered nature of how children access and participate in this content; this is explored in more detail in the next section. These gendered dynamics are important to attend to, as they illustrate potentially impactful points to interrupt drivers of gendered violence.

Key finding 2: Exposure to gender norms, stereotypes and misogyny across online platforms may influence young people’s behaviours in the classroom

This section discusses: the intersection between gender norms and children’s online activity; the influence of gendered online lives on their offline lives; and the role of social media algorithms in exposing children to harmful messages and exacerbating resistance to prevention work. This includes misogynistic attitudes and behaviours, and content that is not necessarily sought out by users but served to children and young people via algorithms.

The educators reflected that many children and young people appear to have a limited understanding of just how influential algorithms are in shaping the content they are exposed to online, even where students may believe themselves to have a sophisticated, technical understanding of how algorithms are deployed by different online platforms. Participants discussed how, together, these dynamics can inhibit children’s and young people’s critical reflection about content they consume and how it comes to them, sometimes creating a false sense of control.  

Children’s online activity is driven by, and reproduces, gender stereotypes and gendered patterns of behaviour

Almost all children are socialised into a gender binary from birth, even as recognition of a more representative spectrum of gender identity and less restrictive norms and expectations about gender roles have become more widespread (73). The focus group participants shared how they believe the same patterns of socialisation are replicated and may even be amplified in children’s and young people’s consumption and engagement with online content and social spaces.  

How children talk about gender changes as they age

When asked about how children typically discuss gender in the classroom, the educators observed that younger children (up until years 3–4, ages 8–10) often held seemingly contradictory attitudes that simultaneously supported and challenged gender stereotypes. These children would often voice opposition to certain gender stereotypes, and name them as both harmful and outdated, while also adamantly reinforcing other gender norms and stereotypes.  

I do think there’s also a bit of a dissonance experienced by those young people; they’re capable of saying, ‘No, no, no, boys can wear whatever they want. Girls can wear whatever they want. Girls can play sport. Boys can play with Barbies’, and they acknowledge that those gender stereotypes are harmful, that they exist. They typically name that they’re outdated ideas from the past or ‘the olden days’. And yet in those same classrooms, they’ll also share [more stereotypical] perspectives where they’re like, ‘Oh no, but the girl has to be the one who looks after the boy’, or ‘Girls have to be skinny’ or ‘Boys have to be the one in control’, so they still have those ideas, while also resisting them.

The educators observed that attitudes toward gender seemed to be more aligned with traditional norms among older age groups, from years 4, 5 and 6 (ages 9–12). This is consistent with research that shows that endorsement of regressive gender stereotypes and roles increases as children age (74).  

Dynamics of gendered engagement with online spaces and content

Research also shows that children’s play with games and toys – an important part of their learning and socialisation – is ‘highly conditioned by gender stereotypes’ (74). Patterns of play, including the types of toys and games as well as the dynamics of such games (e.g. caring actions or competitiveness), are typically driven by, and reproduce, such stereotypes. This pattern of differentiated play appears to extend to the online world as technologies for online gaming, socialisation and play have developed.

Educators in the focus groups observed that there were gender differences in the kinds of content that boys and girls talked about interacting with. They observed that while younger children appeared to have more homogenous online activities (e.g. YouTube Kids), their online activities appear to become increasingly gendered over time. Years 3–4 (children aged 8–10) appeared to be the developmental period when gendered differences in online activity became more pronounced. The focus group participants reflected that, from this stage of primary school, girls are on social media more frequently, whereas boys are more frequently engaged in online gaming and adjacent content such as watching gaming streams (footnote 2).

The educators suggested that these gendered differences in the ways children consume content and socialise online also translate into children’s and young people’s aspirations for financial success and fame online. They shared that, in their experience, boys were more likely to talk about streaming games on YouTube and Twitch and aspiring to be a successful gamer or YouTuber. In contrast, girls were generally more likely to discuss aspiring to have a large following on social media apps such as Instagram and TikTok. This gendered pattern of online activity is reflected in other research, which indicates that girls are more likely to report having used social media sites such as TikTok, Instagram or Pinterest at 10–12 years old, while boys are more likely to report use of gaming sites such as Steam (a gaming platform with social media features), Reddit (a discussion forum platform for a wide range of topics including gaming), or Twitch (a video live-streaming platform) (75).

Online gaming central to socialisation but can be avenue to harmful messages

Gaming is a popular form of socialisation, entertainment and play for many young people. Its popularity may partly reflect that parents are more attuned to the risks associated with social media use than to those associated with gaming.

The focus group participants shared that primary school children are using online games and adjacent platforms such as YouTube and Twitch as social spaces to connect and socialise with peers and other users. This widespread use of online gaming is consistent with the eSafety Commissioner’s findings that 89% of young people aged 8–17 played games online, with most reporting that gaming was a fun and positive experience (76).

The main thing that I see upper primary school kids connecting on is games and YouTube, video games and, like, Roblox. Their main media that we talk about is gaming.  

Educators in the focus groups observed how children were often excited to engage with them about their gaming experiences, as some had had their creativity and skill disregarded by the adults in their lives. This reflects other research that shows that the majority of young people (58%) think their parents have negative perceptions about gaming, and young people want adults to better understand the positive outcomes of gaming, such as creativity, having fun and connection with others (76).

I think also, because things like Minecraft … are genuinely games that require skill and talent, and you could do incredible things on it, and for a lot of kids who are really good at it and who are really creative on it, they’ve maybe had that creativity and that knowledge dismissed by other adults for so long that [they’re] just really excited that someone’s interested in it.

The participants also thought it important to acknowledge the strengths of gaming spaces: they are not only socially important for children but also places where children can express creativity and exercise skill. Research has found that most children and young people who play games want the adults in their lives to play games with them and better understand why games are a positive part of their online lives (70).

Research shows that many young people, especially boys, use their knowledge or prowess in a specific game as a form of social currency, as well as a form of social cohesion with their peers (77). This often happens online, within the ecosystems that surround gaming, such as YouTube videos and other gaming streams, blogs, forums and other platforms. These ecosystems enable players to share skills, and they play a critical role in identity formation and social cohesion by fostering a sense of belonging (77). Educators in the focus groups indicated that young people were heavily engaged in these ecosystems, with primary-school-aged children frequently sharing, in discussions of their online behaviours, that they were content creators. We further discuss the risks of children engaging in this form of content creation in the next key theme.  

[In] primary school, especially, they’re obsessed with watching other people game, which is bizarre to me. A lot of the time in year 6, they’ve started their own accounts on YouTube, where they’re filming themselves doing games, and other people will be watching them and they’ll have 200 subscribers. So I think gaming people and influencers on YouTube seem to have massive influence.  

While gaming forms a large part of young people’s online entertainment and socialisation, especially for boys, it can also expose them to manosphere content, in particular, harmful misogynistic messaging (78). Some online multiplayer games have developed into pathways to increased exposure to manosphere and violent extremist content. Frequently, this occurs outside of a game itself, in the communities that surround gaming, such as through streaming and online communication platforms such as Discord. For young children and particularly boys, these communities are entry points into misogynistic, racist and other extremist ideologies (79).  

At the time of writing, the extent to which the Social Media Minimum Age Act, to be implemented in Australia from 10 December 2025, will affect these gaming-adjacent spaces is unknown. Online multiplayer games and gaming-adjacent platforms and services might be directly impacted by the Act, in the same way as social media platforms such as TikTok and Instagram. Websites for streaming games, such as Twitch and YouTube, will ban under-16s from holding an account, yet content that does not require the user to log in will remain accessible to those under 16. Standalone messaging apps, including Discord, and some games, such as Roblox, may not be entirely exempt from the ban. This means that young people may still be exposed to extreme and harmful ideology, signalling the limits of the legislation’s preventative impact.

Gendered online lives influence children’s language

The educators discussed how they observed stark gendered differences in both the online content children discussed and the language they used in the classroom. They reported that girls and boys often repeated meme language they had heard in online content that was targeted towards, or popular within, peer groups of their own gender.

It’s like the literal content that those words are coming from is more geared one way or the other, like The Rizzler [the TikTok persona of a young boy] … that content is largely not of interest to young girls, and so the words that are used in that content, and the Skibidi Toilet [an absurdist YouTube video series] content, is not so much being consumed by the girls.  

Educators in the focus groups had had different experiences with how often children brought up famous manosphere influencers such as Andrew Tate in the classroom, but most agreed that it was very common for boys to use gendered meme language they had been exposed to through gaming-adjacent platforms such as Discord, Twitch and YouTube. Many of them had observed boys using manosphere language such as ‘sigma’ and ‘alpha’ – terms that have been popularised within manosphere culture as a way to class different types of men within a perceived hierarchy of hegemonic masculinity (80) – during classroom discussions.

I feel like when I was in school, we all kind of used the same meme language, but they, yeah, you’ve got ‘slay, baddie’ [from the girls] and then you’ve got ‘rizz Ohio sigma’ [from the boys], like completely different gendered relationships with memes.

However, observing the dynamics of the conversations these words were used within, the educators suggested that younger boys appeared to be using language they had heard in online gaming spaces without necessarily understanding what it meant or connecting it to an ideology or manosphere rhetoric.  

[Terms such as ‘alpha’,] to them, it’s literally just sounds … From my understanding … the boys that say the internet slang words to me more because they think it’s funny, humour to try and get me to say those things. It’s not in my understanding coming from, like, ‘I understand where alpha and sigma come from as manosphere terms, and this is as part of my quest to be a better man’.

The educators discussed the importance of adults educating themselves on different online trends circulating in these online spaces, to notice if and how boys are being influenced by that messaging.

And so I think there’s a need to stay on top of those different trends in terms of what they mean about the kids’ exposure, whether it is something that is age appropriate or isn’t age appropriate, whether it’s something that is a dangerous idea that’s come up … I think it’s why I struggle with ‘alpha’ and that type of thing that kept on coming up. Because I’m, like, is this like ‘alpha men’, Andrew Tate? … Is this a misogynistic thing, or is this a different thing? ’Cause ‘alpha’ exists in so many different spaces now.

While this language use may not be concerning at face value, research examining the role of these online gaming ecosystems found that they can be an entry point to radicalisation, including to misogynistic views: repeated exposure normalises violent extremist and misogynistic content to children, while the surrounding community provides a sense of social cohesion and belonging (79). The findings of our research reinforce the need to further investigate the role of language as an entry point to misogyny.

Resistance to gendered-violence prevention work is exacerbated by multiple factors

Considered together, the discussions presented so far in this section suggest that children’s and young people’s incidental and unquestioning engagement with harmful gender norms – in the context of socialising, creating and consuming content online – is having a significant impact on their social identities and attitudes. The focus group participants described how these influences are playing out in classrooms, noting that student resistance to conversations about gender inequality was common in their teaching of respectful relationships education (RRE). They described how pushback against discussing gender stereotypes tended to increase in later primary school, in years 5–6 (ages 10–12).

I think that from the early learning level up to about grade 3–4, you typically see children, young people, [are] resistant to gender stereotypes and really proud about their perspective around ‘anyone can be whatever they want’. And then you start to see shifts at that grade 4, 5, 6 level, where they become more rigid in those stereotypes or more vocal about what is expected of them and those around them.

This was echoed by another participant, who described what the increased resistance looked like for educators working in upper-primary classrooms:

[You] start to see that shift around year 6, when you can tell that there’s some resistance from boys in the room, and often I haven’t had as much direct pushback, often that resistance is quite quiet and there’s this, kind of like, ‘I don’t quite trust what you’re telling me about gender norms, but I’m not gonna start this into a debate’. There’s often just, they’re just not gonna engage with me and there’s little eye rolls about the gender equality stuff.

The educators discussed how they had experienced considerable resistance from secondary school boys while teaching RRE. They reflected that this resistance appeared to have shifted: from reversing the problem and framing men as the true victims of gender inequality, to outright denial that men are afforded any privilege or power at all.

What I find fascinating, in the past couple of years in secondary, which has been quite different [from] when I first was doing this work, boys in particular would say ‘I’m not talking about it at all, it’s all ridiculous’, or it was very much coming from a, ‘Well, I’m the real victim’. It was [a] very ‘them as a victim’ kind of conversation around gender, that they were actually in a powerless position because in heterosexual relationships, girls hold power over sex, therefore actually in gender, they [boys] are powerless. What I find interesting now, in the past couple years, is that they want to debate [the idea] that gender doesn’t create any power imbalance. So it’s not so much that they’re talking about them being victims anymore, they want to be very clear that gender doesn’t affect power or relationships, or it doesn’t give [any] advantage at all.

These tactics of resistance described by the educators are characteristic of denial of gender inequality, one of the most common forms of backlash and resistance (81). Educators’ observations that boys demonstrated considerably more resistance to RRE are consistent with findings from the National Community Attitudes towards Violence Against Women Survey (NCAS), which shows that boys and young men aged 16–24 have significantly poorer attitudes towards gender equality compared with girls and young women the same age (82).

The educators reported that some of this resistance seemed to be directly driven by content boys had seen online. Children and young people appear to be exposed to content that drives or reinforces gendered violence and resistance to prevention activities, including RRE. This includes content that reinforces gender stereotypes, promotes misogyny and excuses or endorses gendered violence. Some content makes light of sexual violence or sexism and is generally introduced into classroom discussions to derail the conversation. Other content feeds more direct backlash against gendered-violence prevention, for example by reinforcing sexual violence myths.

Influence of Depp v Heard celebrity defamation trial

Educators also shared that students’ exposure to victim-blaming discourses about high-profile cases of intimate partner violence made teaching RRE difficult. They discussed how, when asking questions about safe and equitable online relationships, students regularly wanted to discuss cases of high-profile, controversial figures accused of gendered violence, and other prominent media cases concerning sexual violence, domestic violence and child sexual abuse. The educators observed that children were informed of these cases through social media, peers or at home, as opposed to through traditional media outlets.  

Several educators spoke to the significant impact that the widespread coverage of a defamation trial between actors Johnny Depp and Amber Heard in 2022 (footnote 3) had on their capacity to teach RRE. The case was livestreamed on social media and promoted by algorithms that prioritise polarising content to generate views (and therefore revenue). This positioned the trial as fodder for misogynistic, alt-right and manosphere social media users, who sought to shape public opinion through disinformation (83). At the time, educators stated, the trial was frequently raised by children and young people during RRE classes to ‘disprove’ or deny discussions of gender inequalities and reinforce myths and misconceptions about gendered violence.

That snippet of time [during the Depp v Heard trial] made teaching consent impossible. It didn’t matter how much you tried to ground the conversation in empathy, how much you tried to push them to go further. It just gave permission for those loud voices in the room to be really dominant about men’s rights and about masculinity in particular, and it just drove every single conversation, and the really hard part about it was it was being colluded by the teachers in the room so often.

Influence of teacher attitudes on resistance to RRE

The educators noted that the classroom teachers also used critiques of Amber Heard to challenge the positions of the Body Safety Australia staff, encouraging resistance from students. Online commentary and reporting about the case seemed to embolden some teachers to reject aspects of the RRE curriculum about unpacking power imbalances, privilege, intimate partner violence and inequality.  

Sometimes the teacher will want to name, ‘But what about false rape allegations?’, and they would be introducing [that idea] into the classroom. And then [the teachers] were literally [asking] ‘What about Johnny Depp and Amber Heard?’ Like they wanted to challenge us.

When teachers introduced these positions, they lent credibility to disinformation about the believability of people who have experienced violence, and falsely equated the reactive use of force by a person being abused by a primary aggressor with reciprocal violence. This made it harder for the RRE educators to disrupt the sexist commentary that children were seeing online about the trial. It could also, the educators suggested, make it more difficult to help students, and particularly boys, to critically reflect on the harms of other misogynistic online material.

Boys’ consumption of manosphere content makes peers and teachers feel unsafe

The educators discussed the impact that boys’ consumption of manosphere content – and their translation of that content into their classroom attitudes and behaviour – has on the girls who are their peers. They shared that girls often do not feel comfortable raising their concerns and feelings of discomfort or unsafety resulting from sexism and misogyny in front of the boys who are their peers.

It [manosphere content] comes up very heavily from girls as a massive point of anger and frustration at their peers. But they won’t often name that until the boys aren’t in the room.  

This is supported by research showing how manosphere content and associated male supremacist ideologies have infiltrated Australian classrooms (36). In interviews with women working as teachers, researchers found that this content has emboldened boys to be openly misogynistic and to sexually harass their peers and teachers, impacting the psychological and physical safety of female students and women teachers; in some cases, it has caused teachers to resign and girls to withdraw from classes (36).

This situation presented a key challenge for the RRE educators: how to navigate this resistance safely, when and how to address it directly, and how best to minimise the potential for harm to all students in the room. This may include young people who are trans and gender diverse, or young people who have experienced sexual violence and harm.

Yeah, it’s even the ‘There are only two genders’ kind of stuff, where it’s, like, those are obviously the ideas that they’ve seen online, and then you have to navigate holding that conversation and getting them where you need them to go, acknowledging that there’s also queer kids, trans kids in the room.

Children appear to have limited knowledge of how social media algorithms shape their online experiences

The focus group participants suggested that many children and young people overestimate their understanding of how algorithms work and the level of control they have over the content different platforms serve them. This can make it challenging to engage students in critical reflection about how little choice they actually exercise as they navigate online spaces.

The educators observed that, in general, children and young people appeared to have some understanding of algorithmic curation – how social media algorithms influence the content they consume. They reported that children seemed to know that algorithms on social media platforms will circulate content that is similar to content they had previously viewed, liked, shared and commented on.  

They [secondary school students] know about it, on the kind of surface level of ‘If I watch lots of footy videos, I’ll get lots of footy videos’.

The older they get, they have maybe more of an understanding of the algorithm. This idea of, ‘Oh, I’m building my algorithm’.

This is consistent with the findings of several studies that indicate young people are broadly aware of how their interaction with social media content influences the frequency of being shown this content (27, 84). However, the educators suggested that some young people demonstrated an overconfidence in their ability to influence, and therefore be less susceptible to, these algorithms.

They sometimes talk about, ‘I understand the algorithm; therefore, it doesn’t affect me, because I get it’, [or] ‘I know how the algorithm works; therefore, I’m smarter than the algorithm’.

This appears to be a common phenomenon, with research suggesting that many young people use platform features such as filtering, liking and sharing to exercise their ‘algorithmic power’ and ‘train’ their algorithm to show them more of the content they want to see (84). This is supported by Project Rockit’s research, which found that 60% of young people surveyed felt in control of the content they see online (27). Others argue it is important to consider that platform developers may build these features into a platform’s design to give users a false sense of control over their online experience (84).

The educators observed that young people seemed to be less aware that social media algorithms actively push content to them based on their gender, regardless of whether they interact with it. For girls, this was often content about beauty standards.

They [girls] kind of think … that it’s not something that impacts them in that space … They’d be like, ‘Well, I can just block stuff I don’t like’, and so the idea is that you just, if you don’t like something, you can get rid of it … There wasn’t an understanding of algorithms except for when things become about beauty standards, and everyone’s kind of like, ‘Ohh yeah, we [do have] those beauty standards [pushed] on us.’

Similarly, boys typically did not understand that algorithms push deliberately polarising content, such as manosphere content, to increase engagement.  

They are generally across the board pretty shocked when you start talking about manosphere stuff, and how if you’re watching football content and gaming content, all of a sudden, Andrew Tate type stuff is there; that is often a surprise to them.

Research shows that controversial and radical content, particularly manosphere content, is served to boys who have viewed more innocuous general-interest material such as gaming videos, sports and mental health content (39). Educators in the focus groups noted that when they named this in RRE sessions, it was novel information for young people. One educator noted that they had observed a sense of relief in some boys when this was named. Some of the boys seemed to blame themselves when misogynistic content was served to them and felt a great deal of shame as a result. The educators had heard some boys share that they believed there was something intrinsically wrong with them that was leading to this content appearing in their feeds.

When you start talking about the manosphere stuff, interestingly, you do get some responses by boys in the classroom … who kind of take a deep breath once you name it, that it’s algorithm based, and they’re like, ‘I thought I was the problem’ … And they were like, ‘I’m not, I’m not trying to get that content, but it just keeps coming up’ kind of vibe. Uh, and it’s like [they’re] carrying some sense of guilt or shame around [consuming] that content.

Once we talk about algorithms, it’s kind of an entry point for them to be like, ‘Ohh yeah … this is what I’ve seen and that’s why I saw it’ … Whereas they’re not willing to tell us to begin with that that content is in their feeds, because they think it’s their fault.

Consistent with extant research, the observations made in the focus groups highlight the need for digital literacy programs to take a more comprehensive approach to teaching young people about algorithms: digital media literacy education that addresses both the content and the design of social media platforms, and that can engage with and respond to the range of perspectives and experiences young people bring.

These gendered differences in how children and young people adopt language and concepts from the content that they consume online suggest several critical considerations for the primary prevention of gendered violence. First, they illustrate some of the ways that the gendered division in children’s online play and socialisation serves to reproduce gender stereotypes and gendered ways of behaving in the classroom. Second, they highlight the broad value of parents, carers and educators taking an interest in children’s online worlds, to both build connection between generations and ensure children and young people stay safe online. Children are often eager to talk about and share the games and activities they find exciting and interesting with the adults in their lives, which provides an opportunity for relationship building. Spending time playing online together also gives adults an opportunity to monitor and encourage critical reflection on harmful gender norms, including language children might be adopting because it is in the zeitgeist. While regulation of social media platforms is critical, the limits and gaps in these policies identified by participants in this study suggest that regulation is not a replacement for active teaching and parental and social correctives to harmful messages about gender.

Key finding 3: Young people’s attitudes about image sharing appear to be shaped by the combined influences of harmful gender norms and poor understanding of image autonomy, and the actions of parents and other adults around them

This section looks at educators’ observations of young people’s attitudes towards image sharing and towards peers who share intimate images, and explores how children’s perceptions of image-based harms, as discussed in the RRE classroom, are gendered. The section also discusses the role of adults in naming harms and in shaping children’s and young people’s perceptions of harm. We then discuss how this reinforces existing regressive attitudes towards those who have experienced sexual violence, and how young people apply these attitudes to image-based harms, including those enacted using generative AI. In doing so, we identify gaps in the prevention of TA-HSBs.

Gendered perceptions of online risks and image-based harms

The focus group participants talked about the gendered nature of how children perceive harmful or risky online behaviours. They reflected on how children’s initial responses to educator prompts about what ‘online safety’ looked like were often related to physiological and individual psychological factors, such as avoiding eyestrain or ‘addiction’ to a device or platform. However, when educators had the opportunity to explore different dimensions of online safety with students in more depth, gendered patterns of attitudes and behaviours emerged. For example, children in primary school often raised the risk of financial extortion or scamming (referred to by children as ‘catfishing’) as opposed to other forms of harm that are more likely to impact girls and women, such as image-based abuse in the context of intimate partner violence (85).  

[The boys are] big on the word ‘catfish’ as well. They’re always like, ‘They could be catfishing you! They could be scamming you for your money!’ And those are the two threats.

The educators reported that some boys talked in ways that indicated an inflated confidence in ‘outsmarting’ people online who might be trying to extort or exploit them, suggesting a poor understanding of the types of online harms that young children can experience.

I’m reminded of a classroom I had where there was, like, three or four ‘invincible’ boys, who during the whole online conversation were very, like, ‘I can handle anything’.

Some of the educators observed that boys were more likely than girls to talk with humour or pride about engaging in riskier behaviours in online spaces, such as sharing photos and videos of themselves to a public audience. For example, they reported that boys would sometimes boast to their teachers that they had hundreds of followers on YouTube, while girls were more likely to deliberately hide that they had a high follower count on social media platforms such as Instagram.  

I think girls tend to have a bit of more of an idea of their risk of sexualisation and sexual harm. Because they have experienced it, or they belong to a world where they are sexualised, and so their understanding of risk is a little bit more grounded in that … whereas the boys kind of laugh that off or joke about it all.

These observed differences are supported by research from the Children’s Commissioner for England, which found gender differences in how boys and girls worried about being targeted by people abusing deepfake AI technologies (63). It found that girls were acutely aware of the threat of these technologies and feared being targeted in much the same way they feared the threat of sexual violence in a public place. In contrast, boys needed to understand why someone would target them using such technologies (for example, as retaliation after a fight) in order to see them as a potential threat worth worrying about (63). These findings support the idea that girls are more likely to understand the connection between online and offline threats to their safety (63).

These gender differences in perceptions of online and offline harms appear to derive in part from differences in how physical and non-physical harms are perceived, and from the misinterpretation that abuse perpetrated ‘online’ does not occur ‘in real life’ (86). The educators discussed how boys in secondary school were more likely to downplay the impact of bullying and abuse that happened online.

I feel like boys generally are less aware or interested in the tangibility of the risks of online harms, because they’re not literally physical harm. And I feel girls are more aware of emotional harm, and the way that emotional harm physically harms them. Boys are like, ‘Well, it’s online, it can’t hurt me’, you know, ‘that bullying online is not real’, ‘No-one’s gonna punch me in the face, ’cause they’re online ... just log off. Just block, delete’, whereas the girls seem aware of the innate harms of non-physical, like, internet violence.

A similar pattern was observed among younger year groups. There appeared to be gendered differences in how younger boys and girls perceive the severity of physical versus emotional bullying, with girls more likely to identify that non-physical bullying is also hurtful. One participant reflected on a debate in a year 1 classroom about whether students thought a boy laughing at another boy using the toilet was an example of an unsafe behaviour.  

[The boys said things like] ‘Ohh, it’s not harm, because you’re not actually touching their body’, whereas the girls were like, ‘Yeah, but you’re teasing them and you’re being mean’. And … they’re, like, ‘But I’m not touching him. I’m not touching him’.  

Boys’ reportedly poor understanding of possible negative outcomes from non-physical harm, including those inflicted online, reflects broader community perceptions of violence and abuse. The latest NCAS found that, overall, young Australians are more likely to correctly identify physical forms of violence as violence against women or domestic violence compared to non-physical forms such as image-based and text-based abuse (82). Young women are more likely than young men to view these non-physical forms of abuse as violence (82).

These perceptions of online, offline, physical and non-physical harms impact how respectful relationships educators teach image autonomy and consent.  

I think that translates if we go into the secondary school space when we start having consent conversations, there’s an understanding of the harm of sexual violence when it’s contact violence. Yeah, but what that looks like in terms of online violence, whether that’s nudes being shared or sextortion experiences or editing images etc. The understanding of the harm of that looks very different.

The role of adults in actively shaping and reinforcing how children understand online harms and prevention

The educators discussed how, in their experience, parents may unconsciously perpetuate misconceptions about online harms, which can inhibit more productive conversations with their children around how to address or mitigate risk of harm. Several of these myths replicate longstanding, inaccurate ideas about who perpetrates sexual violence and child sexual abuse in general, such as that online grooming is always perpetrated by an adult stranger, or that sexual exploitation is not as much of an issue in Australia as it is elsewhere (87, 88).  

If they [children] talk about the risks of other people taking their image and doing nefarious things with that image, their conceptualisation of that risk is some stranger hiding in a bush taking photos of them. Not the idea of someone online coercing them or grooming them to take photos and upload photos or their peers editing their photos etc. It’s this kind of stranger danger mythology that translates into images.

The focus group participants observed that some children lack an understanding of why online safety rules exist in the first place. Unlike other types of safety where dangers are more actively named, such as road safety, many children believed they had online safety rules at home because their parents did not trust them or were ‘mean’. Participants’ observations of children’s ability to name online harms were mixed, and children rarely, if ever, named peer-enacted harms. Participants observed that when children and young people did acknowledge grooming behaviours from an adult, it was often framed as a stranger ‘acting suspicious’ online with no clear end goal. The participants reflected on how much more challenging it then is to introduce and discuss the prevalence of peer-enacted harms.

One of the most common safety practices children were able to name was an abstinence approach to technology in response to harm. The focus group participants reflected that the strategies children had been taught for dealing with online harms amounted to ‘block, delete and move on’. In reality, however, it is not as simple for children as just logging off, particularly if the harms they are experiencing are being enacted by their peers. Many young people who experience TA-HSBs also know the other young person offline, which impacts their capacity to seek help for the problem (60, 89).

Broadly, the focus groups agreed that children and young people were often concerned that an adult’s response to harms would be a form of punishment: having their devices removed or their access limited. Participants expressed that if a teacher attempted to police conversations about social media by deeming it inappropriate or something children should not be doing, it disrupted their capacity to have any further discussion about social media with the young people in that classroom. Educators in the focus groups linked this experience to concerns that the upcoming social media ‘delay’ legislation may compound the challenges of delivering prevention education by creating further barriers for young people seeking help for online harms.

They’re kind of excited to share their online world, because maybe they’re not used to adults caring about it at all. Which I think is a big part of why their online worlds are so misunderstood; it’s kind of a shutdown thing that adults don’t want to know about.

Double standards of image autonomy – victim blaming of girls who have experienced image-based abuse remains prevalent

The educators reflected on how the social norms that drive and reinforce gendered violence emerged in secondary school classroom conversations about image-based abuse. Students in these classrooms tended to have more open conversations about sexting and nudes, and to openly express victim-blaming attitudes, including views that minimised image-based harms. The focus group participants spoke to the misogyny expressed by students of all genders towards girls who had experienced image-based harms after consensually sharing nude images of themselves.

‘If she’s gonna be stupid enough to send me that image, then she deserves to suffer the consequences of that act’ was the wording that [a boy] used.

Victim-blaming attitudes appeared to be highly prevalent. This is consistent with findings from the NCAS that found that 18% of young people aged 16–24 agreed that ‘If a woman sends a naked picture to her partner, then she is partly responsible if he shares it without her permission’, and that young men were more likely than young women to hold these attitudes (82).

The educators noted that these victim-blaming attitudes led young people to actively resist messages in RRE about where and how those experiencing TA-HSBs can seek help, such as the Take It Down website, a free service that helps remove nude or sexually explicit photos and videos from the internet if they were taken before a person turned 18 (90). The educators observed that young people viewed this kind of resource as applying only to girls who had shared their image in the first place.

I get a lot of that from girls in high schools as well. I’ll be like, ‘OK, here’s a website that can help you take down images of you naked if you’re under 18’, and they’ll say, ‘Why did they send it in the first place?’, ‘If they just hadn’t sent it in the first place, we wouldn’t be in this situation’ … and the rest of the room is quite hesitant to defend themselves against that, because then you’re the kid that would have sent the nude. And so I think that this intense social shaming around that, it’s very girl-driven as well in my experience.

Educators in the focus groups noted significant victim-blaming attitudes at the secondary school level, presented as justification for a recipient using an image for malicious purposes. In contrast, the participants observed that conversations rarely examined boys’ culpability in sharing a nude image, whether of themselves or of someone else, without consent.

I think it was so, like it is such blinkers on, that it would be a girl sending a nude ... I think there isn’t as much an idea that there was a gendered element. It’s just that a girl sent an image, that’s slutty behaviour, like, ‘Don’t do that’ and so [they’re] like, ‘Why are you showing us this [Take It Down] website? Suffer the consequences’ … So there wasn’t even a reflection on the requesting of it, or the boy involved at all.

Similarly, the educators reflected that there appeared to be a gendered dimension to victim blaming when comparing the experiences of boys being ‘sextorted’ with those of girls having their nude images shared non-consensually.

I find it fascinating … the conversations we’re currently having in [the] sector around sextortion, and that that’s very much always driven by conversations around boys being catfished by people pretending to be women and sending photos and then being extorted out of money, and … even though there is shame that’s being played on because of sextortion … there’s not the, ‘Well, they’re silly sluts’ conversation ever happening there, [it’s] ‘They were tricked, they were manipulated.’  

This is supported by the literature, which shows that, in general, boys may be rewarded socially (through the pride and status they receive from other boys) for enacting TA-HSBs using images, while girls are often shamed for choosing to share nude images of themselves (60). This reflects a double standard regarding image autonomy. In essence, boys are celebrated by their peers for violating another person’s image autonomy, while girls are shamed by those same peers for practising their own image autonomy.

Some young people trivialise TA-HSBs enacted using generative AI

While the educators observed that the topic of TA-HSBs enacted using generative AI was not frequently discussed in the classroom, young people in secondary schools seemed, in general, to be broadly aware of the issue of AI-generated deepfakes. The educators observed that young people might name another school in their local area and claim that students there were creating deepfakes of each other. However, they often seemed reluctant to name deepfakes as something they had seen at their own school or had any personal experience with. Many of the educators in the focus groups speculated that the behaviour was happening more frequently than was openly discussed in the classroom.

There are also obviously distinct incidences around misogyny and gendered violence, and AI and image sharing that get blown up in the media... So then, when you come into other spaces where they are behaving in the same way and showing those same attitudes, they just immediately name drop the school [from media stories]. Almost as deflecting, like... ‘Well, they’re worse!’  

However, the educators observed that boys often trivialised the impact of TA-HSBs – involving both real photos and AI-generated images – by framing it as humorous or as not real harm. While they observed that it was not common for students to talk about having used AI apps to create nude images of people they know, there had been instances where boys joked about creating and sharing fake nude photos of their male peers, minimising the sexual nature of the behaviour. One participant reflected on how a group of male secondary school students had joked about creating a deepfake of a friend but were quick to defend their behaviour, saying they did it because it was ‘funny’ but that it was not ‘gay’ to make naked photos of another boy.  

Educators in the focus groups reported that some children justified their belief that generative AI deepfakes were harmless by pointing to their confidence in being able to identify that something was made using AI. In discussions of children’s and young people’s perceptions of AI, the focus group participants noted that deepfakes were typically discussed as humorous, rather than as violent or abusive.

When we tell them that it’s illegal to create fake nudes of your friends, and you see some of them being [awkward], because they’ve done it as a bit [of a joke], because they think it’s funny, and then they [say], ‘Well, you can’t tell that it’s real’, and it’s maybe a defensiveness thing because of again that tangibility thing. It’s not real harm, because it’s not a real photo.

Educators in the focus groups observed that some young people did not take TA-HSBs enacted using generative AI seriously because they believed that people ‘can tell it’s not real’. Using the perceived ‘fake’ appearance of generative AI to justify manufacturing harmful images is concerning, given that recent research has shown that most adults (66%) were unsure of their capacity to detect AI-generated content (91). This gap is likely to widen over time as generative AI becomes more sophisticated at producing realistic images.

They seem to be very adamant that like, ‘Oh, I can tell that’s [an AI-generated image] fake’ … They can tell it’s fake, and therefore it’s fine. They’re removed from the idea that creating a nude image of someone is bad morally, because ‘I can tell that this isn’t real because the hands are wrong.’

Moreover, research from Internet Matters indicates that whether an AI-generated nude image looks real or not is not the only factor that young people, especially girls, worry about when thinking about being targeted by these technologies (51). The study found that, in general, young people thought it would be worse to have deepfake nudes made and shared without their consent, compared to having a real nude image shared without consent. This was attributed to concerns about anonymity of the person creating and sharing the image and a lack of autonomy over the image being created, as well as fears that people would believe the image is real (51). That is, just because a deepfake might not look real does not mean it does not cause distress, anxiety and fear in the person experiencing image-based abuse.

Widespread use of generative AI makes image autonomy harder to teach

Regardless of their personal experience of observing children’s and young people’s discussions of generative AI, the focus group participants were largely concerned about the increasing normalisation of AI as light-hearted, frivolous and disconnected from harm. The extremely low effort and skill required to produce a generative AI image presents a significantly lower barrier to entry than earlier technologies, such as Photoshop, that may have been used for similar purposes in the past.

I think a lot of [my concern] is just how we’ve normalised AI as something so silly and so playful, and just you just play around with it, and you just kind of do silly little things. And I think, therefore, it’s a lot easier for kids to fall into, in my head, ‘Oh silly, look, naked, naked friends’, and like, the casualness and the ease with which we’ve just kind of accepted this.

The widespread use of AI has implications for embedding the concept of image autonomy in young people’s lives. The educators reflected on how the surge in AI-generated images makes teaching the concept of image autonomy increasingly difficult.

And so, at the same time as AI coming in, we’re so far away from understanding that if someone is included in an AI image in whatever capacity, that that’s also part of something that deserves autonomy and dignity. Like, we’re not even there with real pictures.

And where [do] we draw the line then, like, I think we [could] draw the line at, ‘Don’t make naked images of your friends’ … kind of like, this is illegal, you know, deepfakes are illegal … But if we actually wanna be embedding anything about autonomy over anything to do with your image and anything to do with your identity and who you are, then we have to get much earlier and be saying, ‘Don’t make images of your friends at all on AI’ and then we‘re going much earlier.

This highlights the importance of teaching image autonomy from a young age, prior to or alongside young people’s exposure to technologies for taking, creating, manipulating and sharing images. 
 

Footnotes

Findings and discussion footnotes
  1. A child influencer, colloquially referred to as a ‘kidfluencer’, is a child or young person under the age of 18 who has a large online following and features in child-centred social media content. These children rarely control the social media account. Parents typically have a critical role in creating content and encouraging their child to create and grow an online platform and following (70).

  2. Gaming streams are a form of online content where people broadcast themselves playing video games to an online audience, often live. 

  3. Depp v Heard was a defamation trial between formerly married actors Johnny Depp and Amber Heard that was broadcast live over social media. After Heard published an opinion piece in 2018 where she named herself as a survivor of domestic abuse, Depp sued for defamation and made counter-allegations of intimate partner violence against Heard. Heard became the target of considerable online misogynistic backlash; other commentary highlights that it is difficult to conclude from publicly available evidence shared during the defamation trials that Heard was the primary aggressor in her former relationship.