Social media’s dangerous currents, by Julie Inman Grant, Australian eSafety Commissioner. (Extract from speech to the National Press Club, 24 June 2025):

"I’d like to touch on some remarkable changes we’ve seen the online world undergo, driven by rapid advances in technology, seismic shifts in user behaviour and, of course, the exponential rise of artificial intelligence. Just as AI has brought us much promise, it has also created much peril. And these harms aren’t just hypothetical - they are taking hold right now.

In February, eSafety put out its first Online Safety Advisory because we were so concerned with how rapidly children as young as 10 were being captivated by AI companions - in some instances, spending up to five hours per day conversing with sexualised chatbots. Schools reported to us that these children had been directed by their AI companions to engage in explicit and harmful sexual acts.

Further, not a week goes by without a deepfake image-based abuse crisis in one of Australia’s schools. Back in 2018, it would have taken hundreds of images, massive computing power and high levels of technical expertise to create a credible deepfake pornographic video. Today, the most common scenario involves harvesting a few images from social media and plugging them into a free nudifying app on a smartphone. And while the cost to the perpetrator may be nothing, the cost to the victim-survivor is lingering and incalculable.

And herein lies the perpetual challenge of an online safety regulator: trying simultaneously to fix the tech transgressions of the past and remediate the harms of today, while keeping a watchful gaze towards the threats of the future. There is little doubt the online world of today is far more powerful, more personalised and more deeply embedded in our everyday lives than ever before. It is also immeasurably more complex and arguably much wilder.
The ethos of moving fast and breaking things has been ratcheted up in the age of AI, heightening the risks and raising new ethical, regulatory and societal questions - as well as adding a layer of uncertainty about what even the near future might hold.

But behind all these changes, some things remain the same. Very few of these platforms and technologies were created with children in mind, or with safety as a primary goal. Today, safety by design is not the norm; it is the exception. While the tech industry continues to focus on driving greater engagement and profit, user safety is being demoted, deprecated or dumped altogether. So, while the tech industry regresses, we must continue to move forward.

The relationship between social media and children’s mental health is one of the most important conversations of our time. It naturally generates much debate and emotion. It is therefore important that we ground these discussions in evidence and prioritise the best interests of the child from the start - and, even more importantly, that we engage young Australians in these discussions throughout the policymaking and implementation process.

There is no question social media offers benefits and opportunities, including connection and belonging - and these are important digital rights we want to preserve. But we all know there is a darker side, including algorithmic manipulation; predatory design features such as streaks, constant notifications and endless scroll that encourage compulsive usage; and exposure to increasingly graphic and violent online content.

The potential risks to children of early exposure to social media are becoming clearer, and I have no doubt there are parents in this audience today who could share stories of how it has affected their own children and families. That is why today I am presenting, for the first time, some of our latest research, which reveals just how pervasive online harms have become for Australian children.
We surveyed more than 2,600 children aged 10 to 15 to understand the types of online harms they face and where these experiences are happening. Unsurprisingly, social media use in this age group is nearly ubiquitous, with 96% of children reporting they had used at least one social media platform.

Alarmingly, around 7 in 10 kids said they had encountered content associated with harm, including exposure to misogynistic or hateful material, dangerous online challenges, violent fight videos, and content promoting disordered eating. Children told us that 75% of this content was most recently encountered on social media. YouTube was the most frequently cited platform, with almost 4 in 10 children reporting exposure to content associated with harm there.

This comes as the New York Times reported earlier this month that YouTube had surreptitiously rolled back its content moderation processes to keep more harmful content on its platform, even when that content violates the company’s own policies. This underscores the challenge of evaluating a platform’s relative safety at a single point in time, particularly as we see platform after platform winding back their trust and safety teams and weakening policies designed to minimise harm, making these platforms ever more perilous for our children.

Perhaps the most troubling finding was that 1 in 7 children we surveyed reported experiencing grooming-like behaviour online from adults, or from other children at least four years older. This included being asked inappropriate questions or being asked to share nude images. Just over 60% of children most recently experienced grooming-like behaviour on social media, which highlights the intrinsic hazards of co-mingled platforms designed for adults but also inhabited by children.
Cyberbullying remains a persistent threat to young people, but it isn’t the sole domain of social media: while 36% of kids most recently experienced online abuse from their peers there, another 36% experienced online bullying on messaging apps and 26% through online gaming platforms. This demonstrates that this all-too-human behaviour can migrate to wherever kids are online.

What our research doesn’t show - but our investigative insights and reports from the public do - is how the tenor, tone and visceral impact of cyberbullying affecting children has changed and intensified. We have started issuing "end user notices" to Australians as young as 14 for hurling unrelenting rape and death threats at their female peers. Caustic language, like the acronym KYS - shorthand for "kill yourself" - is becoming more commonplace. We can all imagine the worst-case scenario when an already vulnerable child is targeted by a peer who doesn’t fully comprehend the power and impact of throwing those digital stones.

Sexual extortion is reaching crisis proportions, with eSafety seeing a 1,300% increase in reports from young adults and teens over the past three years. And our investigators have recently uncovered a worrying trend: a 60% surge over the past 18 months in reports of child sexual extortion targeting 13-15 year olds.

As I mentioned before, the rise of powerful, cheap and accessible AI models without built-in guardrails or age restrictions is a further hazard faced by our children today. Emotional attachment to AI companions is built in by design, using anthropomorphism to generate human-like responses and engineered sycophancy to provide constant affirmation and the feeling of deep connection. Lessons from overseas have highlighted tragic cases where these chatbots have engaged in quasi-romantic relationships with teens that have ended in suicide.
In the Character.AI wrongful death suit in the US, lawyers for the company effectively argued that the free-speech outputs of chatbots should be protected over the safety of children, clearly as a means of shielding the company from liability. Thankfully, the judge in this case rejected that argument - just as we should reject AI companions being released into the Australian wild without proper safeguards.

As noted earlier, the rise of so-called "declothing apps" - services that use generative AI to create pornography or ‘nudify’ images without effective controls - is tremendous cause for concern. There is no positive use case for these kinds of apps, and they are starting to wreak systemic damage on teenagers across Australia, mostly girls. eSafety has been actively engaging with educators, police, and the app makers and app stores themselves, and will be releasing deepfake incident management plans for schools this week as these harmful practices become more frequent and normalised.

What is important to underscore is that when either real or synthetic image-based abuse is reported to us, eSafety has a high success rate in getting this content taken down, and our investigators act quickly.

Our mandatory Phase 1 standards, which require the tech industry to do more to tackle the highest-harm online content such as child sexual abuse material, take effect this week and will help us force the purveyors and profiteers of these AI-powered nudifying models to prevent their misuse against children. Our second phase of codes will put in place protections for children from harmful material like pornography and will force providers of these AI chatbots to protect children from sexualised content.

Ultimately, this new legislation seeks to shift the burden of reducing harm away from parents and back onto the companies that own and run these platforms and profit from Australian children. We are treating Big Tech like the extractive industry it has become.
Australia is legitimately asking companies to provide the safety guardrails we expect from almost every other consumer-facing industry. Children have important digital rights: the right to participation, the right to dignity, the right to be free from online violence and, of course, the right to privacy."
World Whistleblower Day: The cost of exposing facts in the age of misinformation, by Transparency International (June 2025):

Whistleblowers are a powerful force for integrity and transparency. They bring to light hidden wrongdoing, expose abuse of power, and help hold institutions and individuals to account. But in today’s world, where misinformation and disinformation campaigns flourish, whistleblowers find themselves doubly vulnerable. They become targets of coordinated attacks designed to undermine their credibility, intimidate them into silence, and manipulate the narrative in the court of public opinion.

These attacks come in many forms. Sophisticated online smear campaigns amplify falsehoods and foster doubts about the motives and character of the whistleblower. Defamation lawsuits, often called strategic lawsuits against public participation (SLAPPs), are increasingly used to harass, intimidate and discredit those who speak up.

At the same time, from their unique vantage point within organisations, whistleblowers can cut through this fog of disinformation and expose not only fraud and abuse, but also the mechanisms by which misinformation is manufactured and deployed.

On World Whistleblower Day, we celebrate their indispensable role in strengthening integrity and upholding the public’s right to know. We also reflect on the growing need to protect these brave individuals from retaliation, online attacks, legal intimidation and even physical violence, because when whistleblowers are silenced, we all suffer a weakening of accountability, fairness, and trust in our institutions.

http://www.transparency.org/en/news/world-whistleblower-day-cost-of-exposing-facts-age-of-misinformation