Take the recent discourse around the Diddy case. It exposed serious fault lines: how unchecked misogynoir, banter culture, and meme-driven abuse flourish even in spaces created by and for Black people. So we have to ask: are platforms and regulators really ready to deal with that complexity? Are AI tools and human moderators being properly trained to spot harmful narratives without reinforcing bias? Are we building systems that understand Black communities — or simply ones that monitor us? Until we stop treating Black women’s safety as a side note — both online and offline — we’ll keep mistaking control for care.
What do joy and justice look like online?
Joy and justice online happen when trust and safety are treated as business fundamentals — not afterthoughts. When users feel safe to express themselves, to be fully human, to make mistakes, to be inspired by healthy role models and uplifting conversation, they stay longer, spend more, and become brand advocates.
Platforms that optimise for time well spent rather than raw watch-hours tend to see higher retention and lower churn. Pinterest, for example, reduced self-harm searches by 80% after shifting to wellbeing-first metrics — and daily active users increased. The takeaway? Healthier feeds mean healthier audiences that stay.
In 2021, as Founder of Glitch, I partnered with BT Sport and EE to launch the award-winning Draw the Line campaign. We combined real-time abuse detection with practical user actions: Spot. Report. Support. The campaign generated nearly £1 million in earned media and led to a 25% drop in abusive tweets within 48 hours of launch.
Justice online means moderation that accounts for both context and ideology. AI models trained with dialect-rich datasets can reduce false positives for Black British vernacular — safeguarding freedom of expression while still tackling harmful content and negative stereotypes. I personally long for the day algorithms stop perpetuating misogynoir and misogynistic content — either by failing to take it down or, worse, amplifying the “strong, angry Black woman” trope. Balanced systems avoid over-policing marginalised communities and help brands steer clear of the reputational crises that follow when safety misfires.
How do we protect wellbeing?
Shift the success metrics. Track “positive engagement” like saves and meaningful comments, not just rage clicks.
Budget for care. Ring-fence at least 5% of campaign spend for safety tooling, moderator training, and community education.
Design deliberate pauses. Introduce friction points like daily scroll limits and wellbeing nudges that encourage rest without harming revenue.
Co-create with users. Bring those most affected by online harms into your product labs. Their insight reduces rework and reputational risk.
Audit. Publish. Improve. Treat wellbeing metrics like carbon data: disclose them, benchmark progress, and iterate transparently.
When brands treat digital wellbeing as a growth lever, joy and justice stop being nice-to-haves. They become strategic advantages — powering more resilient platforms, stronger user communities, and a healthier internet for the next generation.
What needs to change for women — especially Black and marginalised women — to participate as freely as men?
First, I think we need to be curious about the question itself. If the benchmark is to be as free as men, we’re aiming too low. Patriarchy doesn’t just restrict women; it also distorts how masculinity and gender-expansive identities are expressed online. It punishes vulnerability, narrows self-expression, and fuels harmful dynamics across the board.