Influence and inform…

The Misogyny & AI Network is demanding that harmful outputs from AI systems be addressed at their source. We want to see fair, representative datasets, clear and robust labelling frameworks, and models that undergo rigorous bias and harm testing before they reach the public. We recognise that AI mirrors societal attitudes, so we are also championing efforts to build inclusive, safe environments across childhood, education, and adulthood.

AI offers a powerful opportunity: for the first time, it allows us to quantify societal bias.
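
To make that claim concrete, the short sketch below (a rough illustration, not material from our briefings) computes one widely used fairness measure, the selection-rate or "disparate impact" ratio between two groups, over a small made-up set of automated hiring decisions. The records, group labels and the 0.8 benchmark mentioned in the comments are illustrative assumptions only.

# A minimal, illustrative sketch: quantifying one narrow kind of bias as a
# selection-rate ratio between groups. All records below are invented.
from collections import defaultdict

# Each record is (group, was_selected) for some automated decision,
# e.g. whether a CV-screening model shortlisted a candidate.
decisions = [
    ("women", True), ("women", False), ("women", False), ("women", False),
    ("men", True), ("men", True), ("men", False), ("men", False),
]

totals, selected = defaultdict(int), defaultdict(int)
for group, was_selected in decisions:
    totals[group] += 1
    selected[group] += was_selected

rates = {group: selected[group] / totals[group] for group in totals}
ratio = rates["women"] / rates["men"]  # selection-rate (disparate impact) ratio

print(rates)            # {'women': 0.25, 'men': 0.5}
print(round(ratio, 2))  # 0.5 -- well below the commonly cited 0.8 benchmark

Run over real model decisions before release, this kind of simple check is exactly the routine bias and harm testing we call for above.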

But this ability comes with serious consequences, particularly for people who are already marginalised or discriminated against.

That is why we advocate for AI technologies and companies that promote equity through inclusive policies and inclusive model design. Products must be safe by design, especially as AI becomes deeply embedded in everyday life across the UK. Addressing and patching harms reactively is not enough; these risks must be anticipated and designed against from the outset, with diverse voices guiding the process and with the most vulnerable at the centre.

We want AI that is safe by design, properly regulated, and held to a high standard of accountability. To achieve this, we support the work of organisations and experts across our network while also advancing our own legislative, regulatory, and advocacy efforts.

You can find our campaign’s public-facing briefings and articles below.

The Misogyny & AI Network logo: the symbol for woman, intertwined with a circuit against a purple circle, representing technology and feminism.
  • Briefing - Misogyny & AI Campaign

  • Briefing - "Nudification" software

  • Briefing - Biases in datasets and algorithms

  • Briefing - AI legislation & regulation

Inform

Research and scholarly articles

  • Benítez-Hidalgo et al. (2026), BMC Public Health
    A systematic review synthesising global evidence on how technology-facilitated sexual violence (including digital abuse) impacts women’s mental health, highlighting prevalence and long-term harms.

    https://link.springer.com/article/10.1186/s12889-026-26260-4

  • arXiv preprint (2026)
    Academic paper mapping an ecosystem of AI and related technologies that enable non-consensual intimate imagery (deepfake-style abuse), with case studies and intervention recommendations.

    https://arxiv.org/abs/2602.04759

  • He et al. (2026), arXiv preprint
    Research using participatory threat modelling to explore how digitally facilitated risks, including harassment, surveillance and AI-enabled threats, intertwine with women’s safety and privacy.

    https://arxiv.org/abs/2602.09256

  • Graham et al. (2026), ScienceDirect (Elsevier)
    A study investigating AI-generated intimate image abuse (AI-IBSA), its perpetration patterns, and how non-consensual image creation and sharing function in digital spaces.

    https://www.sciencedirect.com/science/article/pii/S0747563226000324

  • Vine.org (26 January 2026)
    A reputable evidence brief documenting how generative AI has amplified technology-facilitated violence (deepfakes, impersonation, harassment) against women and girls, and offering data on impacts and responses.

    https://vine.org.nz/news/new-evidence-brief-on-the-escalation-of-online-violence-against-women-in-the-public-sphere

  • UN Women knowledge paper (2025)
    Authoritative analysis exploring how AI technologies (deepfakes, automated abuse) intensify tech-facilitated violence and what systemic challenges this creates.

    https://www.unwomen.org/en/digital-library/publications/2025/12/how-ai-is-exacerbating-technology-facilitated-violence-against-women-and-girls

  • Sun et al. (2025), arXiv preprint
    Systematically detects explicit and implicit biases in foundation models across intersecting social attributes (gender, race, age), revealing how societal stereotypes are encoded in large AI models and proposing mitigation via adaptive logit adjustment.

    https://arxiv.org/abs/2501.10453

  • Duong & Conrad (2024), arXiv preprint
    Introduces novel discrimination measures and mitigation strategies for complex datasets with multiple protected attributes (e.g., sex, age, race), offering a pathway to de-biasing datasets used in high-stakes models.

    https://arxiv.org/abs/2405.19300

  • Wyer & Black (2025), AI & Ethics
    Analyses how large language models like GPT-3 can perpetuate sexualized violence against women via biased outputs and harmful associations, framing AI discrimination as a form of technology-facilitated gendered harm.

    https://link.springer.com/article/10.1007/s43681-024-00641-0

  • Sony et al. (2025), SSAHO / ScienceDirect
    Investigates how AI recruitment tools embed bias that disproportionately affects women, non-binary individuals, racial minorities, and disabled candidates, and highlights regulatory gaps.

    https://www.sciencedirect.com/science/article/pii/S2590291125008113

  • Frontiers in Big Data (2024)
    Discusses how machine learning in healthcare can reinforce systemic sex and gender biases—impacting diagnostics, personalized care, and health outcomes—without robust fairness interventions.

    https://www.frontiersin.org/journals/big-data/articles/10.3389/fdata.2024.1436019/full

  • Sideri & Gritzalis (2025), AI & Ethics
    Proposes policy frameworks to integrate gender equality into the EU AI Act, aiming to proactively prevent AI-driven discrimination before harms occur.

    https://link.springer.com/article/10.1007/s44206-025-00173-y

  • Rickman (2025), Springer
    Study evaluating gender bias in summaries of long-term care records generated with two state-of-the-art, open-source LLMs released in 2024: Meta's Llama 3 and Google's Gemma.

    https://link.springer.com/article/10.1186/s12911-025-03118-0

In the news

  • The domestic abuse charity Refuge reports a sharp rise in perpetrators using AI and everyday digital technology (such as smartwatches, fitness trackers, smart home devices and impersonation apps) to stalk, monitor and control women, often from a distance, creating persistent fear and intrusion into victims’ lives. The charity says this type of technology-facilitated abuse is increasing rapidly, and it calls on tech companies to design products with safety in mind and for greater awareness and support for survivors.

    https://www.theguardian.com/society/2026/jan/30/abusers-using-ai-and-digital-tech-to-attack-and-control-women-charity-warns

  • A Guardian investigation found that major tech companies’ AI moderation tools, used by platforms like Instagram and LinkedIn, routinely label ordinary images of women as “sexually suggestive” or “explicit,” even when the photos show everyday scenes such as exercising, pregnancy, breastfeeding, or medical demonstrations. Comparable images of men are rated far less “racy.”

    These biased AI systems often trigger shadowbanning, reducing the visibility of posts without warning users. As a result, women’s content is disproportionately suppressed, harming female creators, small businesses, and professionals who rely on social media for income and visibility.

    Experts say the problem stems from biased training data and subjective concepts like “raciness,” which are often defined by narrow cultural perspectives. Creators report lost income, censorship of artistic work, and a lack of transparency from platforms.

    The investigation highlights a broader issue: AI tools meant to keep users safe are reinforcing gender stereotypes and marginalizing women’s voices online.

    https://www.theguardian.com/technology/2023/feb/08/biased-ai-algorithms-racy-women-bodies

  • A survey of 4,000 adults aged 18–34 found that young people rely heavily on social media for health information, yet many say women’s health content is harder to access. Over a third of 18–24-year-olds report difficulty finding information on women’s health, and many believe platforms like Instagram and Facebook frequently shadowban posts about periods, menopause, and female anatomy by mistakenly flagging them as adult content.

    Although most adults support restricting genuinely harmful or explicit material, nearly half say medically accurate terms such as “vagina” or “period” should never be censored. Campaigners, including Essity, CensHERship, and the Period Equity Alliance, are calling on social media companies to stop the automated suppression of educational women’s health posts and to work with government on solutions.

    Creators and brands describe widespread censorship, reduced visibility, and lost engagement when posting about menstruation or female health issues. Advocates argue that this “broken system” reinforces taboos and deprives young people of essential, potentially lifesaving information.

    https://www.essity.com/company/essity-in-the-world/uk-roi/news/shadow-banning/ 

  • Does it pass the test? As we await the Violence Against Women and Girls (VAWG) Strategy, a coalition of VAWG organisations and experts has set out five key tests.

    1. Primary prevention: addressing societal and institutional attitudes and practices.

    2. All forms of VAWG: addressing and funding measurable reductions in all forms of VAWG.

    3. Inequalities and discrimination against marginalised survivors: policies and targets that address the disproportionate victimisation of minority groups across the UK.

    4. Increased multi-year funding for specialist VAWG services: a secure and sustainable national infrastructure of specialist VAWG organisations to provide adequate support for women and girls across all areas of VAWG.

    5. Cross-departmental commitments with oversight and evaluation: measured, well-implemented collaboration between departments to ensure all policies and areas aim to reduce VAWG.

    https://www.endviolenceagainstwomen.org.uk/wp-content/uploads/2025/09/5-key-tests-for-the-VAWG-strategy-080925.pdf?utm_source=hs_email&utm_medium=email&_hsenc=p2ANqtz-8bXLQ9Uml4zzef-o8cf-9BPjtk1F-0nYA8VY5XI-9dW_PJWqokjbyTuL1dj4uzG29w5PHn 

  • In a discussion with UN Women, responsible AI expert Zinnya del Villar explains how AI systems trained on biased data can reinforce discrimination against women and girls—from unfair hiring tools to misdiagnoses in healthcare and misidentification of women of colour.

    Del Villar highlights that fixing these issues starts with diverse training data, inclusive development teams, and greater transparency. She also notes that AI can be used to uncover gender pay gaps, improve fair credit scoring, detect online abuse, and support survivors through safety tools and chatbots.

    To build inclusive AI, she recommends five steps: diverse data, transparent algorithms, diverse teams, strong ethical frameworks, and gender-responsive policies.

    https://www.unwomen.org/en/news-stories/interview/2025/02/how-ai-reinforces-gender-bias-and-what-we-can-do-about-it#:~:text=This%20is%20called%20AI%20gender,data%20it%20was%20trained%20on

  • The Regulations to Protect Against Employment Discrimination Related to Artificial Intelligence went into effect in California on October 1, 2025. California’s new regulations make it unlawful for employers to use AI systems that discriminate in hiring or employment decisions. They also require transparency and record-keeping for automated decision tools.

    In the UK, similar protections could be introduced through employment and equality legislation to:

    - Prevent discrimination arising from AI-driven recruitment and management tools.

    - Require transparency when automated systems are used in hiring or promotion.

    - Mandate regular bias testing and record-keeping to ensure accountability.

    As AI becomes more embedded in workplaces, clear regulation would help ensure its use is fair, transparent, and inclusive for all workers.

    https://www.dataguidance.com/news/california-regulations-protecting-against-ai?utm_source=hs_email&utm_medium=email&_hsenc=p2ANqtz-8bXLQ9Uml4zzef-o8cf-9BPjtk1F-0nYA8VY5XI-9dW_PJWqokjbyTuL1dj4uzG29w5PHn 

  • In her new book Man Up: The New Misogyny and the Rise of Violent Extremism, sociologist Cynthia Miller-Idriss exposes how online misogyny is fuelling real-world violence across the West. In conversation with PEN America, she explains how digital abuse—including doxing, AI-generated sexual images, and deepfakes—causes real psychological and physical harm, normalizes hatred, and suppresses free expression, especially for women, LGBTQ+ people, and other marginalized groups.

    Miller-Idriss warns that social media platforms’ rollback of protections against misogyny and anti-LGBTQ+ hate enables harassment to flourish. She also highlights how book bans and attacks on gender- and race-related curricula form part of a broader pattern of “erasure” that reinforces extremist ideologies.

    She outlines warning signs that young men may be engaging with misogynistic or extremist content and stresses the need for parents, educators, and mentors to talk openly with boys about masculinity, online influence, and digital literacy. Positive role models and counterspeech, she argues, are essential tools for resisting the normalization of online hate and preventing its violent offline consequences.

    https://pen.org/cynthia-miller-idriss-the-pen-ten-interview/

  • Former OpenAI product safety lead Steven Adler argues that OpenAI cannot be trusted when it claims ChatGPT is now safe enough to permit erotic content for verified adults. Adler explains that during his time at OpenAI, the company banned erotic use of its models after finding that users were forming intense emotional attachments to chatbots, and that the AI sometimes generated sexually explicit content involving minors or coercion. The company, he says, lacked reliable tools to monitor or control these risks.

    Adler criticises the company’s current approach to safety, pointing to recent incidents where ChatGPT reinforced users’ delusions, showed sycophantic behaviour, and was linked to real-world mental health crises, including suicides. He argues that despite acknowledging “serious mental health issues” among users, OpenAI has offered no evidence that these problems have been solved before lifting restrictions on erotica.

    He calls for OpenAI to publish regular transparency reports on mental-health-related harms, similar to other tech platforms, and warns that competitive pressure in the AI industry is driving companies to weaken safety standards. Adler concludes that if OpenAI wants to be trusted with increasingly powerful AI, it must demonstrate responsible behaviour today rather than promise safety later.

    https://www.nytimes.com/2025/10/28/opinion/openai-chatgpt-safety.html?utm_source=hs_email&utm_medium=email&_hsenc=p2ANqtz-8bXLQ9Uml4zzef-o8cf-9BPjtk1F-0nYA8VY5XI-9dW_PJWqokjbyTuL1dj4uzG29w5PHn