
Thorn

Non-profit Organizations

Manhattan Beach, CA · 33,759 followers

About us

Our children are growing up in a digital world. A dramatic shift in how and where young people connect, learn, and play has left parents and communities feeling helpless and uncertain about how to protect them. Predators are taking advantage of this new reality, exploiting children in digital spaces and creating new vulnerabilities for them. This global public health crisis has devastating impacts on children everywhere, while parents and caregivers struggle to navigate the digital landscape and keep their kids safe.

At Thorn, we envision a world where cutting-edge technology is proactively developed to defend children. We are an innovative nonprofit that transforms how children are protected from sexual abuse and exploitation in the digital age through technology, original research, and collaborative partnerships. We empower the platforms and people who can protect children with the tools and resources to do so, using the power of technology to secure every kid’s right to childhood joy.

Industry
Non-profit Organizations
Company size
51-200 employees
Headquarters
Manhattan Beach, CA
Type
Nonprofit
Founded
2012
Specialties
technology innovation to combat child sexual exploitation


Updates


We’re seeing important steps forward in the EU AI Act Code of Practice, and still, there’s more work to do. The finalized Code is here, and we’re proud that our own Dr. Rebecca Portnoff contributed to this document. It marks key progress in protecting children from AI-generated child sexual abuse material (CSAM), although some gaps remain.

What’s strong:
✅ CSAM flagged as a possible systemic risk
✅ Model docs must address CSAM in training data
✅ Adversarial fine-tuning risks acknowledged
✅ Clear examples of safety practices
✅ Reporting covers serious harms to physical and mental health, like sexual extortion

Where we’d like to see more iteration:
⚠️ Data cleaning still optional
⚠️ Developers set their own risk thresholds
⚠️ GPAI model definition may miss real-world threats
⚠️ Training data filtering info not shared downstream*

(*The EU AI Act requires that companies publicly disclose their high-level strategy for filtering illegal material from training datasets, specifically flagging CSAM.)

It’s a meaningful milestone, yet also a foundation we must keep building on.

EU AI Providers, Take Note. EU AI Deployers, Pay Attention.

From 2 August 2025, a new transparency rule under the EU AI Act goes live. If you're releasing a general-purpose AI model (like GPT-4, LLaMA, Claude, Gemini, or Mistral) in Europe, you’ll need to publish a public summary of your training data. The European Commission has issued a mandatory template, and it spells out exactly what’s required.

Here’s what providers are expected to disclose:
• The model name, version, and training cut-off date
• The type and size of data by modality (text, image, audio, video)
• The major datasets used: public, private, licensed, scraped, synthetic
• The top 10% of websites scraped (or 1,000 domains for SMEs)
• Whether user data was used, and from which services
• Whether synthetic data was generated by other models
• Measures taken to exclude illegal content
• How copyright opt-outs (like robots.txt) were respected

You don’t have to name every file, but your summary must be comprehensive enough for rightsholders, regulators, and researchers to trace where your model’s "knowledge" came from.

And if you’re building on top of someone else’s foundation model (fine-tuning, aligning, distilling), it doesn’t matter whether you’re an established tech firm or a scrappy startup. If you’re putting that system on the EU market, congratulations: you’re a provider now. You must disclose the content used in your modification, and link to the original model’s summary if it exists. The AI Office expects clarity at every layer of the stack.

If you’re a deployer (a school, company, or public sector body purchasing or using AI), you should be able to:
• Ask your AI vendor: where’s your Article 53 summary?
• Expect it to be publicly posted on their official website and distribution channels
• Check for disclosures on scraped domains, synthetic data, and copyright opt-out compliance
• Demand clarity and accountable AI

The era of “don’t ask, don’t tell” in AI is ending. The European Union just made that official.

*** This post is for informational purposes only and does not constitute legal advice. If you are a provider or deployer of AI systems, consult your compliance and legal counsel to ensure compliance with the EU AI Act and related obligations.
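For illustration, here is a minimal sketch in Python of the kinds of fields such a summary covers. The field names and values are hypothetical assumptions chosen for readability; the authoritative structure is the European Commission’s template itself.

# Illustrative sketch of an Article 53 training-data summary.
# All field names and values are hypothetical; the Commission's
# mandatory template is the authoritative structure.
article_53_summary = {
    "model": {"name": "example-model", "version": "1.0",
              "training_cutoff": "2025-03-01"},
    "data_by_modality": {"text": "12 TB", "image": "4 TB"},
    "dataset_categories": ["public", "licensed", "scraped", "synthetic"],
    "top_scraped_domains": ["example.org", "example.com"],  # top 10%, or 1,000 domains for SMEs
    "user_data": {"used": True, "services": ["example-service"]},
    "synthetic_data_from_other_models": True,
    "illegal_content_measures": "high-level filtering strategy, incl. CSAM",
    "copyright_opt_outs": "robots.txt and equivalent signals honored",
    "upstream_model_summary_url": None,  # link here if you modified someone else's model
}

# A deployer's due-diligence check could start as simply as this:
for field in ("top_scraped_domains", "illegal_content_measures", "copyright_opt_outs"):
    assert field in article_53_summary, f"missing disclosure: {field}"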


    Empowering, not adversarial. That's how Del Harvey recommends trust and safety teams approach working with product teams. This great clip from her Safe Space interview shows how Del applies this collaborative mindset. Instead of immediately saying 'no' to something like the hypothetical 'Chainsaw Face Ripper 3000,' she asks: 'What are you hoping to achieve?' This simple reframe turns potential conflicts into collaborative problem-solving sessions. Del's experience and philosophy make her an essential voice in trust and safety, and we're so glad she took the time to chat with us.


Grateful to Camille François, Juliet Shen, Yoel Roth, Samantha Lai, and Mariel Povolny for their insightful work on Four Functional Quadrants for Trust & Safety Tools: Detection, Investigation, Review & Enforcement (DIRE). This paper underscores an often-overlooked truth: rules alone don’t make the internet safer. We also need the right tools to implement and enforce them effectively.

The DIRE framework is a powerful way to demystify the trust & safety tech stack, and it makes a strong case for why sustained investment in robust, interoperable tooling is essential. We’re honored that Safer, Built by Thorn was included as an example of purpose-built detection technology for CSAM. At Thorn, we’re proud to contribute to the broader ecosystem working to equip platforms with the tools they need to protect children online.

🔗 Read the paper: https://lnkd.in/ecP3jWtC
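As a reader’s aid, here is a minimal Python sketch of how a platform might map its tooling onto the four DIRE quadrants. The quadrant names come from the paper; every example tool except Safer is a hypothetical placeholder.

from enum import Enum

# The four functional quadrants named in the DIRE paper.
class Quadrant(Enum):
    DETECTION = "detection"          # surface potentially violating content
    INVESTIGATION = "investigation"  # build context around a detection signal
    REVIEW = "review"                # human moderation queues and triage
    ENFORCEMENT = "enforcement"      # apply and record policy actions

# Example mapping; all entries except Safer are hypothetical placeholders.
tooling_stack = {
    Quadrant.DETECTION: ["Safer, Built by Thorn (purpose-built CSAM detection)"],
    Quadrant.INVESTIGATION: ["hypothetical case-management tool"],
    Quadrant.REVIEW: ["hypothetical moderation queue"],
    Quadrant.ENFORCEMENT: ["hypothetical action-and-appeals pipeline"],
}

for quadrant, tools in tooling_stack.items():
    print(quadrant.value, "->", ", ".join(tools))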


    We’re excited to share that Dr. Rebecca Portnoff, our VP of Data Science, will be speaking at The AI Conference 2025 in San Francisco this September. Rebecca will join leaders across the AI ecosystem to discuss how Safety by Design principles can prevent AI from being weaponized to harm children. From tackling AI-generated child sexual abuse material (AIG-CSAM) to building safeguards that stop harm before it happens, her work at Thorn is helping shape a safer digital future. Learn more and join the conversation: https://aiconference.com/


    Grateful to The Patrick J. McGovern Foundation for spotlighting the growing movement to use AI responsibly to protect vulnerable communities. In its latest piece, the foundation highlights powerful, community-driven innovation. This includes Thorn’s Safety by Design work, which focuses on embedding safety and prevention into technology before harm occurs. This moment calls for cross-sector collaboration, shared responsibility, and bold action to build a digital world where all children are safe. Thank you to Yolanda Botti-Lodovico and the PJMF team for amplifying this important work. Read the piece below.

From The Patrick J. McGovern Foundation:

Women and girls deserve digital spaces that nourish their creativity, support their aspirations, and protect their rights. But technology-facilitated gender-based violence (TFGBV) can make these spaces unsafe for many, dissuading women and girls from engaging with technology and the broader AI ecosystem.

In our latest blog post, PJMF Storytelling and Advocacy Lead Yolanda Botti-Lodovico shares how our partners are stepping up to tackle these challenges head-on and transforming the fight against all forms of gender-based violence in the process. These organizations, many of them women-led, are moving the dial forward in three areas:
1. Building user-facing mobile applications to support survivors of violence
2. Establishing voluntary commitments with private sector organizations for safe design standards
3. Advocating for global AI governance frameworks that protect women and girls from TFGBV

PJMF President Vilas Dhar notes that enforcement is key, with practices and policies ensuring “certification before deployment, continuous monitoring during operation, and meaningful consequences for violations.” When AI is designed for inclusivity and deployed with respect, it will benefit all of us.

Check out the full piece here: https://loom.ly/HZCpAJ4

Ivana Feldfeber, DataGénero - Observatorio
Leonora Tima, GRIT - Gender Rights in Tech
Dr. Rebecca Portnoff, Thorn
Merve Hickok, Center for AI and Digital Policy


1 in 7 young sextortion victims are driven to self-harm. For LGBTQ+ victims, that figure doubles.

Our latest research reveals that sextortion abuse among young people (ages 13-20) shares a common thread: technology is almost always involved. 81% of sextortion threats happen exclusively online, with only 6% occurring entirely in person. In nearly three-fourths of cases, perpetrators threaten to share sexual images or personal details with friends, family, or classmates, weaponizing shame to maintain control.

Demands made by perpetrators vary:
🔹 39% demanded more sexual images
🔹 31% demanded in-person meetings
🔹 25% made relationship demands
🔹 22% demanded money

Our new report shines a light on the evolving nature of sextortion in 2025: how it happens, who’s most at risk, and what we can all do to stop it. Because every child deserves a digital world they can explore without fear.

Read our new report to learn how sextortion affects young people and what we can do to protect them: https://lnkd.in/g2et43Xi


A company running deepfake porn sites has agreed to shut them down for good. Briver LLC recently settled with the San Francisco City Attorney’s office. Its sites, which allowed users to create fake nude images of real people, including minors, are now permanently offline. The company has also been fined $100,000.

This is a crucial step. But it’s important to remember that this is just one site operator; there are many, many more. Nonconsensual explicit deepfakes are abuse. They’re used to harass, extort, and humiliate. Victims include celebrities, students, and everyday people who never consented to being turned into pornography.

This is a rapidly growing threat. Deepfake tech is powerful, the demand is high, and the tools are easier to access than ever. We need platforms that produce or use generative AI content to commit to Safety by Design principles that prevent this technology from being misused to sexually harm children.

This settlement sends a message. The next step is making sure the message sticks.

Source: https://lnkd.in/gXHjrNkZ


    Sextortion is on the rise, and it’s targeting young people in spaces meant for connection, such as social media, games, and messaging apps. Our latest research reveals the patterns behind this abuse. With better understanding, caregivers, educators, and platforms can step in sooner—and protect kids before harm escalates. Read our latest report to learn how sextortion affects young people and what we can do to protect them: https://lnkd.in/g2et43Xi


    The rapid advancement of technology, including AI, is reshaping every corner of the internet. Growing just as quickly are the threats children face. At #AIPortland’s “AI for Public Good” forum, Thorn’s Cassie Scyphers took the stage to show how AI is being used to fight back against online child sexual abuse and exploitation. Her presentation reveals the hidden harms threatening kids online, and how we’re turning AI-powered tech into powerful tools for detection, intervention, and real-time support for investigators. The future of AI is still being written. We can help shape it for the better. The right tools in the right hands can change everything.


Funding

Thorn: 2 total rounds
Last round: Grant, US$ 345.0K
