Mental Health Track - Table 7

inspiration

it's undeniable that short form content can be harmful. reels and tiktoks are designed to keep feeding people content that is addictive and low quality, and over time that erodes focus, productivity, and most importantly, mental health. but it's also undeniable that short form content is the future of social media. people haven't stopped using it, and they aren't going to.

a 2025 apa meta-analysis across 71 studies and ~100,000 participants found that higher short-form video consumption is consistently linked to worse attention spans, increased depression and anxiety, greater loneliness, and lower emotional well-being. the u.s. surgeon general reported that adolescents spending more than 3 hours a day on social media face double the risk of depression and anxiety, and a third of teens use it "almost constantly." "brain rot" was oxford's 2024 word of the year. "rage bait" was 2025's. but short form content isn't going anywhere. people are not going to stop using it.

there are parents out there watching their kids speak in a language they can't even comprehend. skibidi, gyatt, sigma, fanum tax. words that signal hours spent inside algorithmically optimized feeds. teachers say it's making classrooms harder to manage. the vocabulary itself is a symptom: the algorithm is reshaping how kids actually talk and think.

so instead of trying to eliminate it, we wanted to rethink how people interact with it. we didn't want another screen time tracker that tells you about the time you already wasted. we wanted something that actively steps in and changes the experience itself, fighting the algorithm with its own weapons. dialed gives people a way to take control of the content they consume. not by pulling them off the platform, but by deploying ai agents that sit between the user and the feed, detecting manipulation in real time and intervening before the damage is done.

what it does

dialed is an autonomous social media defense system. it logs into your instagram through a real cloud browser, scrolls your feed, and runs every piece of content through a coordinated swarm of five fetch.ai uagents. when manipulative content is detected (rage bait, fomo hooks, outrage amplification, parasocial traps), the system fights back: blocking accounts, overlaying warnings, and generating a confessional "letter from the algorithm" that exposes exactly how the content was designed to exploit you.

before any session starts, you make an account, then go through a five-step onboarding that builds your intent profile. why do you open instagram? what content do you get stuck on? how aggressive should the system be? how long should the session last? where should it redirect you when it pulls you out? the agents use all of this to personalize detection to your triggers, your tolerance, and your goals. dialed doesn't just scan for generic brain rot; it defends based on what you're specifically vulnerable to.

the pipeline:

  • browser-use cloud sdk opens a real headless chromium session → you watch your actual feed live in the dashboard
  • agent extracts each reel (caption, creator, engagement, visual description) → sends to the boss agent
  • boss fans out to classifier (claude sonnet evaluates 7 manipulation dimensions) + context agent (adaptive session state machine) in parallel
  • boss aggregates the verdict → fans out to strategist (intervention planning) + synthesis (writes as "the algorithm" confessing its tactics) in parallel
  • if brain rot → agent blocks the account live. if clean → agent likes it and moves on.
  • the whole thing loops: extract → analyze → act → scroll → repeat
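the two-stage fan-out in the pipeline above can be sketched with plain asyncio. the agent functions here are hypothetical stand-ins written for illustration; in the real system each stage is a uagents round-trip, not a local coroutine:

```python
import asyncio

# hypothetical stand-ins for the five agents; in dialed each of these
# would be a ctx.send_and_receive round-trip to a separate uagent
async def classifier(reel):
    # scores manipulation dimensions (simplified to one keyword check)
    return {"brain_rot": "rage" in reel["caption"]}

async def context_agent(reel):
    # adaptive session state (simplified to a constant here)
    return {"state": "normal"}

async def strategist(verdict):
    # plans the intervention based on the aggregated verdict
    return "block" if verdict["brain_rot"] else "like"

async def synthesis(verdict):
    # writes the "letter from the algorithm" confession
    if verdict["brain_rot"]:
        return "i fed you this because outrage keeps you scrolling."
    return ""

async def process_reel(reel):
    # stage 1: classifier + context agent in parallel
    verdict, session_state = await asyncio.gather(
        classifier(reel), context_agent(reel)
    )
    # stage 2: strategist + synthesis in parallel on the verdict
    action, letter = await asyncio.gather(
        strategist(verdict), synthesis(verdict)
    )
    return action, letter

action, letter = asyncio.run(process_reel({"caption": "rage bait compilation"}))
print(action)  # block
```

the real loop wraps this in extract → analyze → act → scroll, with the browser agent idle during the analyze phase.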

you watch it all happen in a dashboard: live instagram feed on the left, real-time agent communication ticker in the center, and a five-agent fleet panel on the right. each agent has its own elevenlabs voice. you can chat with any agent mid-session (@classifier why did you flag that?). you can hear them speak. at the end, you get a full session report with stats, flagged content, and the complete letter from the algorithm.

the system adapts in real time. the context agent tracks the last 10 verdicts and shifts between four states (normal, elevated, alert, cooldown), dynamically adjusting detection thresholds based on how dense the manipulation is in your feed and whether intervention fatigue is setting in.
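a minimal version of that state machine looks something like this. the thresholds below are illustrative guesses, not the shipped values:

```python
from collections import deque

class ContextAgent:
    """tracks recent verdicts and shifts detection state accordingly."""

    def __init__(self):
        self.history = deque(maxlen=10)  # last 10 verdicts, oldest evicted
        self.state = "normal"

    def record(self, flagged: bool) -> str:
        self.history.append(flagged)
        # fraction of recent reels flagged as manipulative
        density = sum(self.history) / len(self.history)
        # illustrative thresholds; the real values are tuned per user
        if density >= 0.7:
            self.state = "cooldown"   # back off to avoid intervention fatigue
        elif density >= 0.4:
            self.state = "alert"
        elif density >= 0.2:
            self.state = "elevated"
        else:
            self.state = "normal"
        return self.state
```

each new verdict nudges the state, and the classifier reads the current state to tighten or relax its flagging threshold.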

how we built it

  • frontend: react 18 + vite + framer motion for route transitions. agent fleet panel with per-agent activity dots, expandable message logs, and css audio visualizer bars synced to elevenlabs tts playback.

  • intelligence layer: five fetch.ai uagents running in a single bureau on port 8019 inside the same python process as fastapi. boss orchestrates everything: classifier and context run in parallel via asyncio.gather + ctx.send_and_receive, then strategist and synthesis run in parallel. all agent-to-agent communication uses typed pydantic models. agents implement the fetch.ai chat protocol for agentverse compatibility and user-facing @mention chat.

  • browser automation: browser-use cloud sdk running headless chromium at mobile viewport (390×844). the browser agent is completely idle while the five agents analyze. each reel gets exactly one action: block or like. then scroll. explicit stop instructions on every client.run() call to prevent autonomous behavior.

  • backend: fastapi with a websocket endpoint for real-time updates; ticker messages, stats, 2fa prompts, state changes, intervention overlays, chat, and session events all stream to the frontend. the tts endpoint proxies the elevenlabs api with five distinct voice ids mapped to agents.

  • auth + storage: supabase for auth (email + google oauth) and profile storage. the five-step onboarding captures the intent profile (purpose, triggers, aggressiveness level, session duration, redirect preference), which the agents use to personalize detection.
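the intent profile roughly maps onto a structure like the one below. dataclasses stand in here for the pydantic models the agents actually exchange, and the field names and threshold mapping are illustrative, not the real schema:

```python
from dataclasses import dataclass, field

@dataclass
class IntentProfile:
    # captured during the five-step onboarding (illustrative fields)
    purpose: str                                   # why you open instagram
    triggers: list = field(default_factory=list)   # content you get stuck on
    aggressiveness: int = 2                        # 1 = gentle, 3 = block on sight
    session_minutes: int = 15
    redirect_url: str = ""                         # where to send you afterward

def detection_threshold(profile: IntentProfile) -> float:
    """higher aggressiveness -> lower manipulation score needed to intervene."""
    return {1: 0.8, 2: 0.6, 3: 0.4}[profile.aggressiveness]

profile = IntentProfile(
    purpose="messaging friends",
    triggers=["rage bait"],
    aggressiveness=3,
)
print(detection_threshold(profile))  # 0.4
```

the classifier would compare its per-reel manipulation score against this threshold, so the same reel can be flagged for one user and passed for another.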

challenges we ran into

  • testing was harder than expected. we needed an instagram account with enough "brainrot" to reliably trigger the system, but feeds are highly personalized so getting consistent test conditions took time.

  • ran into issues with instagram's privacy and automation protections. keeping the browser session stable, logged in, and usable for interaction required a lot of trial and error.

  • fetch.ai learning curve was steep. first time using uagents and the bureau/agent/protocol pattern. figuring out ctx.send_and_receive for parallel fan-out, typed pydantic models for every message, and getting the chat protocol (ChatMessage/ChatAcknowledgement) wired up for agentverse compatibility took real trial and error. documentation only gets you so far when you're chaining five agents together in a single bureau.

accomplishments that we're proud of

  • five fetch.ai uagents running in a coordinated swarm: parallel fan-out, typed models, real-time websocket feedback, autonomously processing every reel with no human input.

  • this isn't a mockup. a real cloud browser logs into your real instagram, and you watch agents block accounts and like content live through the dashboard.

  • first time using fetch.ai and we built a five-agent bureau with chat protocol and agentverse registration from scratch in a hackathon.

what we learned

  • websockets are essential for autonomous pipelines. real-time ticker updates, agent status, intervention overlays, and 2fa prompts all needed to stream instantly.

  • browser automation against real platforms is nothing like working with apis. instagram's 2fa, session instability, and anti-automation protections forced us to build fallbacks we never anticipated.

what's next for dialed

  • cross-platform: extend to tiktok, youtube shorts, and x. the agent swarm is platform-agnostic; it just needs new extraction logic per platform.

  • workforce and enterprise: the same swarm that detects manipulation in a personal feed can audit a brand's social presence. only 12% of small business owners say their content gives them any edge over competitors. dialed's engine could help teams understand what manipulation tactics are trending, avoid publishing engagement bait, and audit competitor feeds. understanding how algorithms push content is a strategic advantage, not just a mental health one.
