This is a submission for the Postmark Challenge: Inbox Innovators.
What I Built
A web app that parses incoming email delivered via Postmark's inbound webhook and uses AI to do the following (a rough sketch of the resulting analysis shape is shown after the list):
📧 Summarize emails
📦 Categorize them
📊 Rank them based on the sentiment value
🥷 Calculate an indicator for fraud, scams, phishing, blackmail, etc.
🗑️ Calculate an indicator for spam
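To make the list above concrete, here is a rough sketch, under my own assumptions about field names, of what the per-email analysis result could look like as a Zod schema; the project's actual schema may differ.

// Sketch only: field names are illustrative, not the project's actual schema.
import { z } from "zod";

export const emailAnalysisSchema = z.object({
  summary: z.string(),        // 📧 short summary of the email
  category: z.string(),       // 📦 category label
  sentiment: z.number(),      // 📊 sentiment value used for ranking
  fraudIndicator: z.number(), // 🥷 fraud / scam / phishing / blackmail indicator
  spamIndicator: z.number(),  // 🗑️ spam indicator
});

export type EmailAnalysis = z.infer<typeof emailAnalysisSchema>;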
Demo
How to test:
- Go to the demo website.
- Fill in a username (lowercase letters, digits, and -_ only) and a password, then click Register.
- Once you register a new account, follow the instructions on the main page.
Code Repository
SnipMail
A simple web app that summarizes and categorizes your email inbox with the help of an LLM such as ChatGPT, Gemini, or Claude, and ranks messages based on sentiment value, a fraud indicator, and a spam indicator.
Prerequisites
This project uses Bun for developing and building the source code.
Backend Setup
The backend uses SQLite as the database with Drizzle as the ORM. By default it uses better-sqlite3 under the hood, but you can adjust this in the drizzle.config.ts config file and in src/lib/server/db/index.ts (a minimal sketch of that file follows the commands below).
# Generate SQLite .db file and push DB tables and schema
bun run db:generate
bun run db:push
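For reference, here is a minimal sketch of what src/lib/server/db/index.ts can look like with the default better-sqlite3 driver; the schema import path and env handling are assumptions, so adjust them to the actual codebase.

// Sketch of src/lib/server/db/index.ts, assuming the default better-sqlite3 driver.
import Database from "better-sqlite3";
import { drizzle } from "drizzle-orm/better-sqlite3";
import { env } from "$env/dynamic/private";
import * as schema from "./schema"; // assumed schema module path

// Open (or create) the SQLite file pointed to by DATABASE_URL.
const sqlite = new Database(env.DATABASE_URL ?? "local.db");
export const db = drizzle(sqlite, { schema });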
Env variables
DATABASE_URL=local.db # database path, if the DB driver needs it
POSTMARK_INBOUND_EMAIL_ADDRESS=yourinboundhash@inbound.postmarkapp.com
# NOTE: only fill in API keys for the LLMs you intend to use;
# otherwise, leave them blank so the unused LLMs are filtered out
# at server runtime.
ANTHROPIC_API_KEY=XXXXXXXXXXXXX
OPENAI_API_KEY=XXXXXXXXXXXXX
GEMINI_API_KEY= # Blank API key will be filtered
How I Built It
This project is built with Svelte 5 + SvelteKit (which also serves as the backend), developed on the Bun runtime. Most of the code is written in TypeScript, and for the frontend UI I mostly use Tailwind and Skeleton.
On the backend, the codebase is pretty flexible thanks to Drizzle ORM. By default it uses SQLite as the database; in the demo it uses Cloudflare Durable Objects as the SQLite driver and Cloudflare Workers as the platform of choice.
However, SvelteKit and Drizzle ORM are pretty flexible about how the backend stack is set up. SvelteKit lets the project ship to different platforms just by changing its adapter: Netlify, Vercel, a dedicated VPS with Node or Bun, you name it. And Drizzle certainly leaves room to tweak the database as we see fit. It is quite trivial to change the database driver, opening up options such as LibSQL, Turso, Cloudflare D1, Bun SQL, Node's better-sqlite3, and so on. It is also quite possible to switch the database itself to PostgreSQL or MySQL; at the moment the Drizzle schemas are pretty simple, and there's only one view that would need extra attention when migrating away from SQLite.
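As a hedged illustration of that driver flexibility (the packages are real, but the paths and env usage are my assumptions), swapping the default better-sqlite3 setup shown earlier for LibSQL/Turso mostly comes down to changing the driver import and the client construction:

// Sketch: pointing the same Drizzle schema at LibSQL instead of better-sqlite3.
import { drizzle } from "drizzle-orm/libsql";
import { createClient } from "@libsql/client";
import { env } from "$env/dynamic/private";
import * as schema from "$lib/server/db/schema"; // assumed schema module path

// For Turso this would be a libsql:// URL plus an auth token; locally a file URL works.
const client = createClient({ url: env.DATABASE_URL ?? "file:local.db" });
export const db = drizzle(client, { schema });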
Going further, I'm also leveraging Vercel's AI SDK to make the stack flexible not just on the backend, but also on the AI side of things. Just take a look at how I set up my LLMs: it is possible to use several LLMs at the same time!
// Imports are my reconstruction (the original snippet omits them); adjust paths to your setup.
import { anthropic } from "@ai-sdk/anthropic";
import { openai } from "@ai-sdk/openai";
import { google as gemini } from "@ai-sdk/google"; // assumed alias for the Gemini provider
import type { LanguageModelV1 } from "ai";
import { env } from "$env/dynamic/private";

// Providers whose API key is blank evaluate to `false` and get filtered out.
export const LLMs = {
  structuredOutputs: [
    !!env.OPENAI_API_KEY && openai("gpt-4.1", { structuredOutputs: true }),
    !!env.GEMINI_API_KEY && gemini("gemini-2.5-flash-preview-04-17", { structuredOutputs: true }),
  ].filter((llm) => !!llm),
  text: [
    !!env.ANTHROPIC_API_KEY && anthropic("claude-4-opus-20250514"),
    !!env.OPENAI_API_KEY && openai("gpt-4.1"),
    !!env.GEMINI_API_KEY && gemini("gemini-2.5-flash-preview-04-17"),
  ].filter((llm) => !!llm),
};

interface LLMOpts {
  structuredOutputs?: boolean;
}

// Pick a random model from the relevant pool.
export const getRandomLLM = (opts?: LLMOpts): LanguageModelV1 => {
  if (opts?.structuredOutputs)
    return LLMs.structuredOutputs[Math.floor(Math.random() * LLMs.structuredOutputs.length)];
  return LLMs.text[Math.floor(Math.random() * LLMs.text.length)];
};
Speaking of LLMs, exploring AI in TypeScript turned out to be quite a fun experience for me. Over the two weeks of the Postmark challenge I mostly explored the structured object output capability of some of the LLMs, and using Zod with an LLM is such a blast. Composing and instructing the LLM right inside the schema feels just right; the level of fine-grained control over the system instructions within Zod makes me want to work with structured objects all the time. Would love to start another project with them!
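As a hedged illustration of that workflow (the prompt, field names, and descriptions here are mine, not the project's actual code), passing a Zod schema with .describe() instructions to the AI SDK's generateObject, together with getRandomLLM from the snippet above, could look like this:

// Sketch only: schema and prompt are illustrative.
import { generateObject } from "ai";
import { z } from "zod";
// getRandomLLM comes from the snippet above.

const emailText = "Raw text body of the inbound email"; // placeholder input

const { object } = await generateObject({
  model: getRandomLLM({ structuredOutputs: true }),
  schema: z.object({
    summary: z.string().describe("Summarize the email in one or two sentences"),
    fraudIndicator: z
      .number()
      .min(0)
      .max(1)
      .describe("Estimate how likely the email is fraud, scam, phishing, or blackmail (0 = safe, 1 = certain)"),
  }),
  prompt: `Analyze the following email:\n\n${emailText}`,
});

console.log(object.summary, object.fraudIndicator);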
Working with Postmark's API also went pretty smoothly during development. First, I used ChatGPT to generate the Zod schema and TypeScript types for the inbound email webhook data. It was pretty simple: I just copy-pasted the webhook example from Postmark's website and told ChatGPT to write the Zod schema. After that I used RequestBin to mock the POST webhook data while figuring out how to effectively fingerprint the webhook and tell which mail belongs to which user. It took me a while to realize that "plus addressing" doesn't actually affect the destination of the email, which turned out to be exactly what I needed (yes, I'm an email noob).
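As an illustration of that routing idea (the route path, schema fields, and variable names here are my assumptions, not the project's actual code), a SvelteKit endpoint receiving Postmark's inbound webhook could validate the payload with Zod and use the MailboxHash field, which carries the part after the plus sign, to tell which user the mail belongs to:

// Sketch of a SvelteKit inbound-webhook route, e.g. src/routes/api/inbound/+server.ts (assumed path).
import { json } from "@sveltejs/kit";
import { z } from "zod";
import type { RequestHandler } from "./$types";

// Only the handful of fields this sketch needs; the real Postmark payload has many more.
const inboundSchema = z.object({
  From: z.string(),
  Subject: z.string(),
  TextBody: z.string().optional(),
  HtmlBody: z.string().optional(),
  MailboxHash: z.string(), // the part after the "+" in yourinboundhash+username@inbound.postmarkapp.com
});

export const POST: RequestHandler = async ({ request }) => {
  const payload = inboundSchema.parse(await request.json());

  // Plus addressing doesn't change where the mail is delivered, but Postmark
  // surfaces the +suffix as MailboxHash, so it can be used to route the email
  // to the right user account.
  const username = payload.MailboxHash;

  // ...look up the user, run the LLM analysis, store the result...

  return json({ ok: true, username });
};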
All in all, it has been a fun two weeks working on this challenge. Many thanks to the Dev.to and Postmark teams for organizing it!
💪💪💪
Top comments (2)
been cool seeing steady progress - it adds up. you think most learning comes from shipping or more from debugging and getting stuck?
Certainly. Learning from documentation is definitely helpful, but I often find that diving into a problem and solving it hands-on gives me a much better learning experience.