Jan Alfred Violanta
Building a Self-Destructing File Uploader with AWS Lambda, API Gateway, S3, and React

Have you ever wanted to share a file with someone and make sure it disappears (gets deleted) after a short time, whether for privacy or for temporary access?

In this blog, I’ll walk you through how I built a Self-Destructing File Uploader app using Amazon S3, AWS Lambda, API Gateway, and React.

What Each AWS Service Does in This Project

🪣 Amazon S3 – Used to store the uploaded files temporarily. S3 Lifecycle Rules are configured to automatically delete files after a specified time, making them "self-destructing."

🛠️ AWS Lambda – Powers the backend logic. It generates pre-signed URLs that allow secure file uploads to S3 without exposing direct credentials.

🌐 API Gateway – Acts as a bridge between the frontend and Lambda. It exposes an HTTP endpoint that React can call to get the pre-signed URL.

Frontend UI in This Project

⚛️ React – Provides the frontend interface for users to upload files and generate shareable self-destructing links.


Now that we know what each component will do in this project—let's get to building.

Step-by-Step Guide

S3 Setup

1.) Navigate to Simple Storage Service (S3) in your AWS Management Console.


2.) Click Create bucket and set a bucket name (this will store your temporary files) >> scroll down and make sure Block all public access is turned on, then click Create bucket.
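If you prefer to script this step instead of clicking through the console, here is a minimal sketch using boto3 (the bucket name is a placeholder, and the us-east-1 region is assumed):

import boto3

s3 = boto3.client('s3')
BUCKET = 'name-of-your-bucket'

# Create the bucket (in us-east-1, no CreateBucketConfiguration is needed)
s3.create_bucket(Bucket=BUCKET)

# Turn on "Block all public access", matching the console setting above
s3.put_public_access_block(
    Bucket=BUCKET,
    PublicAccessBlockConfiguration={
        'BlockPublicAcls': True,
        'IgnorePublicAcls': True,
        'BlockPublicPolicy': True,
        'RestrictPublicBuckets': True
    }
)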


3.) Once your bucket is created, navigate to General purpose buckets and click the bucket you just created.


4.) Go to the Permissions tab, then set this as your bucket policy:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyUnSignedRequests",
            "Effect": "Deny",
            "Principal": "*",
            "Action": [
                "s3:PutObject",
                "s3:GetObject"
            ],
            "Resource": "arn:aws:s3:::name-of-your-bucket/*",
            "Condition": {
                "Bool": {
                    "aws:SecureTransport": "false"
                }
            }
        }
    ]
}

This bucket policy is designed to enhance security by denying all access to your S3 bucket if the request is not using HTTPS (SecureTransport).

This policy is like saying: "Only talk to me if you're using a secure line."

If someone tries to upload or access files from the bucket without using a safe, encrypted connection (HTTPS), the request gets automatically blocked. It’s just a way to make sure all data stays private and protected.
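If you want to apply the same policy from code rather than the console, here's a sketch with boto3 (again assuming the placeholder bucket name):

import json
import boto3

s3 = boto3.client('s3')
BUCKET = 'name-of-your-bucket'

# The same HTTPS-only policy shown above, as a Python dict
policy = {
    'Version': '2012-10-17',
    'Statement': [{
        'Sid': 'DenyUnSignedRequests',
        'Effect': 'Deny',
        'Principal': '*',
        'Action': ['s3:PutObject', 's3:GetObject'],
        'Resource': f'arn:aws:s3:::{BUCKET}/*',
        'Condition': {'Bool': {'aws:SecureTransport': 'false'}}
    }]
}

s3.put_bucket_policy(Bucket=BUCKET, Policy=json.dumps(policy))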

5.) After setting up the bucket policy, navigate to the Management tab and click Create lifecycle rule.

6.) Enter your lifecycle rule name >> Apply to all objects in the bucket (or limit it) >> Expire current versions of objects.


7.) Set the number of days for the object to be "self-destructing" or automatically deleted—then finally create the rule.
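The same lifecycle rule can also be created with boto3 if you'd rather not use the console. A minimal sketch, assuming a one-day expiration:

import boto3

s3 = boto3.client('s3')

s3.put_bucket_lifecycle_configuration(
    Bucket='name-of-your-bucket',
    LifecycleConfiguration={
        'Rules': [{
            'ID': 'self-destruct',
            'Filter': {'Prefix': ''},  # empty prefix = apply to all objects
            'Status': 'Enabled',
            'Expiration': {'Days': 1}  # objects are deleted one day after creation
        }]
    }
)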

Now that your storage is done—let's move on to creating the Lambda function ^_^.


Lambda Setup

1.) Open your AWS Management Console and look for Lambda.


2.) Click Create function >> enter the function name >> select the latest Python version as the runtime >> create a role with S3 read-only access for now >> scroll down, then create the function.


3.) Click your newly created function, then scroll down to the Code section and paste in your Lambda function for handling files and generating the presigned URL. Here is the code I used:

import json
import base64
import uuid
import boto3

s3 = boto3.client('s3')
BUCKET = 'bucket-name-here'

def lambda_handler(event, context):
    try:
        body = json.loads(event['body'])

        file_name = body.get('fileName', f"{uuid.uuid4()}.png")
        file_data = body.get('fileData')

        if not file_data:
            return {
                'statusCode': 400,
                'body': json.dumps({'error': 'No file data provided'})
            }

        file_extension = file_name.split('.')[-1].lower()
        content_types = {
            'png': 'image/png',
            'jpg': 'image/jpeg',
            'jpeg': 'image/jpeg',
            'gif': 'image/gif',
            'pdf': 'application/pdf'
        }

        if file_extension not in content_types:
            return {
                'statusCode': 400,
                'body': json.dumps({'error': 'Unsupported file type'})
            }

        file_data_decoded = base64.b64decode(file_data)
        file_key = f"uploads/{uuid.uuid4()}.{file_extension}"

        # Upload the file to S3
        s3.put_object(
            Bucket=BUCKET,
            Key=file_key,
            Body=file_data_decoded,
            ContentType=content_types[file_extension]
        )

        # Generate the presigned URL
        url = s3.generate_presigned_url(
            'get_object',
            Params={'Bucket': BUCKET, 'Key': file_key},
            ExpiresIn=3600  # Expiry time in seconds (Adjust this based on your preferred time)
        )

        return {
            'statusCode': 200,
            'headers': {
                'Access-Control-Allow-Origin': '*', #restrict this part to your specific domain (just used this for testing purposes)
                'Access-Control-Allow-Headers': 'Content-Type',
                'Access-Control-Allow-Methods': 'OPTIONS, POST'
            },
            'body': json.dumps({
                'message': 'File uploaded successfully',
                'fileKey': file_key,
                'presignedURL': url,
                'expiresIn': 3600
            })
        }
    except Exception as e:
        return {
            'statusCode': 500,
            'body': json.dumps({'error': str(e)})
        }


This AWS Lambda function lets users upload a file (like an image or PDF) from your frontend, stores it temporarily in an S3 bucket, and returns a presigned link for accessing or downloading it.
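You can smoke-test the handler locally before deploying. This sketch assumes the code above is saved as lambda_function.py, a test.png exists in the working directory, and your local AWS credentials can write to the bucket:

import base64
import json
from lambda_function import lambda_handler

with open('test.png', 'rb') as f:
    payload = {
        'fileName': 'test.png',
        'fileData': base64.b64encode(f.read()).decode()
    }

# Mimic the event shape API Gateway sends: the JSON body arrives as a string
event = {'body': json.dumps(payload)}
print(lambda_handler(event, None))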

4.) Go back up, then click Add trigger >> select S3 as the source >> then choose the bucket you created earlier.


This will make the Lambda function trigger whenever an object is uploaded or created in the S3 bucket you made, and the presigned URL it generates will be sent to the frontend later on.
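Note that an S3 trigger invokes the function with a different event shape than API Gateway: it carries a 'Records' list instead of a 'body' string. If you want the same function to handle both, here is a hypothetical sketch of branching at the top of the handler:

def lambda_handler(event, context):
    # Invocations from the S3 trigger carry a 'Records' list
    if 'Records' in event:
        record = event['Records'][0]['s3']
        bucket = record['bucket']['name']
        key = record['object']['key']
        print(f'New object created: s3://{bucket}/{key}')
        return {'statusCode': 200}
    # ...otherwise fall through to the API Gateway handling shown above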

5.) Finally, go to Configuration >> Permissions >> click the role name >> Add permissions >> Create inline policy, and attach this policy to ensure that the Lambda function can put objects into S3.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:PutObject",
                "s3-object-lambda:PutObject"
            ],
            "Resource": "arn:aws:s3:::your-bucket-name-here/*"
        }
    ]
}
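As with the earlier steps, the inline policy can be attached from code too. A sketch using boto3's IAM client, where 'my-lambda-role' is a hypothetical role name (use the role your function actually has):

import json
import boto3

iam = boto3.client('iam')

policy = {
    'Version': '2012-10-17',
    'Statement': [{
        'Effect': 'Allow',
        'Action': ['s3:PutObject', 's3-object-lambda:PutObject'],
        'Resource': 'arn:aws:s3:::your-bucket-name-here/*'
    }]
}

iam.put_role_policy(
    RoleName='my-lambda-role',  # hypothetical -- replace with your function's role
    PolicyName='AllowS3PutObject',
    PolicyDocument=json.dumps(policy)
)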

Now that your Lambda function is ready, let's move on to creating the API using API Gateway 🙌


API Gateway Setup

1.) Look for API Gateway in your AWS Management Console.


2.) Create an API >> Build with HTTP API >> then set its name


3.) Click Add Integration and choose Lambda


4.) Add a route for uploading (this guide uses POST /upload) >> review, then create the API.


5.) On the left side of the screen for your selected API, navigate to Stages and look for $default, then click it. It should contain your Invoke URL; copy that and append '/upload' to the end of the URL.

It should look something like this:

'https://qwerty1234.execute-api.us-east-1.amazonaws.com/upload'

6.) Navigate to CORS and configure it like the following:

Access-Control-Allow-Origin: http://localhost:5173 << Modify this to your specific domain
Access-Control-Allow-Methods: POST & OPTIONS
Access-Control-Allow-Headers: content-type


Your API and the rest of the required services are ready! Time to test them using your frontend 😁.


You may create your own frontend; it just needs a file uploader, a way to call the API (see the test sketch below), and a way to retrieve the URL generated by the Lambda function.
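Before wiring up React, you can sanity-check the deployed API with a short Python script. This sketch uses the requests library and assumes the example invoke URL from earlier plus the fileName/fileData contract the Lambda expects:

import base64
import requests

API_URL = 'https://qwerty1234.execute-api.us-east-1.amazonaws.com/upload'  # your invoke URL

with open('test.png', 'rb') as f:
    file_data = base64.b64encode(f.read()).decode()

resp = requests.post(API_URL, json={'fileName': 'test.png', 'fileData': file_data})
print(resp.status_code)
print(resp.json().get('presignedURL'))  # the shareable self-destructing link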

Check out my frontend code here: github link

Sample Output


Why Create This?

Building a self-destructing file uploader addresses the need for temporary file sharing without the burden of manual cleanup, in situations where privacy, minimalism, and automation are important.

This project gives you the opportunity to:

  • 🔐 Share files temporarily and securely without leaving traces.
  • 🧹 Automate file deletion with S3 Lifecycle rules to conserve storage.
  • ⚙️ Use serverless AWS technologies for low maintenance and scalability.

Possible Applications

Such setups could develop into simple tools and integrations with practical applications such as:

  • 📂 One-time file distribution solutions for sensitive documents or links.
  • 📩 Temporary upload gateways for event-based or feedback file drops.
  • 🔒 Services focusing on privacy where users are sure that their data will not last longer than intended.
