Rejection from Amazon

One of the most important skills I’ve had to learn, the hard way, is how to deal with rejection.

It’s not a great start to the year: I recently applied to Amazon and was rejected. It’s difficult not to associate “assessment” with “worth”, as the two feel highly correlated. However, there’s an important point to be made:

Your value as a person is not determined by the outcome of a job application.

The value you can bring to a role is the outcome of the craft you’ve honed over the years.

FAANG is not the reason your “job sucks”; happiness in your job and life is not found in a company or a paycheck.

While I would have loved to start the year with a bang, I haven’t been a software engineer for that long, and even in the time that I have been one, my experience has been varied and not as deep as I would have liked.

All this to say: I need to push forward with the craftsman mentality and keep working hard at improving.

For example, one of the most important lessons of the experience was the work I had to put into preparing for the assessment, as well as the experience of going through the process itself.

I did the Amazon LeetCode path, which showed me how much I enjoy problem-solving and optimizing. It also showed me how much I enjoy C/C++ and how much I need to improve at it. I also watched a few videos on the work simulation and culture-fit assessment that introduced me to some valuable life lessons.

Here’s what I decided to do next:

  1. Try to probe the recruiter for feedback on the rejection, to help me understand what I could have done better.
  2. Keep doing LeetCode while keeping a log of the problems I’ve solved, to improve my DS&A skills.
  3. Keep applying, keep blogging, and keep learning.

Craftsman Guide

A Life Engineered lists So Good They Can’t Ignore You as a great inspiration in his video description.

Having read some of it, I’ve been inspired to take a craftsman’s approach to a career in software engineering.

The “craftsman’s approach” focuses on maximizing the value you can offer to the world as opposed to following your passion in your job.

Having rare and valuable skills to offer will lead to valuable job opportunities and the prototypical elements of a “dream job”: autonomy, creativity, impact, and reward (financial and emotional).

The 5 areas of a software engineer

To complement this, I am going to categorize my blog posts into the 5 main areas I believe a software engineer operates in:

  1. Research: posts on how to become a better learner, researcher, etc., in software engineering and in life.
  2. Design: posts on design and architecture problems.
  3. Implementation: posts related to implementations, e.g. DS&A, libraries, etc.
  4. Testing and debugging: posts on testing, CI/CD, debugging, and the tooling related to that.
  5. Stakeholder management: posts on communication, project management, and other soft skills.

Reproducible builds with npm

Background

Previously, we had been using a combination of git submodule, npm install, and npm install --force to build the blog. This led to non-reproducible builds and a lot of confusion. Here’s how we can fix it.

One: switch from git submodule to npm for theme package

Previously, we used git submodule to add the theme to the blog.
This caused some issues, as the theme was also an npm package with its own dependencies and build process; hexo generate would build them concurrently.

Instead, we added the theme’s git repo as a dependency in package.json.
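
For reference, a git dependency in package.json can be declared like this (a sketch; the fork name mirrors the theme fork from the Hello World post and is illustrative):

```json
{
  "dependencies": {
    "hexo-theme-icarus": "github:<your-gh-username>/hexo-icarus-theme"
  }
}
```

npm resolves the `github:` shorthand by cloning the repo, which is also why the base image needs git installed (see below).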

Two: switch from npm install to npm ci in base target

The blog is packaged with npm (see About packages and modules: https://docs.npmjs.com/about-packages-and-modules). The base target needs to provide the necessary dependencies for the blog to build.

Two files are important here:

  1. package.json: specifies the package name, version, and dependencies.
  2. package-lock.json: records the exact dependency tree that was installed, so it can be reproduced later.

We do not want to use npm install here because:

  1. it does not delete the existing node_modules tree.
  2. it modifies package.json if a new package is added.
  3. it regenerates package-lock.json to describe whatever tree it just produced.

All of which leads to non-reproducible builds.

For instance, consider the following problematic Dockerfile:

```dockerfile
FROM node:22.5.1-alpine3.19 AS base

WORKDIR /workspace
COPY package.json package-lock.json /workspace/
RUN npm install   # may modify package-lock.json: not reproducible

FROM base AS build

COPY . /workspace # package*.json overwrite
RUN npm run build
```

A correct base Dockerfile target is:

```dockerfile
FROM node:22.5.1-alpine3.19 AS base

RUN apk add --no-cache git # needed to fetch the theme git dependency
WORKDIR /workspace
COPY package.json package-lock.json /workspace/
RUN npm ci
```

npm ci installs the exact dependency tree listed in package-lock.json and modifies neither package.json nor package-lock.json. (It requires package-lock.json to be present, which is why the base target copies both files.)

Just build

The build target should not contain any npm install or npm ci commands. It should only contain the build command as everything should be installed in the base target.

```dockerfile
FROM base AS build

COPY . /workspace
RUN npm run build
```

Three: handling updates to package.json and package-lock.json

This required a little bit of creativity. Here’s what we want to accomplish:

  1. A dev environment, i.e. a “change, build, test” workflow in “real time”.
  2. Avoid maintaining two separate Dockerfiles.
  3. Allow changes to package.json and package-lock.json to be git-committed.

The dev service

The dev service builds from the base target and mounts the source tree (instead of COPYing it). The developer can then run npm install or npm ci to update dependencies or node modules, and then npm run build to build the site with any new changes.

The service runs a persistent shell process, as sketched below.
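
A minimal sketch of such a dev service, assuming the Dockerfile above (the service name and keep-alive command are illustrative):

```yaml
# compose.yml (sketch)
services:
  dev:
    build:
      context: .
      target: base             # reuse the base target, which already ran npm ci
    volumes:
      - .:/workspace           # mount the source tree instead of COPYing it
    command: tail -f /dev/null # keep the container alive so we can exec into it
```

With the service up, dependency updates and builds run inside it, e.g. `docker compose exec dev npm install <pkg>` followed by `docker compose exec dev npm run build`.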

[!NOTE]

If you want to use Docker to rebuild, you should probably use the prod target of the main Dockerfile instead.

The serve service

Since we can no longer serve the site (i.e. run nginx) from the same container, we are going to use a separate service for that.

The serve service binds to the source tree, where public/ is generated by the dev service, and runs nginx with default arguments.
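
Continuing the sketch, the serve service might look like this (the bind path assumes hexo’s default public/ output directory):

```yaml
# compose.yml (sketch, continued)
services:
  serve:
    image: nginx:1.21.3-alpine
    volumes:
      - ./public:/usr/share/nginx/html:ro # serve the generated site read-only
    ports:
      - "80:80"
```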

Notes:

  1. npm ci removes any existing node_modules tree before installing. Hence, using the devcontainer will not affect production builds unless package.json or package-lock.json is updated.

Hello World

Welcome to my website! This is my first post, and it’s going to be about how I set up this website: how I got the dependencies and how I was able to publish to GitHub Pages.

Getting this website up and running required one night’s worth of work. So, if you want to do the same for your GitHub page, here’s how you can do it:

Getting dependencies and development container

  1. hexo-icarus-theme is the main dependency of this site. To avoid accidentally breaking your website should the theme be updated, and for future customizability, we are going to fork the theme repo on GitHub and pull it locally.

    ```sh
    # don't forget to fork on GH first
    gh repo clone <your-gh-username>/hexo-icarus-theme
    ```
  2. Let’s create a new repo to hold the website source tree. This will be the repo that you will push to Github Pages.

    ```sh
    mkdir $USER.github.io
    cd $USER.github.io
    git init
    ```

    You might want to change $USER to your GitHub user or organization name.

  3. Next is a Dockerfile that will build your site locally in a “controlled environment”: no polluting your machine with node stuff, development isolation, etc.

    ```dockerfile
    # Dockerfile

    ## Base and dependencies
    FROM node:22.5.1-alpine3.19 AS base
    WORKDIR /workspace
    RUN npm install -g hexo-cli

    ## Copy sources for build
    FROM base AS build
    COPY . /workspace
    RUN hexo generate

    ## Deploy
    FROM nginx:1.21.3-alpine
    COPY --from=build /workspace/public /usr/share/nginx/html
    EXPOSE 80
    CMD ["nginx", "-g", "daemon off;"]
    ```
  4. Add a .devcontainer for development purposes (things might break and you might want to debug why)

```yaml
# .devcontainer/compose.yml
services:
  web:
    build:
      context: .
      dockerfile: Dockerfile
    image: my-website-server
    ports:
      - "80:80"
```
  5. You can now init the site, add the icarus theme as a submodule, and start the server:
```sh
docker compose -f ./.devcontainer/compose.yml up -d --build
docker compose -f ./.devcontainer/compose.yml exec web hexo init .
git submodule add https://github.com/<your-gh-username>/hexo-icarus-theme themes/icarus
# you might need to run `npm install <some-packages>` to get things working
# don't forget to set icarus as the theme in _config.yml
docker compose -f ./.devcontainer/compose.yml exec web hexo server
```
  6. Get the experimental dark theme and apply it:
```sh
cd themes/icarus
# fetch the experimental night4 branch from the upstream fork, then merge it
git remote add imaegoo https://github.com/imaegoo/hexo-theme-icarus.git
git fetch imaegoo
git checkout night4
git merge imaegoo/night4
```
  7. Add a .gitignore for things like node_modules, etc. (see the sketch after this list).

  8. Fix the _config.yml and _config.icarus.yml to your liking.
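
For step 7, a minimal .gitignore might look like this (public/ and db.json are standard Hexo build artifacts; adjust to your setup):

```
node_modules/
public/
db.json
```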

Make sure to follow the icarus docs for more information.

Deploying to GitHub Pages

The easiest way to deploy to GitHub Pages is to follow the example from the deploy-pages action.
Check out my workflow for this website.
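
As a rough sketch based on the deploy-pages example (the build commands, Node version, and output path are assumptions that depend on your setup):

```yaml
# .github/workflows/pages.yml (sketch)
name: Deploy to GitHub Pages

on:
  push:
    branches: [main]

permissions:
  contents: read
  pages: write    # required to deploy to Pages
  id-token: write # required for OIDC authentication

jobs:
  deploy:
    environment:
      name: github-pages
      url: ${{ steps.deployment.outputs.page_url }}
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          submodules: true       # pull the theme submodule, if you use one
      - uses: actions/setup-node@v4
        with:
          node-version: 22
      - run: npm ci              # assumes package-lock.json is committed
      - run: npx hexo generate   # assumed build command
      - uses: actions/upload-pages-artifact@v3
        with:
          path: public/          # hexo's default output directory
      - uses: actions/deploy-pages@v4
        id: deployment
```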


Semaphores

Introduction

Semaphores are a synchronization primitive that can be used to protect shared resources, controlling how many threads or processes may access them at a time.

Application

Suppose you have a resource shared across threads or across processes, e.g. a shared memory region, a file (descriptor), or a network socket, and you need to ensure that no more than n threads (or processes) access it at a time. This is where semaphores come in.

Mutex or Semaphore?

A mutex, on the other hand, ensures that no more than one thread can access (read or write) a shared resource at a time. It is akin to a binary semaphore.

Here’s a comparison between the two as provided by ChatGPT:

| Feature | Semaphore | Mutex |
| --- | --- | --- |
| Purpose | Controls access to resources, allowing multiple threads (counting semaphore) or one (binary semaphore) to access. | Ensures mutual exclusion, allowing only one thread to access a critical section. |
| Counter | Maintains a count to track available resources. | Binary state: locked or unlocked (no counter). |
| Ownership | No ownership; any thread can signal (release) a semaphore. | Ownership is enforced; only the thread that locks it can unlock it. |
| Concurrency | Allows multiple threads to proceed if the counter is greater than 1 (in a counting semaphore). | Only one thread can proceed at a time. |
| Blocking behavior | Threads block if the counter is zero (no available resources). | Threads block if the mutex is locked. |
| Use cases | Managing a pool of resources (e.g. thread pools, connection pools); synchronizing producer-consumer workflows. | Protecting critical sections; ensuring exclusive access to shared data. |
| Types | Binary semaphore (similar to a mutex); counting semaphore (allows multiple threads). | Only one type (binary lock). |
| Risk of deadlock | Higher if not used carefully (e.g. incorrect signaling order). | Lower due to strict ownership and locking rules. |
| Performance | Slightly slower due to additional counter operations and flexibility. | Slightly faster because it enforces strict mutual exclusion. |
| Platform support | Available in most operating systems and threading libraries. | Available in most operating systems and threading libraries. |

For a very nice introduction to semaphores and their usage, this post by Vikram Shukla is a must-read.

POSIX Semaphores

Semaphore routines

An overview of the POSIX semaphore routines is available via the Linux man pages.

Linux vs macOS

Interestingly, macOS does not support unnamed semaphores (see this Quora thread).
So, if you are writing code that aims for POSIX portability, you will need to use named semaphores.
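
To illustrate, here is a minimal sketch using a named semaphore, which is portable to macOS per the note above (the semaphore name and initial count are arbitrary):

```c
/* sem_demo.c: build with `cc sem_demo.c -o sem_demo` (add -pthread on Linux) */
#include <fcntl.h>     /* O_CREAT */
#include <semaphore.h> /* sem_open, sem_wait, sem_post, sem_close, sem_unlink */
#include <stdio.h>
#include <unistd.h>    /* getpid, sleep */

int main(void) {
    /* Create (or open) a named semaphore with an initial count of 2:
       at most two processes may be in the critical section at once. */
    sem_t *sem = sem_open("/demo_sem", O_CREAT, 0644, 2);
    if (sem == SEM_FAILED) {
        perror("sem_open");
        return 1;
    }

    sem_wait(sem); /* acquire: decrements the count, blocks if it is zero */
    printf("in critical section (pid %d)\n", (int)getpid());
    sleep(1);      /* simulate work on the shared resource */
    sem_post(sem); /* release: increments the count, waking a blocked waiter */

    sem_close(sem);          /* close this process's handle */
    sem_unlink("/demo_sem"); /* remove the name once it is no longer needed */
    return 0;
}
```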