Automating self-hosted deployments
December 13, 2024

There are a lot of deployment tools available in the market, but when it comes to self-hosted deployments, things get a bit tricky. Tools like Coolify and Dokploy exist, but they themselves eat up a lot of resources and are complex to set up. These tools are great, but they turn out to be overkill for me :)
The main key points I wanted to achieve were:
- Automated deployments: automate the deployment process as much as possible.
- Resource-efficient: use as few of the server's resources as possible.
- Self-hosted: host all the deployment tools on my own server.
- Simple: keep the setup simple and easy to maintain, manage, and scale.
Architecture
The architecture I came up with is simple: a single server running Docker, with Nginx Proxy Manager routing incoming traffic to the app containers, and GitHub Actions building the images and pushing them to GHCR.
Automating the deployment
I used GitHub Actions to automate the deployment process. Here is the workflow I used:
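My exact workflow file isn't reproduced here, but a minimal sketch looks like this (the action versions and the image tag are assumptions):

```yaml
# .github/workflows/deploy.yaml -- build on push to main, push to GHCR
name: build-and-push

on:
  push:
    branches: [main]

jobs:
  build:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      packages: write  # required to push to GHCR
    steps:
      - uses: actions/checkout@v4

      # Authenticate with GHCR using the workflow's built-in token
      - uses: docker/login-action@v3
        with:
          registry: ghcr.io
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}

      # Build the Dockerfile in the repo root and tag the image as latest
      - uses: docker/build-push-action@v6
        with:
          context: .
          push: true
          tags: ghcr.io/${{ github.repository }}:latest
```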
This workflow builds the Docker image and pushes it to GHCR on every push to the `main` branch, tagging the image as `latest`.
Now, on the server, the initial setup is to install Docker and Docker Compose. Once that is done, create a stack to run Nginx Proxy Manager.
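A minimal proxy stack (the ports and volumes below are the defaults from the Nginx Proxy Manager documentation):

```yaml
# docker-compose.yaml for the proxy stack
services:
  proxy:
    image: jc21/nginx-proxy-manager:latest
    restart: unless-stopped
    ports:
      - "80:80"    # HTTP
      - "443:443"  # HTTPS
      - "81:81"    # admin UI
    volumes:
      - ./data:/data
      - ./letsencrypt:/etc/letsencrypt
```

Run `docker compose up -d` in this directory and the admin UI becomes available on port 81.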
Next, we will add all our apps to this stack. Also make sure to log in to GHCR on the server so it can pull the images; you can use a personal access token to authenticate with GHCR.
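For example (a sketch; `GH_USER` and `GH_TOKEN` are placeholders for your username and a token with the `read:packages` scope):

```shell
# Log the server's Docker daemon in to GHCR once; credentials are cached.
echo "$GH_TOKEN" | docker login ghcr.io -u "$GH_USER" --password-stdin
```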
Now create a `docker-compose.yaml` file to run the apps. Here is an example:
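Continuing the same stack, each app is added as another service (the `myapp` name and image below are examples, not my real services):

```yaml
# docker-compose.yaml -- the proxy plus the apps, all in one stack
services:
  proxy:
    image: jc21/nginx-proxy-manager:latest
    restart: unless-stopped
    ports:
      - "80:80"
      - "443:443"
      - "81:81"
    volumes:
      - ./data:/data
      - ./letsencrypt:/etc/letsencrypt

  myapp:
    image: ghcr.io/<user>/myapp:latest
    restart: unless-stopped
    env_file:
      - myapp.env
```

Note that `myapp` publishes no ports: since it shares the stack's network with the proxy, Nginx Proxy Manager can reach it by service name, and the app stays off the public interface.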
The convention I follow is to organize the environment variables in a `<service>.env` file. This makes the environment variables easier to manage and maintain, and it keeps each service's secrets neatly separated.
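For example, a hypothetical `myapp.env` (the names and values here are invented):

```shell
# myapp.env -- loaded via env_file in the compose file
DATABASE_URL=postgres://app:s3cret@db:5432/app
SESSION_SECRET=change-me
PORT=3000
```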
Next, go to the proxy manager and add a new proxy host. Enter the domain name, and for the forward host and port use the service's name from the compose file and the port it listens on (Docker's internal DNS resolves service names between containers on the same network).
Now, whenever you push to the main branch, the GitHub Actions workflow will build the Docker image and push it to GHCR. But the new image still needs to be pulled on the server and run with Docker Compose. To automate this, I created a simple script that pulls the images and runs them with Docker Compose.
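The original script isn't reproduced verbatim; a sketch of the idea, with the compose path as an assumption, looks like this:

```shell
#!/usr/bin/env bash
# deploy.sh -- pull the latest image for every service, then restart the stack
set -euo pipefail

COMPOSE_FILE="/path/to/docker-compose.yaml"  # adjust to your setup

# yq extracts every image referenced in the compose file
for image in $(yq '.services[].image' "$COMPOSE_FILE"); do
  docker pull "$image"
done

# Recreate only the containers whose image actually changed
docker compose -f "$COMPOSE_FILE" up -d
```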
You will need `yq` installed on the server to run this script. Also, make sure to make the script executable by running `chmod +x deploy.sh`. You can customize it to your needs.
Tip: Set the Docker Compose path in the script to the path where your `docker-compose.yaml` file is located, and move the script to the `/usr/local/bin` directory to run it from anywhere.
Now, whenever you want to deploy a new service, just add it to the `docker-compose.yaml` file and run the script. It will pull the image and run it with Docker Compose.
To update a service, just push the changes to the main branch: the GitHub Actions workflow will build the new image and push it to GHCR, and then you only need to run the script to update the service.
Further improvements
In the future, I plan to move to a `k8s` setup. But for now, this setup is working great for me. It's simple, easy to maintain, and resource-efficient.
You can also use GitHub webhooks to run the script whenever a new image is pushed to GHCR. This will automate the deployment process even further.
For this, you can use a simple REST API server that listens for the webhook and runs the script, for example via `exec` or something similar. And if you keep that server in the stack itself (which would be a good idea), you can use `ssh2` to run the script on the host.
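As one concrete sketch of that idea: rather than writing the REST server from scratch, the open-source `webhook` tool maps an HTTP endpoint to a command (the hook id, paths, and port here are assumptions, and you'd still want to verify the webhook signature in a real setup):

```shell
# hooks.json: a POST to /hooks/deploy runs the deploy script
cat > hooks.json <<'EOF'
[
  {
    "id": "deploy",
    "execute-command": "/usr/local/bin/deploy.sh",
    "command-working-directory": "/srv/stack"
  }
]
EOF

# Listen on port 9000; point the GitHub webhook at http://<server>:9000/hooks/deploy
webhook -hooks hooks.json -port 9000 -verbose
```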