Deployment… I don’t know about you, but no matter how many times I’ve deployed my apps, I always keep a stash of notes on stickies somewhere to remember what to do, why, and when.
So, finally, I decided to put together this practical step-by-step guide for deploying a React + Node app on an EC2 virtual machine using Docker, Docker Compose, and Nginx.
Although beginner-friendly, this guide is suitable for production-grade applications. We’ll cover setting up SSL and HTTPS, restricting inbound traffic, and using Docker Compose for easier instance management.
What We’ll Do in This Guide
- Create a React frontend app
- Create a Node backend app
- Create a Dockerfile for the frontend and backend
- Create a Docker Compose file
- Launch our server / EC2 instance
- Copy project files to the server
- Configure DNS settings
- Configure the server
- Configure Nginx
- Set up SSL and HTTPS
Prerequisites
- AWS Account with necessary permissions
- Node.js and npm installed on your machine
Create a React Frontend App
Let’s create our boilerplate React app. I usually use Vite, since CRA started giving me trouble after the React 19 release. Vite’s dev server is also faster, and it produces smaller, better-optimized bundles.
Next, create a root project directory (I called it my-sample-app), and inside it, create your React app.
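With Vite, that’s a single command, run from the root project directory:
npm create vite@latest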
When prompted, provide:
- The name for your frontend React app (in my case, it’s called client)
- Framework → React
- Variant → JS or TS as you prefer
Create a Node Backend App
In the root project directory, create a directory for your backend project (I called mine backend).
In the backend folder, initiate npm and install the Express server:
npm init -y
npm i express
In the backend folder, create an index.js file with basic server configurations and a sample API endpoint (GET /api):
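Here’s a minimal sketch of that index.js; the port (4000) matches what the rest of this guide assumes for the backend, and the response body is just illustrative:

// index.js: a basic Express server with a sample API endpoint
const express = require('express');

const app = express();
const PORT = process.env.PORT || 4000;

// Sample endpoint: GET /api
app.get('/api', (req, res) => {
  res.json({ message: 'Hello from the backend!' });
});

app.listen(PORT, () => {
  console.log(`Server listening on port ${PORT}`);
});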
This is how your folder structure should look at this point:
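Something like this (the exact contents of client depend on the Vite template you chose):

my-sample-app/
├── client/
│   ├── src/
│   ├── package.json
│   └── ...
└── backend/
    ├── index.js
    ├── package.json
    └── node_modules/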
Create a Dockerfile for the Frontend and Backend
Inside the client folder (where your React app is), create a Dockerfile:
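Here is one possible sketch: a multi-stage build that compiles the app and serves the static files on port 3000 (the port the rest of this guide assumes for the frontend). The node:20-alpine base image and the serve package are my choices for this example, not requirements:

# Build stage: install dependencies and build the production bundle
FROM node:20-alpine AS build
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
RUN npm run build

# Runtime stage: serve the static build on port 3000
FROM node:20-alpine
WORKDIR /app
RUN npm install -g serve
COPY --from=build /app/dist ./dist
EXPOSE 3000
CMD ["serve", "-s", "dist", "-l", "3000"]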
Create a Dockerfile in your backend folder as well:
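And a sketch for the backend, again assuming node:20-alpine and port 4000:

# Install production dependencies and run the server
FROM node:20-alpine
WORKDIR /app
COPY package*.json ./
RUN npm install --omit=dev
COPY . .
EXPOSE 4000
CMD ["node", "index.js"]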
Create a Docker Compose File
In the root project directory, create a docker-compose.yml file:
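A minimal sketch, assuming the service names client and backend and the ports used above:

services:
  client:
    build: ./client
    ports:
      - "3000:3000" # host:container, the port Nginx will proxy frontend traffic to
    restart: unless-stopped
  backend:
    build: ./backend
    ports:
      - "4000:4000" # the port Nginx will proxy /api traffic to
    restart: unless-stopped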
Launch Server / EC2 instance
Now it’s time to create our server, i.e., launch the EC2 instance.
Log in to AWS Console → Search for EC2 in the search bar → Go to EC2 Console:
Click on the “Launch instance” button.
If you don’t already have a .pem key, create one here. Click on “Create new key pair”:
Provide a name for your key pair, select RSA key pair type, .pem format and click on “Create key pair”:
In the network settings section, select the following options (they can be tweaked later for “tighter” access):
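A reasonable baseline, and the one this guide assumes, is to:
- Allow SSH traffic (port 22) from anywhere (0.0.0.0/0); we’ll tighten this in the Security Tips section at the end
- Allow HTTPS traffic from the internet (port 443)
- Allow HTTP traffic from the internet (port 80)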
Leave the rest as default, and finally click “Launch instance”. Wait a few minutes until the instance is in the running state and all status checks have passed:
Copy Project Files to the Server
Now that we have created our server, let’s connect to it using SSH and a .pem key pair.
First, please note the public IP of the server. Click on the created EC2 instance ID and copy its public IPv4 address:
Open the terminal on your local machine, go to the directory where your .pem file is stored, and connect to the server (ref. Picture 15). Enter “yes” when prompted.
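The command has this shape (Amazon Linux AMIs use the ec2-user account):
ssh -i <path_to_your_key> ec2-user@<public_ip_of_your_ec2>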
If you get an error about your key having too-open permissions, you’ll need to restrict them by running:
chmod 400 <path_to_your_key>
Now that you have ensured a successful SSH connection to the server, exit the server by typing “exit”.
We now have to copy our project files to the server.
You can do this in numerous ways. Usually, you would establish an SSH connection between EC2 and wherever your code is hosted (e.g., Bitbucket, GitHub, Azure DevOps).
In our case, we will just copy the project from our local machine directly to the server, and we already have an established SSH connection for this, so in your terminal run:
scp -i <path_to_your_key> -r <path_to_your_project_folder> ec2-user@<public_ip_of_your_ec2>:~/
Replace the path to your key, the path to your project, and the public IP of your EC2 instance as required.
For example:
scp -i my-sample-app-key-pair.pem -r /Users/marinka/Desktop/my-sample-app [email protected]:~/
The above command will copy your project to the server.
Now SSH back into your server (ref. Picture 15) and check that your project files are available there by running ls.
Configure DNS settings
To make your app available on a chosen domain or, in our case, subdomain, you need to log into your DNS provider.
For the sake of this article, I will deploy my sample app to sampleapp.catbytes.io.
Log in to your DNS provider (in my case, IONOS) and find where you can add a new DNS record.
If you want to deploy to a subdomain, add a new “A record” with the following values:
- Host name: your preferred subdomain name, i.e., the part that comes before the domain itself
- Points to: the public IP of your EC2 instance
- TTL: time to live, i.e., how long a DNS record can be cached before it needs to be refreshed (and therefore how long it can take for your changes to show up)
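For example, to deploy to sampleapp.catbytes.io, the record would look something like this (the TTL value is up to you):
Type: A | Host name: sampleapp | Points to: <public_ip_of_your_ec2> | TTL: 1 hour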
After you add a new DNS record, it will take some time to propagate.
You can use a website like dnschecker.org to check the propagation status.
After full propagation, your subdomain/domain should point to the public IP of your EC2 instance:
If you want to deploy to a domain and not a subdomain, you will need to provide @ as the value for the host name.
Configure the Server
Since we’re using Amazon Linux 2023, we will proceed with dnf for installing the required packages.
Connect to your server using SSH (ref. Picture 15).
Update packages
sudo dnf update -y
Install Docker
sudo dnf install docker -y
Start and enable Docker
sudo systemctl start docker
sudo systemctl enable docker
Add your user to the Docker group
sudo usermod -aG docker ec2-user
After this step, exit the server and connect back to it for the configurations to apply
Download the Docker Compose binary:
sudo curl -L "https://github.com/docker/compose/releases/download/v2.24.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
Run the below command to make it executable:
sudo chmod +x /usr/local/bin/docker-compose
Run the below command to verify Docker Compose installation:
docker-compose version
Run your project
cd my-sample-app
docker-compose up --build
Result
You should now have both frontend and backend containers running:
Configure Nginx
Now let’s install Nginx.
Nginx is an open-source software used as a web server, reverse proxy, load balancer, and more.
We will use it to serve our static build files from the React frontend, and also to act as a reverse proxy that forwards API requests to our Node backend. This allows us to expose a single, secure HTTPS endpoint for the entire application while keeping our internal services hidden and modular.
Install Nginx:
sudo dnf install nginx -y
Start and enable Nginx:
sudo systemctl start nginx
sudo systemctl enable nginx
Update the Nginx config file
First, open the main Nginx config file:
sudo nano /etc/nginx/nginx.conf
Then you need to update the server section of the nginx.conf file to include:
- server name (your subdomain name)
- proxy configurations for frontend (in our case deployed on http://localhost:3000)
- proxy configurations for backend (in our case deployed on http://localhost:4000)
See example below:
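Here’s a minimal sketch of such a server block (the proxy_set_header lines are common additions; adjust them to your needs):

server {
    listen 80;
    server_name sampleapp.catbytes.io;

    # Forward API requests to the Node backend
    location /api {
        proxy_pass http://localhost:4000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }

    # Everything else goes to the React frontend
    location / {
        proxy_pass http://localhost:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}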
To save your changes in the nginx.conf file, press Control + X, then Y, then Enter.
You can check the nginx.conf syntax for errors by running sudo nginx -t
After saving the configuration changes, reload Nginx by running sudo systemctl reload nginx
Set up SSL and HTTPS
Finally, let’s set up SSL certificates and HTTPS.
Install Certbot + Nginx plugin
sudo dnf install -y certbot python3-certbot-nginx
Issue the SSL certificate for your domain/subdomain
sudo certbot --nginx -d <your_subdomain_or_domain_name>
For example, in our case:
sudo certbot --nginx -d sampleapp.catbytes.io
Deployment Complete!
Ta-daa!
Congratulations, you have successfully deployed your application!
You should now see your React app available at your domain/subdomain and API available at your domain/subdomain/api, for example:
- React is available at https://sampleapp.catbytes.io
- Node is available at https://sampleapp.catbytes.io/api
Security Tips
Depending on your app and requirements, you might want to adopt a few security best practices to make your app and/or API more robust and secure.
Consider the tips below:
- Restrict SSH access as required. Currently, you can SSH from anywhere (0.0.0.0/0); you might want to restrict that to, for example, only your local machine’s IP or your company VPN. Configure this in the AWS EC2 Console → your EC2 instance → Security Group → Inbound rules
- Set up a cron task on your server to automatically renew the SSL certificates. This is important, as when they expire (and they do expire after 90 days), your app will be down. You can use certbot for this (see the example after this list)
- In your Dockerfiles, try not to use the root user; it’s better to create a non-root user to run the commands
- Consider hardening the Nginx configurations to limit the possibility of DoS attacks, e.g. add basic rate limiting and security headers
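For the certificate-renewal tip, a minimal sketch: open the root crontab with sudo crontab -e and add a line like the one below (the 3 a.m. schedule is arbitrary; certbot renew only replaces certificates that are close to expiry):
0 3 * * * certbot renew --quiet --post-hook "systemctl reload nginx"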
Conclusion
In this guide, we covered the following:
- Docker basics
- AWS EC2 setup and traffic security rules
- Nginx reverse proxy setup
- HTTPS certificates with Certbot
- Full-stack deployment
Hope you enjoyed it and found it easy to follow.
If you encountered any issues, please reach out.
Would also love to hear what you’re building and deploying 🚀