Docker has become the de facto standard for packaging, shipping, and running applications. Whether you’re a seasoned developer or a curious hobbyist, learning how to deploy your first Docker container is a foundational skill that opens the door to microservices, CI/CD pipelines, and cloud‑native architectures. This guide walks you through every step—from installing Docker to running a simple web app—so you can hit the ground running.

Introduction
Imagine being able to run a web application on any machine, from your laptop to a cloud server, without worrying about dependencies, library versions, or environment quirks. Docker turns that vision into reality by encapsulating your app and its runtime into a lightweight, portable container. In this tutorial, we’ll:
- Install Docker on your local machine (Linux, macOS, or Windows).
- Create a minimal “Hello World” application.
- Write a Dockerfile to build an image.
- Build and run the container.
- Expose the app to the outside world.
- Push the image to Docker Hub (optional).
By the end, you’ll have a fully functional container running on your machine and a solid understanding of Docker’s core concepts.
1. Prerequisites & Environment Setup
Before we dive into Docker, let’s make sure your environment is ready.
1.1 Operating System
Docker supports:
- Linux (Ubuntu, Debian, Fedora, CentOS)
- macOS (Catalina or newer)
- Windows 10/11 (64‑bit; Pro/Enterprise with Hyper‑V, or any edition, including Home, via the WSL 2 backend)
Choose the installation method that matches your OS. The commands below assume an Ubuntu 20.04 machine; adapt them if you’re on macOS or Windows.
1.2 System Requirements
- CPU: 64‑bit architecture
- RAM: Minimum 4 GB (Docker Desktop on macOS/Windows uses a virtual machine)
- Disk Space: At least 10 GB free for images and containers
1.3 Install Docker Engine
Ubuntu/Debian
sudo apt-get update
sudo apt-get install -y \
ca-certificates \
curl \
gnupg \
lsb-release
# Add Docker’s official GPG key
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
# Set up the stable repository
echo \
"deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu \
$(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update
sudo apt-get install -y docker-ce docker-ce-cli containerd.io
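On Linux, the Docker daemon runs as root, so `docker` commands need `sudo` by default. An optional post‑install step is to add your user to the `docker` group (note this grants root‑equivalent access to that user):

```shell
# Allow running docker without sudo (log out and back in for this to take effect)
sudo usermod -aG docker "$USER"

# Confirm the Docker daemon is running
sudo systemctl status docker --no-pager
```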
macOS
Download Docker Desktop for Mac from the Docker website, run the installer, and follow the on‑screen prompts. Once installed, launch Docker Desktop and wait for the whale icon in the menu bar to settle, which indicates the daemon is running.
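If you use Homebrew, Docker Desktop can also be installed from the command line instead of via the website download:

```shell
# Installs the Docker Desktop application (the cask), not just the CLI
brew install --cask docker
```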
Windows
Download Docker Desktop for Windows. During installation, enable the “Use WSL 2 instead of Hyper‑V” option if you’re on Windows 10 Home, which does not include Hyper‑V. After installation, start Docker Desktop and confirm the whale icon in the system tray indicates Docker is running.
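The WSL 2 backend requires WSL itself to be installed first. On Windows 10 (version 2004 or later) or Windows 11, an elevated PowerShell window can set this up:

```shell
# Run in an elevated PowerShell window; a reboot may be required afterwards
wsl --install
wsl --set-default-version 2
```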
1.4 Verify Installation
docker --version
You should see something like `Docker version 20.10.12, build e91ed57`. The exact version and build hash will vary; any recent release is fine.
If you encounter errors, consult Docker’s troubleshooting guide.
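A more thorough check is to run Docker’s own test image, which exercises the daemon, registry access, and container runtime end to end:

```shell
# Pulls the hello-world image (if absent), runs it, and removes the container
docker run --rm hello-world
```

If this prints a “Hello from Docker!” message, your installation is fully working.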
2. Create a Simple “Hello World” App
Let’s build a tiny Node.js web server that responds with “Hello, Docker!” This will serve as our first containerized application.
2.1 Project Directory
mkdir hello-docker
cd hello-docker
2.2 app.js
Create a file named app.js
// app.js
const http = require('http');
const hostname = '0.0.0.0';
const port = 3000;
const server = http.createServer((req, res) => {
res.statusCode = 200;
res.setHeader('Content-Type', 'text/plain');
res.end('Hello, Docker!\n');
});
server.listen(port, hostname, () => {
console.log(`Server running at http://${hostname}:${port}/`);
});
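Before containerizing, it’s worth a quick local smoke test. This assumes Node.js is installed on your host; it isn’t required for the Docker workflow itself:

```shell
node app.js &                 # start the server in the background
SERVER_PID=$!
sleep 1                       # give the server a moment to start
curl http://localhost:3000    # should print "Hello, Docker!"
kill "$SERVER_PID"            # stop the background server
```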
2.3 package.json
Create a minimal package.json to manage the Node.js dependency:
{
"name": "hello-docker",
"version": "1.0.0",
"description": "A simple Node.js app for Docker demo",
"main": "app.js",
"scripts": {
"start": "node app.js"
},
"dependencies": {
"express": "^4.17.1"
}
}
Tip: Even though we’re not using Express in this example, including it demonstrates how to install dependencies in a Docker image.
3. Write the Dockerfile
A Dockerfile is a set of instructions that Docker uses to build an image. Let’s craft a minimal Dockerfile for our Node.js app.
# Dockerfile
# 1️⃣ Use an official Node runtime as a parent image
FROM node:18-alpine
# 2️⃣ Set the working directory inside the container
WORKDIR /usr/src/app
# 3️⃣ Copy package.json and package-lock.json (if present)
COPY package*.json ./
# 4️⃣ Install production dependencies only
# (--omit=dev is the supported flag on npm 8+; it replaces the deprecated --only=production)
RUN npm install --omit=dev
# 5️⃣ Copy the rest of the application code
COPY . .
# 6️⃣ Expose the port the app listens on
EXPOSE 3000
# 7️⃣ Define the command to run the app
CMD ["npm", "start"]
3.1 Why Alpine?
node:18-alpine is a lightweight image based on Alpine Linux. It’s a fraction of the size of the Debian‑based node:18 image, which keeps our final image small and fast to download.
3.2 Build Context
When you run docker build, Docker sends the build context (the directory you’re in) to the Docker daemon. Docker also caches each layer of the image: because we copy package*.json before the rest of the code, the npm install layer is rebuilt only when those files change, not on every code edit.
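To keep the build context small, and to keep host artifacts like node_modules out of the image, add a `.dockerignore` file next to the Dockerfile. A minimal example for this project:

```
# .dockerignore
node_modules
npm-debug.log
.git
```

Excluding node_modules also prevents the `COPY . .` step from overwriting the freshly installed dependencies with whatever happens to be on your host.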
4. Build the Docker Image
Now that we have a Dockerfile, let’s build the image.
docker build -t hello-docker .
- `-t hello-docker` tags the image with the name `hello-docker`.
- `.` tells Docker to use the current directory as the build context.
You’ll see output similar to the following (newer Docker versions use BuildKit, whose output is formatted differently):
Sending build context to Docker daemon 2.048kB
Step 1/7 : FROM node:18-alpine
---> 3a5c3b5f6f3b
...
Successfully built 5f3e2c1a9b4d
Successfully tagged hello-docker:latest
If you run docker images, you should see hello-docker listed.
5. Run the Container
With the image built, we can run it.
docker run -d --name hello-container -p 8080:3000 hello-docker
- `-d` runs the container in detached mode (in the background).
- `--name hello-container` gives the container a friendly name.
- `-p 8080:3000` maps port 8080 on your host to port 3000 inside the container (the port our app listens on).
5.1 Verify the App
Open a browser or use curl:
curl http://localhost:8080
You should see:
Hello, Docker!
5.2 Inspect Container Logs
docker logs hello-container
You’ll see the Node.js startup message:
Server running at http://0.0.0.0:3000/
5.3 Stop & Remove the Container
docker stop hello-container
docker rm hello-container
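The two steps can be combined: `docker rm -f` force‑removes a container even if it is still running.

```shell
# Stop (if running) and remove the container in one command
docker rm -f hello-container
```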
6. Advanced: Persisting Data & Using Volumes
A container’s writable filesystem layer is ephemeral: it disappears when the container is removed. If your application writes data that should survive, persist it outside the container.
6.1 Create a Volume
docker volume create hello-data
6.2 Run with Volume Mount
docker run -d --name hello-container \
-p 8080:3000 \
-v hello-data:/usr/src/app/data \
hello-docker
Now, any files written to /usr/src/app/data inside the container will be stored in the Docker volume hello-data, surviving container restarts.
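You can inspect the volume’s metadata, including its mountpoint on the host. During development you may instead prefer a bind mount, which maps a host directory directly into the container rather than using a named volume:

```shell
# Show the volume's metadata, including where it lives on the host
docker volume inspect hello-data

# Alternative: bind-mount a host directory (./data) instead of a named volume
docker run -d --name hello-container \
  -p 8080:3000 \
  -v "$(pwd)/data:/usr/src/app/data" \
  hello-docker
```

Named volumes are managed by Docker and are the better fit for production; bind mounts are convenient when you want to browse the files directly on your host.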
7. (Optional) Push to Docker Hub
If you want to share your image or deploy it to a cloud provider, push it to Docker Hub.
7.1 Create a Docker Hub Account
Go to hub.docker.com and sign up.
7.2 Tag the Image
docker tag hello-docker yourusername/hello-docker:latest
Replace yourusername with your Docker Hub username.
7.3 Log In & Push
docker login
docker push yourusername/hello-docker:latest
You’ll now see the image in your Docker Hub repository, ready to pull from any machine.
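From any other machine with Docker installed, the image can now be pulled and run directly (replace yourusername as before):

```shell
docker pull yourusername/hello-docker:latest
docker run -d -p 8080:3000 yourusername/hello-docker:latest
```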
8. Common Pitfalls & Troubleshooting
| Symptom | Likely Cause | Fix |
|---|---|---|
| `docker: command not found` | Docker not installed or not in PATH | Re‑install Docker, or ensure the directory containing the docker binary is on your PATH |
| Cannot connect to the Docker daemon | Docker daemon not running | Start Docker (`sudo systemctl start docker` on Linux; launch Docker Desktop on macOS/Windows) |
| Error response from daemon: port is already allocated | Port conflict | Use a different host port (`-p 9090:3000`) or stop the conflicting service |
| Permission denied when writing to a volume | User inside container lacks write permission | Run the container as root (`--user 0`) or adjust the volume’s ownership |
Conclusion
Deploying your first Docker container is a surprisingly straightforward process once you understand the key concepts: images, containers, Dockerfiles, and ports. By following this guide, you’ve:
- Installed Docker on your machine.
- Created a simple Node.js app.
- Built a Docker image from a Dockerfile.
- Ran the container and exposed it to the host network.
- Optionally pushed the image to Docker Hub.
These fundamentals will serve as the building blocks for more complex deployments—microservices, multi‑container Docker Compose setups, and cloud‑native CI/CD pipelines. Keep experimenting: try adding a database container, use Docker Compose to orchestrate multiple services, or deploy your image to a cloud provider like AWS ECS or Google Cloud Run. Happy containerizing!
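If you want to try the Docker Compose suggestion above, a minimal docker-compose.yml for this app might look like the following (the service name `web` is just a convention, not something Compose requires):

```yaml
services:
  web:
    build: .
    ports:
      - "8080:3000"
```

Running `docker compose up -d --build` then builds the image and starts the container in a single step.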