Docker networking is a critical aspect of Docker's functionality that enables seamless communication between containers and the outside world. It provides isolation, security, and ease of management for containerized applications.
Docker allows you to run multiple containers on a single host. Networking facilitates communication between these containers, enabling them to work together as a cohesive application. Containers can interact with each other using IP addresses or container names, making it easy to build complex, distributed systems.
Generally, there are 7 types of networking in Docker:
Default Bridge: The default bridge network is used to provide networking between Docker containers on a single host. It is named bridge and is created automatically when Docker is installed. It uses a software bridge (a virtual switch provided by Linux) to connect containers. Containers on the default bridge can reach each other by IP address; automatic name resolution is only available on user-defined bridge networks. Containers can expose ports and access services running in other containers on the bridge network, and they can reach external networks and the Internet via NAT, with inbound access provided by port mapping on the Docker host. The bridge network isolates containers because each container has its own network namespace, and a container can be connected to multiple networks at the same time. Let us get some hands-on:
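A minimal hands-on sketch, assuming the nginx and alpine images and the illustrative container name web:
# List the built-in networks; "bridge" is the default
docker network ls
# Run a web server and publish container port 80 on host port 8080
docker run -d --name web -p 8080:80 nginx
# Inspect the default bridge to find the container's IP address
docker network inspect bridge
# Reach the container from another container by IP (names do not resolve on the default bridge)
docker run --rm alpine ping -c 2 <container_IP>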
User-defined Bridge: User-defined bridge networks in Docker allow you to create custom bridge networks with specific configurations. Here are some key points:
You can create multiple bridge networks with the docker network create command.
You specify the network type as a bridge when creating a custom bridge network.
You can specify network-specific options like subnet, gateway, IP range, etc. This allows you to customize the IP addresses and network configuration for containers on that network.
You can connect containers to a specific bridge network using the --network flag with docker run or docker network connect.
Containers on the same user-defined bridge network can communicate and access each other's services using container names.
Containers connected to a user-defined bridge network are isolated from containers on other networks, including the default bridge network.
You have more control over the network configuration compared to the default bridge network.
User-defined bridge networks are useful when you want to group related containers on a separate network with specific configurations, as the sketch below illustrates.
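A minimal sketch, assuming the alpine image and the illustrative network name app_net:
# Create a user-defined bridge with a custom subnet and gateway
docker network create --driver bridge --subnet 172.25.0.0/16 --gateway 172.25.0.1 app_net
# Start two containers on that network
docker run -dit --name db --network app_net alpine
docker run -dit --name api --network app_net alpine
# Containers on the same user-defined bridge resolve each other by name
docker exec api ping -c 2 db
# Attach an already-running container to the network
docker network connect app_net <existing_container>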
MACVLAN: Macvlan is a Docker network driver that assigns each container a MAC address of its own. This allows containers to appear as separate hosts on the network. Some key points about Macvlan:
It uses the Linux kernel's Macvlan driver to assign each container a MAC address.
Each container gets its own MAC address and IP address.
Containers connected to a Macvlan network can communicate with each other and with other devices on the physical network, although by default they cannot reach the Docker host itself over the macvlan interface.
Containers can also communicate directly with the external network, bypassing NAT.
It is useful when containers need to appear as separate hosts on the network, for example in LBaaS (Load Balancer as a Service) scenarios.
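A minimal sketch, assuming eth0 is the host's physical interface and 192.168.1.0/24 is the LAN subnet (adjust both to your environment); the network name macvlan_net and the address 192.168.1.50 are illustrative:
# Create a macvlan network bound to the physical interface
docker network create -d macvlan --subnet=192.168.1.0/24 --gateway=192.168.1.1 -o parent=eth0 macvlan_net
# Give a container a fixed address on the LAN; it appears as a separate host
docker run -dit --name lan_host --network macvlan_net --ip 192.168.1.50 alpine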
IPVLAN L2: IPVLAN can be used as a network driver for Docker containers to provide network isolation. Some key points:
Each Docker container can be assigned its own IPVLAN interface.
Containers within the same IPVLAN can communicate on Layer 2, providing subnet-like networking.
Containers in different IPVLANs are isolated from each other, acting like separate subnets.
IPVLAN provides a more lightweight and efficient alternative to VLANs for Docker networking.
To use IPVLAN as the network driver for Docker, you need to:
- Enable the ipvlan kernel module:
sudo modprobe ipvlan
- Create the IPVLAN interfaces:
ip link add link <parent_interface> name <ipvlan_name> type ipvlan mode l2
- Create a Docker network that uses the ipvlan driver, pointing it at the parent interface:
sudo docker network create --driver=ipvlan --subnet=<CIDR> --gateway=<IP> -o parent=<parent_interface> -o ipvlan_mode=l2 --aux-address="<host_name>=<IP>" <network name>
- When running containers, attach them to the IPVLAN network:
sudo docker run --network <network name> ...
The containers will then be assigned IPs within the IPVLAN subnet and will be able to communicate on Layer 2.
So in summary, IPVLAN provides an efficient Docker networking solution, allowing for network isolation between containers while reusing the parent interface's MAC address.
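A concrete L2-mode sketch, assuming eth0 as the parent interface and 192.168.10.0/24 as the subnet; the network and container names are illustrative:
sudo docker network create -d ipvlan --subnet=192.168.10.0/24 --gateway=192.168.10.1 -o parent=eth0 -o ipvlan_mode=l2 ipvlan_l2_net
sudo docker run -dit --name c1 --network ipvlan_l2_net alpine
sudo docker run -dit --name c2 --network ipvlan_l2_net alpine
# Containers on the same ipvlan network resolve and reach each other directly
sudo docker exec c1 ping -c 2 c2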
IPVLAN L3: To use IPVLAN in Layer 3 mode for Docker networking:
Enable the ipvlan kernel module:
sudo modprobe ipvlan
Create the IPVLAN interfaces in Layer 3 mode:
ip link add link <parent_interface> name <ipvlan_name> type ipvlan mode l3
Assign an IP address to the IPVLAN interface:
ip addr add <IP>/<netmask> dev <ipvlan_name>
Create a Docker network that uses the ipvlan driver in L3 mode (no gateway is specified, since in L3 mode traffic is routed through the parent interface):
sudo docker network create --driver=ipvlan --subnet=<CIDR> -o parent=<parent_interface> -o ipvlan_mode=l3 <network name>
When running containers, attach them to the IPVLAN network:
sudo docker run --network <network name> ...
The containers will be assigned IPs within the IPVLAN subnet and can only communicate at Layer 3.
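A concrete L3-mode sketch, again assuming eth0 as the parent interface; the subnet and names are illustrative, and no gateway is given because L3 mode routes through the parent interface:
sudo docker network create -d ipvlan --subnet=192.168.20.0/24 -o parent=eth0 -o ipvlan_mode=l3 ipvlan_l3_net
sudo docker run -dit --name c3 --network ipvlan_l3_net alpine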
The key differences between L2 and L3 IPVLAN modes for Docker are:
L2 mode: Containers can communicate at Layer 2, acting like a single broadcast domain.
L3 mode: Containers can only communicate at Layer 3, acting like separate subnets.
Other than that, the setup and benefits of using IPVLAN for Docker networking remain the same:
Network isolation
More efficient than VLANs
Reuse of the parent interface's MAC address
Overlay network: Overlay networking allows Docker containers running on different hosts to communicate with each other. Some key points:
By default, Docker containers can only communicate with other containers on the same host.
Overlay networking uses an encapsulation technique to transport container traffic between hosts.
It creates a virtual network that spans multiple hosts, so containers on different hosts behave as if they were on the same subnet.
Containers on different hosts but part of the same overlay network can communicate seamlessly.
Docker uses two main overlay networking drivers:
Docker native driver (default): Uses VXLAN encapsulation.
Third-party drivers: Flannel, Weave Net, etc.
To configure overlay networking in Docker:
Enable IP forwarding on all hosts:
sysctl -w net.ipv4.ip_forward=1
Create the overlay network; the built-in overlay driver requires the hosts to be joined in a Docker Swarm (or, in legacy setups, to share an external key-value store), and the --attachable flag lets standalone containers started with docker run join it:
docker network create --driver overlay --attachable <network name>
When running containers, attach them to the overlay network:
docker run --network <network name> ...
Containers attached to the same overlay network, but running on different hosts, can now communicate using their container IPs.
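A minimal two-host sketch, assuming swarm mode and the illustrative names multi_host_net, svc1, and svc2; <manager_IP> and <token> come from your own environment:
# On host 1: initialize swarm mode, which the overlay driver requires
docker swarm init --advertise-addr <manager_IP>
# On host 2: join the swarm using the token printed by the previous command
docker swarm join --token <token> <manager_IP>:2377
# On host 1: create an attachable overlay network
docker network create --driver overlay --attachable multi_host_net
# Run one container per host, both attached to the overlay network
docker run -dit --name svc1 --network multi_host_net alpine   # on host 1
docker run -dit --name svc2 --network multi_host_net alpine   # on host 2
# From host 1, reach the container on host 2 by its overlay IP
docker exec svc1 ping -c 2 <svc2_IP>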
Overlay networking allows Docker to scale beyond a single host, providing interconnectivity between containers running in a cluster of Docker daemons.
The main benefits are:
Seamless communication between containers on different hosts
Simplified networking model
No need for configuring routing between hosts
None: The none network driver in Docker provides complete network isolation for containers. Some key points:
By default, containers can communicate with each other if they are connected to the same network.
The none network driver disconnects a container from all networking.
Containers using the none network cannot communicate with any other containers or the host machine. They are fully isolated.
The none network can be useful for:
Security testing containers
Running containers that do not require any network access
To run a container on the none network:
docker run --network none <image>
Or you can create a network that uses the none driver:
docker network create --driver none isolation_network
And then connect containers to it:
docker run --network isolation_network <image>
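To see the isolation, a quick check (assuming the alpine image) shows that a container on the none network has only a loopback interface:
docker run --rm --network none alpine ip addr
# Only "lo" is listed, so there is no inbound or outbound connectivity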
The main benefits of the none network are:
Provides full network isolation for containers
Useful for security testing
Ensures containers cannot make outbound connections
The drawbacks are:
Containers cannot access any external resources
Limited use cases