Docker Bridge Network

Hello readers,
This blog will introduce you to the Docker bridge network: what it is, some basic use cases, and how user-defined bridge networks differ from the default bridge network.

One of the reasons Docker containers and services are so powerful is that you can connect them together, or connect them to non-Docker workloads. 

Docker Network

Docker’s networking subsystem is pluggable, using drivers. Several drivers exist by default, and provide core networking functionality:

Types of Docker networks (a short sketch of choosing a driver follows this list):
  • bridge: The default network driver. If you don’t specify a driver, this is the type of network you are creating. Bridge networks allow your applications to run in standalone containers that need to communicate.
  • host: For standalone containers, remove network isolation between the container and the Docker host, and use the host’s networking directly. The host driver is also available for swarm services on Docker 17.06 and higher.
  • overlay: Overlay networks connect multiple Docker daemons together and enable swarm services to communicate with each other. You can also use overlay networks to facilitate communication between a swarm service and a standalone container, or between two standalone containers on different Docker daemons.
  • macvlan: Macvlan networks allow you to assign a MAC address to a container, making it appear as a physical device on your network. The Docker daemon routes traffic to containers by their MAC addresses.
  • none: For this container, disable all networking. Usually used in conjunction with a custom network driver.
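
For instance, here is a minimal sketch of how a driver is chosen when creating a network or running a container (the network name my-bridge and these invocations are illustrative examples, not part of the walkthrough below):

$ docker network create --driver bridge my-bridge   # user-defined bridge network
$ docker run --rm --network host alpine ip addr     # share the host's network stack
$ docker run --rm --network none alpine ip addr     # no networking at all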

Let’s create a bridge network named alpine-net, start two Alpine containers on it, and ping between them to check network connectivity.

$ docker network create --driver bridge alpine-net
962f373dd1cc9b9d532287b898e56d51fe1e5cf09fe90208ccbf34e51ea4511c
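
By default, Docker picks a free subnet for the new network (172.25.0.0/16 in this case, as the inspect output below shows). If you need a specific address range, you can pass it explicitly; the subnet, gateway, and network name here are only examples:

$ docker network create --driver bridge --subnet 172.28.0.0/16 --gateway 172.28.0.1 alpine-net-custom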

Now, let’s verify that our network was created. First, list all the networks:

$ docker network ls
NETWORK ID          NAME                    DRIVER              SCOPE
962f373dd1cc        alpine-net              bridge              local
fc2fdc49b00e        bridge                  bridge              local
0ef0faccbd7c        composetest_default     bridge              local
01d13666890b        docker_gwbridge         bridge              local
1aacb6d03c87        host                    host                local
o3elp2pjl4ts        ingress                 overlay             swarm
06bb3e7d1791        mukesh_default          bridge              local
a931e1725a7c        multi-compose_default   bridge              local
189383597a65        my-bridge-network       bridge              local
212ce0d9e3e6        my-net                  bridge              local
d56c73e70952        none                    null                local
we4mfvh0mx1u        overlay                 overlay             swarm
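
If the list gets long, you can narrow it down with a filter. For example, to show only the bridge networks:

$ docker network ls --filter driver=bridge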

Inspect the alpine-net network.

$ docker network inspect alpine-net
[
    {
        "Name": "alpine-net",
        "Id": "962f373dd1cc9b9d532287b898e56d51fe1e5cf09fe90208ccbf34e51ea4511c",
        "Created": "2019-12-26T00:22:57.833279811+05:30",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": {},
            "Config": [
                {
                    "Subnet": "172.25.0.0/16",
                    "Gateway": "172.25.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {},
        "Options": {},
        "Labels": {}
    }
]
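
Instead of reading the full JSON output, you can pull out individual fields with a Go template via the -f/--format flag. For example, this prints just the subnet of alpine-net:

$ docker network inspect -f '{{range .IPAM.Config}}{{.Subnet}}{{end}}' alpine-net
172.25.0.0/16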

Start two Alpine containers running ash, which is Alpine’s default shell rather than bash. The -dit flags mean to start the container detached (in the background), interactive (with the ability to type into it), and with a TTY (so you can see the input and output). Because we pass --network alpine-net, the containers connect to our user-defined alpine-net network rather than the default bridge network.

Starting alpine1
$ docker run -dit --name alpine1 --network alpine-net alpine ash
c4f8e45cff703a74b627e6e955c1938c5e85adc0a925019b87f313e23e82e5ef

Starting alpine2

$ docker run -dit --name alpine2 --network alpine-net alpine ash
dae29f3b4a80073ba42cb9b9581172a69149c1b3e879a14399f2ba6d14866d91
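
Note that you don’t have to choose a network only at docker run time. A running container can also be attached to or detached from a network later; for example, these commands would connect alpine2 to the default bridge network as a second network and then disconnect it again:

$ docker network connect bridge alpine2
$ docker network disconnect bridge alpine2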

Inspect the alpine-net network again:

$ docker network inspect alpine-net
[
    {
        "Name": "alpine-net",
        "Id": "962f373dd1cc9b9d532287b898e56d51fe1e5cf09fe90208ccbf34e51ea4511c",
        "Created": "2019-12-26T00:22:57.833279811+05:30",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": {},
            "Config": [
                {
                    "Subnet": "172.25.0.0/16",
                    "Gateway": "172.25.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {
            "c4f8e45cff703a74b627e6e955c1938c5e85adc0a925019b87f313e23e82e5ef": {
                "Name": "alpine1",
                "EndpointID": "20e78ac34394f90afa577995f9e4a9a1dacfc3d7c5358592ba9e419b0cabad47",
                "MacAddress": "02:42:ac:19:00:02",
                "IPv4Address": "172.25.0.2/16",
                "IPv6Address": ""
            },
            "dae29f3b4a80073ba42cb9b9581172a69149c1b3e879a14399f2ba6d14866d91": {
                "Name": "alpine2",
                "EndpointID": "76617fd6b6912b1011bd7812848e8a2d2ae861ca7b0656deadb8f85ad1675f5d",
                "MacAddress": "02:42:ac:19:00:03",
                "IPv4Address": "172.25.0.3/16",
                "IPv6Address": ""
            }
        },
        "Options": {},
        "Labels": {}
    }
]

Containers alpine1 and alpine2 are connected to the alpine-net network.
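
You can see the same information from the container’s side as well; this prints alpine1’s endpoint details (IP address, gateway, MAC address) on alpine-net as JSON:

$ docker container inspect -f '{{json .NetworkSettings.Networks}}' alpine1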

Networking between containers

On user-defined networks like alpine-net, containers can not only communicate by IP address but can also resolve a container name to an IP address. This capability is called automatic service discovery.

Let’s connect to alpine1 and test this out. alpine1 should be able to resolve alpine2 (and alpine1, itself) to IP addresses.

A container that is not on the alpine-net network, on the other hand, would not be able to resolve these names at all, since automatic service discovery only works within a user-defined network.
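
You can check this with a throwaway container on the default bridge network; the name lookup should fail with something like the following (a quick sketch, assuming BusyBox ping’s usual error message):

$ docker run --rm alpine ping -c 1 alpine2
ping: bad address 'alpine2'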

$ docker container attach alpine1
/ # ping -c 5 alpine2
PING alpine2 (172.25.0.3): 56 data bytes
64 bytes from 172.25.0.3: seq=0 ttl=64 time=0.369 ms
64 bytes from 172.25.0.3: seq=1 ttl=64 time=0.232 ms
64 bytes from 172.25.0.3: seq=2 ttl=64 time=0.233 ms
64 bytes from 172.25.0.3: seq=3 ttl=64 time=0.238 ms
64 bytes from 172.25.0.3: seq=4 ttl=64 time=0.235 ms

--- alpine2 ping statistics ---
5 packets transmitted, 5 packets received, 0% packet loss
round-trip min/avg/max = 0.232/0.261/0.369 ms
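
To leave alpine1 without stopping it, detach with the escape sequence CTRL+p CTRL+q. Once you are done experimenting, you can clean everything up:

$ docker container stop alpine1 alpine2
$ docker container rm alpine1 alpine2
$ docker network rm alpine-net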

Conclusion

The Docker bridge network uses a software bridge that allows containers connected to the same bridge network to communicate, while providing isolation from containers that are not connected to that bridge network. The Docker bridge driver automatically installs rules on the host machine so that containers on different bridge networks cannot communicate directly with each other.
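
On a Linux host you can peek at these rules yourself. In recent Docker releases the isolation rules live in iptables chains named DOCKER-ISOLATION-STAGE-1 and DOCKER-ISOLATION-STAGE-2 (the chain names have varied across Docker versions, so treat this as a sketch):

$ sudo iptables -L DOCKER-ISOLATION-STAGE-1 -n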

References

https://docs.docker.com/network/network-tutorial-standalone/

Written by 

I always love to learn and explore new technologies. I have working skills in Linux, AWS, DevOps tools (Jenkins, Git, Maven, CI/CD, Ansible), shell/bash scripting, and Docker, as well as the ELK stack and Grafana for logging and visualization.
