Weird problem with macvlan/ipvlan networks in docker


piranha32

Active Member
I have a weird problem with networking in my docker containers, and I've run out of ideas for debugging it:
I have a pre-defined network, created as follows:
Code:
docker network create -d macvlan --opt parent=enp6s18 --subnet 192.168.48.0/22 --gateway 192.168.48.1 --ip-range 192.168.49.0/24 net48
This is the network-related configuration in my docker-compose file:
Code:
version: '3'
services:
  server:
    ...
    networks:
      net48:
        ipv4_address: 192.168.49.87

networks:
  net48:
    external: true
Neither the host nor other containers on the host can access this container. The ARP table on the host contains an incomplete entry for the address:
Code:
$ arp -a
...
? (192.168.49.87) at <incomplete> on enp6s18
...
The unexpected twist is that the container is accessible from all machines on the network, except the host running the container and other containers on the same host.

Starting a container using plain docker yields identical results:
Code:
docker run --rm -d --name nginx --network net48 --ip 192.168.49.89 nginx
...
# ping 192.168.49.89
PING 192.168.49.89 (192.168.49.89) 56(84) bytes of data.
From 192.168.49.27 icmp_seq=1 Destination Host Unreachable
From 192.168.49.27 icmp_seq=2 Destination Host Unreachable
From 192.168.49.27 icmp_seq=3 Destination Host Unreachable
^C
--- 192.168.49.89 ping statistics ---


What I tried:
- deleting the iptables rules created by docker (in case the firewall was the problem)
- adding another Ethernet switch between the host and the main switch
- changing the network type from macvlan to ipvlan (see the sketch below)

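The ipvlan variant was created the same way, just with the driver swapped. Roughly like this (a sketch: L2 is the default ipvlan mode and is shown only for clarity, and the macvlan network has to be removed first since the name is reused):
Code:
docker network create -d ipvlan --opt parent=enp6s18 --opt ipvlan_mode=l2 --subnet 192.168.48.0/22 --gateway 192.168.48.1 --ip-range 192.168.49.0/24 net48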
I have a feeling that I'm missing something trivial.
Your advice will be highly appreciated.
 

zac1

Well-Known Member
Related?
 

piranha32

Active Member
Crap, looks like the same problem. Thanks for digging it out.
To add insult to injury, it sometimes works. In fact, it worked for a long time: services talked to each other, and then it suddenly stopped.
And, of course, in all the tutorials on YT, macvlans work perfectly fine :/

EDIT:
According to Macvlan and container to container communication, containers on the same network should be able to talk to each other, and that is all I care about.
I have no idea why it sometimes fails.
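A quick way to re-test the container-to-container path is to ping one container from a throwaway container on the same network (a sketch, assuming the nginx container from the first post is still running):
Code:
docker run --rm --network net48 busybox ping -c 3 192.168.49.89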
 

PigLover

Moderator
Blocking the host's physical interfaces from communicating with a MacVLAN interface on the same machine is a feature, not a bug. It is designed to protect against a rather obscure network security issue.

The method linked by @casperghst42 is the "normal" way to allow the host and MacVLAN interfaces to communicate. But I find it somewhat cumbersome: it forces packets to take an extra loop through the kernel's IP stack, and it also "wastes" an IP address. I worked out a different method that seems simpler and works for me. YMMV.
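In case the link ever dies, that approach looks roughly like this (a sketch using the interface and addressing from the first post; the shim name and the .2 address are placeholders):
Code:
# create a macvlan "shim" on the host, in bridge mode like the docker network
ip link add net48-shim link enp6s18 type macvlan mode bridge
ip addr add 192.168.49.2/32 dev net48-shim   # the "wasted" host-side address
ip link set net48-shim up
# send traffic for the container range through the shim instead of enp6s18
ip route add 192.168.49.0/24 dev net48-shim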

I define another MacVLAN interface on the host and assign the host's primary IP address to it. The host's physical interface is up but has no address assigned. Every service - the host itself and all of my docker containers - then sits on its own MacVLAN port, and they can all communicate with each other freely, like this:

Code:
# create a macvlan port on top of the physical NIC and move the host's IP onto it
ip link add eno0_macvlan link eno0 type macvlan mode bridge
ip addr add 172.16.xxx.xxx/16 dev eno0_macvlan
ip link set eno0_macvlan up
ip route add default via 172.16.0.1 dev eno0_macvlan
If you are doing this on Proxmox (as many of us do), you modify the vmbr device in /etc/network/interfaces. Note that normally the vmbr0 device gets the host's address and default gateway; instead, you put them on a MacVLAN device sharing the same physical interface. Note also that your VMs can still communicate with the MacVLAN devices because, apparently, bridge ports off of the host's physical interface are not blocked by this "feature".

Code:
auto vmbr0
iface vmbr0 inet static
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
        mtu 9000
        post-up ip link add vmbr0_macvlan link vmbr0 type macvlan mode bridge
        post-up ip addr add 172.16.xxx.xxx/16 dev vmbr0_macvlan
        post-up ip link set vmbr0_macvlan up
        post-up ip route add default via 172.16.0.1 dev vmbr0_macvlan
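After an ifreload -a (or a reboot), a quick sanity check, assuming the names above: vmbr0 should be up with no address, and the default route should point at vmbr0_macvlan.
Code:
ip -br addr show vmbr0
ip -br addr show vmbr0_macvlan
ip route show default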