Wireguard tunnel slow and intermittent


After asking this question, I've set up a WireGuard VPN that forwards all traffic from my local LAN to a remote server. Connecting from the WireGuard client host itself is fast. However, connections from other clients on the LAN are much slower and drop frequently. Traceroutes show that both the client host and the LAN clients route through the VPN and exit correctly.

On the WireGuard client host I get a high ping, but decent speed:

curl -s https://raw.githubusercontent.com/sivel/speedtest-cli/master/speedtest.py | python -
Retrieving speedtest.net configuration...
Testing from Spectrum (68.187.109.97)...
Retrieving speedtest.net server list...
Selecting best server based on ping...
Hosted by Bertram Communications (Iron Ridge, WI) [185.33 km]: 598.9 ms
Testing download speed................................................................................
Download: 4.65 Mbit/s
Testing upload speed................................................................................................
Upload: 4.97 Mbit/s

But this command just hangs on a LAN client; it can't even download the script it needs to run. A few simple websites will load, but anything substantial times out.

How do I begin to debug this? My first thought is that my iptables rules are misconfigured.


# ip addr show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether b8:27:eb:84:56:f5 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.104/24 brd 192.168.1.255 scope global noprefixroute eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::ba27:ebff:fe84:56f5/64 scope link 
       valid_lft forever preferred_lft forever
3: wlan0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether b8:27:eb:d1:03:a0 brd ff:ff:ff:ff:ff:ff
4: eth0.2@eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether b8:27:eb:84:56:f5 brd ff:ff:ff:ff:ff:ff
    inet 10.0.0.1/24 brd 10.0.0.255 scope global noprefixroute eth0.2
       valid_lft forever preferred_lft forever
    inet6 fe80::ba27:ebff:fe84:56f5/64 scope link 
       valid_lft forever preferred_lft forever
5: wg0: <POINTOPOINT,NOARP,UP,LOWER_UP> mtu 1120 qdisc noqueue state UNKNOWN group default qlen 1000
    link/none 
    inet 192.168.99.17/24 scope global wg0
       valid_lft forever preferred_lft forever

# ip -4 route show table all
default dev wg0 table 51820 scope link 
default via 192.168.1.1 dev eth0 src 192.168.1.104 metric 202 mtu 1200 
10.0.0.0/24 dev eth0.2 proto dhcp scope link src 10.0.0.1 metric 204 mtu 1200 
192.168.1.0/24 dev eth0 proto dhcp scope link src 192.168.1.104 metric 202 mtu 1200 
192.168.99.0/24 dev wg0 proto kernel scope link src 192.168.99.17 
broadcast 10.0.0.0 dev eth0.2 table local proto kernel scope link src 10.0.0.1 
local 10.0.0.1 dev eth0.2 table local proto kernel scope host src 10.0.0.1 
broadcast 10.0.0.255 dev eth0.2 table local proto kernel scope link src 10.0.0.1 
broadcast 127.0.0.0 dev lo table local proto kernel scope link src 127.0.0.1 
local 127.0.0.0/8 dev lo table local proto kernel scope host src 127.0.0.1 
local 127.0.0.1 dev lo table local proto kernel scope host src 127.0.0.1 
broadcast 127.255.255.255 dev lo table local proto kernel scope link src 127.0.0.1 
broadcast 192.168.1.0 dev eth0 table local proto kernel scope link src 192.168.1.104 
local 192.168.1.104 dev eth0 table local proto kernel scope host src 192.168.1.104 
broadcast 192.168.1.255 dev eth0 table local proto kernel scope link src 192.168.1.104 
broadcast 192.168.99.0 dev wg0 table local proto kernel scope link src 192.168.99.17 
local 192.168.99.17 dev wg0 table local proto kernel scope host src 192.168.99.17 
broadcast 192.168.99.255 dev wg0 table local proto kernel scope link src 192.168.99.17 

# ip -4 rule show
0:  from all lookup local 
32764:  from all lookup main suppress_prefixlength 0 
32765:  not from all fwmark 0xca6c lookup 51820 
32766:  from all lookup main 
32767:  from all lookup default 

# ip -6 route show table all
::1 dev lo proto kernel metric 256 pref medium
fe80::/64 dev eth0 proto kernel metric 256 pref medium
fe80::/64 dev eth0.2 proto kernel metric 256 pref medium
local ::1 dev lo table local proto kernel metric 0 pref medium
local fe80::ba27:ebff:fe84:56f5 dev eth0.2 table local proto kernel metric 0 pref medium
local fe80::ba27:ebff:fe84:56f5 dev eth0 table local proto kernel metric 0 pref medium
ff00::/8 dev eth0 table local metric 256 pref medium
ff00::/8 dev eth0.2 table local metric 256 pref medium

# ip -6 rule show
0:  from all lookup local 
32766:  from all lookup main 

# wg
interface: wg0
  public key: XR9UASLZXCjRZKa9MnmBxebfP6jxfBaaQOa5BJEFsX8=
  private key: (hidden)
  listening port: 48767
  fwmark: 0xca6c

peer: M37O/lE0ZWZ0uzYVGu17ZAZmdbnLyd5RuiAVvF/bqwE=
  endpoint: 68.187.109.97:51820
  allowed ips: 0.0.0.0/0
  latest handshake: 2 minutes, 20 seconds ago
  transfer: 2.42 MiB received, 8.45 MiB sent

# ip netconf
inet lo forwarding on rp_filter off mc_forwarding off proxy_neigh off ignore_routes_with_linkdown off 
inet eth0 forwarding on rp_filter off mc_forwarding off proxy_neigh off ignore_routes_with_linkdown off 
inet wlan0 forwarding on rp_filter off mc_forwarding off proxy_neigh off ignore_routes_with_linkdown off 
inet eth0.2 forwarding on rp_filter off mc_forwarding off proxy_neigh off ignore_routes_with_linkdown off 
inet wg0 forwarding on rp_filter off mc_forwarding off proxy_neigh off ignore_routes_with_linkdown off 
inet all forwarding on rp_filter off mc_forwarding off proxy_neigh off ignore_routes_with_linkdown off 
inet default forwarding on rp_filter off mc_forwarding off proxy_neigh off ignore_routes_with_linkdown off 
inet6 lo forwarding off mc_forwarding off proxy_neigh off ignore_routes_with_linkdown off 
inet6 eth0 forwarding off mc_forwarding off proxy_neigh off ignore_routes_with_linkdown off 
inet6 wlan0 forwarding off mc_forwarding off proxy_neigh off ignore_routes_with_linkdown off 
inet6 eth0.2 forwarding off mc_forwarding off proxy_neigh off ignore_routes_with_linkdown off 
inet6 all forwarding off mc_forwarding off proxy_neigh off ignore_routes_with_linkdown off 
inet6 default forwarding off mc_forwarding off proxy_neigh off ignore_routes_with_linkdown off 

# iptables-save
# Generated by xtables-save v1.8.2 on Thu Apr  2 19:11:02 2020
*raw
:PREROUTING ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
-A PREROUTING -d 192.168.99.17/32 ! -i wg0 -m addrtype ! --src-type LOCAL -m comment --comment "wg-quick(8) rule for wg0" -j DROP
COMMIT
# Completed on Thu Apr  2 19:11:02 2020
# Generated by xtables-save v1.8.2 on Thu Apr  2 19:11:02 2020
*mangle
:PREROUTING ACCEPT [0:0]
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
:POSTROUTING ACCEPT [0:0]
-A PREROUTING -p udp -m comment --comment "wg-quick(8) rule for wg0" -j CONNMARK --restore-mark --nfmask 0xffffffff --ctmask 0xffffffff
-A POSTROUTING -p udp -m mark --mark 0xca6c -m comment --comment "wg-quick(8) rule for wg0" -j CONNMARK --save-mark --nfmask 0xffffffff --ctmask 0xffffffff
COMMIT
# Completed on Thu Apr  2 19:11:02 2020
# Generated by xtables-save v1.8.2 on Thu Apr  2 19:11:02 2020
*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
-A FORWARD -i wg0 -o eth0.2 -m state --state RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -i eth0.2 -o wg0 -j ACCEPT
COMMIT
# Completed on Thu Apr  2 19:11:02 2020
# Generated by xtables-save v1.8.2 on Thu Apr  2 19:11:02 2020
*nat
:PREROUTING ACCEPT [0:0]
:INPUT ACCEPT [0:0]
:POSTROUTING ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
-A POSTROUTING -o wg0 -j MASQUERADE
COMMIT
# Completed on Thu Apr  2 19:11:02 2020
Tags: vpn, wireguard
asked on Super User Mar 31, 2020 by pgcudahy • edited Apr 2, 2020 by pgcudahy

2 Answers


The default MTU of WireGuard is 1420, compared with the usual 1492 or 1500 on other devices.

This causes any device that believes it is sending a full-sized packet to actually generate more than one WireGuard packet, because the packet is split in two, the second one almost empty.

Since the dominant cost in TCP/IP is the number of packets (each one requires synchronization and acknowledgement), this slows down all communication.
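The fragmentation arithmetic can be sketched as follows, assuming a 1500-byte LAN MTU and WireGuard's 1420 default (the 8-byte rounding is because IPv4 fragment offsets must be multiples of 8):

```shell
LINK_MTU=1500     # typical Ethernet MTU on the LAN
TUNNEL_MTU=1420   # WireGuard's default MTU
IP_HEADER=20      # IPv4 header carried by each fragment

payload=$((LINK_MTU - IP_HEADER))   # 1480 bytes to deliver
first=$((TUNNEL_MTU - IP_HEADER))   # bytes that fit in fragment 1
first=$((first - first % 8))        # fragment offsets align to 8 bytes
second=$((payload - first))         # bytes left for fragment 2

echo "$first $second"               # prints: 1400 80
```

One full-sized packet thus becomes two tunnel packets, the second carrying only 80 bytes, roughly doubling the packet count for bulk transfers.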

The solution is to set the WireGuard MTU to a size consistent with the rest of the network.

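For wg-quick managed interfaces, the MTU can be set persistently in the interface configuration instead of with `ip link` (a sketch; the path is the common default and the address is taken from the question's `ip addr` output, so adjust both to your setup):

```ini
# /etc/wireguard/wg0.conf (excerpt)
[Interface]
Address = 192.168.99.17/24
# wg-quick applies this MTU when bringing the interface up;
# pick a value appropriate for your network.
MTU = 1420
```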

answered on Super User Apr 3, 2020 by harrymc • edited Apr 3, 2020 by harrymc

For me it turned out that I had to set my MTU even lower (to 1400).

The command I used was:

sudo ip link set dev wg0 mtu 1400

Also, if you want to check whether you have an MTU problem and you are on a dual-stack connection (IPv4 + IPv6), try connecting over IPv6 instead of IPv4. If the problem is MTU-related, it should no longer appear.
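Another way to confirm an MTU problem is to probe the path MTU from a LAN client with don't-fragment pings (a sketch for Linux `ping`; the target address is a placeholder, so substitute any host reached through the tunnel):

```shell
# -M do forbids fragmentation, so a ping fails once the payload
# exceeds the path MTU. 28 bytes = 20-byte IPv4 header + 8-byte
# ICMP header, so payload = candidate MTU - 28.
TARGET=192.0.2.1   # placeholder: a host reached through the tunnel
for MTU in 1500 1420 1400 1380; do
    PAYLOAD=$((MTU - 28))
    if ping -c 1 -W 1 -M do -s "$PAYLOAD" "$TARGET" >/dev/null 2>&1; then
        echo "MTU $MTU fits (payload $PAYLOAD)"
    else
        echo "MTU $MTU too large or host unreachable (payload $PAYLOAD)"
    fi
done
```

The largest MTU that still gets a reply is an upper bound for the WireGuard interface; subtract the tunnel overhead from it when choosing the final value.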

answered on Super User Sep 14, 2020 by T-Dawg

User contributions licensed under CC BY-SA 3.0