LEDE: WireGuard + gretap results under live conditions


#1

tested an 842v3 with WireGuard, gretap and batman-adv compat v14,
tests with LEDE and a 4.4 kernel

I'm still not doing something quite right here; below are comparative results under live test conditions. Happy to get suggestions, people reproducing this, and so on.
edit: the backend was slow … see post #3

841v9 550 MHz 100 MByte 2:30 - 5.4 Mbit - with LEDE test build, fastd
842v3 650 MHz 100 MByte 4:40 - 2.9 Mbit - with LEDE test build, wireguard + gretap
842v3 650 MHz 100 MByte 0:54 - 15 Mbit - with LEDE test build, wireguard + gretap
842v3 650 MHz 100 MByte 0:30 - 26.7 Mbit - with LEDE test build, wireguard
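The Mbit figures in this thread are simply 100 MByte = 800 Mbit divided by the wall-clock seconds of the transfer. A tiny helper to reproduce them (the `mbit` function name is mine):

```shell
# throughput in Mbit/s for a 100 MByte transfer that took $1 seconds
mbit() { awk -v s="$1" 'BEGIN { printf "%.1f\n", 800 / s }'; }

mbit 30    # wireguard only       -> 26.7
mbit 54    # wireguard + gretap   -> 14.8 (~15)
mbit 150   # fastd, 2:30          -> 5.3
```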

for comparison, Gluon v2016.2.2, still on Chaos Calmer OpenWrt:
(hardware/CPU info on the devices can also be found there)

Setup wg0 WireGuard Interface

Wireguard - Server

# wg genkey > wg_privatekey
# wg pubkey < wg_privatekey > wg_publickey
# cat wg.sh
ip link del dev wg0 2>/dev/null || true
ip link add dev wg0 type wireguard
wg set wg0 private-key /home/freifunk/wg_privatekey
wg addconf wg0 /home/freifunk/wg_conf
ip addr add fe80::$(cat /sys/class/net/eth0/address)/64 dev wg0
ip addr add fdf1::$(cat /sys/class/net/eth0/address)/64 dev wg0
# <me> peer <the other>
ip address add 192.168.99.1/24 dev wg0
ip link set up dev wg0
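The wg_conf file that `wg addconf` loads is not shown in the post. In wg(8) configuration syntax it would contain at least a [Peer] section; the port, key placeholder and AllowedIPs below are assumptions guessed from the addresses configured above:

```ini
[Interface]
# assumption: the server listens on a fixed port
ListenPort = 51820

[Peer]
# contents of the peer's wg_publickey file
PublicKey = <peer public key>
# must cover the inner addresses used above (192.168.99.2, fe80::…, fdf1::…)
AllowedIPs = 192.168.99.2/32, fe80::/64, fdf1::/64
```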

Wireguard - peer - 842v3

# wg genkey > wg_privatekey
# wg pubkey < wg_privatekey > wg_publickey
# cat wg.sh
ip link del dev wg0 2>/dev/null || true
ip link add dev wg0 type wireguard
wg set wg0 private-key /home/freifunk/wg_privatekey
wg addconf wg0 /home/freifunk/wg_conf
ip addr add fe80::$(cat /sys/class/net/eth0/address)/64 dev wg0
ip addr add fdf1::$(cat /sys/class/net/eth0/address)/64 dev wg0
# <me> peer <the other>
ip address add 192.168.99.2/32 peer 192.168.99.1/32 dev wg0
ip link set up dev wg0
# it seems we need an initializer like this (the first packet triggers the handshake)
ping 192.168.99.1 -c2

Setup gretap tunnel

gretap: Server

# these are the ips from wg0 if
ip link add gre1 type gretap remote 192.168.99.2 local 192.168.99.1
sleep 2
ip link set up dev gre1

gretap Peer 842v3

# these are the ips from wg0 if
ip link add gre1 type gretap remote 192.168.99.1 local 192.168.99.2
sleep 2
ip link set up dev gre1
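One thing the scripts above do not set: gretap over IPv4 adds 38 bytes of overhead (20 outer IP + 4 GRE + 14 inner Ethernet), and a WireGuard interface defaults to an MTU of 1420, so clamping the gre1 MTU is probably worth trying. A sketch, untested on this setup:

```shell
# 1420 (wg0 default MTU) - 20 (outer IPv4) - 4 (GRE) - 14 (inner Ethernet) = 1382
ip link set dev gre1 mtu 1382
```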

on top: batman-adv

batv14: server and peers

batctl if add gre1
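To check that the tunnel actually ended up in the mesh, the usual batctl views can be run on either side (commands as in batctl(8); the output naturally depends on the setup):

```shell
batctl if    # gre1 should be listed as "active"
batctl o     # originator table: the other node should show up via gre1
```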

related:
gre-tunnel-interface-in-batman-adv-einhaengen
fastd-841nv9-vs-940nv3-vs-842nv3-live-test-poor-mans-speed-test
wireguard-als-zukuenftige-vpn-loesung
lede-test-wireguard-und-blanko-durchsatz-tp841nv11-1-und-bug
500+ gretap tunnel Interfaces in batman einhängen (für wireguard)
Wireguard 0.0.20161230 linuxkernel 3.18+ gluon v2016.2.2
#2

connected directly to the backbone

26 Mbit - WireGuard tunnel only

time -v wget http://192.168.99.1:80/random100M -O /dev/null
Downloading 'http://192.168.99.1:80/random100M'
Connecting to 192.168.99.1:80
Writing to '/dev/null'
/dev/null            100% |*******************************|   100M  0:00:00 ETA
Download completed (104857600 bytes)
	Command being timed: "wget http://192.168.99.1:80/random100M -O /dev/null"
	User time (seconds): 2.45
	System time (seconds): 5.31
	Percent of CPU this job got: 26%
	Elapsed (wall clock) time (h:mm:ss or m:ss): 0m 29.49s
	Average shared text size (kbytes): 0
	Average unshared data size (kbytes): 0
	Average stack size (kbytes): 0
	Average total size (kbytes): 0
	Maximum resident set size (kbytes): 2912
	Average resident set size (kbytes): 0
	Major (requiring I/O) page faults: 0
	Minor (reclaiming a frame) page faults: 44
	Voluntary context switches: 4088
	Involuntary context switches: 256
	Swaps: 0
	File system inputs: 0
	File system outputs: 0
	Socket messages sent: 0
	Socket messages received: 0
	Signals delivered: 0
	Page size (bytes): 4096
	Exit status: 0

connected to the backbone

fw server via batman and gretap (slow backend, see post #3)

# time -v wget http://[fdf0:9bb:7814:a630:1c61:19ff:fefd:3ed2]/random100M -O /dev/null
Downloading 'http://[fdf0:9bb:7814:a630:1c61:19ff:fefd:3ed2]/random100M'
Connecting to fdf0:9bb:7814:a630:1c61:19ff:fefd:3ed2:80
Writing to '/dev/null'
/dev/null            100% |*******************************|   100M  0:00:00 ETA
Download completed (104857600 bytes)
	Command being timed: "wget http://[fdf0:9bb:7814:a630:1c61:19ff:fefd:3ed2]/random100M -O /dev/null"
	User time (seconds): 2.51
	System time (seconds): 6.10
	Percent of CPU this job got: 15%
	Elapsed (wall clock) time (h:mm:ss or m:ss): 0m 53.87s
	Average shared text size (kbytes): 0
	Average unshared data size (kbytes): 0
	Average stack size (kbytes): 0
	Average total size (kbytes): 0
	Maximum resident set size (kbytes): 2912
	Average resident set size (kbytes): 0
	Major (requiring I/O) page faults: 0
	Minor (reclaiming a frame) page faults: 45
	Voluntary context switches: 4637
	Involuntary context switches: 229
	Swaps: 0
	File system inputs: 0
	File system outputs: 0
	Socket messages sent: 0
	Socket messages received: 0
	Signals delivered: 0
	Page size (bytes): 4096
	Exit status: 0

compare fastd (LEDE build)

just as a comparison

# time -v wget http://[fdf0:9bb:7814:a630::7]/random100M -O /dev/null
Downloading 'http://[fdf0:9bb:7814:a630::7]/random100M'
Connecting to fdf0:9bb:7814:a630::7:80
Writing to '/dev/null'
/dev/null            100% |*******************************|   100M  0:00:00 ETA
Download completed (104857600 bytes)
	Command being timed: "wget http://[fdf0:9bb:7814:a630::7]/random100M -O /dev/null"
	User time (seconds): 5.47
	System time (seconds): 9.78
	Percent of CPU this job got: 9%
	Elapsed (wall clock) time (h:mm:ss or m:ss): 2m 39.03s
	Average shared text size (kbytes): 0
	Average unshared data size (kbytes): 0
	Average stack size (kbytes): 0
	Average total size (kbytes): 0
	Maximum resident set size (kbytes): 2512
	Average resident set size (kbytes): 0
	Major (requiring I/O) page faults: 0
	Minor (reclaiming a frame) page faults: 46
	Voluntary context switches: 76215
	Involuntary context switches: 55
	Swaps: 0
	File system inputs: 0
	File system outputs: 0
	Socket messages sent: 0
	Socket messages received: 0
	Signals delivered: 0
	Page size (bytes): 4096
	Exit status: 0

#3

Now I had tricked myself:
our backend is slow at this particular hop between the servers. If I put the 100 MByte random test file directly on the uplink server, I measure the following.
(always several measurements; shown here is a middling result)

plain routing, direct - 0:25 (time -v wget http://136.243.153.228:10080/random100M -O /dev/null)
wg0 IF - 0:30 (time -v wget http://192.168.99.1/random100M -O /dev/null)
gre1 IF - 0:54 (time -v wget http://[fdf0:9bb:7814:a630:1c61:19ff:fefd:3ed2]/random100M -O /dev/null)

(00:26:03) cccfr_fuzzle: yay ... something is wrong with the server's backend connection:
(00:26:03) cccfr_fuzzle: ipv4 - wg only - approx. 30 sec.
(00:26:03) cccfr_fuzzle: ipv6 - wg+gre - approx. 54 sec.
...
(00:27:00) cccfr_fuzzle: 54 sec is roughly 15 Mbit (under live conditions (as I keep calling them))

btw. voluntary context switches: 4637 (down from 60k+ with fastd)