Changing MTU in EVE-NG (allowing Jumbo frames!)

EVE-NG rules. As far as network simulation software goes, it’s the best.

When studying or otherwise, EVE-NG is how I prefer to try things out. One thing that happens when using virtualised networks, however, is that some underlying things get obscured – one of them being MTU. In a previous post, I went through how the base OS that EVE-NG runs on virtualises the links between routers and switches; here I will show a way to boost the MTU these virtual network links use, so that we can throw proper jumbos across the network.

In this topology, I have 2 routers, connected with dual Ethernet links, configured in a LAG. This doesn’t affect MTU at all, I just thought I’d mention it so it’s not confusing.

Topology of this little lab

The link between these routers (ae0) is set to a layer-2 MTU of 9192, which is the maximum for the platform (Juniper vSRX 3.0). This means that we should be able to send an IP packet (like a ping) of over 9000 bytes.. And yet – we can’t:

root@R1> ping size 9001 do-not-fragment
PING ( 9001 data bytes

--- ping statistics ---
4 packets transmitted, 0 packets received, 100% packet loss

So – let’s first see why, then fix it.

If we SSH onto our EVE-NG server, we can take a look at the virtual network that makes up the link between ge-0/0/1 on R1 and ge-0/0/1 on R2 (as well as ge-0/0/2 on both, since we’re in a 2-member LAG). See the previous post for more detail on how to track down which link is which, and what they’re called inside EVE-NG’s OS:

root@eve-ng-2:~# ifconfig vunl0_26_2
vunl0_26_2 Link encap:Ethernet  HWaddr d6:41:90:79:cb:60
          UP BROADCAST RUNNING MULTICAST  MTU:9000  Metric:1
          RX packets:405 errors:0 dropped:0 overruns:0 frame:0
          TX packets:868 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:66286 (66.2 KB)  TX bytes:88080 (88.0 KB)

There it is. At layer 2, we have MTU 9000. The virtual routers assume the ‘wire’ connecting them to the other side has no MTU of its own, so there’s a mismatch: the router thinks it’s good for 9192, but in reality it’s down to 9000 on the wire. Which explains why I can send a ping that’s nearly 9000 bytes, but nothing bigger:

root@R1> ping size 8976 do-not-fragment
PING ( 8976 data bytes
8984 bytes from icmp_seq=0 ttl=64 time=1.150 ms
8984 bytes from icmp_seq=1 ttl=64 time=1.028 ms
8984 bytes from icmp_seq=2 ttl=64 time=1.066 ms
8984 bytes from icmp_seq=3 ttl=64 time=1.073 ms

Oh yeah. 8976 bytes of ICMP payload at layer 3, plus the ICMP and IP headers, takes us right up to the 9000-byte mark.
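Incidentally, if you’d rather not squint at ifconfig output, an interface’s MTU can be read straight out of sysfs. A tiny sketch (the vunl name in the comment is from my lab – use your own):

```shell
# Read an interface's current MTU from sysfs
iface_mtu() {
  cat "/sys/class/net/$1/mtu"
}

# On the EVE-NG box: iface_mtu vunl0_26_2
iface_mtu lo   # loopback exists everywhere; on Linux this is typically 65536
```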

So – let’s isolate the 4 network interfaces that EVE-NG has generated to be my ge-0/0/1’s and ge-0/0/2’s, then set the MTUs to something high – like 9200:

root@eve-ng-2:~# ifconfig vunl0_26_2 mtu 9200
root@eve-ng-2:~# ifconfig vunl0_26_3 mtu 9200
root@eve-ng-2:~# ifconfig vunl0_27_2 mtu 9200
root@eve-ng-2:~# ifconfig vunl0_27_3 mtu 9200
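Four commands is fine, but with a bigger lab a loop saves typing. A sketch – the interface names are from my lab, I’m using ip(8) (the modern replacement for ifconfig), and you pass “echo” as the first argument to preview the commands before running them for real as root:

```shell
# Set the MTU on a list of EVE-NG link interfaces.
# $1 = dry-run prefix ("echo" to preview, "" to apply)
# $2 = MTU, remaining args = interface names
bump_mtu() {
  runner="$1"; mtu="$2"; shift 2
  for dev in "$@"; do
    $runner ip link set dev "$dev" mtu "$mtu"
  done
}

# Preview first:
bump_mtu echo 9200 vunl0_26_2 vunl0_26_3 vunl0_27_2 vunl0_27_3
```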

When EVE-NG creates a ‘wire’ between two network devices, it really creates 2 virtual network interfaces and assigns each of them to a Linux bridge. In my case, with 4 total interfaces, I have two bridges. They inherit the MTU of the smallest member, so both of my bridges now should be 9200:

root@eve-ng-2:~# ifconfig vnet0_37
vnet0_37  Link encap:Ethernet  HWaddr 0a:c8:48:ad:34:71
          UP BROADCAST RUNNING MULTICAST  MTU:9200  Metric:1
          RX packets:30596 errors:0 dropped:28163 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:3289806 (3.2 MB)  TX bytes:0 (0.0 B)

Alright, we can see MTU on one of the bridges is now 9200 (it was 9000 before as both contributing interfaces were 9000) and we can also see a lot of dropped packets (because all my test pings at above 8976 bytes were crashing on the digital rocks).

So.. Can we send big pings now? Man. I hope so.

root@R1> ping size 9100 do-not-fragment
PING ( 9100 data bytes
9108 bytes from icmp_seq=0 ttl=64 time=1.374 ms
9108 bytes from icmp_seq=1 ttl=64 time=0.764 ms
9108 bytes from icmp_seq=2 ttl=64 time=0.910 ms
9108 bytes from icmp_seq=3 ttl=64 time=1.022 ms

Oh yes we can.


Note – this is not a permanent fix. It lasts only as long as the virtual interfaces exist (or until a server reboot), and you need to do it for every link that is created. Perhaps it’s something to submit as a feature request to the EVE-NG team – but for now, this lets us live in jumbo-harmony. Network links are ephemeral in EVE-NG, so statically setting this in a config file isn’t a great solution – but on the other hand, we don’t drop our labs all that often – do we?


Step by step method for updating BIOS on an 11th gen Dell Server (R510, R610, R710 etc)

Dell has removed the ability of 11th-gen servers to download BIOS updates directly – damn. This is a problem if you care about security, but then I guess if you cared that much you’d be using a newer (read: supported) server. Unlike me, the cheapskate.

So – using Dell’s Lifecycle Controller (press F10 when rebooting) – we can mount an ISO image from Dell with all the nice updates bundled in. This is very handy.

First up, boot your server into Lifecycle Controller and tell it you want to do a ‘platform update’:


This is where normally you could tell the server to hijack a NIC, connect straight to Dell’s servers and away you go – but no more.

Next, you will want to either attach a burned CD or USB stick, or as I will do, attach virtual media of the ‘SUU’ ISO to the server via iDRAC. This basically means heading here and downloading a SUU ISO for your server. They’re big, between 8 and 9 GiB but they have all the goods inside. Dell themselves note you can use either the Linux or Windows variant when using Lifecycle Controller to do the upgrade – I picked Windows as it was slightly more up to date.

If the link above to the downloads fails in future, all I can suggest is you search for “Dell server SUU ISO” and hope for the best. Update 29/3/20 – here is a link that works as of today.

My server is remote (in the other room) and so I’ll be using the Virtual Media function of iDRAC, something you can google if it sounds cool.

With the SUU image attached, you can select “Local Drive (CD/DVD/USB)” and hit next:

Neat. Hopefully your ISO name will show up and you can hit next, then play the waiting game (around 2 mins for me).

Now, you can see the list of things that need upgrading. If you expand the pic, you’ll see I’m going from BIOS version 3.0.0 to 6.4.0.. Um, whoops!

At this point, it fails, claiming non-Dell authorised updates. FSCK. It turns out you need to be on Lifecycle Controller 1.5.2 or higher, so I need to go and grab an older SUU version and start again a bit lower down. I ended up using a file called OM_710_SUU_FULL_ISO_A00.ISO…. This is taking me to BIOS 6.2.3… Fast forward…

So, it’s working! I hope this is useful to the precisely 2 people on Earth who might need this.

A simple guide to EVE-NG Networking

EVE-NG is a really neat way to virtualise networking stuff – routers, switches, load balancers etc.. I’m not going to harp on about it. However, when you start to run some more demanding labs (like the JNCIE/CCIE labs) – you might want to start running it on a server of your own (or that you rent from a bare-metal server provider “in the cloud”)…

You’re a network engineer, or at least a wannabe one. You know how to stick a BGP peering on a thing or make a devastating broadcast storm – but do you know enough about Linux networking to save the president – er, to set up EVE-NG in your own LAN, and maybe build a simple lab topology? Well.. If not, you can follow along with my garbled process below!

DISCLAIMER: I am a plum duff and don’t really understand everything about everything, so please use this as a guide, ask questions and try stuff in a safe environment.

Stuff we’ll need:

  • An EVE-NG install on a server (a bare metal install). I recommend you follow the instructions. If that’s still a problem, let me know and I’ll write up a more clear guide!
  • An SSH client (terminal on a Mac/Unix, PuTTY on Windows etc)
  • A good understanding of our LAN environment (i.e. the IPv4 address of our router, the size of the subnet etc)
  • Heart

I’m going to break this post into chunks covering the server itself, the Ubuntu operating system and the EVE-NG software itself. It’s not exhaustive by any means – just a taster!

Part 1: The server itself

Typically, when we have a server – it will tend to have a bunch of NICs (which is how server-dweebs refer to network ports/interfaces). One of them will belong to the BMC (baseboard management controller), a dedicated RJ45 (ethernet plug!) connecting to its own little computer on board. Depending on branding, this is also called the iDRAC (integrated Dell Remote Access Controller!) or IPMI (Intelligent Platform Management Interface – phew!) connection, or the “out-of-band” controller, for the simple reason that the Rules of Computing require us to have at least 4 conflicting names for the same thing.

Put simply, this allows us to have an ‘always on’ connection to manage our server, regardless of what the installed operating system is doing (even if it’s shut down!). This is super useful when tooling around with stuff we don’t understand (hi!) – because it’s our get out of jail free card when things go wrong. In my case, it means I can ruin the server setup and fix it without having to get off the couch. Go through the setup for your server’s particular BMC and give it an IP address. Once it’s done, you can get to a GUI for managing the server, seeing a remote console (like plugging in a screen/keyboard for accessing the host operating system, but without moving from your chair!). It also lets you boot and reboot the server, mess with disks, RAID config, CPU and RAM settings..

Wow, I guess I should just marry a BMC, since I love them so much?

Part 2: Ubuntu – or – our host OS

The non-BMC NICs will be for the host operating system, which in our EVE-NG case, will be Ubuntu. In my server, I have a 4 port 1Gbit/s Ethernet card, so I have 4 “non BMC” interfaces at my disposal. Ubuntu (at least version 16) labels the NICs it has available as ethX, where X is a number starting from 0. My server has 4 NICs dedicated to Ubuntu, so I have eth0-eth3. Go me.

In a “normal” setup, you would set an IP address on an eth interface, plug that into your switch or router and away you go. EVE-NG ratchets the insanity up a level by binding the ethX interfaces on your server to a virtual switch inside the Ubuntu OS itself (known as a bridge). This lets you get tricky – more than one eth interface can share an IP, which gives you some form of fault tolerance. In my case, it’s not worth doing anything with the bridge except configuring an IP on it and using it to access my EVE-NG GUI.

Check out this cool example of what I’m talking about here:

# The primary network interface
iface eth0 inet manual
auto pnet0
iface pnet0 inet static
    bridge_ports eth0
    bridge_stp off

What we see there is a pnet0 (bridge) interface that uses eth0 and has a statically assigned IP (i.e. not DHCP, but you can totally do that too if you’d rather). Is having a bridge with a single physical interface actually useful? We’ll see a bit later on (spoiler: yes, see Part 4).

Part 3: EVE-NG

Before we started, we set up EVE-NG according to their documentation – in my case with a static management IP. But EVE-NG is a network simulation platform – so we’re going to expect to see a lot more networks cropping up. How EVE-NG does this isn’t magic – broadly speaking, it simulates wires with virtual switches (more bridges!).

Here’s a simple topology in EVE-NG with 2 routers “back to back” as I like to say. Most people dislike when I do, but hey this is my website.

An aside: The “routers” I am using here are Juniper vSRX security devices, running in “packet mode”. This is a Junos configuration that lets firewalls be routers, which makes for excellent light-weight lab machines. I got the image from Juniper here – and followed the instructions to add them to EVE-NG here. If you can’t make it through with those instructions alone please let me know, and I can do a post on it.

Creating a new device on our topology is very simple. I add two devices with the default settings: 4 Gigabit Ethernet interfaces each, with each device consuming 4GB RAM and 2 CPU cores (these resources come out of the Ubuntu server’s pool, remember – so build your lab to suit your needs, and if you run out, buy a bigger server!)

You can use the add node feature to add as many devices as you like at once, set parameters and tweak them – but I generally just use the defaults and only adjust the number of network interfaces I want each one to have. You can always edit them later!

Here is the relevant config for this test on vSRX1:

root@vSRX1> show configuration
system {
    host-name R1;
    root-authentication {
        encrypted-password "$zzz"; ## SECRET-DATA
    }
}
interfaces {
    ge-0/0/0 {
        unit 0 {
            family inet {
            }
        }
    }
}
security {
    forwarding-options {
        family {
            inet6 {
                mode packet-based;
            }
            mpls {
                mode packet-based;
            }
            iso {
                mode packet-based;
            }
        }
    }
}
I have set up the routers to have an IP address each on ge-0/0/0:

root@vSRX1> show configuration interfaces ge-0/0/0
unit 0 {
    family inet {

They can ping each other (yeesh that is crummy latency, this isn’t a great router image and it’s struggling, ignore that):

root@vSRX1> ping
PING ( 56 data bytes
64 bytes from icmp_seq=0 ttl=64 time=9.516 ms
64 bytes from icmp_seq=1 ttl=64 time=10.558 ms

Now – that’s great – but how is EVE-NG doing this? These two routers don’t *really* exist, nor does the cable between them.. We’re using more bridges here, which EVE-NG is calling ‘vunl’ interfaces. From what I can tell, this is a throwback to when EVE-NG was called UNETLAB, and these are “Virtual UNet Lab” interfaces – but this is just a guess (please correct me if you are in the know).

We can find out which virtual Linux interface EVE-NG has assigned to the “cable” by selecting a router and telling EVE we would like to do a capture. This will launch a packet capture utility if we have set it up. If not (like lazy, lazy me) – we can do it ourselves inside Ubuntu AND learn something (gross!).

Clicking on the capture for ge-0/0/0 on vSRX1

That directs our browser to a URL, which can be caught by a pcap utility (if EVE-NG is set up to handle that) – or which simply gives away the name of the bridge in use:


Now – we can SSH into EVE-NG (you will have already done this during setup, but if not, you wanna use terminal or PuTTY if on Windows) – “ssh james@” in my case. We can poke around in the nuts and bolts of Ubuntu all we like, or take a peek at “vunl0_1_0“!

james@eve-ng:~$ ifconfig vunl0_1_0
vunl0_1_0 Link encap:Ethernet  HWaddr 1a:05:e3:0c:2e:5f
          RX packets:142 errors:0 dropped:0 overruns:0 frame:0
          TX packets:142 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:13804 (13.8 KB)  TX bytes:13804 (13.8 KB)

Using the “ifconfig” command, we can see the “wire”. Don’t believe me? Well, let’s crack out tcpdump and have a peek into said wire:

james@eve-ng:~$ sudo tcpdump -i vunl0_1_0
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on vunl0_1_0, link-type EN10MB (Ethernet), capture size 262144 bytes
22:55:28.378802 IP > ICMP echo request, id 41476, seq 4, length 64
22:55:28.379292 IP > ICMP echo reply, id 41476, seq 4, length 64
22:55:29.383831 IP > ICMP echo request, id 41476, seq 5, length 64
22:55:29.384281 IP > ICMP echo reply, id 41476, seq 5, length 64

Blimey. There’s the echo request/reply pairs that make up ping, between our two hosts.

For the curious – here are some more details about our “wire” – vunl0_1_0. It’s not a ‘real’ switch or bridge, simply a logical construct inside Ubuntu. It doesn’t bind to any physical interface on the server, it just exists to patch the interfaces of our virtual devices together in their own broadcast domain. You can see some more info on the way Ubuntu sees this with ethtool:

james@eve-ng:~$ sudo ethtool vunl0_1_0
Settings for vunl0_1_0:
        Supported ports: [ ]
        Supported link modes:   Not reported
        Supported pause frame use: No
        Supports auto-negotiation: No
        Advertised link modes:  Not reported
        Advertised pause frame use: No
        Advertised auto-negotiation: No
        Speed: 10Mb/s
        Duplex: Full
        Port: Twisted Pair
        PHYAD: 0
        Transceiver: internal
        Auto-negotiation: off
        MDI-X: Unknown
        Current message level: 0xffffffa1 (-95)
                               drv ifup tx_err tx_queued intr tx_done rx_status pktdata hw wol 0xffff8000
        Link detected: yes

Here we can see a few interesting tidbits. For one, Ubuntu sees this as a “real” interface, nothing special. It thinks it’s a twisted pair cable, and is running as 10Mbit/s full duplex. That might help explain the crummy ping!

Part 4: EVE-NG to the real world

So – now we can see how EVE-NG uses bridges to achieve connectivity between virtual network elements. Another cool trick would be giving our virtual devices management IPs from our home LAN’s ranges – that would let us ‘natively’ SSH into them from our desktops, without needing to jump via another host inside EVE-NG. Still with me? God I hope so.

I’ve added a new “Network” from the EVE-NG menu. I set it to use Management (Cloud0) and gave it the name “My LAN”.

Then I connected it to vSRX2 like so:

Now, I configure an interface on vSRX2 (it should really be loopback0, but that’s a job for another day – so for now it’s ge-0/0/1) and give it an address in my home LAN. Next thing you know, I can SSH directly into the vSRX from my laptop. This majorly speeds up lab work, automation etc. and makes life a LOT easier.

“Management(Cloud0)” is something that EVE-NG provides out of the box – it’s a bridge that is connected to our pnet0 connection – which I alluded to before. This bridge, with a physical connection (eth0 in my case) allows virtual devices inside EVE-NG to communicate with the LAN outside. If they have the correct routes, they could even communicate with the internet herself. The little cloud icon and associated cloud network don’t really tell us much – but that’s where the brctl or “bridge control” command comes in (again, running this on the EVE-NG machine, via SSH):

james@eve-ng:~$ brctl show pnet0
bridge name     bridge id               STP enabled     interfaces
pnet0           8000.90b11c23aafa       no              eth0
                                                        vunl0_2_1

What this does is show the bridges on our system, and which interfaces are bridged together. It’s a super powerful command, you should look it up (or type “man brctl” in your EVE-NG shell!) – but here it clearly shows that our physical, real link – eth0, is bridged to our vSRX ge-0/0/1 on vunl0_2_1. Lovely.

Note – be a bit careful when bridging the virtual world to the real world – you are potentially exposing your lab topology to the internet.. If you’re doing this anywhere near a production network, or somewhere that won’t appreciate potentially disruptive protocols leaking out (BPDU flood anyone?) – then be careful, use appropriate ACLs etc.

Have fun!


Self-hosted WordPress – FTP for updates/plugins

If you are a cheapskate, and want to host WordPress yourself, on a Raspberry Pi sitting on your bookshelf – then good for you (me?). What you might find, when installing WordPress on your own LAMP box, is that installing plugins/themes etc requires an FTP account – credentials WordPress asks you for. This isn’t ideal, as in my self-hosted case I don’t have a Linux ‘user’ specifically for my WordPress instance, and I can’t figure out the arcane magic required to set up a specific FTP user that also shares access with my /var/www/ directories, messing with groups and permissions and whatnot..

All of that out of the way – I had that problem. I tried a variety of complex methods to get the thing working, and I couldn’t find an elegant solution – until I noticed that the root directory for my Apache install’s webpages (/var/www) wasn’t owned by my web-server user (www-data on Debian 9). A quick “sudo chown -R www-data:www-data /var/www” and hey-presto, no more asking for FTP details – WordPress just works.
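For the record, here’s that fix as a tiny re-usable sketch – www-data is the Apache user on Debian, and both the user name and the path may differ on your system:

```shell
# Hand ownership of a directory tree to the web-server user.
# $1 = user (a group of the same name is assumed), $2 = directory
fix_wp_perms() {
  chown -R "$1:$1" "$2"
}

# On the Pi, as root: fix_wp_perms www-data /var/www
```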


One of my favourite apps


Way back in the day, I had a need to browse a web service I had running at home – from work. As I didn’t really want to open the service up to every man and his dog (i.e with DST-NAT and Masq on my router) – I decided to run a SOCKS proxy on my work machine, which connects to a VM in my home network via SSH, and can then let me access this stuff – sort of like a budget version VPN. (Note – this was allowed by my employer! Don’t do it if you haven’t asked!)

The best way I could find to manage this in a painless way under Windows was called ssh-tunnel-manager. It’s a bit of open-source software you can find archived on Google Code – here. It’s a simple and elegant program that is written in C# that lets you do a bunch of things, one of which is manage SSH connections to remote hosts and treat them as SOCKS proxies.

This shows the GUI and how to add a tunnel

My use case, as shown above, is to SSH into a VM in my home network, using an SSH key (you can also use passwords, but boo!), and create a tunnel I can point my browser at to hit things at home (obviously you could also use this to access a dev box, cloud VM, anything really!). In the screenshot I have created a tunnel called “my_tun”, with dynamic destinations, on local (to my Windows machine) port 9090.

Now – make your OS/browser/whatever point at the tunnel.

Once the tunnel is configured and ‘up’, you can point things in your OS at the SOCKS proxy localhost:9090 (here I’m showing Firefox). Neat.
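For what it’s worth, the same trick works with stock OpenSSH and no GUI at all. A sketch – the host, user and key path below are placeholders, so substitute your own:

```shell
# -D 9090 : open a dynamic (SOCKS5) listener on localhost:9090
# -N      : no remote command, just forward traffic
CMD="ssh -D 9090 -N -i ~/.ssh/id_ed25519 james@home-vm.example"
echo "$CMD"   # printed for inspection here; run it once you're happy
```

Point your browser at localhost:9090 exactly as with ssh-tunnel-manager.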

The problem I have faced recently is my home VM has upgraded from Debian 8 to Debian 9. In the process, OpenSSH has been upgraded and no longer supports the out of date Key Exchange algorithms that come bundled with ssh-tunnel-manager – so you get an error message saying the SSH connection can’t be stood up.

Lucky for us – the devs of ssh-tunnel-manager simply bundled a bunch of PuTTY executables that their code uses. Simply open up the directory “SSH Tunnel Manager \ Tools” and replace the 4 .exe files that come shipped with the software with recent ones from the PuTTY website – easy peasy – and it all just works again.

The replaced exes in all their glory

It’s a bit of a shame that the developers of ssh-tunnel-manager aren’t keeping their great software current – but lucky for us we can keep it going all by ourselves – for now!

Adding a VLAN in Ubuntu 18.04



For some reason, networking in Linux keeps on changing. Not only changing the well known naming scheme for ethernet interfaces (why), but now the way to manually set up IP addressing, VLANs etc in Ubuntu 18 has changed. Gone is the simple to use /etc/networking/interfaces file, and in its place some YAML and a new tool, netplan. Fine..

I needed to add a VLAN tagged interface to a physical NIC, which I used to call eth1.. So what I ended up doing was creating this YAML file in the /etc/netplan directory and putting in the following config:

james@james:/etc/netplan$ cat 1-eth1-vlan.yaml
network:
  version: 2
  ethernets:
    ens192: {}
  vlans:
    vlan999:
      id: 999
      link: ens192
      addresses: [""]
      routes:
        - to:

What this does is:

  • Defines a network (version 2 seems to be a requirement, but I haven’t looked it up)
  • Binds the VLAN to the physical NIC ens192
  • Defines a VLAN, with a VLAN-ID (or “id”) and an IP address
  • Puts in a static route
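For reference, a fully fleshed-out file of the same shape looks like this – the addresses and route below are placeholders, not my real config. Apply it with sudo netplan apply (or test it first with sudo netplan try):

```yaml
# /etc/netplan/1-eth1-vlan.yaml -- addresses/route are placeholders
network:
  version: 2
  ethernets:
    ens192: {}               # the physical NIC, no IP of its own
  vlans:
    vlan999:                 # this key becomes the interface name
      id: 999                # the 802.1Q tag
      link: ens192           # parent NIC to tag onto
      addresses: [""]   # placeholder -- use your own
      routes:
        - to: ""    # placeholder destination
          via: ""      # placeholder next-hop
```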


Using iDRAC with a gen 11 Dell Server (on a Mac) – phew


This post is really a persistent note for me. Every now and then I end up going down the road where I need to administer a Dell server (typically one I can afford for home use, like a Dell R610) – only to find that everything I rely on at work (like having windows/java/etc) is out the door. Here are some steps to allow access to the iDRAC on Dell Rx10 server from a Mac, using Chrome as a browser.

1: Install the Java JRE –

2: Log into the web front-end of your iDRAC (note down its address for future reference)

3: Go to the Console/Media tab and select ‘Configuration’

4: Change the plugin type from Native to Java, and disable video encryption.

5: Open System Preferences on your Mac, and find Java. Go to the ‘Security’ tab and add the https address of your iDRAC to the list of excepted sites.

6: You need to edit this file

/Library/Internet\ Plug-Ins/JavaAppletPlugin.plugin/Contents/Home/lib/security/

And comment out the line that starts with “jdk.tls.disabledAlgorithms”
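If you’d rather not open an editor, here’s step 6 as a one-liner – assuming the file in question is the standard that ships with the Java plugin (double-check the path on your machine). It keeps a .bak backup of the original:

```shell
# Comment out the jdk.tls.disabledAlgorithms line in a
# file ( file name assumed -- verify yours), keeping a backup.
comment_out_disabled_algs() {
  sed -i.bak 's/^jdk\.tls\.disabledAlgorithms/#&/' "$1"
}

# comment_out_disabled_algs \
#   "/Library/Internet Plug-Ins/JavaAppletPlugin.plugin/Contents/Home/lib/security/"
```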

7: Back in the iDRAC web front-end, you can click ‘Launch’ on virtual console. This will download a .jnlp file (or a hideously renamed one, in my case). Rename this file viewer.jnlp (and accept OSX complaining about changing file extension).

8: Edit viewer.jnlp with a text editor (TextEdit or Nano will do) – and replace the ‘user’ and ‘passwd’ fields (which will be hashed numbers/text) with your iDRAC login details. Note – this step is optional, but it means you can open the console without having to log into iDRAC every time.

Should be good to go!

Juniper VMX Trial and Error


I have spent some time scratching my head on ESXi-based VMX and I thought I would share some experience. This isn’t meant to be a guide, or replace Juniper’s own docs, but to supplement (and help me remember stuff 2 years later).

My setup:

Dell server, 10 Core Xeon E5-2640 (20 thread), 48GB RAM, ESXi 6.5

I have deployed the OVAs from Juniper for VMX 17.4R1.16.

vCP: 1 CPU, 4GB RAM, 2x e1000 NICs (br-ext and br-int port groups)

vFPC: 14 CPUs, 16GB RAM, 2x e1000 NICs (br-ext and br-int) plus 2x e1000 NICs (to be my ge-0/0/0 and ge-0/0/1)

The br-ext port group is just on an existing DHCP enabled vSwitch, and I can SSH into the VMX components fine. It seems that in Junos 17.4, the vFPC also gets a DHCP address for its ext bridge interface, which is nice.

The br-int port group is on its own dedicated vSwitch. All my vSwitches have MTU 9000, all security options enabled (promiscuous mode, mac forging etc.. All on).

My two ‘WAN’ interfaces, which are vNIC 3 and 4 under ESXi are there to prove things are working (I have a Linux VM attached to each, via a dedicated vSwitch/port group each). I run simple iperf tests across them, no routing protocols involved at this stage. In this lab/test I am using no physical NIC, so there is no bottleneck – nor is this a particularly realistic test for the real world deployment of VMX.

My topology is:

VM1 — VMX — VM2

Confusingly for you, my VM1 and VM2 are actually called Bird and Space Host. Don’t ask. Again, I am using a vSwitch as a cable between VM and VMX, with no physical cabling required. The br-ext link connects vCP, vFPC and an external network for management.

Lite-mode Vs Performance mode:

By default, VMX runs in performance mode. On ESXi (due to DPDK polling), I find that performance mode absolutely kills the allocated CPU threads: my ESXi reports around 95% CPU load even when a performance-mode FPC is sitting idle. This has a major impact on TCP throughput, as well as making the ESXi box hopeless for other tasks. I am not a kernel expert, so I don’t really understand the implications of this CPU load.. I will leave it alone.

The real issue I had with VMX came before I even got off the ground. I was using the vFPC with 4 NICs (2 for the bridges, 2 for ge- ports), and by default I assigned e1000 NICs to the VM. This ended with me stuck at ‘Present Absent’, which is what ‘show chassis fpc’ would show me for FPC 0. By default you are in performance mode – and that doesn’t like e1000 NICs. Change the two “ge-” interfaces, in my case vmnic3 and vmnic4, to ‘VMXNET3’ and it fires up and starts passing packets. According to a phone call I had with JTAC, this appears to be a bug specific to Junos 17.4R1.

As I have 1Gbit/s licenses for the VMX, lite-mode is fine.

Detailed ESXi Setup

One of the things I find painful with VMX is the quality of the documentation, particularly for VMWare. Juniper releases OVAs for this platform, but shrinks away from documenting the nuts and bolts sufficiently.

Starting with the vCP OVA:

VMWare details of vCP VM

I’ve set the machine to have 1 CPU, 4GB of RAM and I’m using two port-groups for the NICs, br-ext and br-int, as described earlier in this post.

I also upgraded the VM hardware version to 13 (the OVA comes as version 10). This was based on a blog post I read in the middle of the night. I wish I could say why this mattered (JTAC suggested this only improves things when using KVM-based VMX and SR-IOV, but hey).

Summary of vCP

Now onto the vFPC VM:

VMWare details of vFPC VM

As you can see in the screenshot, I have set the 16GB of memory to be reserved. This helped with performance, particularly of my testing VMs running on the same host. I have also expanded one of my ‘WAN’ interfaces to show that it’s an E1000 NIC connecting to one of my Linux hosts.

The VM hardware version of my working vFPC is version 10.

Summary of vFPC

It’s best to set up all of this hardware in advance of switching either of the VMs on. Once you do, your vFPC should pull down a DHCP address from your br-ext bridge (mine is set up as a port group on my vSwitch0, which also shares kernel management for the ESXi itself). The vCP won’t get a DHCP address by default, as that’s not supported on fxp interfaces. I configure mine via the ESXi console.
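Configuring the vCP’s fxp0 from the console amounts to something like this in the CLI – the address here is a placeholder, so use one from your br-ext subnet:

```
root@vmx> configure
root@vmx# set interfaces fxp0 unit 0 family inet address
root@vmx# commit and-quit
```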

Is it working?

Once you’ve booted both VMs, you will need to give them about 4-5 minutes. From my own bashing around in the log files, it seems that the vFPC pulls down some config from the vCP and then starts up RIOT, the process which is meant to emulate the MX series’ Trio chipset.

Note – under 17.4R1.16, the vFPC won’t work correctly by default (we set our interfaces to e1000) – so you will need to do the following to enable lite-mode, from the vCP CLI (log in as root, no password, then enter ‘cli’):

james@ch-vmx-1> edit private
james@ch-vmx-1# set chassis fpc 0 lite-mode
james@ch-vmx-1# commit and-quit

james@ch-vmx-1> request system reboot [Y]

This (plus a reboot of the vFPC VM for good measure) will put you into lite-mode. Once this reboot (~5mins) process has finished, you can check 2 important things from the vCP CLI. First, check the chassis hardware and see if we’re in lite-mode for real:

james@ch-vmx-1> show chassis hardware
Hardware inventory:
Item             Version  Part number  Serial number     Description
Chassis                                VM5ACC9ED832      VMX
Routing Engine 0                                         RE-VMX
CB 0                                                     VMX SCB
FPC 0                                                    Virtual FPC
  CPU            Rev. 1.0 RIOT-LITE    BUILTIN
  MIC 0                                                  Virtual
    PIC 0                 BUILTIN      BUILTIN           Virtual

From here, you can see FPC 0’s CPU is listed as RIOT-LITE. That’s what we wanna see.

Next, you can check the status of the FPC itself:

james@ch-vmx-1> show chassis fpc 0
                     Temp  CPU Utilization (%)   CPU Utilization (%)  Memory    Utilization (%)
Slot State            (C)  Total  Interrupt      1min   5min   15min  DRAM (MB) Heap     Buffer
  0  Online        Testing     4          0         3      4      4       2047     7          0

This output shows the FPC in slot 0 is up and running. The temperature will never move on from ‘Testing’, as it’s not a real probe (but it is on a real Trio-based FPC!)

To test the performance (another post on that one day, perhaps) – I fire some packets from VM1 to VM2. They rely on the VMX to do the routing, as they are in different subnets. I’m using some quite expensive hardware/software here to send a few packets around a pretend network – but it proves the thing works:

[client - sender]
james@VM1:~$ iperf -c -i 1
Client connecting to, TCP port 5001
TCP window size:  325 KByte (default)
[  3] local port 49127 connected with
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0- 1.0 sec   128 MBytes  1.08 Gbits/sec
[  3]  1.0- 2.0 sec   119 MBytes   997 Mbits/sec
[  3]  2.0- 3.0 sec   118 MBytes   992 Mbits/sec
[  3]  3.0- 4.0 sec   118 MBytes   988 Mbits/sec
[  3]  4.0- 5.0 sec   119 MBytes   996 Mbits/sec
[  3]  5.0- 6.0 sec   118 MBytes   992 Mbits/sec
[  3]  6.0- 7.0 sec   118 MBytes   994 Mbits/sec
[  3]  7.0- 8.0 sec   118 MBytes   993 Mbits/sec
[  3]  8.0- 9.0 sec   119 MBytes   995 Mbits/sec
[  3]  9.0-10.0 sec   118 MBytes   991 Mbits/sec
[  3]  0.0-10.0 sec  1.17 GBytes  1.00 Gbits/sec
[server - receiver]
james@VM2:~$ iperf -s -i 1
Server listening on TCP port 5001
TCP window size: 85.3 KByte (default)
[  4] local port 5001 connected with
[ ID] Interval       Transfer     Bandwidth
[  4]  0.0- 1.0 sec   127 MBytes  1.06 Gbits/sec
[  4]  1.0- 2.0 sec   119 MBytes   996 Mbits/sec
[  4]  2.0- 3.0 sec   118 MBytes   991 Mbits/sec
[  4]  3.0- 4.0 sec   118 MBytes   990 Mbits/sec
[  4]  4.0- 5.0 sec   118 MBytes   994 Mbits/sec
[  4]  5.0- 6.0 sec   118 MBytes   992 Mbits/sec
[  4]  6.0- 7.0 sec   118 MBytes   993 Mbits/sec
[  4]  7.0- 8.0 sec   119 MBytes   995 Mbits/sec
[  4]  8.0- 9.0 sec   118 MBytes   993 Mbits/sec
[  4]  9.0-10.0 sec   118 MBytes   991 Mbits/sec
[  4]  0.0-10.0 sec  1.17 GBytes   999 Mbits/sec

So there we go, a VMX in lite-mode, throwing 1Gbit/s of iperf traffic around.

Things that might be going wrong

Getting to this stage took me a while, so here are some things that might be going wrong when you try to use ESXi and VMX together.

1- Can’t access vFPC

This might be caused by a fairly random problem I’ve seen in 17.4R1, where two of the three NICs that the vFPC automatically stands up don’t show, leaving you with ‘int’ only. Console into the vFPC and have a look (root/root will get you in):

root@localhost:~# ifconfig | grep Link
ext       Link encap:Ethernet  HWaddr 00:50:56:9f:94:8b
int       Link encap:Ethernet  HWaddr 00:50:56:9f:03:28
lo        Link encap:Local Loopback

That shows all three, so in my case it’s working as you’d hope.

2 – Throughput sucks

Check your VMX license is applied. Even the trial license is good enough for most lab cases.

james@ch-vmx-1> show system license
License usage:
                                 Licenses     Licenses    Licenses    Expiry
  Feature name                       used    installed      needed
  scale-subscriber                      0           10           0    permanent
  scale-l2tp                            0         1000           0    permanent
  scale-mobile-ip                       0         1000           0    permanent
  VMX-BANDWIDTH                         0         1000           0    permanent
  VMX-SCALE                             1            1           0    permanent

Licenses installed:
  License identifier: xxxx
  License version: 4
  Software Serial Number: xxxx
  Customer ID: xxxx.
    vmx-bandwidth-1g - vmx-bandwidth-1g
    vmx-feature-base - vmx-feature-base

You can see here I have a 1000Mbit license for bandwidth. Go me.

If you have a license applied and throughput still sucks, you might have a resource problem or some other issue. These can maybe be discussed in the comments below, but you might do better running up a thread in the Juniper official VMX support forum. Good luck!



Step by step guide: Preparing a Debian VM for Junos Automation


This is a bit specific, and, like most of my posts – a cheap way for me to remember something next time I need to do it 🙂

I am currently obsessed with network automation. My favourite ‘stack’ at the moment is Ansible, git and the Juniper Ansible libraries. There are a thousand ways to skin this particular cat, but for my current project (enforcing ‘golden config’ across a large number of devices) – this limited number of tools does the job.

As with most cool new tech, there are hundreds of posts and docs, most of which are similar enough to give the illusion of cohesion, but all critically different when it comes to the nitty-gritty, causing confusion and angst. At least, that’s my impression.

So – if you want a Junos Automation machine, ready to attack your network with Python and Ansible, follow along.

I’m using Debian 8, a fresh install. Splat these commands in to set up the bits you will need. I have tested these and find they work, resolving the dependencies and resulting in no errors.

sudo apt-get update && sudo apt-get upgrade -y
sudo apt-get install python-pip ansible git software-properties-common
sudo pip install -U pip setuptools
sudo ansible-galaxy install Juniper.junos
sudo pip2 install git+

You’ll end up with Ansible installed (its config lives in /etc/ansible), the latest Juniper role for Ansible, and the Junos PyEZ library for Python 2.7 interaction. You’ll also have git installed – it was needed to fetch the PyEZ package, and it’ll come in handy later.

My end goal here is to use this system to completely automate my network, but for the time being – we’re good to start using Ansible to take baby steps towards that goal.

I have created a directory called lab-automation, and in it three sub-directories: scripts (for my playbooks and shell scripts), logs, and configs (for my configs!). I have also created a basic Ansible playbook, which uses the previously installed Juniper.junos role and connects to my lab routers (mx1-mx4, as defined in my /etc/ansible/hosts file and my /etc/hosts file).
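The directory layout described above can be created in one go – a trivial sketch, using the names from this post:

```shell
# Create the lab-automation working tree: playbooks/scripts, logs, and pulled configs
mkdir -p lab-automation/scripts lab-automation/logs lab-automation/configs
ls lab-automation
```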


---
- name: config getter
  hosts: all
  gather_facts: no
  connection: local
  roles:
    - Juniper.junos

  tasks:
    - name: Pull Down The Configs
      junos_get_config:
        host: "{{ inventory_hostname }}"
        user: "neteam"
        logfile: ../logs/get_config.log
        format: text
        dest: "../configs/{{ inventory_hostname }}.jconf"

This will run through all my defined hosts and, using the Juniper role (installed previously, living in /etc/ansible/roles), grab my router configs and store them in the ‘configs’ directory. Note: I am using a username (same as my Linux user) and SSH key authentication, because I hate passwords and refuse to learn how to use them in Ansible 🙂
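For completeness, the matching inventory is just a plain group of hostnames – a minimal sketch, with the group name being my own choice rather than anything from this lab:

```ini
; /etc/ansible/hosts
[lab_routers]
mx1
mx2
mx3
mx4
```

As long as those names resolve (mine are in /etc/hosts), the playbook above will hit all four routers.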

If I run this playbook, what happens?

neteam@lab-ansible:~/lab-automation$ ansible-playbook scripts/config_getter.yml

PLAY [config getter] **********************************************************

TASK: [Pull Down The Configs] *************************************************
ok: [mx3]
ok: [mx1]
ok: [mx2]
ok: [mx4]

PLAY RECAP ********************************************************************
mx1                        : ok=1    changed=0    unreachable=0    failed=0
mx2                        : ok=1    changed=0    unreachable=0    failed=0
mx3                        : ok=1    changed=0    unreachable=0    failed=0
mx4                        : ok=1    changed=0    unreachable=0    failed=0

Great. My files are pulled down from the network. I can do all kinds of fun things with the Juniper Ansible library – and so can you. Check it out here.

Juniper to Fortinet ISIS configuration


Hoo boy. I have been trying to configure a small mesh network for a fault-resilient office setup. In my network, I have a ‘square’ setup, two VMX routers, two Fortigate virtual firewall appliances, all running on top of ESXi 6.5 (two physical hypervisors). It looks like this:

Anyway. In order to redistribute the default route(s) received from the upstreams, I wanted to use iBGP inside the ‘square’ of devices.. iBGP relies on an IGP, so I chose the coolest one available, ISIS.

This is a very simple setup, but there was no way I could get an adjacency to form between the router and firewall (green to black in the diagram). I tried 100 things (changing hello intervals (pointless!), LSP generation times, MTU, MTU, MTU and several other desperate things like disabling hello-padding, enabling and disabling ‘adjacency checking’ on the Forti-devices).. Nothing.

Eventually, I enabled trace-options on the Junos side of things – I could see my adjacencies with the Forti-devices stuck in the ‘Initializing’ phase, implying the three-way-handshake was busted.. The traceoptions showed some guff, but nothing that pointed to an easily solvable problem (i.e. not MTU)..
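For anyone wanting to do the same digging, the ISIS traceoptions I mean look something like this – a sketch, with the filename being my own choice:

```
set protocols isis traceoptions file isis-trace size 5m
set protocols isis traceoptions flag hello detail
set protocols isis traceoptions flag error
```

You can then watch the hellos come and go with ‘monitor start isis-trace’ from operational mode.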

Finally, using the debug features of the Fortigate box, I found:

id=20301 logdesc="Routing log" msg="IS-IS: PDU[RECV]: P2P-Hello IS-Neighbor(port2-0192.1681.0020) IPv6 protocols supported mismatch"

Bearing in mind there isn’t a single bit of IPv6 config on any of these devices (yet – it’s going to be fully dual stack, don’t worry!) – so what was up? Turns out the Fortigate devices were a bit sensitive, and needed the following knob in my Juniper ISIS config:

james@vmx-2> show configuration protocols isis | display set
set protocols isis no-ipv6-routing
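For context, here is a sketch of where that knob sits in a minimal Junos ISIS config – the interface name and ISO address are placeholders, not from my actual lab:

```
set interfaces lo0 unit 0 family iso address 49.0001.0192.0168.0001.00
set interfaces ge-0/0/0 unit 0 family iso
set protocols isis interface ge-0/0/0.0 point-to-point
set protocols isis interface lo0.0 passive
set protocols isis no-ipv6-routing
```

The no-ipv6-routing statement stops Junos advertising the IPv6 NLPID in its hellos, which is what the Fortigate was objecting to.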

All of a sudden.. My ISIS adjacencies are up and solid.

Hopefully this will be useful to some sucker in future who chooses to use ISIS in their corporate network 🙂