RPN-v2 is the new version of our RPN (Real Private Network).

It allows much more configuration, such as:

  • Custom address plan inside a private VLAN
  • Layer 2 Network
  • No MAC address restriction
  • Multicast
  • etc …

When you create an RPNv2 group, a VLAN ID is automatically assigned.
You can edit this VLAN directly from your console, if needed.
You can use VLAN IDs from 1 to 3967 in NORMAL mode, with up to 10 different VLANs.

A Q-in-Q mode is also available, to provide more VLANs for the biggest infrastructures.

However, for the sake of compatibility with some new services we are planning to introduce in the RPNv2, we advise you to always use the NORMAL mode when possible.

Here are some restrictions:

  • An RPNv2 group can only use 1 VLAN (except for Q-in-Q)
  • Each server can be integrated into 10 groups maximum
  • Each group can only have 255 servers

If you need multiple VLANs on the same servers, you can create multiple groups, with a specific VLAN per group.

Configuration Examples

Let's start with a simple one.
Here I'll configure 2 servers in a specific VLAN with a custom address plan.

On Debian/Ubuntu, you need the vlan package to configure VLANs.

On both servers, you need to:

  • Configure the VLAN: vconfig add $NIC $VLANID
  • Configure the address: ifconfig $NIC.$VLANID inet $CIP/$NETMASK

Make sure you customize:

  • $NIC: your RPN NIC
  • $VLANID: the VLAN number of your group, available in your console
  • $CIP: the custom IP you chose
  • $NETMASK: the netmask of your private network
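Note that vconfig and ifconfig are deprecated on recent distributions. As a sketch, the same two steps can be done with iproute2, using the same placeholders as above:

```shell
# Create the VLAN sub-interface on top of the RPN NIC (requires root).
ip link add link $NIC name $NIC.$VLANID type vlan id $VLANID
# Assign the custom address and bring the interface up.
ip addr add $CIP/$NETMASK dev $NIC.$VLANID
ip link set $NIC.$VLANID up
```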

Then you should be able to make everything communicate inside that network.

Permanent Configuration on Debian/Ubuntu

First install the vlan package: sudo apt-get install vlan

Then you'll have to edit your /etc/network/interfaces file as follows (assuming your RPN NIC is eth1 and your VLAN ID is 3900):

auto eth1 eth1.3900
iface eth1.3900 inet static
    address my.pri.vate.address
    netmask my.custom.net.mask

Configuration on CentOS 7

Let's assume your RPN NIC is eth1.

You need a file for the parent interface, /etc/sysconfig/network-scripts/ifcfg-eth1:
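A minimal sketch of what this file typically contains, assuming the parent interface carries no IP address itself:

```ini
# /etc/sysconfig/network-scripts/ifcfg-eth1
DEVICE=eth1
TYPE=Ethernet
BOOTPROTO=none
ONBOOT=yes
```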


And another file for the vlan interface (assuming the VLAN ID is 3900), /etc/sysconfig/network-scripts/ifcfg-eth1.3900:
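A sketch of the VLAN interface file, reusing the address placeholders from the Debian example above:

```ini
# /etc/sysconfig/network-scripts/ifcfg-eth1.3900
DEVICE=eth1.3900
BOOTPROTO=none
ONBOOT=yes
VLAN=yes
IPADDR=my.pri.vate.address
NETMASK=my.custom.net.mask
```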


Configuration on FreeBSD

Edit your /etc/rc.conf file, assuming ix0 is your RPN NIC and 3900 your VLAN ID:

cloned_interfaces="vlan3900"
ifconfig_vlan3900="inet my.pri.vate.IP/CIDR vlan 3900 vlandev ix0"

The cloned_interfaces line ensures the vlan3900 interface is created at boot.

Of course, adapt the VLAN ID, Interface Name, and Network configuration to your needs.

Configuration on ESXi

On ESXi you can create virtual switches that communicate directly on the corresponding VLAN ID.

First, check your NICs to see which one is your RPN interface. In this example, it'll be vmnic2.

Then, go to the vSwitch management interface, and add a new one:

Give it a name, and select the proper vmnic:

Once done, go directly to the Port Group management, and add a new group:

Give it a name, and select the proper VLAN ID. Also select the proper vSwitch.

Once done, on your Virtual Machine, create a NIC directly linked to your new port group, here RPN VM Network.
Then configure your VM on your already-defined address plan.

No VLAN configuration is required in your VM; everything is handled directly in your vSwitch.

Configuration on Proxmox

On Proxmox, as on ESXi, you can create a bridge (vSwitch) directly on the proper VLAN. This way, you won't have to make your VMs' configuration VLAN-aware.

In the Network Section, add a Linux Bridge.

Create your bridge with a bridge_port named after your RPN NIC and your VLAN ID.
For instance, if your RPN NIC is eth2 and your VLAN ID is 3900, name it eth2.3900.

Once created, you might need to reboot your host.
Then, when you create a new VM on your new vmbr, you'll just have to configure the network on the already-defined address plan you have chosen!

This way, your configuration will resist a reboot.
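As a sketch, assuming eth2 as the RPN NIC, VLAN ID 3900, and vmbr2 as the bridge name, the resulting stanza in the host's /etc/network/interfaces would look like this:

```text
auto vmbr2
iface vmbr2 inet manual
    bridge_ports eth2.3900
    bridge_stp off
    bridge_fd 0
```

The VLAN tagging happens on the eth2.3900 bridge port, so the VMs attached to vmbr2 see untagged traffic.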

A first real-world use case could be 2 hypervisors with virtual machines that need to communicate with each other.
With this setup, all of them can be on the same VLAN, directly on the same address plan.

Jumbo Frame (MTU 9000)

The RPN network supports Jumbo Frames by default, which allows you to configure your network interfaces with an MTU of 9000.

This significantly reduces the interruptions and processor overhead needed for data transfer. The performance gain can reach up to +20% on resource-intensive applications such as iSCSI, NFS and DRBD.

To check your current MTU settings, type:

  ifconfig eth2

  ifconfig eth2 | grep MTU
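On recent distributions where ifconfig is no longer installed, the same check can be done with iproute2 (eth2 is assumed to be your RPN NIC):

```shell
# Show the current MTU of the interface with iproute2.
NIC=eth2   # replace with your RPN NIC
ip link show "$NIC" | grep -o 'mtu [0-9]*'
```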

Changing the MTU to 9000


Debian / Ubuntu

In the file /etc/network/interfaces you have to add:

mtu 9000

Insert it in your configuration file below the line iface ethX inet static.
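For instance, combined with the VLAN stanza from the permanent-configuration section above, the complete interface definition would read:

```text
auto eth1.3900
iface eth1.3900 inet static
    address my.pri.vate.address
    netmask my.custom.net.mask
    mtu 9000
```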


CentOS 7

In the file /etc/sysconfig/network-scripts/ifcfg-ethX, where X is the number of your interface, you have to add:

MTU="9000"

Then restart the network service:

service network restart


Windows

To change the MTU settings in Windows, we recommend using a tool that allows you to modify the setting with a simple click: http://www.clubic.com/telecharger-fiche305576-tcp-optimizer.html

> Start the tool
> In the bottom right corner, click on ''Custom''
> Go to ''Network adapter selection'' and select the concerned network interface
> Put the value ''9000'' in the MTU settings
> Now click on ''Apply changes''

RPNv1 Compatibility

If you have some services that are available only on the RPNv1 (RPN-SAN, servers not compatible with RPNv2, etc …), you can add a compatibility layer through your console.

This does not work with the RPN-VPN at the moment, and could cause trouble on your whole RPN group if you add it.

In the RPNv2 management console, click on the corresponding button, and select an RPNv1 group that'll be able to access it.
Once you've done that, you can either restart the DHCP client on your RPNv1 servers for them to get the new routes, or manually add a route through your already existing RPN gateway.

You can do it with the following snippet:

ip route add $SUBNET via $RPNGW

Of course, make sure to replace $SUBNET with the destination RPN subnet and $RPNGW with your actual RPN gateway.

On the RPNv2 servers, you'll be provided with an RPNv1 subnet usable on your VLAN ID. If, for example, you get a /28 block:

  • one address of the block will be your gateway on the RPNv1 network for the RPNv2 servers
  • the other addresses can be used on your RPNv2 servers

You can configure your interfaces as follows in /etc/network/interfaces, using an address from the provided RPNv1 subnet (the placeholders are to be replaced with your actual values):

iface eth1.3900:0 inet static
    address my.rpnv1.address
    netmask my.rpnv1.netmask

and add the route for the whole RPN network:

ip route add $RPNNET via $GATEWAY

Replace $RPNNET with the global RPN network and $GATEWAY with the gateway address from your RPNv1 block.

Here is a diagram of how it works for all your RPN services (v1 and v2):

Q-in-Q mode

Important: Q-in-Q mode is not available on all offers. If you add a server that is not compatible with Q-in-Q mode, an error message will appear. Do not hesitate to contact the technical assistance if you have any questions regarding the Q-in-Q compatibility of a server.

The Q-in-Q mode lets you use more VLANs than you can in NORMAL mode.

Q-in-Q is rather simple: it takes the packets you send, with their tags, and adds its own special tag to the packet's header.
This way, we encapsulate your tags inside our tag, making the use of numerous VLANs possible on your side (up to 4096 per server!).
Unfortunately, however, this rules out the additional services available on the RPNv2.

The configuration is rather simple.

You'll be able to configure multiple VLANs on each server, the same way you configure one in NORMAL mode.
You'll just have to add multiple virtual interfaces.
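For instance, on Debian/Ubuntu, two VLANs would simply be two stanzas in /etc/network/interfaces. The VLAN IDs (100 and 200) and addresses below are examples; adapt them to your own plan:

```text
auto eth1.100 eth1.200

iface eth1.100 inet static
    address 10.100.0.1
    netmask 255.255.255.0

iface eth1.200 inet static
    address 10.200.0.1
    netmask 255.255.255.0
```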

You can use the RPNv1 compatibility the same way in Q-in-Q mode as well! Note that you'll have to configure the RPNv1 addresses on a non-tagged interface.