BQN Documentation

General Requirements

BQN runs on dedicated commercial off-the-shelf servers or virtual machines, configured according to the network capacity and connectivity requirements.

Supported CPUs

  • Intel Xeon and Core CPUs (Nehalem or later)
  • AMD Epyc CPUs

Dual-CPU servers are supported. See the hardware dimensioning section for details.

The maximum number of CPU cores currently supported per server is 256, which requires bqn R4.18 or later with bqnkernel-R3.0.13 or later. For up to 128 cores, earlier bqn and bqnkernel releases can be used.
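
As an illustration only (not an official Bequant tool), the following Python sketch encodes the core-count rule above; the function and helper names are hypothetical.

    # Hypothetical sketch: checks whether a server's core count is covered by the
    # installed bqn / bqnkernel releases. Up to 128 cores works on earlier releases;
    # 129-256 cores needs bqn R4.18+ and bqnkernel R3.0.13+.

    def parse_version(v: str) -> tuple:
        """Turn a release string like 'R4.18' or 'R3.0.13' into a comparable tuple."""
        return tuple(int(x) for x in v.lstrip("R").split("."))

    def cores_supported(num_cores: int, bqn: str, bqnkernel: str) -> bool:
        if num_cores > 256:
            return False                      # beyond the current maximum
        if num_cores <= 128:
            return True                       # earlier releases are sufficient
        return (parse_version(bqn) >= (4, 18)
                and parse_version(bqnkernel) >= (3, 0, 13))

    print(cores_supported(256, "R4.18", "R3.0.13"))  # True
    print(cores_supported(192, "R4.16", "R3.0.12"))  # False: needs newer releases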

Supported Hard Disks

SSDs (Solid State Drives) are recommended for performance and reliability reasons.

The following disk types are supported:

  • SATA
  • SAS
  • NVMe

Supported Network Interfaces

A BQN server needs at least three network ports: one for management and another two for packet processing.

Ports for packet processing should use one of the following controllers:

Controller | Port Speeds | Observations
Intel I210 | 1 Gbps |
Intel I350 | 1 Gbps |
Intel i226-V | 2.5 Gbps | Requires bqnkernel-R3.0.13 or later. DPDK not supported (throughput may be limited).
Intel X520 | 10 Gbps |
Intel X540 | 10 Gbps |
Intel X550 | 10 Gbps |
Intel X553 | 10 Gbps |
Intel X710 | 10 Gbps |
Intel XL710 | 10 / 40 Gbps | With PPPoE traffic, the XL710 negotiates a 40 Gbps link, but capacity is limited to 10 Gbps because the card distributes traffic to only one core. Load balancing of PPPoE traffic requires a card firmware update and bqnkernel-R3.0.19 or later (see here for details).
Intel XXV710 | 10 / 25 Gbps | With PPPoE traffic, the XXV710 negotiates a 25 Gbps link, but capacity is limited to 10 Gbps because the card distributes traffic to only one core.
Intel E810-XXV | 10 / 25 Gbps | Requires bqnkernel-R3.0.14 or later.
Intel E810-C | 10 / 25 / 50 / 100 Gbps | Requires bqnkernel-R3.0.14 or later. Generally limited to 100 Gbps total throughput (uplink and downlink combined). Model examples: E810-CQDA2, E810-CQDA2T, E810-2CQDA2. The E810-2CQDA2 can in theory reach 100 Gbps full duplex if the server supports PCIe bifurcation (most servers do not), but this setup has not been tested and is not officially supported.
Mellanox ConnectX-5 | 100 Gbps | Requires bqnkernel-R3.0.16 or later. With PPPoE traffic, the card can negotiate speeds over 10 Gbps, but capacity is limited to 10 Gbps because the card distributes traffic to only one core.
Mellanox ConnectX-6 | 10 / 25 / 40 / 50 / 100 Gbps | Requires bqnkernel-R3.0.16 or later. With PPPoE traffic, the card can negotiate speeds over 10 Gbps, but capacity is limited to 10 Gbps because the card distributes traffic to only one core.

Other network interface models may be supported, but with a much lower throughput capacity (up to 1 Gbps).

Supported Network Interface Transceivers

For optical interfaces, the transceivers must be Intel-compatible and of one of the following types:

Transceiver | Type | Subtype
1G | SFP | 1000BASE-SX
1G | SFP | 1000BASE-LX
10G | SFP+ | 10GBASE-SR/1000BASE-SX
10G | SFP+ | 10GBASE-LR/1000BASE-LX
25G | SFP28 | -
40G | QSFP+ | 40GBASE-SR4
40G | QSFP+ | 40GBASE-LR4
100G | QSFP28 | -

Hardware Dimensioning

Configurations range from a minimum of 1 Gbps up to 400 Gbps. The following table summarizes the CPU, RAM and disk needed depending on the network capacity. The processors shown are examples of verified systems; processors with similar performance characteristics will also work. Older processors (no older than the Nehalem architecture), or processors with a lower frequency than these, may require more cores to attain the same throughput.

Peak Throughput | CPU Vendor | Cores | Threads | Verified CPUs | RAM DIMMs | Disk | Network Interfaces
1 Gbps | Intel / AMD | 4 | 4 | Intel N100, Intel i5 (4 cores), Intel i7, Intel Xeon E3-1220, Intel Xeon E-2314 | 1 or 2 x 8 GB | 60 GB | Intel I210, Intel I350
5 Gbps | Intel / AMD | 4 | 8 | Intel Xeon E3-1240, Intel Xeon E-2334 | 2 x 16 GB or 1 x 32 GB | 2 x 120 GB | Intel X520, Intel X540, Intel X550, Intel X710, Intel XL710
10 Gbps | Intel / AMD | 4 | 8 | Intel Xeon E3-1240, Intel Xeon E-2334 | 2 x 16 GB | 2 x 120 GB | Intel X520, Intel X540, Intel X550, Intel X710, Intel XL710
20 Gbps | Intel / AMD | 12 | 24 | Intel Xeon Silver 4214, Intel Xeon Silver 4310 | 4 x 16 GB (also OK: 6 x 16 GB; 8 x 8 GB with Xeon 4310) | 2 x 240 GB | Intel XL710, Intel XXV710, Intel E810
40 Gbps | Intel / AMD | 24 | 48 | 2 x Intel Xeon Silver 4214, 2 x Intel Xeon Silver 4310 | 8 x 16 GB (also OK: 12 x 16 GB; 16 x 8 GB with Xeon 4310) | 2 x 240 GB | Intel XL710, Intel XXV710, Intel E810
100 Gbps | AMD | 64 | 128 | 2 x AMD Epyc 7523, 2 x AMD Epyc 7543 | 16 x 16 GB | 2 x 480 GB | Intel XXV710, Intel E810
200 Gbps | AMD | 128 | 256 | 2 x AMD Epyc 7763, 2 x AMD Epyc 9554 | 16 x 32 GB (Epyc 7763) or 24 x 32 GB (Epyc 9554) | 2 x 1 TB | Intel E810, Mellanox ConnectX
400 Gbps | AMD | 256 | 512 | 2 x AMD Epyc 9754 | 24 x 32 GB (also OK: 24 x 64 GB) | 2 x 2 TB | Intel E810, Mellanox ConnectX

Comments:

  • Peak Throughput is calculated by aggregating both directions (downlink and uplink).
  • Use the server's on-board 1 GbE network interface for server administration.
  • Hard disks are SATA, SAS or NVMe. SSDs are preferred.
  • Use RAM DIMMs that are all the same size.
  • If the server has two CPUs, distribute the RAM DIMMs equally between the two.
  • Use at least one DIMM per CPU memory channel, with an equal number of DIMMs per channel. In configurations of 100 Gbps or more, this is mandatory. For example, the AMD 7532 CPU has 8 memory channels, so a 100 Gbps system will use 8 DIMMs per CPU (16 DIMMs in total), each of 16 GB, to reach the required 256 GB of total RAM (see the sketch after this list).
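
To make the DIMM arithmetic in the last comment concrete, here is a minimal Python sketch of the calculation; the helper name is illustrative, not a Bequant tool.

    # Worked example of the one-DIMM-per-channel rule. The channel count and the
    # 256 GB target come from the example above; other values are illustrative.

    def dimm_layout(total_ram_gb: int, cpus: int, channels_per_cpu: int) -> tuple:
        """Return (dimms_total, dimm_size_gb) using one DIMM per memory channel."""
        dimms_total = cpus * channels_per_cpu          # one DIMM per channel
        dimm_size_gb = total_ram_gb // dimms_total     # equal-size DIMMs
        return dimms_total, dimm_size_gb

    # 100 Gbps example: two AMD 7532 CPUs, 8 memory channels each, 256 GB total RAM
    print(dimm_layout(256, cpus=2, channels_per_cpu=8))   # (16, 16) -> 16 DIMMs of 16 GB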

Virtual Platforms

BQN supports:

  • VMware
  • KVM (with a host Linux kernel of version 4.11 or later and QEMU of version 2.9 or later).

All resources must be fully dedicated (pinned) to the virtual machine (no oversubscription). Depending on the traffic load, check with Bequant for the required resources. As a general guideline, use the following:

Peak Throughput | vCPU* | RAM | Disk
1 Gbps | 2 | 8 GB | 60 GB
5 Gbps | 8 | 16 GB | 90 GB
10 Gbps | 14 | 32 GB | 120 GB
20 Gbps | 28 | 64 GB | 240 GB

* Each vCPU is equivalent to one core of an Intel Xeon E5-2630 v4 @ 2.20 GHz CPU, with hyperthreading enabled.
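
As a convenience, the guideline table above can be encoded as a simple lookup. The following Python sketch is illustrative only; actual sizing should still be confirmed with Bequant for your traffic profile.

    # Sketch of the virtual-machine sizing guideline as a lookup table.
    VM_SIZING = {                 # peak throughput -> (vCPUs, RAM GB, disk GB)
        "1 Gbps":  (2,  8,  60),
        "5 Gbps":  (8,  16, 90),
        "10 Gbps": (14, 32, 120),
        "20 Gbps": (28, 64, 240),
    }

    def vm_resources(throughput: str) -> dict:
        vcpus, ram_gb, disk_gb = VM_SIZING[throughput]
        return {"vcpus": vcpus, "ram_gb": ram_gb, "disk_gb": disk_gb}

    print(vm_resources("10 Gbps"))   # {'vcpus': 14, 'ram_gb': 32, 'disk_gb': 120}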

For the data plane interfaces, the supported configuration uses Intel network cards with PCI passthrough, for performance and reliability reasons.
