BQN Documentation

General Requirements

BQN runs on dedicated commercial off-the-shelf servers or on virtual machines, configured according to the network capacity and connectivity requirements.

Supported CPUs

  • Intel Xeon and Core CPUs (Nehalem or later)
  • AMD Epyc CPUs

Dual-CPU servers are supported. See the hardware dimensioning section for details.

The maximum number of CPU cores currently supported per server is 256; this requires bqn R4.18 or later together with bqnkernel-R3.0.13 or later. For up to 128 cores, earlier bqn and bqnkernel releases can be used.
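
As a quick self-check on an installed server, the following minimal Python sketch compares the detected CPU count against the limits quoted above. It is only an illustration, not part of the BQN software; note that os.cpu_count() reports logical CPUs, so on a hyperthreaded system it returns twice the number of physical cores.

```python
#!/usr/bin/env python3
"""Illustrative check of the CPU count against the per-server limits above.
Not a Bequant tool; thresholds and release names are quoted from this section.
os.cpu_count() returns logical CPUs (hyperthreads included)."""
import os

cpus = os.cpu_count() or 0
print(f"Logical CPUs detected: {cpus}")

if cpus > 256:
    print("Above the 256-core maximum currently supported per server.")
elif cpus > 128:
    print("Supported, but needs bqn R4.18 or later with bqnkernel-R3.0.13 or later.")
else:
    print("Within the 128-core range; earlier bqn and bqnkernel releases also work.")
```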

Supported Hard Disks

SSDs (Solid State Drives) are recommended for performance and reliability reasons.

The following disk types are supported:

  • SATA
  • SAS
  • NVMe

Supported Network Interfaces

A BQN server needs at least three network ports: one for management and two more for packet processing.

Ports for packet processing should use one of the following controllers:

Controller | Port speeds | Observations
Intel I210 | 1 Gbps |
Intel I350 | 1 Gbps |
Intel i226-V | 2.5 Gbps | Requires bqnkernel-R3.0.13 or later.
Intel X520 | 10 Gbps |
Intel X540 | 10 Gbps |
Intel X550 | 10 Gbps |
Intel X553 | 10 Gbps |
Intel X710 | 10 Gbps |
Intel XL710 | 10 / 40 Gbps | With PPPoE traffic, the XL710 negotiates a 40 Gbps link, but the capacity is limited to 10 Gbps because the card only distributes traffic to one core.
Intel XXV710 | 10 / 25 Gbps | With PPPoE traffic, the XXV710 negotiates a 25 Gbps link, but the capacity is limited to 10 Gbps because the card only distributes traffic to one core.
Intel E810-XXV | 10 / 25 Gbps | Requires bqnkernel-R3.0.14 or later.
Intel E810-C | 10 / 25 / 50 / 100 Gbps | Requires bqnkernel-R3.0.14 or later. Limited to 100 Gbps total throughput (uplink and downlink combined).
Mellanox ConnectX-4 Lx | 10 / 25 / 40 / 50 Gbps | Requires bqnkernel-R3.0.16 or later. With PPPoE traffic, the card can negotiate speeds over 10 Gbps, but the capacity is limited to 10 Gbps because the card only distributes traffic to one core.
Mellanox ConnectX-5 | 100 Gbps | Requires bqnkernel-R3.0.16 or later. With PPPoE traffic, the card can negotiate speeds over 10 Gbps, but the capacity is limited to 10 Gbps because the card only distributes traffic to one core.
Mellanox ConnectX-6 | 10 / 25 / 40 / 50 / 100 Gbps | Requires bqnkernel-R3.0.16 or later. With PPPoE traffic, the card can negotiate speeds over 10 Gbps, but the capacity is limited to 10 Gbps because the card only distributes traffic to one core.

Other network interface models can be supported, but with a much lower throughput capacity (up to 1 Gbps).
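
When auditing existing hardware, the following minimal Python sketch (an illustration only, not a Bequant tool) lists the Ethernet controllers reported by lspci on a Linux host and flags those whose model name appears in the table above. It assumes lspci is installed and that the model strings appear verbatim in the device names reported by lspci.

```python
#!/usr/bin/env python3
"""Illustrative check: list Ethernet controllers and flag those matching the
supported models in the table above. Assumes a Linux host with lspci available;
the model strings below are copied from the table."""
import subprocess

SUPPORTED = [
    "I210", "I350", "I226-V", "X520", "X540", "X550", "X553",
    "X710", "XL710", "XXV710", "E810",
    "ConnectX-4 Lx", "ConnectX-5", "ConnectX-6",
]

# lspci -nn prints one line per PCI device, including vendor and device names.
out = subprocess.run(["lspci", "-nn"], capture_output=True, text=True, check=True).stdout
for line in out.splitlines():
    if "Ethernet controller" not in line:
        continue
    match = next((m for m in SUPPORTED if m.lower() in line.lower()), None)
    status = f"supported ({match})" if match else "not in the supported list"
    print(f"{status}: {line}")
```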

Supported Network Interface Transceivers

For optical interfaces, the transceivers must be Intel-compatible and of one of the following types:

Transceiver type | Subtype
1G SFP | 1000BASE-SX
1G SFP | 1000BASE-LX
10G SFP+ | 10GBASE-SR/1000BASE-SX
10G SFP+ | 10GBASE-LR/1000BASE-LX
25G SFP28 | -
40G QSFP+ | 40GBASE-SR4
40G QSFP+ | 40GBASE-LR4
100G QSFP28 | -

Hardware Dimensioning

Configurations range from a minimum of 1 Gbps up to 200 Gbps. The following table summarizes the CPU, RAM and disk needed depending on the network capacity. The processors shown are examples of verified systems; processors with similar performance characteristics will also work. Older processors (no older than the Nehalem architecture), or processors running at lower frequencies, may require more cores to reach the same throughput.

Capacity | CPU | RAM* | Disk
1 Gbps | Intel N100, Intel i5 (minimum 4 cores), Intel i7, Intel Xeon E3-1220 and E-2314 | 8 GB | 60 GB
10 Gbps | Intel Xeon E3-1240 / E-2334 (4-core CPUs, with hyperthreading) | 32 GB | 120 GB
20 Gbps | Intel Xeon Silver 4214 / 4310 (12-core CPUs, with hyperthreading) | 64 GB | 240 GB
40 Gbps | 2 x Intel Xeon Silver 4214 / 4310 (2x 12-core CPUs, with hyperthreading) | 128 GB | 240 GB
100 Gbps | 2 x AMD Epyc 7532 / 7543 (2x 32-core CPUs) | 256 GB | 480 GB
200 Gbps | 2 x AMD Epyc 7763 / 9554 (2x 64-core CPUs) | 512 / 768 GB | 960 GB
400 Gbps** | 2 x AMD Epyc 9754 (2x 128-core CPUs) | 768 GB / 1.536 TB | 1.9 TB

* RAM configuration requirements:

  • For configurations below 10 Gbps, use 1 or 2 DIMMs.
  • For a 10 Gbps configuration, use 2 DIMMs.
  • For a 20 Gbps configuration, use 4 or 6 DIMMs, all of the same size. For the Xeon 4310, 8 DIMMs are also OK.
  • For a 40 Gbps configuration, use 8 or 12 DIMMs, all of the same size and equally distributed between the two CPUs. For the Xeon 4310, 16 DIMMs are also OK.
  • For configurations of 100 Gbps, 200 Gbps and 400 Gbps, use at least one DIMM per CPU memory channel, with an equal number of DIMMs per channel. For example, the AMD 7532 CPU has 8 memory channels, so a 100 Gbps system will use 8 DIMMs per CPU (16 DIMMs in total), each DIMM of 16 GB, to reach the required 256 GB of total RAM (this calculation is illustrated in the sketch below).

** The 400 Gbps platform will be available soon.
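
The dimensioning rules above can be captured in a small lookup. The following Python sketch (an illustration only, with the values copied from the table and notes in this section) prints the reference hardware for a target capacity and derives the per-DIMM size from the number of memory channels, reproducing the 100 Gbps example of 16 DIMMs of 16 GB. The channel count passed by the caller is an assumption about the chosen CPU model, not something the sketch detects.

```python
#!/usr/bin/env python3
"""Illustrative dimensioning helper; values are copied from the table above."""

# capacity (Gbps) -> (example CPUs, total RAM in GB, disk in GB)
DIMENSIONING = {
    1:   ("Intel N100 / i5 (4 cores) / i7 / Xeon E3-1220 / E-2314", 8, 60),
    10:  ("Intel Xeon E3-1240 / E-2334", 32, 120),
    20:  ("Intel Xeon Silver 4214 / 4310", 64, 240),
    40:  ("2 x Intel Xeon Silver 4214 / 4310", 128, 240),
    100: ("2 x AMD Epyc 7532 / 7543", 256, 480),
    200: ("2 x AMD Epyc 7763 / 9554", 512, 960),  # lower of the 512 / 768 GB options
}

def dimm_plan(total_ram_gb: int, cpus: int, channels_per_cpu: int) -> str:
    """One DIMM per memory channel, all DIMMs of the same size."""
    dimms = cpus * channels_per_cpu
    size, rem = divmod(total_ram_gb, dimms)
    if rem:
        return f"{total_ram_gb} GB does not split evenly over {dimms} DIMMs"
    return f"{dimms} DIMMs of {size} GB each"

cpu, ram, disk = DIMENSIONING[100]
print(f"100 Gbps: {cpu}, {ram} GB RAM, {disk} GB disk")
# The AMD Epyc 7532 has 8 memory channels per CPU (assumption about the chosen model).
print("RAM layout:", dimm_plan(ram, cpus=2, channels_per_cpu=8))
```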

Virtual Platforms

BQN supports:

  • VMware
  • KVM (with a host machine Linux kernel of version 4.11 or later and QEMU version 2.9 or later).

All resources must be fully dedicated (pinned) to the virtual machine (no oversubscription). Check with Bequant for the resources required by your traffic load. As a general guideline, use the following resources:

Capacity | vCPU* | RAM | Disk
1 Gbps | 2 | 8 GB | 60 GB
10 Gbps | 14 | 32 GB | 120 GB

* Each vCPU is equivalent to one core of an Intel Xeon E5-2630 v4 @ 2.20 GHz CPU with hyperthreading enabled.

For the data plane interfaces, the supported configuration uses Intel network cards with PCI passthrough, for performance and reliability reasons.
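
Before assigning the data plane NICs to the virtual machine, it can help to confirm that the host exposes them in IOMMU groups, which PCI passthrough requires. The following minimal Python sketch (an illustration only, assuming a Linux KVM host with the IOMMU enabled) lists the Ethernet controllers found under the standard sysfs path /sys/kernel/iommu_groups.

```python
#!/usr/bin/env python3
"""Illustrative pre-check for PCI passthrough: list the Ethernet controllers
in each IOMMU group. Assumes a Linux KVM host; the paths used are standard
Linux sysfs locations, not BQN-specific."""
from pathlib import Path

groups = Path("/sys/kernel/iommu_groups")
if not groups.is_dir():
    raise SystemExit("No IOMMU groups found: enable the IOMMU (e.g. intel_iommu=on) and reboot.")

for group in sorted(groups.iterdir(), key=lambda p: int(p.name)):
    for dev in (group / "devices").iterdir():
        # PCI class 0x0200 identifies an Ethernet controller.
        pci_class = (dev / "class").read_text().strip()
        if pci_class.startswith("0x0200"):
            print(f"IOMMU group {group.name}: {dev.name}")
```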
