In our previous blog posts we explained how we obtain optimized and realistic performance measurements of our Threat Defender software. We illustrated these processes with the results measured on a single-socket reference system.

To find out how these values scale up on a more powerful system, we carried out the same set of optimized and realistic tests on a dual-socket system. In the following, we will have a look at the two sets of performance results. For the sake of comparability, the dual-socket system has basically the same setup as the single-socket system – just with two CPUs and twice the RAM.

The table shows the configurations of the two test systems:

|                  | Single-socket system | Dual-socket system |
|------------------|----------------------|--------------------|
| CPU              | Intel Xeon E5-2690v4; 14 cores / 28 threads | 2x Intel Xeon E5-2690v4; 28 cores / 56 threads |
| RAM              | 128 GB ECC | 256 GB ECC |
| Network adapters | Intel 82599ES and X710 (two times 2x 10 Gbit/s, 40 Gbit/s connectivity in total) | Intel 82599ES and X710 (two times 2x 10 Gbit/s, 40 Gbit/s connectivity in total) |
| Hard drive       | 960 GB Enterprise SSD | 960 GB Enterprise SSD |

On these two systems, we tested Threat Defender version 20181206.0 both under optimized and realistic conditions.

To optimize our performance measurements, we disable as many software features as possible and create test traffic that is tailored to the respective performance indicators. See “How to Cheat on Your Performance Tests” if you want to know more about how we optimize the tests.

While the optimized performance results indicate the potential of Threat Defender, they are far from the reality of complex real-world network traffic.

To see how Threat Defender performs in daily use, we measure the performance using the Cisco EMIX 2012 and the BreakingPoint Enterprise Mix. These two traffic mixes distribute traffic over a wide range of different protocols, allowing them to simulate real-world traffic. See “Putting Performance Values into Perspective” to read more on our tests under realistic conditions.

To reflect the impact of the enabled software features on the performance, we run the tests using both the default[1] and the full[2] feature set.

Now, let’s have a look at the performance measured on the two reference systems and see if two sockets and double the RAM mean twice the performance.

| Test | Single-socket system | Dual-socket system | Scaling factor |
|------|----------------------|--------------------|----------------|
| Throughput | | | |
| Optimized | 33.4 Gbit/s | 40 Gbit/s[3] | 1.2[3] |
| Cisco, default feature set | 6 Gbit/s | 12.8 Gbit/s | 2.13 |
| Cisco, full feature set | 5.4 Gbit/s | 11.5 Gbit/s | 2.13 |
| BP, default feature set | 6.9 Gbit/s | 14 Gbit/s | 2.03 |
| BP, full feature set | 5.85 Gbit/s | 8.7 Gbit/s | 1.49 |
| Processed packets per second | | | |
| Optimized | 4,940,000 | 7,430,000 | 1.5 |
| Cisco, default feature set | 760,000 | 1,700,000 | 2.24 |
| Cisco, full feature set | 665,000 | 1,500,000 | 2.26 |
| BP, default feature set | 1,820,000 | 3,730,000 | 2.05 |
| BP, full feature set | 1,600,000 | 2,300,000 | 1.44 |
| New sessions per second | | | |
| Optimized | 410,000 | 620,000 | 1.51 |
| Cisco, default feature set | 12,800 | 30,000 | 2.34 |
| Cisco, full feature set | 11,000 | 27,500 | 2.5 |
| BP, default feature set | 71,500 | 175,000 | 2.45 |
| BP, full feature set | 60,000 | 90,000 | 1.5 |
| Minimum latency | | | |
| Optimized | 4.125 μs | 4.625 μs | 1.12 |
| Cisco, default feature set | 7.125 μs | 7.5 μs | 1.05 |
| Cisco, full feature set | 7.625 μs | 8.0 μs | 1.05 |
| BP, default feature set | 8.5 μs | 8.375 μs | 0.98 |
| BP, full feature set | 8.5 μs | 7.375 μs | 0.87 |

As you can see from the scaling factors in the table above, the increased RAM and CPU capacity improves the performance in almost all tests. Minimum latency is the one indicator where we do not expect the performance to improve with the number of sockets. Across almost all tests, the minimum latency of the dual-socket system is slightly higher than that of the single-socket system. This is the expected result of the additional communication required between the two sockets to keep the global flow state synchronized. But since the effect concerns only fractions of a microsecond, it is negligible.
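For clarity, the scaling factors in the table are simply the dual-socket result divided by the single-socket result, rounded to two decimal places. A minimal sketch, using a few values from the table above:

```python
# Sketch: how the "Scaling factor" column is derived — the dual-socket
# measurement divided by the single-socket measurement.
# Values are taken from the results table above.
measurements = {
    # test: (single_socket, dual_socket)
    "Throughput, Cisco default (Gbit/s)": (6.0, 12.8),
    "Packets/s, Cisco default": (760_000, 1_700_000),
    "New sessions/s, Cisco default": (12_800, 30_000),
    "Min. latency, BP full (us)": (8.5, 7.375),
}

def scaling_factor(single, dual):
    """Ratio of dual-socket to single-socket performance."""
    return round(dual / single, 2)

for test, (single, dual) in measurements.items():
    print(f"{test}: {scaling_factor(single, dual)}")
```

Note that for latency, a factor below 1 means the dual-socket system was faster, since lower latency is better.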

In the tests using the Cisco EMIX, the dual-socket system scales up especially well, often even more than doubling the performance. In these cases, the performance of Threat Defender on the single-socket system is only limited by the RAM and/or processing power.

In the tests using the BP Enterprise Mix and the full feature set, however, the performance of the dual-socket system only scales up by about 1.5 times. There seem to be limiting factors other than just the RAM and CPU capacity here. Possibly, one of the features included only in the full feature set, in combination with the traffic composition of the BP Enterprise Mix, consumes too much processing power. We will perform further research to get to the bottom of this.

Under optimized test conditions, the performance also does not scale up in proportion with the hardware. While we designed the dual-socket system similarly to the single-socket system, there are still deviations in the setup – starting with the second socket and the required interlink between the sockets. Since the optimized tests were designed specifically for the single-socket system, they would have to be adapted to the dual-socket system to achieve truly ideal values.
Another reason may be found in limitations of the test infrastructure, as is the case with the optimized throughput: at 40 Gbit/s, our test platform is at its maximum load while the CPUs of the dual-socket system are only at two thirds of their capacity, even with the full feature set enabled. This means that the throughput should scale up to about 60 Gbit/s under ideal conditions on the dual-socket system, resulting in a scaling factor of about 1.5, which is similar to the other optimized measurements.

In the end, there is a limit to all optimizations as you’ll inevitably reach a point where further optimization of one performance indicator means performance losses in another. Likewise, improving performance in one specific usage scenario often comes at the cost of decreasing performance in many other usage scenarios. Finding the best compromise is the goal – and that’s what we’re trying to achieve with Threat Defender and our “install anywhere” approach.

To sum it up, after extensive performance testing we can say that double the processing power does indeed equal double the performance – in some cases, at least. Size does matter after all.



[1] The default feature set comprises the basic features you need to make good use of Threat Defender: application detection, URL classification, basic intrusion prevention system (IPS), IPFIX reporting, asset tracking, and the policy engine with behavior-based correlation.

[2] The full feature set additionally includes: full IPS and data leakage protection (DLP) analysis, SSL proxy for all encrypted connections, and extended behavior-based correlation.

[3] Transfer limit of our test infrastructure.
