So I'm trying to track down what's slowing down downloads from my Debian server to my devices (streaming box, laptop, other servers, etc.).
First, let me go over my network infrastructure: OPNsense firewall (Intel C3558R) <-10Gb SFP+ DAC-> managed switch <-2.5Gb RJ45-> clients, a 2.5Gb AX access point, and the Debian server (Intel N100).
Under a 5-minute stress test between my laptop (2.5Gb adapter plugged into the switch) and the Debian server (2.5Gb Intel I226-V NIC), I get the full bandwidth when uploading; however, when downloading it tops out around 300-400 Mbps. The download speed does not fare any better when connecting through the AX access point, and upload drops to around 500 Mbps. File transfers between the server and my laptop also run at approximately 300 Mbps. And yes, I manually disabled the Wi-Fi card when testing over Ethernet. Speed tests to outside servers show approximately 800/20 Mbps (on an 800 Mbps plan).
Fearing that the traffic might be passing through OPNsense and that the firewall was struggling to keep up, I disconnected the DAC cable and reran the test through the switch alone. No change in the results.
Reported link speeds per device:
Server: 2.5 Gbps
Laptop: 2.5 Gbps (2500Base-T)
Switch: 2.5 Gbps
Firewall: 10 Gbps (10GBase-Twinax)
Operating Systems per device:
Server: Debian Bookworm
Laptop: macOS Sonoma (works well for my use case)
Switch: some sort of embedded software
Firewall: OPNsense 24.1.4-amd64
Network Interface per device:
Server: Intel I226-V
Laptop: UGreen USB-C to 2.5Gb adapter
Switch: RTL8224-CG
Firewall: Intel X553
edit: Forgot to add that the OpenSpeedTest instance is hosted in Docker on my local server.
Fire up Wireshark or tcpdump during a transfer and check whether the TCP window size advertised by the laptop is limiting the transfer. It might not be able to receive data as fast as the server can send it.
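If it helps, here's a rough sketch of pulling the advertised receive windows out of a capture with Python and scapy (the pcap filename is a placeholder, and keep in mind the raw 16-bit window field still has to be scaled by the window-scale option negotiated in the SYN handshake):

```python
# Sketch: print the TCP receive windows advertised in a capture.
# Assumes a capture made first, e.g.: tcpdump -i <iface> -w transfer.pcap
# Requires scapy (pip install scapy); the filename is hypothetical.
from scapy.all import rdpcap, TCP

for pkt in rdpcap("transfer.pcap"):
    if TCP in pkt:
        tcp = pkt[TCP]
        # .window is the raw 16-bit field; the effective window is this
        # value left-shifted by the negotiated window-scale factor.
        print(f"{tcp.sport} -> {tcp.dport}: window={tcp.window}")
```

If the laptop's advertised window stays small in the download direction, that's your receive-side limit.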
Try another tool to test the speed. Your test seems web-based; have you tried iperf3? You can also play with iperf3's options (UDP vs. TCP, different payload sizes, etc.).
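Something along these lines is what I mean: a small wrapper that runs a few iperf3 variants and reads the JSON output (the server address is a placeholder, and it assumes "iperf3 -s" is already running on the Debian box):

```python
# Sketch: drive a few iperf3 variants and summarize the results.
# Assumes iperf3 is installed on the laptop and "iperf3 -s" is running
# on the server; the address below is a placeholder for your LAN.
import json
import subprocess

SERVER = "192.168.1.10"  # hypothetical server address

def iperf3(*extra):
    cmd = ["iperf3", "-c", SERVER, "-t", "10", "-J", *extra]
    out = subprocess.run(cmd, capture_output=True, text=True, check=True)
    return json.loads(out.stdout)

# TCP upload (client -> server), then download (-R reverses direction).
for label, args in [("upload", []), ("download", ["-R"])]:
    bps = iperf3(*args)["end"]["sum_received"]["bits_per_second"]
    print(f"TCP {label}: {bps / 1e6:.0f} Mbps")

# UDP at a 2 Gbps target rate with ~1400-byte payloads to check for loss.
udp = iperf3("-u", "-b", "2G", "-l", "1400")
print(f"UDP loss: {udp['end']['sum']['lost_percent']:.2f} %")
```

The -R run matters most here, since the slow direction is server -> laptop.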
Used iperf3 and it showed the full bandwidth; however, another commenter mentioned that my server's NVMe drive (which came prebundled) isn't guaranteed to be fast. After looking into it, it does seem to be the bottleneck.
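For anyone else chasing the same thing, a quick sequential-throughput timing like the sketch below is enough to separate disk speed from network speed (the path and sizes are made up; fio with --direct=1 is the more rigorous tool, especially for reads, which otherwise hit the page cache):

```python
# Sketch: rough sequential write timing to sanity-check the NVMe.
# The path and sizes are placeholders; point PATH at the drive in question.
import os
import time

PATH = "/tmp/disktest.bin"   # hypothetical location on the NVMe
SIZE = 2 * 1024**3           # 2 GiB total
CHUNK = 4 * 1024**2          # 4 MiB per write

buf = os.urandom(CHUNK)
start = time.monotonic()
with open(PATH, "wb") as f:
    for _ in range(SIZE // CHUNK):
        f.write(buf)
    f.flush()
    os.fsync(f.fileno())     # make sure the data actually hit the disk
elapsed = time.monotonic() - start
print(f"sequential write: {SIZE * 8 / elapsed / 1e6:.0f} Mbps")
os.remove(PATH)
```

If that number lands around 300-400 Mbps, the drive, not the network, explains the transfer speeds.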