I've had issues before with Dell servers and RSS (Receive Side Scaling) under Hyper-V, so that setting now just gets disabled by default on any Broadcom NIC we run into – but a few weeks ago I started having a problem with packet loss on some other servers. At first I assumed the problem wasn't with the Hyper-V server, but with the client's network connection.
Now, we were talking about 1% packet loss over large samples: 100 pings might only show 1 or 2 packets being lost, but 1,000,000 pings would show 1% on average.
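That's why the large samples matter – assuming each packet is lost independently, a short ping test will quite often come back completely clean even at a true 1% loss rate. A quick awk check of the odds:

```shell
# Probability that 100 independent pings ALL succeed when true loss is 1%:
# 0.99^100, i.e. roughly 1 in 3 short tests looks completely clean.
awk 'BEGIN { p = 0.99; prob = 1; for (i = 0; i < 100; i++) prob *= p; printf "%.3f\n", prob }'
# prints 0.366
```

So a single "100 pings, 0 lost" result proves nothing – you need the big sample before you can trust the number.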
After a lot of investigation – sure enough, the problem was with the Hyper-V NICs themselves. On the hosts, I disabled all TCP and UDP offloading and things got substantially better, although I still had a little bit of packet loss here and there. Disabling TCP/UDP offloading within the guest OS seemed to fix this as well.
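For reference, this is roughly how I go about it – a sketch, not gospel. On Server 2012 and later the NetAdapter cmdlets can flip the offload settings in bulk; on older boxes netsh (or the NIC driver's Advanced properties tab) does the same job. Run the same commands inside the guests too:

```powershell
# Disable offloads on every adapter (Server 2012+ NetAdapter cmdlets).
# Run on the Hyper-V host AND inside each guest OS.
Disable-NetAdapterChecksumOffload -Name "*"   # TCP/UDP/IP checksum offload
Disable-NetAdapterLso -Name "*"               # Large Send Offload
Disable-NetAdapterRss -Name "*"               # RSS – the original Broadcom culprit

# Older hosts (2008 R2 era) – netsh equivalents:
netsh int tcp set global chimney=disabled
netsh int ip set global taskoffload=disabled

# Check what the driver is actually doing afterwards:
Get-NetAdapterAdvancedProperty -Name "*" | Sort-Object DisplayName
```

Changing these settings will briefly drop the NIC, so do it from a console session or out-of-band management, not over the connection you're disabling.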
I'll be turning off all network-stack offloading on all my Hyper-V hosts from here on in 🙂