Recently we had a cabling issue in our core infrastructure that caused around 3 to 12% packet loss across a few IP streams. One of my colleagues made an interesting observation: when he pinged with a large packet size (5000 bytes), the packet loss rose as high as 40%. In his opinion, that meant some applications were experiencing up to 40% packet loss. I seldom run large-packet ping tests unless I am troubleshooting MTU issues, so this observation was interesting to me.
At first glance, it may look like an aggravated problem. Yet you know that your network path MTU doesn't support jumbo frames end-to-end, so large datagrams are fragmented into standard-sized frames anyway. If every frame on the wire is the same size, why is there a difference in packet loss rate when you ping with large datagrams? The answer is not obvious. The important thing to note is that a ping test measures ICMP datagram loss, not Ethernet frame loss. When the ICMP datagram fits within the Ethernet MTU, the two are the same. But a large datagram spans several Ethernet frames, and losing any one of those fragments loses the whole datagram. So exactly how much higher does the datagram loss get? Enter Math.
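To preview the intuition with a quick sketch: if each frame is lost independently with probability p and a datagram is split into n fragments, the datagram survives only if all n fragments survive, so its loss probability is 1 − (1 − p)^n. The snippet below is illustrative, not from the original test; it assumes a standard 1500-byte MTU and, for simplicity, ignores the 8-byte ICMP header when counting fragments (a 5000-byte ping comes out to four fragments either way).

```python
# Sketch: datagram loss when a large ICMP datagram is fragmented into
# n Ethernet frames, assuming independent per-frame loss probability p.

import math

def fragments(datagram_bytes: int, mtu: int = 1500, ip_header: int = 20) -> int:
    """Number of IP fragments needed to carry the datagram."""
    payload_per_frame = mtu - ip_header  # usable payload bytes per fragment
    return math.ceil(datagram_bytes / payload_per_frame)

def datagram_loss(p_frame: float, n_frags: int) -> float:
    """A datagram is lost if any one of its fragments is lost."""
    return 1 - (1 - p_frame) ** n_frags

n = fragments(5000)               # 4 fragments at a 1500-byte MTU
for p in (0.03, 0.12):            # the observed 3-12% per-frame loss range
    print(f"frame loss {p:.0%} -> datagram loss {datagram_loss(p, n):.0%}")
# frame loss 3% -> datagram loss 11%
# frame loss 12% -> datagram loss 40%
```

Notably, at the top of the observed 12% frame-loss range, four fragments already push the datagram loss to about 40%, which lines up with my colleague's measurement.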