
There have been a lot of modifications to TCP since the 1980s to allow it to push a lot more bandwidth on faster networks, window scaling perhaps most notably.
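For context on why window scaling mattered: the original TCP header's 16-bit window field caps the receive window at 64 KiB, and throughput can never exceed window / RTT. A quick back-of-the-envelope calculation (illustrative numbers, not from the thread):

```python
# Illustrative sketch: TCP throughput is capped at window_size / RTT.
# Without window scaling the advertised window tops out at 64 KiB
# (16-bit header field); RFC 7323 window scaling shifts it by up to 14 bits.

def max_throughput_mbps(window_bytes: float, rtt_s: float) -> float:
    """Throughput ceiling in Mbit/s for a given window and round-trip time."""
    return window_bytes * 8 / rtt_s / 1e6

# A 64 KiB window on a 100 ms transcontinental path:
print(max_throughput_mbps(64 * 1024, 0.100))  # ~5.24 Mbit/s, regardless of link speed

# With the maximum scale factor (<< 14), the ceiling rises accordingly:
print(max_throughput_mbps((64 * 1024) << 14, 0.100))
```

So on long-RTT paths, unscaled TCP was stuck at a few Mbit/s no matter how fast the link was.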

How does BBR avoid killing other streams that happen to share the same pipe? It seems it would consume more than its fair share if the other TCP streams are using older algorithms.

p.s. presumably if you get 1% loss with no congestion there's wireless/mobile involved?
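Why 1% non-congestion loss is so damaging to classic TCP: loss-based Reno-style congestion control treats every loss as a congestion signal, and its steady-state throughput scales roughly as MSS / (RTT · sqrt(p)) (the Mathis et al. approximation). A rough sketch with illustrative numbers:

```python
import math

def mathis_throughput_mbps(mss_bytes: int, rtt_s: float, loss_rate: float) -> float:
    """Approximate steady-state throughput (Mbit/s) of a Reno-style TCP flow
    under random loss, per the Mathis et al. model; sqrt(3/2) ~ 1.22 is the
    constant for periodic loss."""
    return (mss_bytes * 8 * math.sqrt(1.5)) / (rtt_s * math.sqrt(loss_rate)) / 1e6

# 1460-byte MSS, 100 ms RTT, 1% random (non-congestion) loss:
print(mathis_throughput_mbps(1460, 0.100, 0.01))  # ~1.4 Mbit/s, whatever the link capacity
```

BBR sidesteps this by not using loss as its primary congestion signal, which is why it keeps throughput up on lossy wireless paths.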



BBR switches between probing modes to learn the path's base latency and what its fair-share bandwidth is.
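The core of that estimation is a pair of windowed filters: a running max over recent delivery-rate samples (bottleneck bandwidth) and a running min over recent RTT samples (propagation delay). A simplified, hypothetical sketch of the idea — names and window lengths are illustrative, not BBR's actual code, and real BBR additionally cycles pacing-gain phases and periodically drains the queue to re-measure min RTT:

```python
from collections import deque

class PathModel:
    """Toy BBR-style path estimator: windowed max of delivery-rate samples
    and windowed min of RTT samples (hypothetical sketch, not real BBR)."""

    def __init__(self, bw_window: int = 10, rtt_window: int = 10):
        self.bw_samples = deque(maxlen=bw_window)    # bytes/sec per ACK
        self.rtt_samples = deque(maxlen=rtt_window)  # seconds per ACK

    def on_ack(self, delivered_bps: float, rtt_s: float) -> None:
        self.bw_samples.append(delivered_bps)
        self.rtt_samples.append(rtt_s)

    @property
    def btl_bw(self) -> float:
        """Bottleneck-bandwidth estimate: max rate recently observed."""
        return max(self.bw_samples)

    @property
    def min_rtt(self) -> float:
        """Propagation-delay estimate: min RTT recently observed."""
        return min(self.rtt_samples)

    def cwnd_bytes(self, gain: float = 2.0) -> float:
        """Operating point: a gain times the bandwidth-delay product."""
        return gain * self.btl_bw * self.min_rtt

m = PathModel()
for bw, rtt in [(1.2e6, 0.040), (1.5e6, 0.035), (1.4e6, 0.050)]:
    m.on_ack(bw, rtt)
print(m.btl_bw, m.min_rtt, m.cwnd_bytes())  # 1500000.0 0.035 105000.0
```

The max/min filtering is what lets it ignore transient queueing: a queued-up RTT sample never lowers the min, and a throttled delivery sample never lowers the max, so the model tracks the path rather than the current queue.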


Do you know whether any experimental results on sharing with the other congestion-avoidance flavors are available somewhere? Historically this requirement for backwards compatibility has been a big problem. Maybe YouTube is getting better but other web traffic is getting hosed?



Cool. Thanks! I worked on a UDP congestion avoidance algorithm that had bandwidth/latency feedback built into the protocol and had to deal with some of the same issues.




