
> where I'd like a uniform way of handling playback and live

Seems to me that WebRTC and HLS solve two different problems, though. WebRTC largely prefers dropping packets to stay real-time, while HLS by default buffers and preserves every frame rather than dropping frames to stay current. One is designed for calls (real-time) and one is not (streaming).

That’s why HLS seems over-complicated. It’s not designed around real-time signalling; instead it’s designed around making requests for frames/bytes effectively sequentially.
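
For a sense of what that looks like in practice, here’s a rough TypeScript sketch (my own illustration, not any particular player’s code) of the sequential-fetch loop a live HLS client runs. The playlist parsing is heavily simplified, the URL is made up, and the 2-second poll interval stands in for real target-duration pacing:

  // Simplified sketch of an HLS live client: re-fetch the playlist and
  // request each new segment sequentially over plain HTTP.
  async function pollHls(playlistUrl: string,
                         onSegment: (seg: ArrayBuffer) => void): Promise<void> {
    const seen = new Set<string>();
    for (;;) {
      const playlist = await (await fetch(playlistUrl)).text();
      // Segment URIs are the non-comment lines; tags start with '#'.
      // A real client parses #EXTINF / #EXT-X-MEDIA-SEQUENCE instead.
      for (const uri of playlist.split('\n').filter(l => l && !l.startsWith('#'))) {
        if (seen.has(uri)) continue;
        seen.add(uri);
        const resp = await fetch(new URL(uri, playlistUrl).href);
        onSegment(await resp.arrayBuffer());
      }
      await new Promise(r => setTimeout(r, 2000)); // roughly the target duration
    }
  }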

Now, if you’re not distributing requests at scale, HLS is indeed overkill. But if you might hit latency and want to stream uninterrupted video footage, it’s a necessary evil. If you pick WebRTC, you’ve no easy way to ask to pick up where you left off, because the default is just a stream of “real-time now” packets, and dropped packets are lost forever.

MSE would be a way of capturing packets, but if the protocol doesn’t let you sequentially access bytes starting from a timestamp, you’re stuck when trying to resume a stream, no?

I might have misunderstood something, but they do seem like they serve different purposes. :)



> WebRTC largely prefers dropping packets to stay real-time

In my experience, this doesn't work as well as people say. I haven't played with WebRTC yet, but my understanding is that it's based on RTP over an unreliable transport, which I am familiar with. Dropping packet by packet (with no mechanism to either retransmit the dropped packet or skip sending the rest of the packets in the frame) isn't great: when you lose one packet, you lose the full frame but waste bandwidth sending the rest of its data anyway. Worse, the lost packet usually belongs to a reference frame, so all the following frames are suspect until the next IDR frame.

RTP over TCP (interleaved RTSP channels) can be better: lost packets belonging to a frame get retransmitted once you've decided to send that frame, and whole frames can be skipped at once when things fall behind (observed within the application as insufficient buffer space). TCP has more rapid feedback, too; ACKs are far more frequent than RTCP receiver reports (RRs).
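
To make the failure mode concrete, here's a toy TypeScript sketch (mine, not anything from a real RTP stack) of loss detection via the RTP sequence number. The 12-byte header layout is from RFC 3550; the isIdr flag is a hypothetical signal a depacketizer would provide:

  // Toy loss detection for RTP over an unreliable transport.
  let expectedSeq: number | null = null;
  let suspect = false; // true until the next IDR after a loss

  function onRtpPacket(pkt: Uint8Array, isIdr: boolean): void {
    const view = new DataView(pkt.buffer, pkt.byteOffset, pkt.byteLength);
    const seq = view.getUint16(2); // bytes 2-3: 16-bit sequence number
    if (expectedSeq !== null && seq !== expectedSeq) {
      // A gap means a lost packet: the current frame is incomplete, and
      // frames referencing it are unreliable until decoder state resets.
      suspect = true;
    }
    expectedSeq = (seq + 1) & 0xffff; // sequence numbers wrap at 2^16
    if (isIdr) suspect = false; // an IDR frame resets decoder state
  }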

> while HLS by default buffers and preserves every frame rather than dropping frames to stay current. One is designed for calls (real-time) and one is not (streaming). That’s why HLS seems over-complicated. It’s not designed around real-time signalling; instead it’s designed around making requests for frames/bytes effectively sequentially.

Sure, and the WebSocket protocol I mentioned can also preserve every frame, more simply.

The most justifiable complexity in HLS, IMHO, is multiple encodings (variants) at different qualities so the player can switch between them. But not everyone needs or wants that, or trusts the user agent to do a good job with the selection.

> MSE would be a way of capturing packets, but if the protocol doesn’t let you sequentially access bytes starting from a timestamp, you’re stuck when trying to resume a stream, no?

MSE lets you specify your own protocol. Mine [1] lets you seek to arbitrary timestamps. It has its own complexity (around clock uncertainty/drift due to cheap/unreliable hardware setups), but it would only get more complex if I had to deal with HLS's complexity as well.
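
As a flavor of what "your own protocol" can look like, here's a minimal TypeScript sketch (an illustration, not Moonfire NVR's actual protocol): fMP4 segments arrive over a WebSocket and are appended to an MSE SourceBuffer, and the made-up start parameter asks the server to resume from a timestamp:

  const video = document.querySelector('video')!;
  const ms = new MediaSource();
  video.src = URL.createObjectURL(ms);

  ms.addEventListener('sourceopen', () => {
    const sb = ms.addSourceBuffer('video/mp4; codecs="avc1.640028"');
    const queue: ArrayBuffer[] = [];
    // appendBuffer is async; further appends must wait for 'updateend'.
    sb.addEventListener('updateend', () => {
      if (queue.length > 0) sb.appendBuffer(queue.shift()!);
    });
    // Hypothetical URL/parameter: resume the stream from a timestamp.
    const ws = new WebSocket('wss://nvr.example/stream?start=2021-04-26T00:00:00Z');
    ws.binaryType = 'arraybuffer';
    ws.onmessage = (ev) => {
      const seg = ev.data as ArrayBuffer;
      if (sb.updating || queue.length > 0) queue.push(seg);
      else sb.appendBuffer(seg);
    };
  });

Appends go through a queue because appendBuffer throws if called while the SourceBuffer is still updating.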

[1] https://github.com/scottlamb/moonfire-nvr/blob/master/design...



