
Encoding and storage aren't significant, relative to the bandwidth costs. Bandwidth is the high order bit.

The primary difference between live and static video is the bursts -- get to a certain scale as a static video provider, and you can roughly estimate your bandwidth 95th percentiles. But one big live event can blow you out of the water, and push you over into very expensive tiers that will kill your economics.
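A rough sketch of why one big event is so costly under burstable (95th-percentile) billing. All numbers here are hypothetical, just to show the mechanics: the billing model drops the top 5% of samples, which covers only ~36 hours in a 30-day month, so bursts beyond that window set the billable rate.

```python
def billable_rate(samples_gbps):
    """95th-percentile billing: drop the top 5% of samples, bill the max of the rest."""
    ranked = sorted(samples_gbps)
    return ranked[int(len(ranked) * 0.95) - 1]

SAMPLES_PER_HOUR = 12               # one sample every 5 minutes
MONTH = 30 * 24 * SAMPLES_PER_HOUR  # 8640 samples in a 30-day month

steady = [10.0] * MONTH             # steady ~10 Gbps static-video baseline

# One 30-hour live event at 60 Gbps: fits inside the dropped 5% (36 hours).
short_event = steady[:]
for i in range(30 * SAMPLES_PER_HOUR):
    short_event[i] = 60.0

# 40 hours of 60 Gbps bursts: exceeds the 36-hour window.
long_event = steady[:]
for i in range(40 * SAMPLES_PER_HOUR):
    long_event[i] = 60.0

print(billable_rate(steady))       # 10.0
print(billable_rate(short_event))  # 10.0 -- burst hidden inside the dropped 5%
print(billable_rate(long_event))   # 60.0 -- billable rate jumps to the burst tier
```

The cliff is the point: a provider sized for a steady 10 Gbps suddenly pays for 60 Gbps the moment its big-event hours exceed the free-burst window.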



> But one big live event can blow you out of the water, and push you over into very expensive tiers that will kill your economics.

But if you're broadcasting something live and what's killing you is that everyone wants to watch it at the same time... wouldn't you serve it P2P so that everyone is downloading it from each other rather than you?


P2P is going to be a big challenge for tons of reasons. Set top boxes aren't going to play. Lots of people are behind NATs that make it hard to connect. Mobile is battery sensitive and sending to peers is going to eat more battery. Some users pay for every byte they send, and they won't want to pay for you to save operating costs. Plus all the other stuff everyone said about latency.


FYI, apparently doing that (making a P2P system to offload your links to the users) is illegal in China.

> Due to the proliferation of P2P CDN (PCDN for short), which consumes large amounts of home-broadband uplink bandwidth at the central office, increases operational pressure, and cannibalizes telecom operators' traditional CDN revenue, operators technically detect subscribers' traffic. If a user's upstream volume exceeds a certain threshold, the connection is throttled or the user's Internet service is cut off entirely. When the user complains, they are required to confirm that they have not used, or have removed, the corresponding PCDN device before normal access is restored, thereby preventing subscribers from overusing home broadband and infringing on the operators' interests.

https://zh.wikipedia.org/wiki/內容傳遞網路#P2P_CDN


I doubt that live(!) P2P video sharing would work. You will have some users who get the video stream directly from you. These primary peers will then need to relay the same data through their tiny consumer DSL line (slow upload!) to secondary peers. These secondary peers will have a noticeable lag. It will get even worse when you have tertiary peers.
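A back-of-envelope sketch of how that lag compounds. The numbers are hypothetical (5 Mbps stream, 2-second chunks, a 2 Mbps DSL uplink dedicated to one downstream peer), and it simplifies by assuming a peer forwards a chunk only after fully receiving it:

```python
# Rough sketch of lag accumulating across P2P relay tiers.
CHUNK_SECONDS = 2.0   # chunk duration (in the range of typical HLS/DASH segments)
STREAM_MBPS = 5.0     # stream bitrate
UPLOAD_MBPS = 2.0     # tiny consumer DSL uplink, feeding one peer

# Time for a peer to relay one chunk to the next tier:
chunk_megabits = STREAM_MBPS * CHUNK_SECONDS
relay_seconds = chunk_megabits / UPLOAD_MBPS   # 5.0 s per hop

for tier in range(4):
    lag = tier * relay_seconds
    print(f"tier {tier}: ~{lag:.0f}s behind the origin")
```

With the uplink slower than the stream bitrate, each tier falls another several seconds behind, which is exactly the tertiary-peer problem described above.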


One great thing about P2P is you can provide more peers. You can surge inexpensive machines near your market and drastically reduce the load on your main servers.

And home connections, while still largely asymmetric, are much faster than they used to be. Having 10 Mbps up means one client can serve two more. And there's a lot more FTTP with 100–1000 Mbps up too. These really make a difference when you have a large swarm.
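A quick sketch of the difference the uplink makes, assuming a hypothetical 5 Mbps stream and an idealized fanout tree with no churn:

```python
import math

def tiers_needed(viewers, fanout):
    # A fanout-k tree reaches roughly k**d peers at depth d,
    # so covering `viewers` takes about log_k(viewers) tiers.
    return math.ceil(math.log(viewers, fanout))

STREAM_MBPS = 5
dsl_fanout = 10 // STREAM_MBPS      # 10 Mbps up: each peer feeds 2 more
ftth_fanout = 1000 // STREAM_MBPS   # 1 Gbps up: each peer feeds 200 more

print(tiers_needed(1_000_000, dsl_fanout))   # 20 tiers for a million viewers
print(tiers_needed(1_000_000, ftth_fanout))  # 3 tiers
```

Since every tier adds relay delay, cutting the tree from 20 levels to 3 is where fast uplinks pay off in a large swarm.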


A problem with live is that everyone wants the content at the same time. One client can only serve two more after it has the content. Any drop in connection is also very disruptive because you don't have a big buffer and everyone wants the content now.

A place this could work is streaming a conference, live-ish is the goal and the producers aren't rich. Sports would be the worst case.


> A problem with live is that everyone wants the content at the same time.

Isn't the point of the P2P approach that it gets better the more this is true?


No, not really on those timescales. If it's about a popular show that's released the whole season today, yeah absolutely. Pulling ep1 from my neighbour while they watch ep2 makes sense.

It doesn't really work for something you want to watch simultaneously and reliably. I have to wait for my neighbour to get the chunk I want before I can get it. If they got it from someone else, the chain gets longer still, and on top of that there's all the coordination overhead of figuring out which peer to actually fetch each chunk from.

Hearing the street cheer while I watch my national team captain take a runup for a penalty is really quite bad.


But the problem is that you have a gigantic audience. Many of them will make effective primary peers. If that weren't true, you wouldn't have a problem in the first place.


If they're not significant, then why does youtube build ASICs for doing video encoding? See e.g., https://arstechnica.com/gadgets/2021/04/youtube-is-now-build...


If you make a billion, a 1% saving is 10 million. You can hire and fund a lot of activity with 10 million.

If you make 1 million, 10k isn't going to go very far towards paying devs to save you 1%.


Because when you are Youtube, even relatively marginal cost improvements can be huge in absolute. There is also the UX of having to wait X minutes for an uploaded video to be ready that is improved by this.


Doing so wouldn’t hurt and would make a sizable impact at the scale of Google?


AFAICT, the answer to "why does Google do X" is basically always "because someone needed a launch to point at when they're up for promotion".


Because significance varies, as does optimisation. At YouTube scale it might matter more, or the benefits might be bigger, even if just to save some energy or carbon footprint (and even that might be just for a compliance or marketing line).


VA-API, NVENC,

nvenc > See also: https://en.wikipedia.org/wiki/Nvidia_NVENC#See_also

NVIDIA Video Codec SDK v12.1 > NVENC Application Note: https://docs.nvidia.com/video-technologies/video-codec-sdk/1... :

> NVENC Capabilities: encoding for H.264, HEVC 8-bit, HEVC 10-bit, AV1 8-bit and AV1 10-bit. This includes motion estimation and mode decision, motion compensation and residual coding, and entropy coding. It can also be used to generate motion vectors between two frames, which are useful for applications such as depth estimation, frame interpolation, encoding using other codecs not supported by NVENC, or hybrid encoding wherein motion estimation is performed by NVENC and the rest of the encoding is handled elsewhere in the system. These operations are hardware accelerated by a dedicated block on GPU silicon die. NVENCODE APIs provide the necessary knobs to utilize the hardware encoding capabilities.

FFMPEG > Platform [hw video encoder] API Availability table: https://trac.ffmpeg.org/wiki/HWAccelIntro#PlatformAPIAvailab... :

> AMF, NVENC/NVDEC/CUVID (CUDA, cuda-nvcc and libnpp) (NVIDIA), VCE (AMD), libmfx (Intel), MediaCodec, Media Foundation, MMAL, OpenMAX, RockChip MPP, V4L2 M2M, VA-API (Intel), Video Toolbox, Vulkan



