I've been working on Mukbang 3D for the past year and a half: an iOS app that converts food videos into interactive 3D models using photogrammetry. Record a short video of food, and viewers can rotate, zoom, and explore it while the video plays.
Why food? It's static, and AI 3D models do not make food that I want to eat. With photogrammetry, high-quality reconstructions of real food look tasty, which gives me an easy qualitative metric.
Previously the app only produced 3D models and threw away the original video. Incorporating the underlying videos both shows new users what kind of content they're supposed to record (i.e. a 1-second video of a dimly lit pizza box is NOT going to produce good results), and it makes the output shareable content.
I recently added pose tracking of the 3D model so I can overlay 3D effects onto the underlying video.
It sounds like you want adaptive bitrate streaming. This blog post [1] probably does the topic better justice than I could.
I think it's similar to what you were describing, but the lowest-latency solution seems to be sending multiple streams at different bitrates simultaneously (simulcast) and letting WebRTC pick the best one it can receive.
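The selection side of adaptive bitrate can be sketched roughly like this. The bitrate ladder and throughput numbers below are made-up for illustration; a real player drives this from live bandwidth estimates:

```python
# Sketch of adaptive-bitrate rendition selection (illustrative only).
# LADDER is a hypothetical set of encoded renditions, in bits per second.
LADDER = [250_000, 800_000, 2_500_000, 6_000_000]

def pick_rendition(estimated_throughput_bps, safety=0.8):
    """Pick the highest-bitrate rendition that fits within a safety
    margin of the estimated throughput; fall back to the lowest."""
    budget = estimated_throughput_bps * safety
    candidates = [b for b in LADDER if b <= budget]
    return max(candidates) if candidates else min(LADDER)

print(pick_rendition(4_000_000))  # 2500000: highest rung under the 3.2 Mbps budget
print(pick_rendition(200_000))    # 250000: below the ladder, take the floor
```

The safety margin is the interesting knob: too aggressive and you stall, too conservative and you ship blurry video on a good link.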
This is a cool game concept and I feel like it compressed a lot of geometry intuition into a short period of time. I have a math degree but managed to never take a geometry class in college or high school, so this was the first time I've had my (non-existent) knowledge of geometry "graded."
I hope more games like this can be incorporated into the formal educational process in the future; I feel like my childhood video game addiction could have been exploited by the education system just as much as the gaming companies, but with a better outcome.
Maybe the same type of game could be made for other subjects, too.
I'd like to see the concept extended into 3D with augmented reality and a limited set of construction tools. Maybe I'll try to do that if I get the time.
Also, I just realized that I only played the tutorial! There goes my morning.
People have a wide range of reactions to FAANG interviews. One of my friends has panic attacks when thinking about interviews. I actually find LeetCode questions to be a fun way to spend an evening, and I like being interviewed, but I know I'm in the minority.
I consider myself a deep thinker too. I used to be slower at solving interview questions, and I got faster over time.
The key to doing these problems is not memorization. I haven't memorized binary search; that's not why I'm fast. But I know the concepts and can reproduce it at will, maybe with a bug or two that I iron out while walking through it.
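To make that concrete, binary search is the kind of thing you can re-derive from its invariant ("if the target is present, its index lies in [lo, hi)") rather than from memory. A minimal sketch:

```python
def binary_search(arr, target):
    """Return an index of target in sorted arr, or -1 if absent.
    Invariant: if target is in arr, its index lies in [lo, hi)."""
    lo, hi = 0, len(arr)
    while lo < hi:
        mid = (lo + hi) // 2
        if arr[mid] == target:
            return mid
        elif arr[mid] < target:
            lo = mid + 1   # target can only be to the right of mid
        else:
            hi = mid       # target can only be to the left of mid
    return -1

print(binary_search([1, 3, 5, 7, 9], 7))  # 3
```

Walking through the invariant is also exactly how you catch the classic off-by-one bugs (half-open vs. closed bounds) during the interview itself.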
Solving problems in 40 minutes is actually the very last step in the learning process. So not being able to do that just means you haven't completed your training yet.
The process of practicing different problems over and over helps you see the patterns across different types of problems. Similar problems will have similar code structure, data structures, and algorithmic choices.
If I hear a new song on the radio, I can guess which band it is before the singing starts, because bands often have the same style of songs. It's the same thing when I see a coding problem and choose how to solve it. It's an impulse I learned from training; it has nothing to do with me being a quick thinker.
Some people on Blind mention doing hundreds of LeetCode questions before FAANG interviews. Some still fail. I've failed plenty.
The paper by Elizabeth Derryberry [1] is really fascinating.
They classified four distinct dialects of white-crowned sparrows across urban and rural communities and studied how the birds' songs changed when background noise levels dropped during the pandemic. They measured a doubling of the signal-to-noise ratio of bird songs against urban noise, which doubled the distance at which you could hear the birds and led to a 4-fold increase in the number of birds you could hear during that time.
I recently learned how to distinguish between different bird types by appearance and their songs thanks to Merlin Bird ID, which is essentially a Pokédex for birds. Being able to identify different bird species has been eye-opening since I had never paid much attention to the different songs and behaviors of different birds.
Anyone can record audio, and the spectrogram of the bird songs in the recording can be used to classify the species. It's super accessible to anyone who's interested in getting started.
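Under the hood, that pipeline usually starts by slicing the recording into short overlapping frames and taking an FFT of each; the resulting spectrogram is what the classifier actually sees. A toy sketch with a synthetic tone standing in for a bird call (the frame size and sample rate are arbitrary choices for the demo, not what any particular app uses):

```python
import numpy as np

def spectrogram(signal, frame_size=256, hop=128):
    """Magnitude spectrogram: one windowed FFT per overlapping frame."""
    frames = [signal[i:i + frame_size]
              for i in range(0, len(signal) - frame_size + 1, hop)]
    window = np.hanning(frame_size)
    return np.array([np.abs(np.fft.rfft(f * window)) for f in frames])

fs = 1000                           # sample rate in Hz, arbitrary for the demo
t = np.arange(fs) / fs              # one second of "audio"
tone = np.sin(2 * np.pi * 125 * t)  # a pure 125 Hz tone as a stand-in call
spec = spectrogram(tone)

# Bin k covers k * fs / frame_size Hz, so the 125 Hz tone
# peaks at bin 125 / (1000 / 256) = 32 in every frame.
print(spec.shape, int(spec[0].argmax()))  # (6, 129) 32
```

A real bird-song classifier feeds images like this (time on one axis, frequency on the other) into a model trained on labeled recordings.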
The idea that birds can modulate their signal strength based on noise level is remarkable to me. But what exactly are they transmitting? It seems like the information content must be more than 'basic' calls for mating partners, threat management, and perhaps food. It makes me wonder what else birds may care about.
$ exiftool chatgpt_image.png
...
Actions Software Agent Name : GPT-4o
Actions Digital Source Type : http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgori...
Name : jumbf manifest
Alg : sha256
Hash : (Binary data 32 bytes, use -b option to extract)
Pad : (Binary data 8 bytes, use -b option to extract)
Claim Generator Info Name : ChatGPT
...