DeepDreamVideo (github.com/graphific)
145 points by albertzeyer on July 10, 2015 | 30 comments


This must be one of the most disturbing things I have ever seen...

(link to YouTube version) https://www.youtube.com/watch?v=oyxSerkkP4o


It would be far more terrifying if the rate of change was less rapid and there was a higher degree of continuity between frames.

Right now it's very noisy, presumably because the frames aren't really interdependent. The video description mentions a blending technique, but its primary purpose seems to be ensuring the underlying video is not completely overwritten by the "dream" generation feeding into itself.
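
Roughly, I'd guess the per-frame loop looks something like this (a minimal sketch, not the repo's actual code: deepdream_fn stands in for Google's gradient-ascent step, and the 50/50 blend ratio is just an assumption):

    import numpy as np
    import PIL.Image

    def dream_video(frame_paths, deepdream_fn, blend=0.5):
        # Mix the previous dreamed frame into each new input frame so the
        # hallucinations carry over without completely replacing the video.
        prev = None
        dreamed = []
        for path in frame_paths:
            frame = np.float32(PIL.Image.open(path))
            if prev is not None:
                frame = (1.0 - blend) * frame + blend * prev
            prev = deepdream_fn(frame)  # one gradient-ascent "dream" pass
            dreamed.append(np.uint8(np.clip(prev, 0, 255)))
        return dreamed

With blend closer to 1.0 the feedback dominates and the source video disappears; closer to 0.0 you lose the frame-to-frame continuity entirely.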


Looks like some really weird progress has been made. NSFW I think? https://www.reddit.com/r/deepdream/comments/3cesna/with_opti...



Here's one to top that: https://vimeo.com/132951073


Wow...that is deeply unsettling. Watching it caused me to experience actual physical stress and anxiety.


Interesting - it unsettled my wife in the same way, but my reaction was "woaaaah, that's cool!"


Have you or your wife had any big psychedelic experiences? If you haven't and your wife has, that's probably the explanation. The Deep Dream animations trigger flashbacks like nothing I've seen before.


I'd like to see this for a network trained on more neutral images, such as places. See for instance the bottom image of [1]. I imagine it would be cool to continuously zoom and iterate through this image in a video.

[1] http://googleresearch.blogspot.ch/2015/06/inceptionism-going...


This guy [1] made an interactive streaming version.

It's a live feedback loop stream where you can suggest words to influence the dream.

Running on Twitch [2]. Very cool.

[1] https://317070.github.io/Dream/

[2] http://www.twitch.tv/317070


I just wasted an hour on there; it's amazing.


There's a nice live version of this effect on Twitch that you can guide by typing things in chat: http://www.twitch.tv/317070


This isn't using Google's tech, though; they built their own version before Google released any of the code. One of the creators actually started a job at Google right after releasing this.


Not yet, I start in about a week! :)


I was really enjoying this until someone put "tarantula" in the chat.


Ha, I was there for that. Ick. The fun thing is to go full screen, cover up the upper left corner for a couple minutes, then try to guess what it's drawing. I got it right a couple of times.


Centipede and cockroach are the best.


This isn't exactly Deep Dream's equivalent for video. It processes frames individually (and then smushes them back together), rather than having a unified neural network that takes the entire video at once.


It's also using low-quality JPEGs as source and destination output, so a good chunk of what you're seeing is the robot dreaming about JPEG artifacts.


I'm really curious to see how much the original training material affects these images.

Is everyone using the same source? There's a lot of doge in there, along with that bird that always pops up. Why are faces so prominent? Is that an artefact of the training data, or inherent in the algorithm? I would guess the former.


It was trained on ImageNet. 30-40% of ImageNet is just distinguishing different breeds of dogs. And the remainder is mostly other types of animals and random objects.


All the versions of this I've seen so far are using the model from Google's GitHub page: https://github.com/google/deepdream
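
Loading it follows Google's notebook pretty closely. A minimal sketch, assuming the model files have been downloaded from the Caffe model zoo into a local directory (the notebook also patches the prototxt to set force_backward: true so gradients can flow back to the input image):

    import numpy as np
    import caffe

    model_dir = 'models/bvlc_googlenet/'  # assumed local path, not a fixed value
    net = caffe.Classifier(model_dir + 'deploy.prototxt',
                           model_dir + 'bvlc_googlenet.caffemodel',
                           mean=np.float32([104.0, 116.0, 122.0]),  # ImageNet mean (BGR)
                           channel_swap=(2, 1, 0))                  # RGB -> BGR ordering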


Does anybody know why there are "flashes" of images? For instance, dogs appear in pulsating patterns; is there a reason for that?


It's a combination of the fact that the source video takes place in a club with flashes/strobe effects going on, and the fact that the algorithm looks at things frame by frame (more or less). When the strobe flashes, the network sees more/different details than when it's presented with a dark frame, and it interprets those as a dog.


ACID compliance just took on a whole different meaning in computing.


Moore's Law + Deep Dreams + AR = a weird and somewhat disturbing future. I am ready. I have prepared.


Interesting. Tough to evaluate quality; it's very subjective. Still, it seems like quite a bit of progress has been made since Hinton's networks dreaming of digits in 2006 - http://www.cs.toronto.edu/~hinton/digits.html

I remember doing something similar a while back, only using nearest-neighbor search over a relatively large dataset (1M human faces at different scales/etc., or sounds in MP3 voice recordings), rather than passing things up and down the neural net. The result was very similar. I wonder if one could get a comparable result with the dogs as well, just by using nearest neighbor. A good baseline...
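
Something like this patch-wise lookup is what I mean; a rough sketch, with the patch size, grayscale frames, and the flattened reference-patch array all being assumptions on my part:

    import numpy as np
    from sklearn.neighbors import NearestNeighbors

    def nn_hallucinate(frame, ref_patches, patch=16):
        # frame: HxW grayscale array; ref_patches: N x (patch*patch) reference set.
        # Replace each block of the frame with its nearest neighbor from the set.
        index = NearestNeighbors(n_neighbors=1).fit(ref_patches)
        out = frame.astype(np.float32).copy()
        for y in range(0, frame.shape[0] - patch + 1, patch):
            for x in range(0, frame.shape[1] - patch + 1, patch):
                query = out[y:y+patch, x:x+patch].reshape(1, -1)
                _, idx = index.kneighbors(query)
                out[y:y+patch, x:x+patch] = ref_patches[idx[0, 0]].reshape(patch, patch)
        return out

Run it repeatedly (or zoom slightly between passes) and the output drifts toward whatever the reference set contains, which is roughly the dreaming effect.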


This is the stuff of nightmares, congratulations!


https://www.reddit.com/r/deepdream/ has many static images with this effect (and a few videos too).


Every time I see the Deep Dream demos, I find so many "eyes" in them. Very, very creepy.




