Hacker News | csmoak's comments

whenever i hear about a Cray, i'm always reminded of the scene in the movie Sneakers where they sit on one to have a conversation


I never realized; thanks for pointing that out. Sneakers is one of my favorite movies.


i made some art on this site years ago. some people used this to make plottable art. plotting it is definitely a slower way to watch it work through a drawing :)


Turtletoy was made back around 2018, before WebAssembly was generally available.


you can download each as an SVG and then render it out at any resolution using something like Inkscape
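
for example (this assumes Inkscape 1.x command-line flags, and the filenames are just placeholders):

    import subprocess

    # render a downloaded Turtletoy SVG to a 4096 px wide PNG with Inkscape 1.x
    # (older 0.9x releases used --export-png instead of --export-filename)
    subprocess.run([
        "inkscape", "drawing.svg",
        "--export-type=png",
        "--export-width=4096",
        "--export-filename=drawing.png",
    ], check=True)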


The main benefit I see is being able to represent different light sources more accurately. This applies not just to transmission but also to reflectance.

sRGB and P3, the color spaces most displays use, are by definition built around the D65 illuminant, which approximates midday sunlight in northern Europe. So when you render something lit differently, say an indoor scene, you end up changing the materials' RGB, changing the emissive RGB of the light source, or tonemapping the result, all of which only approximate other light sources to some extent. Spectral rendering lets you represent those other light sources more faithfully.
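
A rough illustration (the Gaussian color matching functions, the blackbody stand-in for D65, and the surface reflectance below are all simplified/made up): the same physical reflectance comes out as a different RGB triple depending on the illuminant's spectrum, which is exactly what an RGB-only pipeline has to fake by editing colors.

    import numpy as np

    lam = np.linspace(380e-9, 780e-9, 401)          # wavelengths in metres

    def planck(T):
        """Unnormalised blackbody spectral radiance at temperature T (kelvin)."""
        h, c, k = 6.626e-34, 2.998e8, 1.381e-23
        return (2 * h * c**2 / lam**5) / np.expm1(h * c / (lam * k * T))

    # very rough single-Gaussian stand-ins for the CIE 1931 colour matching
    # functions (the real x-bar also has a second lobe near 450 nm)
    def gauss(mu_nm, sigma_nm):
        return np.exp(-0.5 * ((lam * 1e9 - mu_nm) / sigma_nm) ** 2)
    xbar, ybar, zbar = gauss(600, 45), gauss(555, 50), gauss(450, 30)

    def to_srgb(spd):
        """Integrate an SPD against the CMFs, then convert XYZ -> linear sRGB."""
        xyz = np.array([np.trapz(spd * cmf, lam) for cmf in (xbar, ybar, zbar)])
        m = np.array([[ 3.2406, -1.5372, -0.4986],
                      [-0.9689,  1.8758,  0.0415],
                      [ 0.0557, -0.2040,  1.0570]])
        rgb = np.clip(m @ xyz, 0.0, None)
        return rgb / rgb.max()                      # normalise so the triples compare

    reflectance  = gauss(540, 60)                   # a made-up greenish surface
    daylight     = planck(6500)                     # crude stand-in for D65
    incandescent = planck(2856)                     # CIE illuminant A temperature

    print("under daylight:    ", to_srgb(reflectance * daylight))
    print("under incandescent:", to_srgb(reflectance * incandescent))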


Whether the benefit shows up in light sources, transparency, or reflectance depends on your goals and on what spectral data you use. The article's right that spiky spectral power distributions are where spectral rendering helps most.

> sRGB and P3, what most displays show, by definition use the D65 illuminant

I feel like that's a potentially confusing statement in this context, since it has no bearing on what kind of lights you use when rendering, nor on how well spectral rendering vs 3-channel rendering represents colors. The D65 whitepoint is used for normalization/calibration of those color spaces; it doesn't say anything about your scene light sources, nor does it affect their spectra.

I’ve written a spectral path tracer, and most of the time I find it hard to justify the extra complexity and cost, but there are definitely cases where it matters and is useful. There’s also probably more measured spectral data available now than when I was playing with it.

I’m sure you’re aware and this is what you meant, but it might be worth spelling out that it’s the interaction of multiple spectra that matters in spectral rendering. It does nothing for the rendered color of a light source viewed directly; it only matters when light is reflected off, or transmitted through, materials whose spectra differ from the light’s, and that’s where wavelength sampling gives you a different result than a 3-channel approximation.
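
A toy example of that interaction effect (the spectra and the 100 nm box-filter "camera" below are made up purely for illustration): a narrow emission line hitting a sharp reflectance edge gives a noticeably different answer when you multiply spectra per wavelength versus multiplying pre-baked RGB triples.

    import numpy as np

    lam = np.linspace(400, 700, 301)                     # wavelength in nm

    # crude "camera": average the spectrum over three 100 nm boxes (R, G, B)
    bands = [(600, 700), (500, 600), (400, 500)]
    def to_rgb(spectrum):
        return np.array([spectrum[(lam >= lo) & (lam < hi)].mean()
                         for lo, hi in bands])

    # made-up spiky light: weak broadband base plus a narrow 640 nm emission line
    light = 0.05 + np.exp(-0.5 * ((lam - 640) / 5) ** 2)

    # made-up surface: reflects well below 620 nm, poorly above it
    surface = np.where(lam < 620, 0.9, 0.05)

    spectral_first = to_rgb(light * surface)             # multiply spectra, then downsample
    rgb_first      = to_rgb(light) * to_rgb(surface)     # downsample, then multiply

    print("spectra multiplied, then converted to RGB:", spectral_first)
    print("converted to RGB, then multiplied:        ", rgb_first)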


The only applications I'm aware of that currently do spectral rendering on the fly are painting apps.

I have one called Brushwork ( https://brushworkvr.com ) that upsamples RGB from 3 channels to a larger number of samples spread across the visible spectrum, mixes paint in the upsampled space, and then downsamples for rendering (the upsampling approach that app uses is from Scott Burns: http://scottburns.us/color-science-projects/ ). FocalPaint on iOS does something similar ( https://apps.apple.com/us/app/focalpaint/id1532688085 ).
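
the shape of that pipeline, very roughly (this is not the Burns upsampling, and a real app would downsample through color matching functions; the Gaussian bumps and geometric-mean mix below are just placeholders so the steps are runnable):

    import numpy as np

    lam = np.linspace(400, 700, 31)        # 31 spectral samples instead of 3 channels

    # placeholder upsampling basis: one smooth bump per primary
    basis = np.stack([np.exp(-0.5 * ((lam - mu) / 40.0) ** 2) for mu in (610, 545, 465)])

    def upsample(rgb):
        """RGB in 0..1 -> a smooth, reflectance-like spectrum."""
        spec = np.array(rgb) @ basis / basis.sum(axis=0)
        return np.clip(spec, 1e-4, 1.0)

    def downsample(spec):
        """Spectrum -> RGB by projecting back onto the same bumps
        (a stand-in for integrating against colour matching functions)."""
        return np.clip(basis @ spec / basis.sum(axis=1), 0.0, 1.0)

    def mix_paint(rgb_a, rgb_b, t=0.5):
        """Mix in the upsampled space; a weighted geometric mean of reflectances
        is a simple stand-in for subtractive pigment mixing."""
        mixed = upsample(rgb_a) ** (1.0 - t) * upsample(rgb_b) ** t
        return downsample(mixed)

    blue, yellow = [0.15, 0.3, 0.8], [0.9, 0.8, 0.1]
    print("mixed in spectral space:", mix_paint(blue, yellow))
    print("plain RGB lerp:         ", 0.5 * np.array(blue) + 0.5 * np.array(yellow))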

I'm happy that tech like this will open up more apps to use spectral rendering.


i would expect the denser region to correspond to the smaller gamut you can reach with paint, since we've been naming those colors for far longer than the larger gamut a screen can produce. The paint/print gamut looks a lot like the denser parts of these scatter plots within the larger sRGB cube (though the paint gamut isn't entirely contained within sRGB).


there is a cool project that explores this topic: https://coolcolors.lbl.gov/


Dreams on PlayStation and Unbound on PC both use SDFs (signed distance fields) to let users make truly round objects for games
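
the core idea, sketched (not how either game implements it internally): a sphere SDF is an exact analytic distance rather than a triangle mesh, so there are no facets to begin with, and a smooth-min blend keeps unions round too.

    import numpy as np

    def sphere_sdf(p, center, radius):
        """Exact signed distance to a sphere: negative inside, zero on the
        surface, positive outside. No triangles, so the surface is truly round."""
        return np.linalg.norm(p - center) - radius

    def smooth_union(d1, d2, k=0.25):
        """Polynomial smooth-min (a common SDF blend), which merges two shapes
        with a rounded fillet instead of a hard seam."""
        h = np.clip(0.5 + 0.5 * (d2 - d1) / k, 0.0, 1.0)
        return d2 + (d1 - d2) * h - k * h * (1.0 - h)

    p = np.array([0.3, 0.1, 0.0])
    d = smooth_union(sphere_sdf(p, np.array([0.0, 0.0, 0.0]), 0.5),
                     sphere_sdf(p, np.array([0.6, 0.0, 0.0]), 0.3))
    print("signed distance at p:", d)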


a diffraction grating wouldn't give you a controlled lighting environment (illuminant). they seem to handle that issue here by using a known spectral reference chart, which might let them work in any normal lighting environment.
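
i don't know exactly how they use the chart, but one thing a known reference chart makes possible is solving for the illuminant by least squares, roughly like this (every spectrum and sensor curve below is made up purely for illustration):

    import numpy as np

    lam = np.linspace(400, 700, 61)
    rng = np.random.default_rng(0)

    # knowns (all invented here): reflectance spectra of the chart's patches
    # and the camera's three sensor response curves
    patches = np.clip(rng.normal(0.5, 0.2, size=(24, lam.size)), 0.05, 0.95)
    sensors = np.stack([np.exp(-0.5 * ((lam - mu) / 35.0) ** 2) for mu in (610, 540, 465)])

    # the unknown illuminant, modelled with a handful of smooth basis functions
    illum_basis = np.stack([np.exp(-0.5 * ((lam - mu) / 60.0) ** 2)
                            for mu in (420, 500, 580, 660)])

    def observe(illuminant):
        """Simulated camera RGB for every chart patch under a given illuminant."""
        return (patches * illuminant) @ sensors.T            # shape (24, 3)

    true_illum = 0.3 + 0.7 * np.exp(-0.5 * ((lam - 620) / 80.0) ** 2)   # warm light
    rgb = observe(true_illum)

    # each basis function contributes linearly to every observed RGB value,
    # so the illuminant weights fall out of an ordinary least-squares solve
    A = np.stack([observe(b).ravel() for b in illum_basis], axis=1)     # (72, 4)
    w, *_ = np.linalg.lstsq(A, rgb.ravel(), rcond=None)
    est_illum = w @ illum_basis

    print("relative error:", np.linalg.norm(est_illum - true_illum) / np.linalg.norm(true_illum))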


I would think that, in the same environment, you would take images immediately before and after adding the sample.

