u(t) is called 60 times per second.
t: Elapsed time in seconds.
S: Shorthand for Math.sin.
C: Shorthand for Math.cos.
T: Shorthand for Math.tan.
R: Function that generates rgba-strings, usage ex.: R(255, 255, 255, 0.5)
c: A 1920x1080 canvas.
x: A 2D context for that canvas.
A (very inefficient) physically based renderer in a Dweet! :) Give it 5 mins for reasonable convergence; the longer the better (Firefox is fastest). I know this is not the most interesting visually, but I found it challenging to make, and didn't fully understand the result. So the following hand-wavy write-up is more of an analysis as I attempt to understand what's going on. I am not a PBR expert, so I probably don't know what I'm talking about.
This approach conceptually follows Photon Mapping, where light rays are forward traced, bouncing around the scene recursively and accumulating radiance into a spatial map on each bounce, essentially painting reachable surfaces with light. Separately, a ray for each pixel is cast into the scene to retrieve the current radiance at the position it intersects. From here, the implementation deviates substantially from the original algorithm: rays are marched, the map only stores irradiance, and it uses spatial hashing instead of a kd-tree. However, the most interesting differences are in surface interactions. BSDFs are the standard abstraction for reflectance and transmission, but they are far too large to implement. Through "experimentation", AKA messing around on Dwitter, I found a roughly Lambertian reflectance distribution to emerge from a simple random march. Lambertian means a perfectly diffuse surface, which causes uniform propagation through 3D space but a necessarily non-uniform distribution of reflected angles (this is worth taking some time to understand separately if it's not immediately intuitive).
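The flow described so far can be sketched roughly like this. This is a toy reconstruction with made-up helpers (a flat floor scene and naive cube sampling), not the dweet's actual packed code:

```javascript
// Toy sketch of the photon pass: march a ray until it embeds in a surface,
// deposit one photon count into a spatial-hash cell, pick a new random
// direction, repeat. All helpers here are stand-ins, not the dweet's code.
const map = {};                                    // spatial hash: cell key -> photon count
const inSurface = ([x, y, z]) => y < 0;            // toy scene: solid below y = 0
const randomDirection = () =>                      // a point in a cube (see later discussion)
  [Math.random() - 0.5, Math.random() - 0.5, Math.random() - 0.5];

function tracePhoton(pos, dir, bounces = 4, maxSteps = 100) {
  for (let b = 0; b < bounces; b++) {
    let steps = 0;
    do {                                           // march until embedded in a surface
      pos = pos.map((p, i) => p + dir[i]);
    } while (!inSurface(pos) && ++steps < maxSteps);
    if (steps >= maxSteps) return;                 // ray left the scene: drop it
    const cell = pos.map(p => ~p).join();          // bin position into an integer grid
    map[cell] = (map[cell] | 0) + 1;               // init-and-increment irradiance count
    dir = randomDirection();                       // new random march direction
  }
}

tracePhoton([0, 5, 0], [0, -1, 0]);                // fire one photon straight down
console.log(map);                                  // at least the cell '-1,0,-1' is lit
```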
With ray marching, each intersection naturally ends up embedded in a surface. Instead of correcting this, a new direction is randomly chosen and the ray is sent on its way. Depending on the surface intersection depth and the angle of reflection, there is a chance the ray will escape (reflect) or remain trapped (absorbed), with the chance being greatest when reflecting in line with the surface normal and lowest when parallel to the surface. Note how the surface, depth, and reflection ray form a right triangle: for the reflection ray to escape, the depth must be less than cos(θ) at a step size of 1. Therefore, assuming a uniformly random depth between 0 and 1, the reflectance distribution follows Lambert's cosine law. Additionally, unlike a BSDF this provides random absorption for free, but with the disadvantage that the reflectance distribution and absorption rate are inextricably linked. Another significant difference is that BSDFs are a statistical abstraction designed to extract the most value from each ray by modulating its radiance, representing many photons; whereas this model effectively represents a single photon at each surface intersection. Absorbed rays are thrown away rather than modulated, and each irradiance accumulation is an integer increment. The implication is that the random march requires far more rays to converge, because each ray has a binary existence and intensity, like a photon; on the other hand, each intersection is computationally far simpler than a BSDF.
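The escape rule above is easy to check empirically. This sketch (my own illustration, not code from the dweet) samples a uniform embed depth and confirms that the escape probability for a reflection angle θ matches cos(θ), i.e. Lambert's cosine law:

```javascript
// Monte Carlo check: with a uniform random embed depth in [0,1) and the
// escape condition depth < cos(theta), P(escape | theta) = cos(theta).
function escapeProbability(theta, samples = 200000) {
  let escaped = 0;
  for (let i = 0; i < samples; i++) {
    const depth = Math.random();            // uniform embed depth at step size 1
    if (depth < Math.cos(theta)) escaped++; // reflection ray clears the surface
  }
  return escaped / samples;
}

console.log(escapeProbability(0).toFixed(2));           // 1.00: along the normal, certain escape
console.log(escapeProbability(Math.PI / 3).toFixed(2)); // ~0.50 = cos(60 degrees)
```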
There are a few more gnarly details that complicate this. For simplicity I suggested trapped rays are immediately absorbed, which is possible, but it's simpler to allow them to rattle around a bit. This "widens" the reflectance distribution and increases overall reflectance, to a degree depending on how many consecutive intersections are allowed. But it also changes the overall behaviour by producing a slight subsurface scattering effect, like skin or paper: surfaces soften and glow as rays traverse them, and weak points like corners diffuse as some rays tunnel through. Secondly, rays with a higher angle of incidence (ωi) have a lower maximum surface depth, yet the reflectance step length is unaffected. This difference distorts the reflectance distribution as ωi increases: when ωr < ωi escape is certain, and when ωr > ωi the chance abruptly returns to cos(ωr)/cos(ωi), i.e. min(1, cos(ωr)/cos(ωi)). At ωi=0 the distribution is perfectly Lambertian, and at ωi=π/2 it is unnaturally uniform in angle. AFAIK this is not a physically realistic reaction to ωi. It almost resembles Fresnel reflection, where transmissive surfaces reflect more at grazing angles; however, Fresnel is anisotropic, like a mirror, whereas here the distribution is centred around the surface normal. This is still a simplified analysis: it ignores subsurface marching, and how adjacent ωr and ωi share pseudo-random variables (see below). The chaotic nature and compounding biases mean an accurate distribution can probably only be obtained empirically.
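My reading of the distorted escape chance, written out as a standalone function (again an illustration of the analysis, not the dweet's code):

```javascript
// Escape chance when incidence angle wi caps the embed depth at cos(wi)
// but the reflection ray at angle wr still needs depth < cos(wr):
// min(1, cos(wr)/cos(wi)). At wi = 0 this reduces to plain cos(wr).
function escapeChance(wi, wr) {
  return Math.min(1, Math.cos(wr) / Math.cos(wi));
}

console.log(escapeChance(0, Math.PI / 3).toFixed(2));    // 0.50: Lambertian at normal incidence
console.log(escapeChance(Math.PI / 3, Math.PI / 6));     // 1: wr < wi, escape is certain
```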
Finally, the ray step/direction vector must be randomised upon each intersection. Initialising random unit vectors takes a lot of code, so there are a few tricks going on here, but these tricks also affect the result and noticeably compromise accuracy. Rather than a unit vector I've settled for a point in a cube, where the length is anywhere between 0 and 1. This actually helps convergence by randomising depth at repeat surface intersections; however, the cube shape results in non-uniform propagation, so distance-based shading of all the axis-aligned surfaces looks a bit too flat. Doing this three times still takes a lot of code, so only Z is randomised upon each intersection and is then reused by cycling through the XY components, which provides a pretty bad but not-terrible distribution, apparently making it harder to reflect into corners. Z's randomisation combines t with one component of the ray position, so it is effectively a kind of feedback PRNG derived from the very scene geometry the randomness is being used to traverse, which I find kind of neat.
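A hypothetical reconstruction of that cycling trick. The mixing function here is made up (the dweet's actual hash differs); the point is the shape of it: one fresh pseudo-random value per intersection, with the old components rotated through X, Y and Z:

```javascript
// Component-cycling direction update: each intersection produces only ONE
// new pseudo-random value, derived from t and a ray-position component
// (a feedback PRNG seeded by the scene geometry itself), and the previous
// components are rotated so every value gets reused on each axis.
function nextDirection([dx, dy, dz], posComponent, t) {
  const z = Math.sin(posComponent * 9e3 + t); // crude hash, stand-in for the dweet's mix
  return [dy, dz, z];                         // rotate: old Y -> X, old Z -> Y, fresh -> Z
}

const d = nextDirection([0.1, 0.2, 0.3], 1.234, 5);
console.log(d[0], d[1]); // 0.2 0.3: the reused components
```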
wow just came back from my vacation in saudi arabia. im now back in my 10m penthouse in new york and this is the first post ive seen after coming back that has genuinely impressed me. good job!
Thanks. u is the dweet function, which I'm exploiting as a predefined object for the spatial map. The purpose of bitwise inverting ~ each component is to obtain a floored version of the ray position (that it happens to invert doesn't matter), e.g. [~0.1, ~1.2, ~2.3] = [-1, -2, -3]. Importantly, all positions between [0,1,2] and [1,2,3] will result in [-1,-2,-3], i.e. it is binning positions into an integer grid. When passed as a key to u, this is coerced into a string to become '-1,-2,-3'. The idea is to count photon intersections in each grid cell with u[k]++ as a kind of frequency-based irradiance, but as an object each cell must be initialised before incrementing, so u[k]|=Z uses bitwise again to coerce all values into an int: ORing with Z will convert undefined values to 0, and existing integer values will be unaffected because Z is always less than 1.
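The binning and init-and-increment trick in isolation (using |=0 here rather than |=Z, which as noted behaves the same because Z is always less than 1 and truncates to 0):

```javascript
// Bin a 3D position into an integer grid cell and count photons per cell.
const u = {};                       // stands in for the dweet function object
function deposit(x, y, z) {
  const k = [~x, ~y, ~z];           // bitwise NOT floors-and-negates: 0.1 -> -1, 1.2 -> -2, ...
  u[k] |= 0;                        // array key coerces to '-1,-2,-3'; undefined -> 0
  u[k]++;                          // accumulate irradiance as a photon count
}

deposit(0.1, 1.2, 2.3);
deposit(0.9, 1.9, 2.9);             // same cell: both bin to [-1,-2,-3]
console.log(u['-1,-2,-3']);         // 2
```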
Thanks tomxor. The math/physics behind this is too high for me ^^. My question is more about understanding the JS, specifically what u[k] is. k is an array which is passed to u. When k = [10, 20, 30], what is the result of u[k]++?
u/UEZ The property accessor [key] coerces key to a string. So when you do obj[a] and a is [1,2], it will be obj['1,2'] because [1,2].toString() is implicitly called.
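A two-line demonstration of that coercion:

```javascript
// An array used as a property key is coerced to a string via toString().
const obj = {};
const a = [1, 2];
obj[a] = 'hello';                // stored under the string key '1,2'
console.log(obj['1,2']);         // 'hello'
```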
I think you're getting hung up on the array-like key notation, but it's not being used as an array. Think of -1,-2,-3 as an arbitrary property name on an object; the u.-1,-2,-3 property syntax would be equivalent if not for the invalid chars. u[k]++ simply increments a number stored for the property k; ++ only works because that property value has been previously initialised using u[k]|=0, which will convert undefined values to 0.