Pete Shirley's Graphics Blog





Updated: 2018-01-15T08:31:56.696-08:00

 



Rendering the moon, sun, and sky

2017-12-29T04:10:58.417-08:00

A reader asked me about rendering the moon using a path tracer. This has been done by several people, and what's coolest about it is that you can do the whole thing with four spheres and not a lot of data (assuming you don't need clouds anyway).

First, you will need to deal with the atmosphere, which is most easily handled spectrally rather than in RGB because scattering has simple wavelength-based formulas. But you'll also have an RGB texture for the moon, so I would use the lazy spectral method.

Here are the four spheres -- the atmosphere sphere and the Earth share the same center. Not to scale (speaking of which, choose sensible units like kilometers or miles, and I would advise making everything a double rather than a float).

The atmosphere can be almost arbitrarily complicated, but I would advise making it all Rayleigh scatterers with constant density. You can also add more complicated mixtures and densities. To set the constant density, just try to get the overall opacity about right. A random web search yields this image from Martin Chaplin: this means something between 0.5 and 0.7 (which is probably good enough -- a constant-density model is probably a bigger limitation). In any case I would use the "collision" method, where the atmosphere looks like a solid object to your software and exponential attenuation is implicit.

For the Sun you'll need the spectral radiance for when a ray hits it. If you use the lazy binned RGB method and don't worry about absolute magnitudes because you'll tone map later anyway, you can eyeball the above graph and guess that for [400-500, 500-600, 600-700] nm you can use [0.6, 0.8, 1.0]. Maintaining absolute units is not a bad idea either -- it's good to do some unit tests on things like the luminance of the moon or sky. Data for the Sun is available in lots of places, but be careful to make sure it is spectral radiance, or convert it to that (radiometry is a pain).

For the Moon you will need a BRDF and a texture to modulate it. For a first pass use Lambertian, but that will not give you the nice constant-color moon. This paper by Yapo and Cutler has some great moon renderings, and they use the BRDF that Jensen et al. suggest. Texture maps for the moon, again from a quick Google search, are here.

The Earth you can make black, or give it a texture if you want Earth shine. I ignore atmospheric refraction -- see Yapo and Cutler for more on that.

For a path tracer with a collision method as I prefer, and implicit shadow rays (so the sun directions are more likely to be sampled but all rays are just scattered rays), the program would look something like this:

For each pixel
    For each viewing ray
        choose a random wavelength
        send the ray into the (moon, atmosphere, earth, sun) list of spheres
        if it hits something, scatter according to the pdf (simplest would be half isotropic and half toward the sun)

The most complicated object above is the atmosphere sphere, where the probability of a collision per unit length is proportional to 1/lambda^4. I would make the Rayleigh scattering isotropic just for simplicity, but using the real phase function isn't that much harder.

The picture below from this paper was generated using the techniques described above with no moon -- just the atmosphere.

There -- brute force is great -- get the computer to do the work (note, I already thought that way before I joined a hardware company). If you generate any pictures, please tweet them to me! [...]
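To make the wavelength-per-ray and "collision" ideas concrete, here is a minimal C++ sketch. It is not code from the post: the constant kSigmaRef and the helper names are assumptions chosen only to show the structure of sampling a wavelength and a collision distance in a constant-density Rayleigh atmosphere.

#include <cmath>
#include <cstdio>
#include <random>

std::mt19937 rng(12345);
std::uniform_real_distribution<double> uni(0.0, 1.0);

// One wavelength per ray, uniform in [400, 700] nm.
double sample_wavelength() { return 400.0 + 300.0 * uni(rng); }

// Rayleigh collision coefficient (per km), proportional to 1/lambda^4.
// kSigmaRef is whatever constant gets the overall opacity "about right"
// (roughly 0.5-0.7 through the whole atmosphere, per the post).
double rayleigh_sigma(double lambda_nm) {
    const double kSigmaRef = 0.02;     // assumed value at 550 nm, per km
    const double kRefLambda = 550.0;
    double r = kRefLambda / lambda_nm;
    return kSigmaRef * r * r * r * r;
}

// Distance to the next collision in a constant-density medium;
// exponential attenuation is implicit in this sampling.
double sample_collision_distance(double lambda_nm) {
    return -std::log(1.0 - uni(rng)) / rayleigh_sigma(lambda_nm);
}

int main() {
    double lambda = sample_wavelength();
    double t = sample_collision_distance(lambda);
    std::printf("lambda = %.1f nm, next collision at %.2f km\n", lambda, t);
    return 0;
}

If the sampled collision distance is shorter than the ray's path through the atmosphere sphere, the ray scatters there (half isotropic, half toward the sun in the scheme above); otherwise it continues to whatever sphere it hits next.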



Lazy spectral rendering

2017-12-06T21:10:10.680-08:00

If you have to do spectral rendering (so light wavelengths and not just RGB internal computations) I am a big fan of making your life simpler by doing two lazy moves:

1. Each ray gets its own wavelength.
2. Use a 3-element piecewise-constant approximation for most of the spectra, and make all the XYZ tristimulus stuff implicit.

First, here's how to do it "right". You can skip this part -- I'll put it in brown so it's easy to skip. We want some file of RGB pixels like sRGB. Look up the precise definition of sRGB in terms of XYZ. Look up the precise definition of XYZ (if you must do that because you are doing some serious appearance modeling, use Chris Wyman's approximation). You will have three functions of wavelength x(), y(), and z(). X, for example, is:

X = k * INTEGRAL x(lambda) L(lambda) d-lambda

If you use one wavelength per ray, do it randomly and do Monte Carlo: lambda = 400 + 300*r01(), so pdf(lambda) = 1/300 and

X =approx= k * 300 * x(lambda) * L(lambda)

You can use the same rays to approximate Y and Z because x(), y(), and z() partially overlap. Now read in your model and convert all RGB triples to spectral curves. How? Don't ask me. Seems like overkill, so let's be lazy.

OK, now let's be lazier than that. This is a trick we used to use at the U of Utah in the 1990s. I have no idea what its origins are. Do this:

R =approx= L(lambda), where lambda is a random wavelength in [600,700] nm

Do the same for G and B with random wavelengths in [500,600] and [400,500] respectively. When you hit an RGB texture or material, just assume that it's a piecewise-constant spectrum with the same spectral regions as above. If you have a formula or real spectral data (for example, Rayleigh scattering or an approximation to the refractive index of a prism) then use that.

This will have wildly bad behavior in the worst case. But in practice I have always found it to work well. As an empirical test in an NVIDIA project I tested it on a simple case, the Macbeth Color Checker spectra under flat white light. Here's the full spectral rendering using the real spectral curves of the checker and the XYZ->RGB conversion and all that done "right": And here it is with the hack using just 3 piecewise-constant spectra for the colors and the RGB integrals above. The results are different, but my belief is that the difference is no bigger than the intrinsic errors in input data, tone mapping, and display variation in 99% of situations. One nice thing is that it's pretty easy to convert an RGB renderer to a spectral renderer this way. [...]
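Here is one way the lazy binned-RGB bookkeeping might be organized in code; a minimal C++ sketch assuming one wavelength per ray and an equal 1/3 chance per band (all names are illustrative, not from the post).

#include <random>

struct RGB { double r, g, b; };

std::mt19937 rng(42);
std::uniform_real_distribution<double> uni(0.0, 1.0);

// Pick a band uniformly, then a wavelength uniformly within it.
double sample_wavelength(int& band) {
    band = int(3.0 * uni(rng));           // 0 = blue, 1 = green, 2 = red band
    if (band > 2) band = 2;
    return 400.0 + 100.0 * band + 100.0 * uni(rng);
}

// Treat an RGB texture or material as a piecewise-constant spectrum over
// [400,500), [500,600), and [600,700] nm.
double spectral_value(const RGB& c, double lambda_nm) {
    if (lambda_nm < 500.0) return c.b;
    if (lambda_nm < 600.0) return c.g;
    return c.r;
}

// The ray's scalar result contributes only to the channel whose band it
// sampled; the factor 3 accounts for the 1/3 probability of each band.
void accumulate(RGB& pixel, int band, double ray_value) {
    if (band == 0)      pixel.b += 3.0 * ray_value;
    else if (band == 1) pixel.g += 3.0 * ray_value;
    else                pixel.r += 3.0 * ray_value;
}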



Email reply on BRDF math

2017-04-09T09:29:35.880-07:00

I got some email asking about using BRDFs in a path tracer and thought my reply might be helpful to those learning path tracing.

Each ray tracing toolkit does this a little differently.   But they all have the same pattern:

color = BRDF(random direction) * cosine / pdf(random direction)

The complications are:

0. That formula comes from Monte Carlo integration, which takes a bit to wrap your mind around.

1. The units of the BRDF are a bit odd, and it's defined as a function over the sphere cross the sphere, which is confusing.

2. pdf() is a function of direction and is somewhat arbitrary, though you get less noise if it is kind of like the BRDF in shape.

3. Even once you know what pdf() is for a given BRDF, you need to be able to generate a random direction so that it is distributed like pdf().

Those 4 together are a bit overwhelming.   So if you are in this for the long haul, I think you just need to really grind through it all.   #0 is best absorbed in 1D first, then 2D, then graduate to the sphere. 
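To make the pattern concrete, here is a minimal C++ sketch of color = BRDF * cosine / pdf for a Lambertian surface with cosine-weighted sampling; the names and structure are illustrative, not from any particular toolkit.

#include <cmath>
#include <cstdio>
#include <random>

struct Vec3 { double x, y, z; };

const double kPi = 3.14159265358979323846;
std::mt19937 rng(7);
std::uniform_real_distribution<double> uni(0.0, 1.0);

// Cosine-weighted direction about the +z axis (the surface normal).
Vec3 sample_cosine_direction() {
    double r1 = uni(rng), r2 = uni(rng);
    double phi = 2.0 * kPi * r1;
    double r = std::sqrt(r2);
    return { r * std::cos(phi), r * std::sin(phi), std::sqrt(1.0 - r2) };
}

int main() {
    double albedo = 0.8;                 // grey Lambertian reflectance
    Vec3 dir = sample_cosine_direction();
    double cosine = dir.z;               // normal is +z
    double brdf = albedo / kPi;          // Lambertian BRDF
    double pdf = cosine / kPi;           // pdf of the sampled direction
    // color = BRDF(direction) * cosine / pdf(direction).  For this pdf the
    // cosine and the 1/pi cancel, leaving just the albedo as the weight
    // applied to whatever radiance comes back along dir.
    double weight = brdf * cosine / pdf;
    std::printf("weight = %f\n", weight);
    return 0;
}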



Bug in my Schlick code

2016-12-28T18:59:36.458-08:00

In a previous post I talked about my debugging of refraction code.   In that ray tracer I was using linear polarization and used these full Fresnel equations:

Ugh those are awful.   For this reason and because polarization doesn't matter that much for most appearance, most ray tracers use R = (Rs+Rp)/2.    That's a very smooth function and Christophe Schlick proposed a nice simple approximation that is quite accurate:

R = R0 + (1-R0)(1-cosTheta)^5

A key issue is that the Theta is the **larger** angle.   For example in my debugging case (drawn with limnu which has some nice new features that made this easy):

The 45 degree angle is the one to use. This is true on the right and the left-- the reflectivity is symmetric. In the case where we only have the 30 degree angle, we need to convert to the other angle by using Snell's Law: Theta = asin(sqrt(2)*sin(30 degrees)).
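A minimal C++ sketch of that convention (convert to the larger angle via Snell's law before applying Schlick) might look like the following; the function and parameter names are hypothetical:

#include <cmath>

// cos_incident: cosine of the angle on the side the ray is traveling in.
// n_ratio: n_incident / n_transmitted (e.g. about 1.5 when leaving glass).
double schlick_reflectance(double cos_incident, double n_ratio) {
    double r0 = (1.0 - n_ratio) / (1.0 + n_ratio);
    r0 = r0 * r0;
    double cosine = cos_incident;
    if (n_ratio > 1.0) {
        // Ray is in the denser medium: convert to the larger (outside) angle
        // via Snell's law, sin(theta_out) = n_ratio * sin(theta_in).
        double sin2_out = n_ratio * n_ratio * (1.0 - cos_incident * cos_incident);
        if (sin2_out > 1.0) return 1.0;   // total internal reflection
        cosine = std::sqrt(1.0 - sin2_out);
    }
    double m = 1.0 - cosine;
    return r0 + (1.0 - r0) * m * m * m * m * m;
}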

The reason for this post is that I have this wrong in my book Ray Tracing in One Weekend :


Note that the first case (assuming outward normals) is the one on the left where the dot product is the cos(30 degrees).  The "correction" is messed up.    So why does it "work"?    The reflectances are small for most theta, and it will be small for most of the incorrect theta too.   Total internal reflection will be right, so the visual differences will be plausible.

Thanks to Ali Alwasiti (@vexe666) for spotting my mistake!




A new programmer's attitude should be like an artist's or musician's

2016-09-18T07:44:52.159-07:00

Last year I gave a talk at a CS education conference in Seattle called "Drawing Inspiration from the Teaching of Art".   That talk was aimed at educators and said that historically CS education was based on that of math and/or physics and that was a mistake and we should instead base it on art.   I expected a lot of pushback but many of the attendees had a "duh-- I have been doing that for 20 years" reaction.

This short post is aimed at students of CS but pursues the same theme. If you were an art or music student your goal would be to be good at ONE THING in TEN YEARS. If a music student, that might be singing, composition, music theory, or playing a particular instrument. Any one of those things is very hard. Your goal would be to find the one thing you resonate with and then keep pushing your skills with lots and lots of practice. Sure, you could become competent in the other areas, but your goal is to be a master of one. Similarly, as an artist you would want to become great at one thing, be it printmaking, painting, drawing, pottery, sculpture, or art theory. Maybe you become great at two things, but if so you are a Michelangelo-style unicorn and more power to you.

Even if you become great at one thing, you become great at it in your own way.    For example in painting Monet wanted Sargent to give up using black.   It is so good that Sargent didn't do that.   This painting with Monet's palette would not be as good.   And Monet wouldn't have wanted to do that painting anyway!

Computer Science is not exactly art or music, but the underlying issues are the same.   First, it is HARD.   Never forget that.   Don't let some CS prof or brogrammer make you think you suck at it because you think it's hard.   Second you must become master of the tools by both reading/listening and playing with them.   Most importantly find the "medium" and "subject" where you have some talent and where it resonates with you emotionally.   If you love writing cute UI javascript tools and hate writing C++ graphics code, that doesn't make you a flawed computer scientist.   It is a gift in narrowing your search for your technical soul mate.   Love Scheme and hate C# or vice-versa.   That is not a flaw but again is another productive step on your journey.   Finally, if you discover an idiosyncratic methodology that works for you and gets lots of pushback, ignore the pushback.   Think van Gogh.    But keep the ear-- at the end of the day, CS is more about what works :)




I do have a bug

2016-07-15T22:08:30.650-07:00

I questioned whether this was right:



I was concerned about the bright bottom and thought maybe there was a light tunnel effect.   I looked through my stuff and found a dented cube, and its bottom seemed to show total internal reflection:
The light tunnel effect might be happening and there is a little glow under the cube, but you can't see it through the cube. Elevating it a little does show that:

Digging out some old code and adding a cube yielded:
This is for debugging so the noise was just because I didn't run to convergence.   That does look like total internal reflection on the bottom of the cube, but the back wall is similar to the floor.   Adding a sphere makes it more obvious:
Is this right?   Probably.






Always a hard question: do I have a bug?

2016-07-13T07:15:26.087-07:00

In testing some new code involving box intersection I prepared a Cornell Box with a glass block, and the first question is "is this right?". As usual, I am not sure. Here's the picture (done with a bajillion samples so I won't get fooled by outliers):


It's smooth anyway.   The glass is plausible to my eye.   The strangest thing is how bright the bottom of the glass block is.   Is it right?    At first I figured bug.   But maybe that prism operates as a light tunnel (like fiber optics) so the bottom is the same color as a diffuse square on the prism top would be.   So now I will test that hypothesis somehow (google image search?   find a glass block?) and if that phenomenon is right and of about the right magnitude, I'll declare victory.



A sale on Limnu

2016-06-07T19:22:45.832-07:00

The collaborative whiteboarding program I used for my ray tracing e-books has finished its enterprise team features, the ones your boss will want if you are in a company. It's normally $8 a month per user, but if you buy in the next week it is $4 per month for a year. Looks like that rate will apply to any users added to your team before or after the deadline as well. I love this program -- try it!



Prototyping video processing

2016-05-15T09:24:17.681-07:00

I got a prototype of my 360 video project done in Quartz Composer using a custom Core Image filter.     I am in love with Quartz Composer and core graphics because it is such a nice prototyping environment and because I can stay in 2D for the video.   Here is the whole program:




A cool thing is I can use an image for debugging where I can stick in whatever calibration points I want to in Photoshop. Then I just connect the video part and no changes are needed -- the Core Image filter takes an image or video equally happily and Billboard displays the same.

The filter is pretty simple and is approximately GLSL:

One thing to be careful about is the return range of atan (GLSL's two-argument atan is the atan2 we know and love).

I need to test this with some higher-res equirectangular video. Preferably with a fixed viewpoint and with unmodified time. If anyone can point me to some I would appreciate it.



What resolution is needed for 360 video?

2016-05-14T11:33:18.948-07:00

I got my basic 360 video viewer working and was not pleased with the resolution.   I've realized that people are really serious that they need very high res.   I was skeptical of these claims because I am not that impressed with 4K TVs relative to 2K TVs unless they are huge.   So what minimum res do we need?    Let's say I have the following 1080p TV (we'll call that 2K to conform to the 4K terminology-- 2K horizontal pixels):

(image)
Image from https://wallpaperscraft.com
If we wanted to tile the wall horizontally with that TV we would need 3-4 of them. For a 360 surround we would need 12-20. Let's call it 10 because we are after an approximate minimum res. So that's 20K pixels horizontally to get up to "good" surround video. 4K is much more like NTSC. As we know, in some circumstances that is good enough.

Facebook engineers have a nice talk on some of the engineering issues these large numbers imply. 

Edit: Robert Menzel pointed out on Twitter that the same logic is why 8K does suffice for current HMDs.





equirectangular image to spherical coords

2016-05-15T09:14:45.747-07:00

An equirectangular image, popular in 360 video, is a projection in which equal areas on the rectangle correspond to equal areas on the sphere. Here it is for the Earth:

(image)
Equirectangular projection (source wikipedia)
This projection is much simpler than I would expect. The area on the unit radius sphere from theta_1 to theta_2 (I am using the graphics convention that theta is the angle down from the pole) is:

area = 2*Pi*integral sin(theta) d_theta = 2*Pi*(cos(theta_1) - cos(theta_2))

In Cartesian coordinates this is just:

area = 2*Pi*(z_1 - z_2)

So we can just project the sphere points in the xy plane onto the unit radius cylinder and unwrap it!   If we have such an image with texture coordinates (u,v) in [0,1]^2, then

phi = 2*Pi*u
cos(theta) = 2*v -1

and the inverse:

u = phi / (2*Pi)
v = (1 + cos(theta)) / 2

So yes this projection has singularities at the poles, but it's pretty nice algebraically!
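For concreteness, here is a small C++ sketch of those two mappings; the struct and function names are just illustrative.

#include <cmath>

const double kPi = 3.14159265358979323846;

struct Dir { double x, y, z; };   // assumed unit length

// Texture coordinates (u,v) in [0,1]^2 to a unit direction.
Dir uv_to_dir(double u, double v) {
    double phi = 2.0 * kPi * u;
    double cos_theta = 2.0 * v - 1.0;
    double sin_theta = std::sqrt(1.0 - cos_theta * cos_theta);
    return { sin_theta * std::cos(phi), sin_theta * std::sin(phi), cos_theta };
}

// Unit direction back to texture coordinates.
void dir_to_uv(const Dir& d, double& u, double& v) {
    double phi = std::atan2(d.y, d.x);    // in (-pi, pi]
    if (phi < 0.0) phi += 2.0 * kPi;      // wrap to [0, 2*pi)
    u = phi / (2.0 * kPi);
    v = (1.0 + d.z) / 2.0;                // v = (1 + cos(theta)) / 2
}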



spherical to cartesian coords

2016-05-12T11:02:23.137-07:00

This would probably have been easy to google if I had used the right keywords. Apparently I didn't. I will derive it here for my own future use.

One of the three formulas I remember learning in the dark ages:

x = rho cos(phi) sin(theta)
y = rho sin(phi) sin(theta)
z = rho cos (theta)

We know this from geometry but we could also square everything and sum it to get:

rho = sqrt(x^2 + y^2 + z^2)

This lets us solve for theta pretty easily:

cos(theta) = z / sqrt(x^2 + y^2 + z^2)

Because sin^2 + cos^2 = 1 we can get:

sin(theta) = sqrt(1 - z^2/( x^2 + y^2 + z^2))

phi we can also get from geometry using the ever useful atan2:

phi = atan2(y, x)
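Here is the same thing as a small C++ sketch (illustrative names; it assumes the point is not the origin):

#include <cmath>

struct Spherical { double rho, theta, phi; };

Spherical to_spherical(double x, double y, double z) {
    Spherical s;
    s.rho = std::sqrt(x * x + y * y + z * z);   // assumes rho > 0
    s.theta = std::acos(z / s.rho);             // cos(theta) = z / rho
    s.phi = std::atan2(y, x);                   // handles all quadrants
    return s;
}

void to_cartesian(const Spherical& s, double& x, double& y, double& z) {
    x = s.rho * std::cos(s.phi) * std::sin(s.theta);
    y = s.rho * std::sin(s.phi) * std::sin(s.theta);
    z = s.rho * std::cos(s.theta);
}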






Advice sought on 360 video processing SDKs

2016-05-06T08:47:31.942-07:00

For a demo I would like to take some 360 video (panoramic, basically a moving environment map) such as that in this image:

(image)
An image such as you might get as a frame in a 360 video (http://www.airpano.com/files/krokus_helicopter_big.jpg)
And I want to select a particular convex quad region (a rectangle will do in a pinch):


And map that to my full screen.

A canned or live source will do, but if live the camera needs to be cheap.   MacOS friendly preferred.

I'm guessing there is some terrific infrastructure/SDK that will make this easy, but my google-fu is so far inadequate.



Machine learning in one weekend?

2016-05-03T10:58:10.235-07:00

I was excited to see the title of this quora answer: What would be your advice to a software engineer who wants to learn machine learning?   However, I was a bit intimidated by the length of the answer.

What I would love to see is Machine Learning in One Weekend. I cannot write that book; I want to read it! If you are a machine learning person, please write it! If not, send this post to your machine learning friends.

For machine learning people: my Ray Tracing in One Weekend has done well and people seem to have liked it.    It basically finds the sweet spot between a "toy" ray tracer and a "real" ray tracer, and after a weekend people "get" what a ray tracer is, and whether they like it enough to continue in the area.   Just keep the real stuff that is easy, and skip the worst parts, and use a real language that is used in the discipline.   Make the results satisfying in a way that is similar to really working in the field.   Please feel free to contact me about details of my experience.  



Level of noise in unstratified renderers

2016-04-25T13:38:08.827-07:00

When you get noise in a renderer, a key question, often hard to answer, is: is it a bug or just normal outliers? With an unstratified renderer, which I often favor, the math is more straightforward. Don Mitchell has a nice paper on the convergence rates of stratified sampling, which is better than the inverse square root of unstratified.

In a brute force ray tracer it is often true that a ray either gets the color of the light L, or a zero because it is terminated by some Russian roulette. Because we average the N samples, the actual computation looks something like:

Color = (0 + 0 + 0 + L + 0 + 0 + 0 + 0 + L + .... + 0 + L + 0 + 0) / N

Note that this assumes Russian roulette rather than downweighting. With downweighting there are more non-zeros and they are things like R*R'*L. Note this also assumes Color is a float, so pretend it's a grey scene or think of just each component of RGB.

The expected color is just pL, where p is the probability of hitting the light. There will be noise because sometimes luck makes you miss the light a lot or hit it a lot.

The standard statistical measure of error is variance. This is the average squared error. Variance is used partially because it is meaningful in some important ways, but largely because it has a great math property:

the variance of a sum of two random quantities is the sum of the variances of the individual quantities

We will get to what is a good intuitive error measure later. For now let's look at the variance of our "zero or L" renderer. For that we can use the definition of variance:

the expected (average) value of the squared deviation from the mean

Or in math notation (where the average or expected value of a variable X is E(X)):

variance(Color) = E[ (Color - E(Color))^2 ]

That is mildly awkward to compute, so we can use the most commonly used and super convenient variance identity:

variance(X) = E(X^2) - (E(X))^2

We know E(Color) = pL. We also know that E(Color^2) = pL^2, so:

variance(Color) = pL^2 - (pL)^2 = p(1-p)L^2

So what is the variance of N samples (N is the number of rays we average)? First, it is the sum of a bunch of these identical samples, so the variance is just the sum of the individual variances:

variance(Sum) = Np(1-p)L^2

But we don't sum the colors of the individual rays -- we average them by dividing by N. Because variance is about the square of the error, we can use the identity:

variance(X / constant) = variance(X) / constant^2

So for our actual estimate of the pixel color we get:

variance(Color) = (p(1-p)L^2) / N

This gives a pretty good approximation to squared error. But humans are more sensitive to contrast, and we can get close to that with the relative square-root-of-variance. Trying to get closer to intuitive absolute error is common in many fields, and the square-root-of-variance is called standard deviation. Not exactly expected absolute error, but close enough and much easier to calculate. Let's divide by E(Color) to get our approximation to relative error:

relative_error(Color) is approximately Q = sqrt((p(1-p)L^2) / N) / (pL)

We can do a little algebra to get:

Q = sqrt((p(1-p)L^2) / (p^2 L^2 N)) = sqrt((1-p) / (pN))

If we assume a bright light then p is small, so Q is approximately sqrt(1/(pN)).

So the perceived error for a given N (N is the same for a given image) ought to be approximately proportional to the inverse square root of pixel brightness, so we ought to see more noise in the darks. If we look at an almost converged brute force Cornell box we'd [...]
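A quick way to sanity-check that formula is to simulate the "zero or L" estimator directly; here is a small C++ sketch that compares the measured relative error to sqrt((1-p)/(pN)). The constants are arbitrary choices for illustration.

#include <cmath>
#include <cstdio>
#include <random>

int main() {
    const double L = 1.0, p = 0.05;    // p = probability a path reaches the light
    const int N = 1000;                // rays averaged per pixel
    const int trials = 100000;         // number of simulated pixels

    std::mt19937 rng(1);
    std::bernoulli_distribution hit(p);

    double sum = 0.0, sum_sq = 0.0;
    for (int t = 0; t < trials; ++t) {
        double color = 0.0;
        for (int i = 0; i < N; ++i)
            if (hit(rng)) color += L;
        color /= N;
        sum += color;
        sum_sq += color * color;
    }
    double mean = sum / trials;
    double var = sum_sq / trials - mean * mean;
    double measured  = std::sqrt(var) / mean;            // relative error
    double predicted = std::sqrt((1.0 - p) / (p * N));   // formula above
    std::printf("measured %.4f  predicted %.4f\n", measured, predicted);
    return 0;
}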



Debugging by sweeping under rug

2016-04-03T14:44:31.666-07:00

Somebody already found several errors in my new minibook (still free until Apr 5, 2016). There are some pesky black pixels in the final images.

All Monte Carlo Ray Tracers have this as a main loop:

pixel_color = average(many many samples)

If you find yourself getting some form of acne in the images, and this acne is white or black, so one "bad" sample seems to kill the whole pixel, that sample is probably a huge number or a NaN.   This particular acne is probably a NaN.   Mine seems to come up once in every 10-100 million rays or so.

So, big decision: chase down the real bug, or just kill the NaNs (sweeping the bug under the rug) and hope this doesn't come back to bite us later. I will always opt for the lazy strategy, especially when I know floating point is hard.

So I added this:
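What follows is a sketch of that kind of check, relying on the fact that a NaN compares unequal to itself; the vec3 type is a minimal stand-in, not necessarily the book's.

#include <array>

// Minimal stand-in for a three-component color/vector type.
struct vec3 {
    std::array<double, 3> e;
    double& operator[](int i) { return e[i]; }
    double operator[](int i) const { return e[i]; }
};

// Scrub NaNs from a sample before adding it to the pixel average.
vec3 de_nan(const vec3& c) {
    vec3 temp = c;
    for (int i = 0; i < 3; ++i)
        if (!(temp[i] == temp[i])) temp[i] = 0.0;   // NaN != NaN, so zero it
    return temp;
}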


 There may be some isNaN() function supported in standard C-- I don't know.   But in the spirit of laziness I didn't look it up.   I like to chase these with low-res images because I can see the bugs more easily.    It doesn't really make it faster-- you need to run enough total rays to randomly trip the bug.   This worked (for now!):

(image)
Left: 50x50 image with 10k samples per pixel (not enough for bug).    Middle 100k samples per pixel.   Right: with the NaN check. 



 

Now if you are skeptical you will note that by increasing the number of samples 10X I went from 0 bugs to 20+ bugs. But I won't think about the possibly troublesome implications of that. MISSION ACCOMPLISHED!




Making an "In One Weekend" Kindle Book

2016-04-02T08:48:05.793-07:00

Over the last few months my side project has been a series of three Kindle mini-books on ray tracing. Ray tracing is a computer graphics programming area for producing nice images, and ray tracing has been hot lately (most computer animated films are now ray traced, as are many of the special effects shots in regular movies). This post is not about those books specifically, but instead describes my experiences and encourages others to write similar mini-books on other subjects. I promote and document the books at in1weekend.com. If you have something (book, youtube, blog, whatever) to help people learn something in a weekend, let me know, and if I can learn from it I will promote it on my site!

Here are the books at Amazon: Ray Tracing In One Weekend, Ray Tracing: The Next Week, Ray Tracing: The Rest of Your Life.

I set my price for each book at $2.99. Conversions for other countries Amazon does automatically for you. You get 70% of the sales if your book is priced from $2.99 to $9.99. Amazon will let you give away the book for free for five days every quarter, and I began each book with a free promotion. So far since early 2016 when I started, I have sold about 700 books and given away almost 3000.

CALL TO ACTION: It is easy and fun to make such books, they make money, and they give others a mechanism to see if they love the same things you do with just a weekend commitment. I will show you the mechanics of how I wrote my instance of this.

THE MECHANICS OF WRITING THE TEXT: Just write a Word file or a Google Doc and "download as" a Word file. Really, that's it! Here is an example of my book on the Amazon cloud reader: two Kindle pages with code screen shots, ray tracer output, and a limnu drawing.

THE MECHANICS OF CREATING FIGURES: For drawings I used the shared whiteboard program from limnu.com. I drew the figures on a tablet, then opened them on my laptop (limnu stores your boards in the cloud so no file copying is needed), did a screen capture of the part of the image I wanted to make a figure (command-shift-4 on a mac), and then dragged the file into Google Docs. For code samples I just screen-captured the code from vim. Here is my favorite limnu figure in the books:

THE MECHANICS OF CREATING A KINDLE BOOK: Go to https://kdp.amazon.com/ and create an account. Upload the Word file. Upload a cover image, which must be TIFF or JPEG. Amazon also has some tools for cover creation. It is all super-easy and has a great interface, and here is a screen shot of the style of data entry:

ADVERTISING THE BOOK: I used word of mouth and twitter and hackernews and facebook. I also created a website, in1weekend.com, to promote the book. If you do a book and I try it out and like it, I will make an article about your book and point to it. You can feel free to use the "in one weekend" phrase in your title.

IT IS THAT EASY! And now I feel like I have done some good spreading how-to information, made some cash, and helped my resume. And it was surprisingly fun; I looked forward to working on the books when done with my other work. I encourage you to try it, and if you write one please let me know and I will see if I can learn something new in a weekend. [...]



Converting area-based pdf to direction-based pdf

2016-03-31T07:51:32.951-07:00

One thing I covered in my new mini-book is dealing with pdf management in a path tracer. Most of my ray tracers (and many others do it too) are set up to sample directions. So at a point being shaded, the integral is over directions:

color(direction_out) = INTEGRAL brdf(direction_in, direction_out) cosine

The domain of integration is all direction_in coming from the sphere of all directions. Often the brdf is zero for directions from inside the surface, so it's the hemisphere.

Basic Monte Carlo integration says that if you choose a random direction_in with pdf pdf(direction_in), the unbiased estimate is:

color(direction_out) = brdf(direction_in, direction_out) cosine / pdf(direction_in)

As long as you can generate a random direction_in and can compute pdf(direction_in), you have a working ray tracer. It works best when you are likely to cast random rays toward bright areas like lights. If you sample a polygonal light, you normally just sample a point q uniformly on its area. The direction is just q-p. The key observation to get from the pdf in area space, p_area(q) = 1/A (where A is the area of the light), is that by definition the probability of q being in a small area dA is

probability = p_area(q) dA

Similarly, the directional pdf of a direction toward dA is:

probability = pdf(q-p) dw

Those probabilities must be the same:

p_area(q) dA = pdf(q-p) dw

By algebra:

pdf(q-p) = p_area(q) dA/dw = (1/A) (dA/dw)

But what is (dA/dw)? Let's do some basic geometry. As is so often the case, the right figure makes that tons easier: you pick a point q on the light with pdf 1/A in area space, but what is the pdf in directional space? (Figure drawn in the limnu whiteboarding program.)

If dA faces p dead-on (so its normal faces p), then it's foreshortening:

dw = dA / distance_squared

But if dA is tilted then we get a cosine shift:

dw = dA*cosine / distance_squared

So recall we are after

pdf(q-p) = (1/A) (dA/dw)

So we can plug and chug to get:

pdf(q-p) = length_squared(q-p) / (A*cosine)

We just apply this formula directly in our code. For my xz rectangle class this is: [...]
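A minimal C++ sketch of how that formula might be applied for an axis-aligned xz rectangle follows; the types and member names are hypothetical stand-ins, not the book's code.

#include <cmath>

struct vec3 {
    double x, y, z;
    double length_squared() const { return x * x + y * y + z * z; }
};

struct XZRect {
    double x0, x1, z0, z1, k;   // rectangle spans [x0,x1] x [z0,z1] at y = k

    // pdf, in direction space, of the direction "dir" from origin "o", given
    // that points are sampled uniformly over the rectangle's area.
    // Returns 0 if the ray misses the rectangle.
    double pdf_value(const vec3& o, const vec3& dir) const {
        if (std::fabs(dir.y) < 1e-12) return 0.0;      // parallel to the plane
        double t = (k - o.y) / dir.y;                  // hit the plane y = k
        if (t <= 0.0) return 0.0;
        double x = o.x + t * dir.x, z = o.z + t * dir.z;
        if (x < x0 || x > x1 || z < z0 || z > z1) return 0.0;
        double area    = (x1 - x0) * (z1 - z0);
        double dist_sq = t * t * dir.length_squared();
        double cosine  = std::fabs(dir.y) / std::sqrt(dir.length_squared());
        return dist_sq / (cosine * area);    // length_squared(q-p) / (A*cosine)
    }
};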



Third and final ray tracing mini-book is out

2016-03-31T05:59:26.997-07:00

My third and final Kindle ray tracing mini-book is out.    It will be free April 1-5, 2016.

Ray Tracing: The Rest Of Your Life (Ray Tracing Minibooks Book 3)(image)

From the intro:

In this volume, I assume you will be pursuing a career related to ray tracing and we will dive into the math of creating a very serious ray tracer. When you are done you should be ready to start messing with the many serious commercial ray tracers underlying the movie and product design industries. There are many many things I do not cover in this short volume; I dive into only one of many ways to write a Monte Carlo rendering program.   I don’t do shadow rays (instead I make rays more likely to go toward lights), bidirectional methods, Metropolis methods, or photon mapping.   What I do is speak in the language of the field that studies those methods.   I think of this book as a deep exposure that can be your first of many, and it will equip you with some of the concepts, math, and terms you will need to study the others.



My buggy implementation of the Schlick approximation

2016-03-18T09:30:56.896-07:00

One of the most used approximations in all of graphics is the Schlick approximation by Christophe Schlick. It says the Fresnel reflectance of a surface can be approximated by a simple polynomial of cosine(theta). For dielectrics, a key point is that this theta is the bigger of the two angles, regardless of which direction the ray is traveling. Regardless of light direction, the blue path is followed, and the larger of the two angles (pictured as theta) is used for the Schlick approximation.

A sharp reader of my mini-book pointed out I have a bug related to this in my code. I was surprised at this because the picture looked right (sarcasm). The bug in my code and my initial fix is shown below; the old code shown in blue is replaced with the next two lines.

My mistake was to pretend that if Snell's law applies to sines, n1*sin(theta1) = n2*sin(theta2), it must also apply to cosines. It doesn't. (The fix is just to use cos^2 = 1 - sin^2 and do some algebra.) I make mistakes like this all the time, but usually the wonky-looking picture gives it away; in this case it didn't. It only affected internal reflections, and it just picked some somewhat arbitrary value that was always between 0 and 1. Since real glass balls look a little odd in real life, this is not something I can pick up by eye. In fact I am not sure which picture looks right!

Old picture before bug fix. New picture after bug fix.

I am reminded of the spherical harmonic approximation to diffuse lighting. It looks different than the "correct" lighting, but not worse (in fact I think it looks better). What matters about hacks is their robustness. It's best to do them on purpose though... [...]
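A sketch of the kind of fix described, in C++ with hypothetical names: compute the cosine fed to Schlick from cos^2 = 1 - sin^2 and Snell's law when the ray is inside the denser medium, rather than scaling the cosine directly.

#include <cmath>

// r_dot_n: dot(unit ray direction, outward normal); ref_idx: e.g. 1.5 for glass.
double schlick_cosine(double r_dot_n, double ref_idx) {
    if (r_dot_n > 0.0) {
        // Ray is leaving the denser medium: use the larger, outside angle.
        // sin(theta_out) = ref_idx * sin(theta_in), then cos = sqrt(1 - sin^2).
        double sin2_out = ref_idx * ref_idx * (1.0 - r_dot_n * r_dot_n);
        return std::sqrt(std::fmax(0.0, 1.0 - sin2_out));   // 0 at/after TIR
    }
    // Ray is arriving from outside: its angle is already the larger one.
    return -r_dot_n;
}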



An example of an indirect light

2016-03-13T10:46:55.155-07:00

On the top you see a big reflection of the Sun off my not very polished wooden floor.   On the bottom is the view in the opposite direction.   Three interesting points:

1. The secondary light effect is surprisingly (to me) strong and un-diffused.
2. The secondary light effect is achromatic even though it is off a brown floor-- the specular component is achromatic.
3. Note the tertiary effect on the wall by the picture frame.    This would be tough for most renderers.






A simple SAH BVH build

2016-03-10T10:31:01.592-08:00

Here is a cut at a BVH build that splits along the longest axis using the surface-area heuristic (SAH). The SAH minimizes:

SUM over left and right subtrees of: number_of_objects_in_subtree * surface_area_of_bounding_box_of_subtree

This has been shown by many people to work shockingly well.


Here's my build code and it works ok. I still have to sort because I need to sweep along an axis. Is it worth it to try each of the three axes instead of just the longest? Maybe.... bundles of long objects would be the adversarial case for the code below.
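Here is a compact C++ sketch of that approach -- sort along the longest axis of the parent bounds, then sweep every split position and keep the one with the lowest SAH cost. The Aabb/Object types and the surrounding recursion are simplified stand-ins, not the book's code, and it assumes at least two objects.

#include <algorithm>
#include <vector>

struct Aabb {
    double min[3], max[3];
    double area() const {
        double dx = max[0] - min[0], dy = max[1] - min[1], dz = max[2] - min[2];
        return 2.0 * (dx * dy + dy * dz + dz * dx);
    }
    static Aabb merge(const Aabb& a, const Aabb& b) {
        Aabb r;
        for (int i = 0; i < 3; ++i) {
            r.min[i] = std::min(a.min[i], b.min[i]);
            r.max[i] = std::max(a.max[i], b.max[i]);
        }
        return r;
    }
    double centroid(int axis) const { return 0.5 * (min[axis] + max[axis]); }
};

struct Object { Aabb box; };

// Sort objs along the longest axis and return the index of the first object
// in the right child, chosen to minimize N_left*A_left + N_right*A_right.
size_t sah_split(std::vector<Object>& objs) {
    Aabb total = objs[0].box;
    for (const Object& o : objs) total = Aabb::merge(total, o.box);

    int axis = 0;   // longest axis of the parent bounds
    for (int a = 1; a < 3; ++a)
        if (total.max[a] - total.min[a] > total.max[axis] - total.min[axis])
            axis = a;

    std::sort(objs.begin(), objs.end(), [axis](const Object& a, const Object& b) {
        return a.box.centroid(axis) < b.box.centroid(axis);
    });

    size_t n = objs.size();
    std::vector<double> right_area(n);   // area of bounds of objs[i..n-1]
    Aabb right = objs[n - 1].box;
    for (size_t i = n; i-- > 0; ) {
        right = Aabb::merge(right, objs[i].box);
        right_area[i] = right.area();
    }

    size_t best = 1;
    double best_cost = 1e300;
    Aabb left = objs[0].box;             // bounds of objs[0..i-1] in the sweep
    for (size_t i = 1; i < n; ++i) {
        left = Aabb::merge(left, objs[i - 1].box);
        double cost = double(i) * left.area() + double(n - i) * right_area[i];
        if (cost < best_cost) { best_cost = cost; best = i; }
    }
    return best;   // objs[0..best-1] go left, objs[best..n-1] go right
}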





BVH builds

2016-03-08T09:14:59.029-08:00

In the new mini-book I cover BVHs. In the book I always went for simple conceptual code, figuring people can speed things up later. I have what must be close to a minimal BVH build, but it does get us the log(N). It picks a random axis and splits in the middle of the list:
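A compact C++ sketch of that minimal build -- random axis, sort, split the list in the middle, recurse -- might look like this; the types are simplified stand-ins rather than the book's code.

#include <algorithm>
#include <cstdlib>
#include <memory>
#include <vector>

struct Aabb {
    double min[3], max[3];
    double centroid(int a) const { return 0.5 * (min[a] + max[a]); }
};
struct Hitable { Aabb box; };

struct BvhNode {
    std::unique_ptr<BvhNode> left, right;
    std::vector<Hitable> leaf;            // non-empty only at leaves

    static std::unique_ptr<BvhNode> build(std::vector<Hitable> objs) {
        auto node = std::make_unique<BvhNode>();
        if (objs.size() <= 2) {           // tiny lists become leaves
            node->leaf = std::move(objs);
            return node;
        }
        int axis = std::rand() % 3;       // pick a random axis
        std::sort(objs.begin(), objs.end(),
                  [axis](const Hitable& a, const Hitable& b) {
                      return a.box.centroid(axis) < b.box.centroid(axis);
                  });
        size_t mid = objs.size() / 2;     // split in the middle of the list
        std::vector<Hitable> l(objs.begin(), objs.begin() + mid);
        std::vector<Hitable> r(objs.begin() + mid, objs.end());
        node->left = build(std::move(l));
        node->right = build(std::move(r));
        return node;
    }
};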

I could have split geometrically and done an O(N) sweep and the code might have been as small, but I wanted to set people up for a top-down surface-area heuristic (SAH) build. As discussed by Aila, Karras, and Laine in 2013 (great paper-- download it here) we don't fully understand why lazy SAH builds work so well. But let's just be grateful. So what IS the most compact top-down build? I am going to write one to be supplemental for the book, and just sweeping on those qsorts above is what appears best to me, but I invite pointers to good practice.



New ray tracing mini-book is out

2016-03-07T17:56:00.213-08:00

My second and probably final kindle mini-book Ray Tracing the Next Week is out.    It will be free for five days starting tomorrow Tu Mar 8.    I will be maintaining notes and links at in1weekend.com







Next mini-book on ray tracing

2016-03-05T19:16:56.757-08:00

I am almost done with the next (and probably last) mini-book on ray tracing. It will be free for the first five days at the kindle store (probably sometime this week) so do not buy it -- there may be a short time when Amazon hasn't made it free yet. The associated web site will be at in1weekend.com just like the first book. This is the image with the features added:

(image)
Final image for Ray Tracing: the Next Week

The features added are solid texture, image texture, participating medium, motion blur, instancing, and BVH.   I decided not to add explicit direct light so it's still brute force path tracing (thus the caustic is implicit).