The Persistence of Vision Raytracer (POV-Ray).

This is the legacy Bug Tracking System for the POV-Ray project. Bugs listed here are being migrated to our GitHub issue tracker. Please use that tracker for new reports or for updates to existing reports on this system.

Attached to Project: POV-Ray
Opened by Paul - 2013-04-29
Last edited by William F Pokorny - 2017-01-29

FS#287 - area_illumination shadow calculation

Not sure if this is something needing further work or an intended effect.

Shadows from an area light with area_illumination on seem to follow the same shadow calculation as a standard area light, giving more weight to lights near the center of the array. I would expect the shadows to be calculated as if individual lights were placed in the same pattern as the array, with the shadow contribution distributed equally across the lights. But this is not what I see.

The code sample below, when rendered with scene 1, will show shadows grouped near the center from the area light with area_illumination. If scene 1 is commented out and scene 2 is uncommented, you will see evenly distributed shadows from individual lights. I would expect area lighting with area_illumination to give a result identical to scene 2. If scene 1 is rendered with area_illumination off, the shadow calculation is exactly the same as with area_illumination on.

Example images rendered on Win32 XP.

#version 3.7;

global_settings {
  ambient_light 0
  assumed_gamma 1
}

camera {
  location <0, 3, -5>
  look_at <0, 2, 0>
}

background { rgb <.3, .5, .8> }
plane { y,0 pigment { rgb .7 } }
torus { 1.5,.1 rotate 90*x translate 4*z pigment { rgb .2 } }
plane { -z,-7 pigment { rgb .7 } }

// scene 1
// (the light_source wrapper was lost from this copy of the report;
// the location and colour below are assumed)
light_source {
  0, 1
  area_light 3*x, z, 7, 1
  area_illumination on
  looks_like {
    union {
      sphere { 0,.05 }
      sphere { .5*x,.05 }
      sphere { x,.05 }
      sphere { 1.5*x,.05 }
      sphere { -.5*x,.05 }
      sphere { -x,.05 }
      sphere { -1.5*x,.05 }
      hollow pigment { rgbt 1 } interior { media { emission 10 } }
    }
  }
  translate y
}
// end scene 1

/* scene 2 (commented out; swap with scene 1 to compare)
#declare Light = light_source {
  0, 1  // location and colour assumed, as above
  looks_like { sphere { 0,.05 hollow pigment { rgbt 1 } interior { media { emission 10 } } } }
}

union {
  object { Light }
  object { Light translate .5*x }
  object { Light translate x }
  object { Light translate 1.5*x }
  object { Light translate -.5*x }
  object { Light translate -x }
  object { Light translate -1.5*x }
  translate y
}
end scene 2 */

   scene1.bmp (900.1 KiB)
   scene2.bmp (900.1 KiB)
Jim Holsenback commented on Monday, 29 April 2013, 11:26 GMT

wonder if the fix for FS274 fixed this?


Christoph Lipka commented on Monday, 29 April 2013, 12:31 GMT

No, this one doesn't bear any similarity to  FS#274 , which only applied to subsurface scattering materials, and also affected the influence of each individual "lightlet". This one looks more like the center "lightlet" is given double brightness for some obscure reason.

This problem actually seems to be unrelated to the area_illumination feature, which is not responsible for the depth of individual shadows but rather for the proper shading of the (partially) lit regions.

(Hint: If you write flyspray issue IDs with a hash sign, flyspray will automatically make it a clickable link :-))

Simon commented on Monday, 29 April 2013, 18:09 GMT

It's because the algorithm is not calculating the contributions per light and averaging at the end; instead, it takes a four-point corner average and uses it as the light intensity for the next subsection of the light (descending in powers of two). The weights are only correct if the number of lights is a power of two.

Case in point:
First stage: lights [0] and [6] are evaluated. If the adaptive algorithm stopped here, it would return <[0],[6]>, where <> denote averaging.
Second stage: Instead, the corner samples are replaced by average of (0..3) and (3..6) subsections. If the algorithm stopped here, it would return < <[0],[3]>, <[3],[6]> >.
It's already wrong here, as it gives [3] a double weight. Here, I assume that ceil(3)=floor(3) (judging from the code). The code actually continues subdivision.

There are two problems here:
1) The ceil/floor functions repeat the midpoint in the case of an odd number of lights in the interval. This repeats midpoints, and if the counts are not all powers of two, you get something like
[0,1],[2,3],[3,4],[5,6] (3 is repeated and has double weight)
2) If you do it correctly, not repeating the midpoints (integer division, followed by +1 for the second interval, splitting [0,6] into, say, [0,3]+[4,6]), then at the end of the splitting, if the count is not a power of two, you get lights without a pair. Something like [0,1],[2,3],[4,5],[6] (6 is alone and has double weight).

You can see that you cannot pair up the lights into uniform intervals if their number is not a power of two. A recursive adaptive algorithm cannot give correct weights this way unless it takes the unbalanced interval sizes into account. The correct way would be this one:

Take the interval [0,6]. Split it into [0,3],[4,6] (splitting is mandatory). Now measure the interval sizes and assign weight 4/7 to the first one and 3/7 to the second. NOW you can recursively call the function and everything will be OK.

This fix could go hand-in-hand with the circular light uniformity bug.
Edit: both changes will affect appearance of existing scenes, although not by much.

Christoph Lipka commented on Monday, 29 April 2013, 19:28 GMT

But this isn't an adaptive light...?!

Simon commented on Monday, 29 April 2013, 19:52 GMT

The same code is used for both cases AFAIK; it just goes all the way in (uses all the lights instead of just some of them). The default adaptive level is set to 100, which is effectively infinity.

Paul commented on Monday, 29 April 2013, 21:25 GMT

Some testing with 4- and 8-light arrays confirms what Simon explained, as the shadows were identical to individual lights.

I found that adaptive should not be used if area_illumination is desired, as the lighting calculations go way off. This is to be expected, but maybe a note can be added to the help docs to alert others.

In scenes with area light sources close to objects, jitter causes radical offsets to the shadowing. See the example files rendered from the code originally posted with jitter added. Is it possible to add a setting for jitter amount to control the offset, or can the distance from the light source be added to the calculation so close shadows get a small random offset while objects further away get a larger one?

My scene using an area light with area_illumination and a 7-light linear light source similar to the submitted code has a substantial speed increase over 7 individual lights. The scene has many reflective surfaces and some media to deal with. Removing most of the scene and using minimal AA and radiosity for a fast test gives the results below...

with area light

Radiosity Time:   0 hours  1 minutes  9 seconds (69.422 seconds)
            using 2 thread(s) with 115.234 CPU-seconds total
Trace Time:       0 hours  6 minutes 34 seconds (394.453 seconds)
            using 2 thread(s) with 746.343 CPU-seconds total

with individual lights

Radiosity Time:   0 hours  1 minutes 59 seconds (119.125 seconds)
            using 2 thread(s) with 209.655 CPU-seconds total
Trace Time:       0 hours  9 minutes 39 seconds (579.625 seconds)
            using 2 thread(s) with 1101.624 CPU-seconds total

So thank you for adding the area_illumination feature. The light distribution is right on: the area light and individual lighting show the exact same light distribution for highlights, light fading, intensity, etc. Though the shadows exhibit the distribution errors described above.

Jim Holsenback commented on Monday, 29 April 2013, 23:38 GMT
Paul (OJD) wrote:
> found that adaptive should not be used if area_illumination is desired as the lighting calculations go way off. This is to be expected but maybe a note can be added to the help docs to alert others.

Before I make a doc addition (which I think is probably the best/easiest), I'd like to float the idea of having adaptive default to 0 if area_illumination is used. What do you guys think?

Paul commented on Tuesday, 30 April 2013, 00:25 GMT

Jim, that was the first thing I tried. If you render the above scene with adaptive 0 you will change your mind.

Simon commented on Tuesday, 30 April 2013, 01:04 GMT

If I understand correctly, "adaptive 0" refers to the loosest adaptive level (nothing beyond the corner lights is forced; everything else is conditional on the light contrast). If no "adaptive" is specified, it is set to 100, which means all lights are always used. Say we have 8 lights. Then adaptive 0 would always use lights 0 and 7, but the ones in between only if 0 and 7 were different enough. adaptive 1 would force 0,3,4,7, and adaptive 2 would already force all of them. You can easily check this with your scene: set it to 8 lights and you will see the pattern I described. (Inside the shadows you get more precision, because "adaptive" still uses the lights in between if it detects enough contrast between shadows. In this case, because the component lights are too far apart, it does not work so well: the shadows don't overlap, so this automatic system does not notice the shadows in between.)

In essence, it is splitting the interval.
adaptive 0: [0,7]
adaptive 1: [0,3][4,7]
adaptive 2: [0,1][2,3][4,5][6,7]
And if you have more lights, higher "adaptive" value is needed to force everything. This also shows why subdivision is unfair if you don't have powers of two.

The shadow test function is the only one that uses the adaptive parameter. area_illumination does not use it at all: the diffuse illumination is calculated exactly as if there were an entire array of lights.

To repeat: no adaptive means the same as adaptive 100. And that's ok for default settings. You need to know what you are doing to use adaptive. It does not always work as expected.

Christoph Lipka commented on Tuesday, 30 April 2013, 10:38 GMT


I just checked the code, and you're right in that non-adaptive area lights use essentially the same algorithm as adaptive ones, which isn't really ideal for "odd" light dimensions.

You're wrong about your power-of-2 assumption though: actually, area light dimensions should ideally be 2^N+1, because each splitting step only samples one additional lightlet, and then decides whether to subdivide the left and/or right half of the interval further.

It looks like you're trying to use area_light to simulate an array of individual light sources. This is not what the feature is intended for: Its purpose is to simulate a single line-, rectangle-, disc- or sphere-shaped light source; the effect you want to achieve - visibly breaking up the light into distinctive point lights - is entirely contrary to that use case, so it's no surprise you encounter effects that you consider problematic.

Simon commented on Tuesday, 30 April 2013, 12:44 GMT

The code specifies the left interval to be
new_u1 = u1;
new_u2 = (int)floor((u1 + u2)/2.0);
and the right interval to be
new_u1 = (int)ceil((u1 + u2)/2.0);
new_u2 = u2;

Note the floor and ceiling functions.

For 8 lights this becomes u1=0, u2=7, so you get [0,3][4,7]. This is correct, because no light is sampled and averaged twice.

For 9 lights, u1=0, u2=8 and u1+u2 is divisible by 2, so you sample the middle light twice, as you have shown: [0,4][4,8]. But this is WRONG, because averaging in this case counts the middle light twice, making its shadow twice as dark. And it cannot even be compensated with weights, because the doubled lights are in different recursion calls and are already mixed into the average before the result is returned. The average will be
0.5*(0.5*(light[0]+light[4]) + 0.5*(light[4]+light[8])) = 0.25*light[0] + 0.5*light[4] + 0.25*light[8]
which is the reason for the artifacts we are seeing in this bug. If a subinterval size is again odd, even more weights are wrong. For instance, 11 lights create a very funny pattern.

The correct version should be:
left subinterval:
new_u1 = u1;
new_u2 = (u1 + u2)/2;
right subinterval:
new_u1 = (u1 + u2)/2 + 1; #never repeat the middle light
new_u2 = u2;

And the weights should be (new_u2-new_u1+1)/(double)(u2-u1+1) in each subinterval (the counts are inclusive, which gives the 4/7 and 3/7 of the earlier example). Actually, because it is 2D, the weight is the product of the weights for u and v. The weights must replace the flat 0.25*(sum of corners) expression at the bottom.

With this change you must also be careful that the averaging for the base case of just 4 corners still uses the 0.25 factor. So the code becomes

#recursive case: weighted sum (added line)
lightcolour = sample_Colour[0]*weight[0] + sample_Colour[1]*weight[1] + ...
#4-corner base case: plain average (added line)
lightcolour = 0.25*(sample_Colour[0] + sample_Colour[1] + ...)
#removed the global 0.25-weighted colour

(using # for comments, because double slash means italics)

Christoph Lipka commented on Tuesday, 30 April 2013, 13:09 GMT

Heh, you seem to know the code better than we do :-)

I actually looked at these details of the area light code for the first time. It does need a major overhaul I guess.

Simon commented on Tuesday, 30 April 2013, 15:43 GMT

Well I was interested in the problem so I studied the code - it's just one function that has the problem, so it's not so bad.
I have the fixed code (including the circular light fix for FS#275), I'll post it together with test scenes in a couple of hours.

Simon commented on Tuesday, 30 April 2013, 16:09 GMT

Changes are only in functions
Trace::TraceAreaLightSubsetShadowRay (circular light fix and weights fix)
Trace::ComputeFullAreaDiffuseLight (only circular light fix)

It's based on RC6 code - I hope no other changes happened in these two functions.

I applied the method we discussed in FS#275 that makes the circular light uniform in angular distribution - you can see the gaps are all equal now.

You can do with the code whatever you want.

Jim Holsenback commented on Tuesday, 30 April 2013, 17:44 GMT

In revision control I see that the last change to trace.cpp was change #5846 on 08 Mar 2013, the fix for FS#274.

Who wants to take this one on?

Simon commented on Tuesday, 30 April 2013, 18:41 GMT

Just noticed that using trigonometry for circular lights makes simple scenes run almost 2x slower on my machine. If a max ~1% deviation from the perfect distribution is acceptable (it is, because the entire thing is an approximation anyway), you get the old speed back by using approximations for sin/cos. Attaching a code snippet.

Jim Holsenback commented on Monday, 06 May 2013, 22:43 GMT

I've reviewed and tested your fixes. Everything went just fine until I applied the changes posted in code_snippet.txt, when I ran into a couple of scoping errors: the logic hid the setting of phi, phi2, usign and vsign.

Wouldn't it be better to set those at the top of the function rather than having to set them inside each conditional statement where they are used?

Simon commented on Tuesday, 07 May 2013, 00:25 GMT

Well you can see what I meant even if it didn't work out of the box, it's just a couple of tests to avoid unnecessary calculations, and a shortcut for sine and cosine. Could you let me know if it makes it faster on your machine too?

Anyway, that should not happen, it worked just fine in my compile (gcc on linux). Maybe some brackets are missing or something. Unless there are some portability issues I don't know about.

From what I know about C/C++ compilers, the compiler can optimize better if variable declarations are as "deep" as possible in the code. If you declare a variable outside, the compiler might keep its value for later use and waste register space (modern compilers should be smart, but it's better to make their job easier). It's also less confusing for programmers to have variables in the inner scope. Top-of-function declarations are seen mostly because of old C standards. But anyway, if it causes problems, put them outside; it shouldn't matter much, as it's more a matter of style at this point.

"lightsource" is also named differently in the two functions.

Jim Holsenback commented on Tuesday, 07 May 2013, 01:05 GMT

Yes, your changes were clearly documented, and it didn't take me very long to integrate them. I'm using gcc on Linux as well; the problems I encountered, described more accurately, are inside a nested if/else block. The outer test is:

if(jitter_u!=0 && jitter_v!=0)//save time by not doing anything that is not necessary

when it falls into this block usign and vsign gets set. Inside that block the else clause of the following test sets phi, phi2, usign and vsign:

if(jitter_u==jitter_v)//diagonals: avoid expensive division and (fake) trigonometry

then again, the else clause of the outer if/else block sets phi, phi2, usign and vsign, which seems redundant. Rather than at the top of the function Trace::ComputeFullAreaDiffuseLight, after looking at it again, maybe just after the test ...

if(lightsource.Circular == true)

... would be a better place.

Could you be a bit more specific about lightsource being named differently ... It's been a long day and I must be looking right past what you're talking about.

Simon commented on Tuesday, 07 May 2013, 01:41 GMT

Sorry, my mistake; I was talking from memory and remembered there was something different. It's just axis1/axis1temp that differs between ComputeFullAreaDiffuseLight and TraceAreaLightSubsetShadowRay. So everything should be just fine.

I attached a piece of code that documents how I see things. Just to make sure we agree on the code. I changed indentation to make it obvious how the three if's are nested.

Jim Holsenback commented on Tuesday, 07 May 2013, 02:49 GMT

Thank you for the clarification that axis1/axis1temp was the issue and not lightsource. I had also already indented and honored the brace style that was already present ... that is, the brace on the next line rather than on the same line as the test, so the 2nd code snippet was unnecessary. However, thanks so very much for your interest and help with this issue!

William F Pokorny commented on Sunday, 29 January 2017, 14:26 GMT

Now tracked on github as issue #224.

