The Persistence of Vision Raytracer (POVRay).
This is the legacy bug tracking system for the POVRay project. Bugs listed here are being migrated to our GitHub issue tracker; please refer to that tracker for new reports, or for updates to reports still listed on this system.
FS#275  circular area lights exhibit anisotropy
Attached to Project:
POVRay
Opened by Christoph Lipka (clipka)  Saturday, 09 March 2013, 02:53 GMT
Last edited by William F Pokorny (wfpokorny)  Saturday, 28 January 2017, 16:01 GMT

Details

Circular area lights exhibit some anisotropy, being brighter along the diagonals than on average, as can be demonstrated with the following scene:

//+w800 +h800
#version 3.7;
global_settings { assumed_gamma 1 }
plane { z, 10 pigment { rgb 1 } finish { ambient 0 brilliance 0 } }
disc { 0, z, 10000, 0.5 }
camera { orthographic location z look_at 10*z up y*12 right x*12 }
light_source { 10*z rgb 10 area_light 10*x 10*y 257 257 adaptive 4 circular }
This task depends upon
Not surprising: currently the rectangular grid of sample lights is just deformed into a circle, which makes the lights denser along the diagonals (see figure). I see no reason to stick with the rectangular array of lights. The expensive part is testing for shadows, but the sample lights could be arranged in many ways:
usual polar coordinates (denser at the center, but isotropic)
polar coordinates with uniformly increasing number of lights with radius (best)
Halton-like subrandom sequence for incremental sampling (as in radiosity)
All these arrangements share a problem: they are harder to use with adaptive sampling (neighbouring lights are not easy to find, and they do not come in fours, so oversampling is difficult). Circular lights would need a separate adaptive algorithm. The easiest solution would probably be to triangulate and subdivide the triangles. Triangulation could also be extended to the concept of mesh lights.
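For the Halton-like option, a minimal sketch (Python, purely illustrative; POV-Ray itself is C++, and the function names here are assumptions) of a subrandom 2-D sequence mapped into the unit disk:

```python
import math

def halton(i, base):
    """Radical-inverse (van der Corput) value of index i in the given base."""
    f, r = 1.0, 0.0
    while i > 0:
        f /= base
        r += f * (i % base)
        i //= base
    return r

def halton_disk(n):
    """First n points of the (2, 3) Halton sequence, mapped into the unit
    disk with the area-preserving polar transform r = sqrt(u)."""
    pts = []
    for i in range(1, n + 1):
        u, v = halton(i, 2), halton(i, 3)
        r, theta = math.sqrt(u), 2.0 * math.pi * v
        pts.append((r * math.cos(theta), r * math.sin(theta)))
    return pts
```

Because the sequence is incremental, sampling can stop after any number of lights, which is what makes it attractive for progressive refinement, as in radiosity.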
A lazy solution is to keep the current distribution of lights and calculate dimming factors to make the illumination uniform. However, the lights would still have higher resolution at the diagonals, and dimming wastes accuracy (an optimal sampling solution gives all lights equal weight).
If I'm not mistaken, the density of lights goes as 1/cos(phi)^2, where phi is the polar angle (tan(phi) = y/x), if abs(x) > abs(y), and as 1/sin(phi)^2 if abs(x) < abs(y), making the light density at the diagonals precisely double the density along the horizontal and vertical axes. A "patch" could therefore be applied by multiplying each light's intensity by

max(|x|, |y|)^2 / (x^2 + y^2)

and then normalizing all the weights so that the sum of the intensities remains the same as expected.

However, I'm not entirely sure about this; it was only a quick calculation.
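The proposed dimming factor can be checked with a small sketch (Python here rather than POV-Ray source; the function names are illustrative): the weight comes out as 1 on the axes and 0.5 on the diagonals, compensating for the doubled sample density there.

```python
def correction_weight(x, y):
    """Dimming factor max(|x|, |y|)^2 / (x^2 + y^2): counteracts the
    1/cos(phi)^2 (or 1/sin(phi)^2) density of the deformed grid."""
    if x == 0 and y == 0:
        return 1.0  # the centre sample needs no correction
    return max(abs(x), abs(y)) ** 2 / (x * x + y * y)

def corrected_weights(points):
    """Per-light weights, rescaled so the total intensity is unchanged
    (the uncorrected scheme gives every light a weight of 1)."""
    w = [correction_weight(x, y) for x, y in points]
    scale = len(points) / sum(w)
    return [wi * scale for wi in w]
```

On the axis (1, 0) the raw weight is 1; on the diagonal (1, 1) it is 0.5, matching the "precisely double" density noted above.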
Thanks for your feedback, Simon. Here are a few more thoughts of mine:
My idea would therefore be to just fix the formula that deforms the square into a circle, so that the points in each of the concentric circles end up spaced evenly. I guess this will involve some trigonometry, which may degrade performance, but if that turns out to be too costly we could simply precompute the deformation factors (sharing the table among area lights with the same number of subdivisions).
Fixing the formula would be very difficult, if not practically impossible. It probably won't be just simple trigonometry (I had a similar problem before, and it wasn't pretty). To make the density uniform, the points have to be rearranged in both dimensions (lights from the diagonals have to be pushed apart sideways and radially at the same time), and finding a useful analytical formula is probably not worth the effort.
If you want to keep the quadrilateral adaptive algorithm, there are a few options:
Both solutions would require precomputing the points and the arrays of indices for each quadrilateral, because the indexing is very difficult (basically you are constructing a mesh). This can be done at parse time (storing around 30 vectors and a couple of integers for each light is not too much).
Personally, I would choose the construction with one point in the middle, N points in the next layer, 2N points in the third layer, then 3N, 4N, ... like this:
http://www.photodictionary.com/photofiles/list/1762/2325pasta_strainer.jpg (just not on straight hexagonal lines but on circles)
If you can then group the triangles into quads and store them in an array, it's done (but not easy).
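The strainer-style layout above can be sketched as follows (Python, illustrative only; the ring staggering and the radius assignment are my assumptions, not part of the proposal):

```python
import math

def layered_disk(N, layers):
    """One centre point, then N, 2N, 3N, ... points on successive
    concentric circles of evenly increasing radius."""
    pts = [(0.0, 0.0)]
    for k in range(1, layers + 1):
        count = k * N
        r = k / layers
        offset = 0.5 if k % 2 else 0.0  # stagger alternate rings (assumption)
        for j in range(count):
            a = 2.0 * math.pi * (j + offset) / count
            pts.append((r * math.cos(a), r * math.sin(a)))
    return pts
```

This keeps the average density roughly constant, since the point count of each ring grows in proportion to its circumference.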
I'm well aware that achieving equal distances both radially and circumferentially is an absolute impossibility, but I'm just aiming at circumferentially uniform spacing. This should be rather trivial: all you need to do is interpret the square grid not as an array of NxN points, but as N/2 concentric square frames of 4(i-1) points each, where i is a frame's side length (for even N), or as (N-1)/2 such concentric square frames with one additional point in the center (for odd N). Obviously you can instead arrange the 4(i-1) points of any of the square frames in a circular fashion with equal circumferential distances, and this circumferential distance will be the same for all nesting levels; likewise, the radial distance between any such circle and the one immediately nested within will also be the same for all circles (albeit different from the circumferential distance).
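The frame-to-circle reinterpretation can be sketched like this (Python, illustrative; the radius assignment r = k/N is an assumption, and only the point counts matter for the topology argument):

```python
import math

def circular_grid(N):
    """Map an N x N grid onto concentric circles: the square frame of
    side k (k = 2, 4, ..., N for even N; k = 3, 5, ..., N for odd N,
    plus a centre point) becomes 4*(k - 1) evenly spaced lights."""
    pts = []
    if N % 2:
        pts.append((0.0, 0.0))  # odd N: one point in the centre
        sides = range(3, N + 1, 2)
    else:
        sides = range(2, N + 1, 2)
    for k in sides:
        count = 4 * (k - 1)     # a k x k square frame has 4*(k - 1) points
        r = k / N               # assumed radius for the ring
        for j in range(count):
            a = 2.0 * math.pi * j / count
            pts.append((r * math.cos(a), r * math.sin(a)))
    return pts
```

The total is still N^2 points, so the point budget of the original grid is preserved exactly.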
Those options are far from the current adaptive algorithm, which is based around the property that the mesh formed by the quadrilaterals is topologically equivalent to a simple NxN grid.
Actually, what my suggestion would result in is exactly that (for an odd number of elements per side), with N=4 (instead of N=6 as you probably have in mind).
Perfect, this is excellent (actually it's N=8); I should have seen it before: the merging of triangles is obvious and gives exactly the topology of the grid. And it actually has constant average density, which is more important than the equal radial and circumferential spacing you get in a hexagonal grid.
Now tracked on github as issue #222.