POV-Ray

The Persistence of Vision Raytracer (POV-Ray).

This is the legacy Bug Tracking System for the POV-Ray project. Bugs listed here are being migrated to our github issue tracker. Please refer to that for new reports or updates to existing ones on this system.

Attached to Project: POV-Ray
Opened by Benedikt Adelmann - 2013-03-19
Last edited by William F Pokorny - 2016-12-01

FS#278 - Implement Lens Flare Rendering

Currently POV-Ray does not support rendering lens flare effects; however, they can be simulated using a macro (include file) by Chris Colefax.

I would like to suggest adding a feature to POV-Ray to support lens effects “natively” since

  • as far as I know, the macro was designed for POV-Ray 3.1, so with each new POV-Ray version it becomes more likely that the macro no longer works properly
  • the macro does not work when rendering with radiosity, probably because it creates the lens effect using a pigment with a high ambient value (which is ignored by POV-Ray 3.7’s radiosity algorithm).

Additionally, the macro is not easy to use because

  • it needs to know the exact camera parameters (location etc.) and defines its own camera, so any important camera settings have to be stored separately for the effect to work as expected
  • it does not (actually cannot) take into account that objects may (partially) hide the lens effect
  • reflections and refractions (of light sources) cannot be combined with it properly - the user would have to calculate both the point where the reflected/refracted light source can be observed and the shape it then has due to distortion, and in more complex scenes such computations are nearly impossible in SDL.

I would suggest integrating such a lens flare rendering feature with the “looks like” mechanism you already have for light sources. Several parameters that can currently be set for the macro - including effect brightness and intensity, lens options and whether to create a flare at all - could be set for the light source.

Then POV-Ray could store the location and colour of each ray that finally intersected the “looks like” object of a light source and, once the main rendering is finished, compute from that data a partially transparent “lens flare layer” that is finally blended into the rendered image. This would avoid the problems mentioned above:

  • an object fully or partially hiding a light source’s “looks like” object would also reduce the number of pixels used to create the flare - and therefore dim the flare, up to hiding it completely
  • the same goes for reflected and/or refracted versions of the “looks like” object
  • the camera’s location and other properties would be used automatically
  • and finally, as a feature supported by POV-Ray itself, there would be neither compatibility issues nor problems like the effect not fitting together with radiosity.

Do not get me wrong, I would not expect POV-Ray to really calculate the reflections and refractions that naturally happen inside a camera lens and cause lens flares. Appropriate-looking effects can actually be created purely in 2D space (as some graphics programs demonstrate), so the work to be done would, as far as I can tell, be:

  • storing, as mentioned above, the relevant data for pixels showing “looks like” objects
  • calculating a lens flare from that data after the render has finished
  • overlaying the rendered image with the newly created lens effect (a rough sketch of this pipeline follows below).
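
A minimal sketch of how such a flare layer could be computed and overlaid, in Python with NumPy purely for illustration (the ghost offsets, Gaussian blobs and energy scaling are assumptions, not anything POV-Ray currently provides):

```python
import numpy as np

def flare_layer(shape, mask, hdr, ghost_offsets=(-0.4, 0.3, 0.7, 1.3),
                ghost_radius=12.0, gain=1e-3):
    """Build an additive lens-flare layer from the pixels that hit a
    light source's looks_like object (marked True in mask)."""
    h, w = shape
    ys, xs = np.nonzero(mask)
    if len(xs) == 0:                    # light fully hidden: no flare
        return np.zeros((h, w, 3))
    # Total energy of the visible source pixels: an occluding object
    # removes pixels from the mask and therefore dims the flare.
    energy = gain * hdr[ys, xs].sum(axis=0)
    src = np.array([xs.mean(), ys.mean()])      # source centroid
    center = np.array([w / 2.0, h / 2.0])
    yy, xx = np.mgrid[0:h, 0:w]
    layer = np.zeros((h, w, 3))
    for t in ghost_offsets:
        # Ghost spots sit on the line through the image centre, at
        # varying radial positions (a common real-lens behaviour).
        gx, gy = center + t * (src - center)
        d2 = (xx - gx) ** 2 + (yy - gy) ** 2
        blob = np.exp(-d2 / (2.0 * ghost_radius ** 2))
        layer += blob[..., None] * energy
    return layer

# Tiny synthetic example: a bright disc standing in for the pixels
# that hit a light source's looks_like sphere.
h, w = 120, 160
hdr = np.zeros((h, w, 3))
mask = np.zeros((h, w), dtype=bool)
yy, xx = np.mgrid[0:h, 0:w]
mask[(xx - 40) ** 2 + (yy - 30) ** 2 < 25] = True
hdr[mask] = 50.0                        # HDR-bright source pixels
result = hdr + flare_layer((h, w), mask, hdr)   # overlay the layer
```
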
Simon commented on Tuesday, 19 March 2013, 20:22 GMT

In my opinion, this feature does not belong in a raytracer. Lens flare is a post-processing effect even in physical reality, as it happens inside the camera. A raytracer should map the rays that come to the camera from the scene; it is not supposed to simulate imperfections in the imaging system. That "pass" is completely independent of the rendering pass and can be done externally (as you said, it's a 2D effect). There are numerous tools for doing that (commercial or free; you can probably create a short imagemagick script to do this automagically), and if you have extra wishes, it's easier to write a short external application that takes an image than to hard-code it into povray. Faking it with a povray macro is even worse, and unnecessary.

Here, a UNIX philosophy should be observed: an application should do one thing, and do it well.

Arguments against it:

  • Every lens in the world effectively has a different lens flare effect, so no matter how many parameters you introduce, you can only get a generic fake approximation to the lens flare.
  • Lens flares happen when a light source and highlights are orders of magnitude brighter than everything else, so that even after refractions inside the lens, you still get a bright effect. The correct way of doing it is to render in HDR and do post-processing on the resulting image. If the light source is off-screen, you can render a wider angle to get the highlight inside the frame.
  • The lens flare is not the only optical effect: there is also light bloom, where the highlights "bleed" into the shadow parts of the image. This is also easily done in post-processing (a short sketch follows this list) and should not be put in a raytracer (it's not even ray optics - bloom is caused by wave properties of the light).
  • A "glow" around a lightsource is a combination of both effects (lens flare and light bloom).
  • Lens flare belongs in the same set of effects as vignetting, soft focus, chromatic aberrations and other similar effects.
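
A minimal sketch of such a bloom pass in Python (NumPy/SciPy), purely illustrative - the threshold, blur radius and strength are arbitrary assumptions:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def bloom(hdr, threshold=1.0, sigma=8.0, strength=0.5):
    """Classic HDR bloom: keep only the brightness that exceeds the
    displayable range, blur it, and add it back onto the image."""
    overshoot = np.clip(hdr - threshold, 0.0, None)
    halo = gaussian_filter(overshoot, sigma=(sigma, sigma, 0))
    return hdr + strength * halo

# Synthetic test: one very bright pixel "bleeds" into its surroundings.
frame = np.zeros((64, 64, 3))
frame[32, 32] = 40.0                    # far above the 0..1 display range
ldr = np.clip(bloom(frame), 0.0, 1.0)   # clamp only at the very end
```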

You can always "build" a camera using povray primitives and povray will calculate the "flare" for you (won't be very realistic, as povray optics are an approximation to the real thing).

Even focal blur does not seem to belong in a raytracer, but it unfortunately requires shooting additional rays (they may pass around a corner and see objects that are otherwise obscured), although this is strictly necessary only for macro shots. For blurring faraway objects, per-pixel depth information would technically be enough. Both focal blur and motion blur can easily be implemented externally (render many times with different camera configurations and object positions, then overlay the results); having them built in is more of a convenience than anything else.
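
A toy version of that "render many times and overlay" idea in Python/NumPy; render() below is only a stand-in for invoking POV-Ray once per sample with a jittered camera (focal blur) or shifted object positions (motion blur):

```python
import numpy as np

def render(jitter):
    """Stand-in for one POV-Ray pass; the image content depends on the
    camera/object offset passed in as jitter."""
    img = np.zeros((48, 48, 3))
    x = int(24 + 10 * jitter)           # a "moving" object
    img[20:28, x:x + 4] = 1.0
    return img

# Overlaying the passes is just averaging them.
samples = [render(j) for j in np.linspace(-1.0, 1.0, 16)]
blurred = np.mean(samples, axis=0)
```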

Benedikt Adelmann commented on Tuesday, 19 March 2013, 22:44 GMT

as you said, it's a 2D effect

Well, it is not a 2D effect in itself, but it can be approximated this way. However, the same can be said about several other ray-tracing tasks, if not all of them: you could probably approximate or imitate them all in 2D. You use a 3D renderer because it automates this.

There are numerous tools for doing that (commercial or free; you can probably create a short imagemagick script to do this automagically), and if you have extra wishes, it's easier to write a short external application that takes an image than to hard-code it into povray. Faking it with a povray macro is even worse, and unnecessary.

All that is correct for a single image. If you are rendering an animation, however, maybe with 1,000 frames, you would need a highly intelligent post-processing program that would have to be capable of reconstructing the three-dimensional structure of the scene and know the location of the light source in it (and those of its reflections etc., as mentioned!) to place the effect correctly. You cannot do this by hand for 1,000 images, or at least you cannot do it well.

you can only get a generic fake approximation to the lens flare

So what? When you see a lens flare, can you tell right away whether it is natural or artificial? The point is not simulating one specific existing lens but creating something the person who sees the image considers fitting.

The correct way of doing it is to render in HDR

You can actually do this in POV-Ray already. Colour values are not restricted to the range 0..1 per component; you can specify any floating-point value (even negative ones if you desire).

there is also light bloom

A sensible algorithm could even do that at the same time the lens effects are created. You would just use the brightness of each pixel (before it is trimmed down to 0..1 and then 0..255 or whatever) and perform some kind of, say, convolution on them.

Lens flare belongs in the same set of effects as vignetting, soft focus, chromatic aberrations and other similar effects.

So Photon Mapping would be a superfluous feature as well? It is done to get realistic caustics, not far away from the chromatic aberrations you mentioned.

You can always "build" a camera using povray primitives and povray will calculate the "flare" for you

Apart from wondering about the additional tracing time and recursion depth this would take, I do not see any point in doing so as long as it can be done in a more intelligent way. You could also argue that rendering could be made physically more realistic by computing frequency spectra, energy and so on of each ray, doing calculations on those, and finally re-converting them into RGB colours using the response curves of the human eye's cones.

POV-Ray's intention is to create realistic-looking images, isn't it? So, in a nutshell, I would indeed suggest adding a lens flare feature:

  • POV-Ray obtains information while rendering that an external program would lack but would need to create fitting lens flares
  • A raytracer should automate image generation whenever it has anything to do with three-dimensionality
    • which it has here, because you need 3D information
  • Just as well, OpenOffice could have said: PDF export - who needs that? You can just as easily do it with an external ghostscript program. Exporting PDFs is not a text processor's job.

You will probably never get a program suitable for everything, so post-processing will always be needed, yet any program can become as capable as possible within its domain. Which, for POV-Ray, is "creating stunning three-dimensional graphics" (cf. http://www.povray.org/). I'm sorry if this sounded a bit rude, but I think in this situation just sitting back and stating "not my business" is not the right solution.

Simon commented on Wednesday, 20 March 2013, 00:08 GMT

I didn't mean to argue with you; I know exactly what you want. Optical effects really improve realism, and I like them.

I just mean that because these effects are essentially 2D raster effects, they fit into a different workflow, and it is more natural (and flexible) to do them with a more suitable external program. Even if this gets implemented in povray, it would be a post-processing stage, which you can easily split away from the raytracing stage. So in the end, you have the same thing as with an external program, only the two are "glued" together and therefore less modular and customizable. Right now, if I want a lens flare effect, I can write my own application in a few hours that does what I want; I don't have to wait for a new version of povray, because this post-processing stage is completely unrelated to the raytracing stage.

All that is correct for a single image. If you are rendering an animation, however, maybe with 1,000 frames, you would need a highly intelligent post-processing program that would have to be capable of reconstructing the three-dimensional structure of the scene and know the location of the light source in it (and those of its reflections etc., as mentioned!) to place the effect correctly. You cannot do this by hand for 1,000 images, or at least you cannot do it well.

For a lens flare, you don't need 3D information about lights. You just need HDR output from povray (I know povray works with a high dynamic range internally; I just meant that in order to use an external program to render the flares, you need to EXPORT the image in HDR, which povray also supports). This way you get a flare from every highlight (even from glare on shiny objects, or caustics). This would work just as well for animations - when lights and objects move, the highlights will move, and the lens flares with them (for most lenses, they stay in the same direction as the highlight, but at a different radial distance from the center).
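
To illustrate this geometric point, a minimal Python/NumPy sketch (the ghost offsets are made-up values; in a real tool the HDR frames would come from povray's OpenEXR or Radiance HDR output): once the highlight is located in each frame by brightness alone, the flare ghosts follow it through the animation with no 3D scene information at all.

```python
import numpy as np

def brightest_pixel(hdr):
    """Locate the dominant highlight in one HDR frame by luminance."""
    lum = hdr @ np.array([0.2126, 0.7152, 0.0722])   # Rec. 709 weights
    y, x = np.unravel_index(np.argmax(lum), lum.shape)
    return np.array([x, y])

def ghost_positions(highlight, center, offsets=(-0.4, 0.3, 0.7, 1.3)):
    """Flare ghosts sit on the line from the highlight through the
    image centre, at assumed radial offsets."""
    center = np.asarray(center, dtype=float)
    return [center + t * (highlight - center) for t in offsets]

# Three synthetic "frames" with a light source moving across the image;
# the computed ghost positions move with it automatically.
center = (80, 60)
for fx in (20, 50, 110):
    hdr = np.zeros((120, 160, 3))
    hdr[30, fx] = 100.0                  # one HDR-bright highlight
    print(ghost_positions(brightest_pixel(hdr), center))
```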

A sensible algorithm could even do that at the same time the lens effects are created. You would just use the brightness of each pixel (before it is trimmed down to 0..1 and then 0..255 or whatever) and perform some kind of, say, convolution on them.

Exactly - this is why it is unrelated to raytracing. You can apply blooming to any HDR image; it does not need to come from povray. Why complicate povray with something that a one-line imagemagick script does just as well (and for as many images as you want)?

So Photon Mapping would be a superfluous feature as well? It is done to get realistic caustics, not far away from the chromatic aberrations you mentioned.

No, photon mapping happens in 3D and interacts with materials in the scene. I meant chromatic aberration in the camera, not in the scene.

There is development being done to introduce native handling of spectral raytracing to povray - it will improve many things.

In short, I'm just trying to convey the basic idea that things are better when they are modular and each component does its own job well. I think this is in the spirit of povray development, and the main developers will probably agree with me. This is just like sewing jeans and underpants together: it works, but then you can't replace the underpants without replacing the jeans. Modularity is your friend.

If you get a lot of developers interested in developing this (I would even do it myself if I got motivated enough), you would probably still end up with two applications - povray and, let's say, afterray, which would just take the output from povray and apply effects to it, as a plug-in or something :) Besides, this way you could use "afterray" separately, which would make it more useful.

Admin
Christoph Lipka commented on Wednesday, 20 March 2013, 01:11 GMT

[Edit - I'd like to note that I wrote this before reading Simon's most recent comment]

Both of you are making valid points here, while at the same time getting it all wrong.

Let me first start out with what should and should not belong in a raytracer:

The objective of raytracing is image synthesis, that is, generating realistic-looking 2D images out of nothing but thin air and a formal description of a 3D scene. As such, the core functionality is rather clear-cut: if an effect cannot be added later without consulting the 3D scene's description, it definitely belongs in the raytracer. By that rule, photon mapping is definitely in: you can't add caustics to an existing synthesized 2D image via some simple post-processing rules. Lens flares, on the other hand, are a different matter: provided the synthesized 2D image contains enough information - specifically, if it is not artificially clipped to a brightness range of 0..1 - it is trivial to add lens flares (and also bloom) to the image, boiling down to applying a series of vignette-like kernels to each pixel. It would actually be an unnecessary complication to implement it in any different way.

It may be worth noting that the core functionality also includes effects that could be achieved in a rather straightforward manner by generating multiple 2D images and merging them in a post-processing step, at least as long as there is some performance gain to be had by integrating them into the render engine. Anti-aliasing, for instance, is possibly the most trivial and obvious such case: while the same effect could easily be achieved by just rendering the image multiple times with a slight offset in the camera position, this would mean indiscriminately oversampling each and every pixel, while oversampling is only actually needed at edges. Focal blur and motion blur fall into this category as well.
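
A toy sketch of that distinction in Python/NumPy (shade() is a stand-in for tracing one ray; the edge threshold and sample counts are arbitrary assumptions): only pixels whose neighbours differ noticeably receive extra samples, instead of oversampling the whole frame.

```python
import numpy as np

def shade(x, y):
    """Stand-in for tracing one ray: a hard-edged disc on [0,1]^2."""
    return 1.0 if (x - 0.5) ** 2 + (y - 0.5) ** 2 < 0.1 else 0.0

n, extra = 64, 4                       # image size, sub-samples per axis
img = np.array([[shade((x + 0.5) / n, (y + 0.5) / n) for x in range(n)]
                for y in range(n)])
# Flag pixels whose right or bottom neighbour differs noticeably.
edges = np.zeros((n, n), dtype=bool)
edges[:, :-1] |= np.abs(np.diff(img, axis=1)) > 0.1
edges[:-1, :] |= np.abs(np.diff(img, axis=0)) > 0.1
# Super-sample only the flagged pixels instead of the whole frame.
for y, x in zip(*np.nonzero(edges)):
    sub = [shade((x + u) / n, (y + v) / n)
           for u in np.linspace(0.25, 0.75, extra)
           for v in np.linspace(0.25, 0.75, extra)]
    img[y, x] = np.mean(sub)
```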

However, while the core set of features that should be included is very clear-cut, it does not necessarily mean that anything else must be excluded. The Unix way of thinking might demand this, but POV-Ray is not specifically designed to be a Unix program; to the contrary, it is intended to also be useful in an environment like Windows where it is common to have large applications that don't easily lend themselves to automated operation in a custom toolchain. As such, it might be reasonable to implement some non-core functionality as well.

This begs the question of where to draw the line between what POV-Ray should and shouldn't do, especially as far as post-processing steps are concerned. This is a matter of taste, but personally I'd follow this simple rule: If the typical way of generating a 2D image of the real world (that is, taking a photograph) does it, then it is reasonable to ask for POV-Ray to do it as well. This, of course, does include lens flares. (Which doesn't mean that it will automatically be implemented; but a request for such a feature shouldn't be automatically dismissed just because it could also be achieved with a post-processing tool at no extra performance cost.)

As for how to implement this feature, I already mentioned that I believe it should be a separate post-processing step, something to put between the actual rendering process and the generation of the output file, as this is both the most straightforward and most universal approach. There is really no information needed from the 3D scene, just the plain knowledge of which pixels are particularly bright. Of course this means you'll need to add looks_like objects to your light sources to get them to show up as bright pixels in the first place, but that is easily done.

Benedikt Adelmann commented on Wednesday, 20 March 2013, 10:08 GMT

If the typical way of generating a 2D image of the real world (that is, taking a photograph) does it, then it is reasonable to ask for POV-Ray to do it as well.

That's why I opened this feature request.

a separate post-processing step, something to put between the actual rendering process and the generation of the output file

That's what I initially suggested, including the use of the looks_like object, so I can only agree with you.

In short, I'm just trying to convey the basic idea that things are better when they are modular and each component does its own job well.

Of course, but that does not necessarily mean that one component must not be able to invoke another automatically. Providing an SDL or command-line option to chain a post-processing step like lens flare creation in before the final output file is written would also be sufficient.

I have explained my opinion and you have explained yours, so all I can do now is wait for your decision on whether to implement lens effects or not.

Admin
Chris Cason commented on Wednesday, 20 March 2013, 11:15 GMT

Have you considered using the mesh camera with a mesh that applies a flare?

Benedikt Adelmann commented on Wednesday, 20 March 2013, 14:09 GMT

If I understand the documentation correctly, a mesh camera can be used to add a special kind of distortion to the image (though the documentation says it can also be used for illumination calculations), so I do not see how this would work. (Admittedly, I have not tried the mesh camera projection so far.)
Could you give a small example of how you think a flare could be created this way?

Admin
Chris Cason commented on Wednesday, 20 March 2013, 16:26 GMT

The mesh camera allows you to instantiate mesh elements slightly offset from the ray origin. You can also instantiate a copy of a mesh such that rays go through it. While I haven't looked at the code for a while, I suspect that if said elements were transmissive but their transmission/filter varied according to position, this would have the effect of altering the final image.

See http://www.ignorancia.org/en/index.php?page=mesh-camera for some examples of what can be done with this camera type.

Benedikt Adelmann commented on Thursday, 21 March 2013, 11:27 GMT

I have checked out the Ignorancia examples, yet I doubt that this method could be more than a stopgap. Creating the required mesh would be difficult, I think, and even if it were moved out to an include file, parsing times would probably become unreasonably long (as happened with the example scenes). Besides, this solution seems to require an extra set of camera location and direction calculations, which is hardly handier than employing the macro I originally mentioned.

William F Pokorny commented on Thursday, 01 December 2016, 14:50 GMT

Now tracked on github as issue #165.
