|
333 | User interface | Feature Request | 3.70 release | Very Low | Low | Make text in "about" alt+b dialog selectable with the m... | Tracked on GitHub | |
|
Task Description
When you press Alt+B or open the “About” dialog in the Help menu, it displays some text including the software version number and the list of contributors.
It would be nice to be able to select and copy this text using the mouse. Sometimes in the newsgroup I have to tell people what version of POV-Ray I am using, and typing the version number can be a pain.
|
|
335 | Parser/SDL | Possible Bug | 3.70 release | Very Low | Low | macro works in variable but not in array | Tracked on GitHub | |
|
Task Description
This doesn’t work:
#declare pavement_object = array[2] {
object {trash_can_macro() scale 3/4 translate -x * 1/2},
object {potted_plant_macro(_CT_rand2) scale 3/4 scale 3/2 translate -x * 1/2}
}
This does work:
#declare trash_can_object = object {trash_can_macro()};
#declare potted_plant_object = object {potted_plant_macro(_CT_rand2)};
#declare pavement_object = array[2] {
object {trash_can_object scale 3/4 translate -x * 1/2},
object {potted_plant_object scale 3/4 scale 3/2 translate -x * 1/2}
}
Logically, I cannot see a reason for this to be so.
|
|
4 | Subsurface Scattering | Unimp. Feature/TODO | 3.70 beta 32 | Very Low | Low | Integrate Subsurface Scattering with standard lighting ... | Tracked on GitHub | |
Future release |
Task Description
Subsurface Scattering still uses its own rudimentary code to compute illumination from classic light sources; this must be changed to use the standard light source & shadow handling code, to add support for non-trivial light sources (e.g. spotlights, cylindrical lights, area lights), partially-transparent shadowing objects etc.
|
|
6 | Subsurface Scattering | Unimp. Feature/TODO | 3.70 beta 32 | Defer | Low | Integrate Subsurface Scattering with Photons | Tracked on GitHub | |
Future release |
Task Description
Subsurface scattering must be made photon-aware.
|
|
222 | Geometric Primitives | Definite Bug | 3.70 RC3 | Very Low | Low | incorrect render of CSG merge with radiosity | Tracked on GitHub | |
Future release |
Task Description
The problem arises when I am trying to trace a radiosity scene without conventional lighting that has a CSG merge object. There are coincident surfaces, but these surfaces are first merged, then the texture is applied. The texture is a simple solid-color, non-translucent pigment, with default normal, default finish, etc.
The problem persists when adding antialiasing, changing the resolution, changing the camera viewpoint, etc.; when I replace the merge with a union, the problem disappears.
The scene was checked on two different machines with different versions of POV-Ray:
Gentoo Linux, kernel 2.6.39-r3, i686 Intel(R) Xeon(TM) CPU 2.80GHz GenuineIntel, 2G RAM (this is Dell PowerEdge 2650 server with 2 dual-core Intel Xeon MP processors); Persistence of Vision™ Ray Tracer Version 3.7.0.RC3 (i686-pc-linux-gnu-g++ 4.5.3 @ i686-pc-linux-gnu)
Gentoo Linux, kernel 2.6.37-r4, x86_64 AMD Athlon™ X2 Dual Core Processor BE-2350, 2G RAM (non-branded machine); Persistence of Vision™ Ray Tracer Version 3.6.1 (x86_64-pc-linux-gnu-g++ 4.4.4 @ x86_64-pc-linux-gnu)
(The scene was adapted slightly to render with 3.6: “emission” was changed to “ambient” and gamma “srgb” was replaced with “2.2”.)
Both machines generate similar images.
The attachment is an archive containing sources of minimal scenes with these problems, and sample pictures I generated from them on my machines.
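For reference, a minimal sketch of the kind of setup described - two boxes sharing a face, merged first and then textured, lit only by radiosity. This is an illustrative reconstruction, not the attached scene:
#version 3.7;
global_settings { assumed_gamma 1 radiosity { count 100 } }
camera { location <0, 1.5, -4> look_at <0, 0.5, 0> right x*image_width/image_height }
sky_sphere { pigment { rgb <0.8, 0.85, 1.0> } }
plane { y, 0 pigment { rgb 0.7 } }
merge {                                // replacing merge with union reportedly avoids the artifact
  box { <-1, 0, -0.5>, <0, 1, 0.5> }
  box { < 0, 0, -0.5>, <1, 1, 0.5> }   // coincident with the first box at the x=0 face
  pigment { rgb <0.9, 0.3, 0.2> }      // plain opaque pigment, default normal and finish
}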
|
|
230 | User interface | Feature Request | 3.70 RC3 | Very Low | Low | Improved handling of animations | Tracked on GitHub | |
|
Task Description
From October to mid-November, I produced a 5-minute video, mainly with POV-Ray.
Here is part of the video.ini file:
#
# scenes based on games.pov
#
# game-pat - time scale 1000 - 30 seconds
#Initial_Frame=450
#Final_Frame=899
#Initial_Clock=-12500
#Final_Clock=17500
# game-lost - time scale 1000 - 22 seconds
#Initial_Frame=0
#Final_Frame=659
#Initial_Clock=2000
#Final_Clock=24000
# game-lost - time scale 3000 - 12 seconds - fast through the night
#Initial_Frame=0
#Final_Frame=359
#Initial_Clock=24000
#Final_Clock=60000
# book-cover
#clock=64000
# game-sunrise - time scale 1000 - 35 seconds
#Initial_Frame=0
#Final_Frame=1049
#Initial_Clock=60000
#Final_Clock=95000
Now imagine all the problems:
One computer crashes often because of thermal problems. Last picture rendered: 487.
Now calculate the settings so that this computer continues the task at picture 487.
Or two computers should render a scene.
Sounds very easy: something like computer 1 renders 0..499 and computer 2 renders 500..999.
But the computers differ in speed and the pictures differ widely in computation time.
So it would be best to have:
computer 1: 0 to 999, computer 2: 999 to 0
They would meet in the middle, wherever this middle is.
So it would be much easier with:
# game-sunrise - time scale 1000 - 35 seconds
Initial_Frame=0
Final_Frame=1049
Initial_Clock=60000
Final_Clock=95000
Initial_Task=487
Final_Task=1049
Then I would not have to calculate the exact clock setting when a computer should continue a task after crashing at picture 487.
# game-sunrise - time scale 1000 - 35 seconds
Initial_Frame=0
Final_Frame=1049
Initial_Clock=60000
Final_Clock=95000
Initial_Task=1049
Final_Task=0
This would be the reverse calculation order: starting with picture 1049 and going down 1048, 1047, ...
|
|
8 | Radiosity | Unimp. Feature/TODO | 3.70 beta 32 | Defer | Low | Improve Radiosity "Cross-Talk" Rejection in Corners | Tracked on GitHub | |
Future release |
Task Description
Near concave edges, radiosity samples may be re-used at a longer distance away from the edge than towards the edge; there is code in place to ensure this, but it only works properly where two surfaces meet roughly rectangularly, while failing near the junction of three surfaces or non-rectangular edges, potentially causing “cross-talk”.
It should be investigated how the algorithm can be improved or replaced to better cope with non-trivial geometry.
|
|
264 | Photons | Unimp. Feature/TODO | 3.70 RC6 | Defer | Low | Improve precision of photon direction information | Tracked on GitHub | |
|
Task Description
In the photons map, the direction of each photon is stored as separate latitude & longitude angles (encoded in one byte each), causing the longitudinal direction component’s precision to be unnecessarily high for directions close to the “poles” (Y axis); in addition, encoded value -128 is never used. For better overall precision as well as precision homogeneity, the following scheme could be used instead:
latCode = (int)((LatCount-1) * (lat/M_PI + 0.5) + 0.5)
For each latitude code, define a specific number of encodable longitude values, LngCount[latCode] = approx. cos(lat)*pi*65536/(2*LatCount); this can be a pre-computed table, and may need slight tweaking for optimum use of the code space. Encode the longitude (-pi to +pi) into a value from 0 to (LngCount[latCode]-1) using
LC = LngCount[latCode];
lngCode = (int)(LC * (lng/(2*M_PI) + 0.5) + 0.5) % LC;
dirCode = LatBase[latCode] + lngCode;
For decoding, a simple lookup from a precomputed list of directions could be used (2^15 entries, i.e. one hemisphere, will suffice). To conserve space, direction vectors could be scaled by (2^N-1) and stored as (N+1)-bit signed integer triples rather than floating point values; due to the limited precision of the lat/long information, 8 bits per coordinate might be enough, giving a table size of 96k. A full double-precision table would require 786k instead.
|
|
44 | Radiosity | Feature Request | All | Very Low | Low | Improve Normals Handling in Radiosity | Tracked on GitHub | |
Future release |
Task Description
Currently, radiosity does not make use of the fact that perturbed normals would theoretically just require a different weighting of already-sampled rays, leading to the following issues:
Honoring normal perturbations in radiosity leads to an increased number of samples, slowing down sample cache lookup.
The increased number of samples is generated from a proportionally higher number of sample rays, slowing down pretrace even further.
Low-amplitude perturbations tend to be smoothed out; “reviving” these is only possible by increasing the general sample density.
Handling of multi-layered textures with different normal perturbations is currently poorly implemented.
As a solution, I propose to store for each radiosity sample not only the resulting illumination for a perfectly unperturbed normal, but from the same set of sample rays also compute the illumination for an additional set of about a dozen standardized perturbed-normal directions, and interpolate among these when computing the radiosity-based illumination for a particular point that has a perturbed normal.
For backwards compatibility, this method of dealing with perturbed normals in radiosity might be activated by a different value for the “normal” statement in the radiosity block, say, “normal 2”.
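A purely hypothetical sketch of how the proposed switch might be written (the value 2 is not accepted by any current version; only on/off exists today):
global_settings {
  radiosity {
    normal 2   // hypothetical: enable the proposed perturbed-normal interpolation scheme
  }
}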
|
|
41 | Other | Feature Request | 3.70 beta 32 | Very Low | Low | improve command-line parsing error messages | Tracked on GitHub | |
|
Task Description
POV-Ray 3.6, upon encountering problems when parsing command line and/or .ini file options, would quote the offending option in the error message.
POV-Ray 3.7 currently just reports that there is some problem with the command line, without providing any details. I suggest changing this, as the information may be helpful at times.
|
|
248 | Parser/SDL | Feature Request | Not applicable | Very Low | Low | Implement mechanism to compute direction of a spline | Tracked on GitHub | |
Future release |
Task Description
The SDL currently provides no way to compute the exact direction of a spline at a given location, even though mathematically this is a piece of cake: The first-order derivative of any spline section gives you the “speed” as a vector function, and is trivial to compute for polynomial splines (which are behind all spline types that POV-Ray supports); the normalized “speed” vector, in turn, gives the “pure” direction.
For exact direction/speed computations, I propose to extend the SDL invocation syntax as follows to allow for evaluating a spline’s derivative:
SPLINE_INVOCATION:
SPLINE_IDENTIFIER ( FLOAT [, SPLINE_TYPE] [, FLOAT] )
or
SPLINE_INVOCATION:
SPLINE_IDENTIFIER ( FLOAT [, FLOAT] [, SPLINE_TYPE] )
where the second FLOAT will specify the order of derivative to evaluate (defaulting to 0). In order to compute the position, direction, and acceleration of an object traveling along a certain spline, one could then for instance use:
#declare S = spline { ... }
#declare Pos = S(Time);
#declare VSpeed = S(Time,1);
#declare VAccel = S(Time,2);
#declare Dir = vnormalize(VSpeed);
#declare Speed = vlength(VSpeed);
#declare AccelDir = vnormalize(VAccel);
#declare GForce = vlength(VAccel) / 9.81;
Alternatively, a mechanism may be devised to create a spline representing another spline’s derivative; however, it would be debatable whether the syntax should be parameter-like (being an added information that could be overridden again when creating other splines from such a derived spline), or operation-like (converting the spline), and in the latter case how it should affect spline type (and consequently control points); so the spline invocation parameter approach might be more straightforward, with less potential surprises for the user.
|
|
334 | Texture/Material/Finish | Feature Request | 3.70 release | Very Low | Low | HLS colors | Tracked on GitHub | |
|
Task Description
It would be nice to be able to specify colors in HLS as well as RGB.
Currently, you can use a macro to convert individual colors. But this does not work in color_maps where you want smooth gradations/interpolations between two or several colors.
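For illustration, a sketch of the current macro-based workaround; it assumes the CHSL2RGB macro from colors.inc (hue in degrees, saturation and lightness in 0..1) and the #for directive, and shows why smooth HLS interpolation has to be faked by pre-sampling many RGB entries:
#include "colors.inc"

// a single color is straightforward to convert
#declare C_SkyBlue = CHSL2RGB(<210, 0.8, 0.6>);

// a smooth HLS gradation must be approximated by many pre-converted RGB entries
#declare Steps = 16;
#declare P_HueRamp = pigment {
  gradient x
  color_map {
    #for (I, 0, Steps)
      [ I/Steps color CHSL2RGB(<120*I/Steps, 1, 0.5>) ]   // hue runs 0..120 in HLS space
    #end
  }
}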
|
|
263 | Parser/SDL | Feature Request | 3.70 RC6 | Very Low | Low | Functions and patterns for finish variations | Tracked on GitHub | |
|
Task Description
The pigment {} and normal {} blocks allow spatial variation of color, transparency, and the normal map. The specular parameter, on the other hand, is a fixed scalar. This rules out many possibilities. For instance, specularity could vary in space (speckles of oil or water on a surface, worn-out finish, specularity reduced where the pigment transparency increases) and could have color components. With the current settings, the light’s color is simply multiplied by the scalar specified by “specular”, whereas multiplying each component by a different color could create diverse effects (the “metallic” keyword already acts similarly to duplicating the specular color from the pigment). The syntax could be exactly the same as for the pigment (all the patterns, color maps, image maps and functions would apply, allowing reuse of most of the code).
The effect can currently be partially faked with patterned textures, but it requires very complex code, and the lack of layering of patterned textures makes it difficult to vary the specularity and pigment separately.
In a similar way, roughness and brilliance could also vary in space.
Doing the same for varying reflectivity would be more difficult, as it has angular dependence and the possibility of Fresnel calculation, but it could at least be a full color instead of a simple scalar multiplier. For instance, having a blue surface that reflects only the red component of the light should not be impossible.
I think at least part of this functionality actually makes the scene description language more uniform and self-consistent.
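As an illustration of the partial workaround mentioned above, a sketch of a patterned texture whose entries differ only in finish, approximating spatially varying specular (the identifiers are made up for this example):
#declare T_WornFinish = texture {
  bozo
  texture_map {
    [ 0.0 pigment { rgb <0.45, 0.45, 0.5> } finish { specular 0.8 roughness 0.01 } ]
    [ 1.0 pigment { rgb <0.45, 0.45, 0.5> } finish { specular 0.05 } ]
  }
  scale 0.2
}
sphere { 0, 1 texture { T_WornFinish } }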
|
|
79 | Source code | Feature Request | 3.70 beta 35a | Very Low | Low | Full-Featured Test-Scene to check the correctness of po... | Tracked on GitHub | |
Future release |
Task Description
Hi,
It would be nice if there were a test scene (not a benchmark) which has high coverage of the POV-Ray source and can be used for correctness validation of POV-Ray. It should produce an image which can be compared to a golden reference image.
It may also be possible to create a regression test suite which does automatic comparison of the render results.
|
|
301 | Other | Definite Bug | 3.70 RC7 | Very Low | Low | Fallback to default image size causes wrong values to b... | Tracked on GitHub | |
|
Task Description
When resolution is not specified (neither via POVRAY.INI nor via QUICKRES.INI nor via command line or custom .ini file), random values are displayed for image resolution in the Image Output Options message output. (The actual render will be performed at the default size of 160×120 pixels though.)
|
|
127 | Parser/SDL | Feature Request | 3.70 beta 37a | Very Low | Low | Expandable arrays | Tracked on GitHub | |
Future release |
Task Description
Currently, arrays are of a fixed size: you can’t add or remove items to/from an array. I would like arrays to be expandable, with no fixed, pre-determined size.
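For comparison, a rough sketch of the manual copy-and-grow workaround this request would make unnecessary; the macro name is made up, and it relies on the documented ability of macros to modify identifier parameters via #declare:
#macro ArrayPush(Arr, Item)
  #local N = dimension_size(Arr, 1);
  #local Tmp = array[N + 1];
  #local I = 0;
  #while (I < N)
    #local Tmp[I] = Arr[I];
    #local I = I + 1;
  #end
  #local Tmp[N] = Item;
  #declare Arr = Tmp;   // the parameter is an identifier, so the caller's array is replaced
#end

#declare Values = array[1] { 10 };
ArrayPush(Values, 20)
ArrayPush(Values, 30)   // Values now holds 10, 20, 30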
|
|
311 | User interface | Possible Bug | 3.70 release | Very Low | Low | Elapsed time error on very long renders | Tracked on GitHub | |
3.71 release |
Task Description
On a very long render, around day 24, the elapsed time display becomes incorrect, showing 4294967272d 4294967272h 4294967272m 4294967272s.
Found on Windows 7 64-bit and reproduced on Windows 7 32-bit. NOT reported on other platforms.
|
|
310 | Editor | Feature Request | 3.70 RC7 | Very Low | Low | Editor should remember bookmarks | Tracked on GitHub | |
|
Task Description
Currently the editor only remembers the cursor positions of the loaded files when starting a new POV-Ray session. It would be friendlier to also remember whether the window was split, as well as the bookmarks.
|
|
183 | Texture/Material/Finish | Possible Bug | 3.70 beta 40 | Very Low | Low | cutaway_textures broken with child unions | Tracked on GitHub | |
Future release |
Task Description
When using cutaway_textures in a CSG object that has union children, results are not as expected: surfaces in the union children that have no explicit texture are rendered with the default texture instead. This is not the case for e.g. difference children.
Example:
#default { texture { pigment { rgb 1 } } }
camera {
right x*image_width/image_height
location <0,1.5,-4>
look_at <0,1,0>
}
light_source { <500,500,-500> color rgb 1 }
#declare U = union {
sphere { <0,-0.1,-1>, 0.3 }
sphere { <0, 0.1,-1>, 0.3 pigment { color red 1 } }
}
intersection {
sphere { <0,0,0>, 1 pigment { color green 1 } }
object { U }
cutaway_textures
rotate y*90
}
When declaring U as an intersection instead, the results are as expected, with the surface of the first sphere in U being rendered with the texture defined in the outer intersection.
|
|
256 | Texture/Material/Finish | Feature Request | 3.70 RC6 | Very Low | Low | CSG texturing modes | Tracked on GitHub | |
|
Task Description
At times, the current method of specifying texture for CSG components and compounds is restricting. The issue pops up now and then, see e.g.
http://news.povray.org/povray.pov4.discussion.general/thread/%3Cweb.4799def8e1857b77c150d4c10%40news.povray.org%3E/
http://news.povray.org/povray.general/thread/%3Cweb.4fc892634f065c00e32b83540@news.povray.org%3E/
http://news.povray.org/povray.general/thread/%3Cweb.5073e9f7dae1fbb2d97ee2b90%40news.povray.org%3E/
There should be a new CSG option “texture_mode” or similar, which could take one of the following values:
preserve (the current behavior)
cutaway (the current behavior when specifying cutaway_textures)
override (replace all individual textures with the compound texture)
layer (layer the compound texture over the existing textures)
and possibly, more involved
modify/merge: if both element and compound textures are simple, i.e. not layered or mapped, override all default values of the element textures with the values from the compound texture. The idea would be to, e.g., have the elements already pigmented but then apply common normal or finish properties.
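A purely hypothetical sketch of how the proposed option might look in a scene (the texture_mode keyword does not exist today):
union {
  sphere { -0.6*x, 1 pigment { rgb <1, 0, 0> } }
  sphere {  0.6*x, 1 }                  // no texture of its own
  texture_mode layer                    // hypothetical keyword from this request
  texture { pigment { rgbt <1, 1, 1, 0.7> } finish { specular 0.4 } }
}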
|
|
302 | Other | Possible Bug | 3.70 RC7 | Very Low | Low | confusing error message when .ini file cannot be parsed | Tracked on GitHub | |
|
Task Description
When a command-line parameter in an .ini file cannot be parsed (such as “+a.3”), POV-Ray reports a “Problem with setting”, quoting the command line, rather than indicating that the problem occurred in an .ini file. This leads the user to think that the problem is with the command line itself, unnecessarily confusing him.
|
|
229 | Image format | Feature Request | 3.70 RC3 | Very Low | Low | Clock value into EXIF data for PNG | Tracked on GitHub | |
|
Task Description
The best time for a picture...
I set the time of day, and thus the position of the sun, via “clock=”.
Normally I document my source very well, but this time I forgot the clock setting for the picture of my book cover.
So I would find it very practical to put the clock value and other render settings into the EXIF data of the picture.
|
|
275 | Light source | Definite Bug | 3.70 RC7 | Very Low | Low | circular area lights exhibit anisotropy | Tracked on GitHub | |
Future release |
Task Description
Circular area lights exhibit some anisotropy, being brighter along the diagonals than on average, as can be demonstrated with the following scene:
//+w800 +h800
#version 3.7;
global_settings{assumed_gamma 1}
plane{-z,-10 pigment{rgb 1} finish{ambient 0 brilliance 0}}
disc{0,z,10000,0.5}
camera{orthographic location z look_at 10*z up y*12 right x*12}
light_source{-10*z rgb 10 area_light 10*x 10*y 257 257 adaptive 4 circular}
|
|
142 | Texture/Material/Finish | Feature Request | 3.70 beta 37a | Very Low | Low | camera_view pigment from MegaPOV | Tracked on GitHub | |
Future release |
Task Description
I probably don’t have to explain why the camera_view pigment in MegaPOV was important, but I will list some reasons anyway:
1) post-processing could be performed in-scene
2) new types of focal blur effects could be created
3) feedback fractals were possible
I’m sure there are many others, as this is one of those features that has undetermined potential!
|
|
281 | Geometric Primitives | Feature Request | 3.70 RC7 | Defer | Low | Bug in rendering of Bézier patches | Tracked on GitHub | |
Future release |
Task Description
In version 3.7.0.RC7.msvc10.win64, there is a bug in rendering Bézier patches in which four points (along one edge) are all the same point.
The rendering can be seen here: http://i.imgur.com/eq2UIXR.png [Edit: See attachment for the rendering]
As you can see, there is a visible unwanted artifact in the corner of each patch. The two patches shown are essentially the same, except with the 4×4 matrix of vertices transposed (just to demonstrate that simply transposing it didn’t fix it).
Expected rendering is a smooth surface without the artifact.
Below is the code used to render the above example.
#version 3.7;
global_settings { assumed_gamma 1.0 }
camera {
location <45, 31, -10>
look_at <40, 21, 200>
right x*image_width/image_height
}
light_source {
<660, 300, -525>
color rgb 1
}
Example 1: First point in each row is the same point
bicubic_patch {
type 1 flatness 0.001
u_steps 4 v_steps 4
<32.2168, -23.78125, 0>, <34.4968, -23.78125, 0>, <35.2168, -23.78125, -0.72>, <35.2168, -23.78125, -3>,
<32.2168, -23.78125, 0>, <34.4968, -22.10256, 0>, <35.2168, -21.57244, -0.72>, <35.2168, -21.57244, -3>,
<32.2168, -23.78125, 0>, <33.9709, -21.55577, 0>, <34.52483, -20.85299, -0.72>, <34.52483, -20.85299, -3>,
<32.2168, -23.78125, 0>, <32.30556, -21.50298, 0>, <32.33359, -20.78352, -0.72>, <32.33359, -20.78352, -3>
rotate 180*x
scale 1.4
translate <-5, 0, 0>
pigment { color <1, 0, 0> }
}
Example 2: First row is all the same point
bicubic_patch {
type 1 flatness 0.001
u_steps 4 v_steps 4
<32.2168, -23.78125, 0>, <32.2168, -23.78125, 0>, <32.2168, -23.78125, 0>, <32.2168, -23.78125, 0>,
<34.4968, -23.78125, 0>, <34.4968, -22.10256, 0>, <33.9709, -21.55577, 0>, <32.30556, -21.50298, 0>,
<35.2168, -23.78125, -0.72>, <35.2168, -21.57244, -0.72>, <34.52483, -20.85299, -0.72>, <32.33359, -20.78352, -0.72>,
<35.2168, -23.78125, -3>, <35.2168, -21.57244, -3>, <34.52483, -20.85299, -3>, <32.33359, -20.78352, -3>
rotate 180*x
scale 1.4
pigment { color <1, 1, 0> }
}
|
|
321 | Other | Definite Bug | 3.70 release | Very Low | Low | bounding threshold inconsistency | Tracked on GitHub | |
|
Task Description
User reported documentation inconsistency. Investigation led to the discovery of a bug in the setting of the current default value.
~source/frontend/renderfrontend.cpp reports the value “3” while ~source/backend/scene/scene.cpp sets a default value of “1”
Before addressing this issue, are there any thoughts as to what the default value should be?
|
|
85 | Other | Feature Request | Not applicable | Defer | Low | Aspect ratio issues | Tracked on GitHub | |
Future release |
Task Description
Background
When rendering an image, there are actually three aspect ratios involved:
1) The aspect ratio of the camera, set with the up and right vectors.
2) The aspect ratio of the rendered image, set with the +W and +H parameters.
3) The aspect ratio of the pixels in the intended target medium. While this is very often 1:1, it’s definitely not always so (anamorphic images are common in some media, such as DVDs).
The aspect ratio of the camera does not (and arguably should not, although some people might disagree) define the aspect ratio of the image resolution, but the aspect ratio of the image as shown on the final medium. In other words, it defines how the image should be displayed, not what the resolution of the image should be.
This of course means that the aspect ratio of the target medium pixels has to be taken into account when specifying the image resolution. If the target medium pixels are not 1:1 (eg. when rendering for a medium with non-square pixels, or when rendering an anamorphic image eg. for a DVD), the proper resolution has to be specified so that the aspect ratio of the displayed image remains the same as the one specified in the camera block.
This isn’t generally a problem. It usually goes like “my screen is physically 4:3, so I design my scene for that aspect ratio, but the resolution of my screen is mxn which is not 4:3, but that doesn’t matter; I just render with +Wm +Hn and I get a correct image for my screen”.
However, problems start when someone renders an image using an image aspect ratio / pixel aspect ratio combination which does not match the camera aspect ratio. By far the most common situation is rendering a scene with a 4:3 camera for a screen with square pixels but with a non-4:3 resolution (most typically 16:9 or 16:10 nowadays). The image will be horizontally stretched.
In a few cases the effect is the reverse: The scene (and thus the camera) has been designed for some less-typical aspect ratio, eg. a cinematic 2.4:1 aspect ratio, but then someone renders the image with a 4:3 resolution. The resulting image will be horizontally squeezed.
In a few cases this is actually the correct and desired behavior, ie. when you are really rendering the image in an anamorphic format (eg. for a DVD). However, often it’s an inadvertent mistake.
Some people argue that this default behavior should be changed. However, there are also good arguments why it should not be changed. Some argue that POV-Ray should have more features (at the SDL level, at the command-line level or both) to control this behavior.
There are several possible situations, which is why this issue is so complicated. These situations may include:
- The scene author doesn’t really care what aspect ratio is used to render the image, even if it means that additional parts of the scenery become visible or parts are cropped away when using a different aspect ratio than what he used.
In this case the choice of camera aspect ratio should be up to the person who renders the image, and thus selectable on the command-line. However, he should have an easy choice of how changing the aspect ratio affects the image: Should it extend the viewing range, or should it crop part of it, compared to the original?
And this, of course, while still making it possible to render for an anamorphic format.
- The author wants to support different aspect ratios, but he wants to control precisely how it affects the composition of the image. Maybe he never wants anything cropped away within certain limits, but instead the image should always be extended in whichever direction is necessary due to the aspect ratio. Or maybe he wants to allow cropping the image, but only up to a certain point. Or whatever.
In this case the choice of camera aspect ratio should be up to the author, and thus selectable in the scene file, while still allowing some changes from the command-line.
- The author designed his scene for a precise aspect ratio and nothing else, and doesn’t want the image to be rendered in any other aspect ratio. Maybe he used some very peculiar aspect ratio (eg. something like 1:2, ie. twice as tall as wide) for artistic composition reasons, and wants the image rendered with that aspect ratio, period.
Perhaps the author should be able to completely forbid the change of camera aspect ratio in the command-line.
Of course anamorphic rendering should still be supported for targets with a different pixel aspect ratio.
Possible solution
This solution does not necessarily address all the problems described above perfectly, but could be a good starting point for more ideas:
Add a way to specify in the camera block minimum and maximum limits for the horizontal and vertical viewing angles (and if any of them is unspecified, it’s unlimited). Of course for this to be useful in any way, there should also be a way to change the camera and pixel aspect ratios from the command line.
The idea with this is that the author of the scene can use these angle limits to define a rectangular “protected zone” at the center of the view, using the minimum angle limits. In other words, no matter how the camera aspect ratio is modified, the horizontal and/or vertical viewing angles will never get smaller than these minimum angles. This ensures that the image will never be cropped beyond a certain limit, only extended either horizontally or vertically to ensure that the “protected zone” always remains fully visible regardless of what aspect ratio is used.
The maximum angles can be used for the reverse: They ensure that no scenery beyond a certain point will ever become visible, no matter what aspect ratio is used. This can be used to make sure that unmodelled parts of the scene never come into view. Thus the image will always be cropped to ensure this, depending on the aspect ratio.
I’m not completely sure what should be done if both minimum and maximum angles are specified, and the user specifies an aspect ratio which would break these limits. An error message could be a possibility. At least it would be a way for the author to make sure his scene is never rendered using an aspect ratio he doesn’t want. He can use these angle limits to give some leeway how much the aspect ratio can change, to an extent, or he could even force a specific aspect ratio and nothing else (by specifying that both the minimum and maximum angles are the same).
So in short:
- Add a “minimum/maximum horizontal/vertical angles” feature to the camera block. These can be used to define a “protected zone” in the image which must not be breached by command-line options.
- Add a command-line syntax to change the camera aspect ratio (which automatically obeys the “protected zone” settings). Could perhaps give an error message if the command-line options break the limits in the scene camera.
- Add a command-line syntax to specify a pixel aspect ratio other than 1:1. This can be used to render anamorphic versions of the image on purpose (iow. not by mistake).
This can probably be made backwards-compatible in that if none of these new features are used, the behavior could be the same as currently (or at least similar).
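A purely hypothetical sketch of the camera-block half of this proposal; the limit keywords do not exist, so they are shown as comments to keep the block parseable:
camera {
  location <0, 1, -5>
  look_at  <0, 1, 0>
  right x*image_width/image_height
  angle 54                        // nominal horizontal viewing angle
  // hypothetical "protected zone" limits from this request:
  // h_angle_min 40  h_angle_max 70
  // v_angle_min 30  v_angle_max 50
}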
|
|
26 | Geometric Primitives | Definite Bug | 3.61 | Very Low | Low | Artifacts rendering a cloth which has two-side textures | Tracked on GitHub | |
Future release |
Task Description
Dear PovRay maintainers and developers, congratulations for your great RayTracer!
We think that we have found a bug while we were rendering a piece of cloth.
Two textures were defined for this piece of cloth, one for one side and one for the other side:
texture { mesh_tex0_0 }
interior_texture { mesh_tex0_1 }
these definitions in their original context.
We have found some artifacts in the final rendering, specifically near some wrinkles; please look at the attached file “render_artifacts.tga”. I have painted a big green arrow near the artifacts; you may need to zoom in to see them more clearly.
It is as though the texture of the other side was painted on the incorrect side.
Fortunately, we have a patch to fix this bug (thanks to Denis Steinemann, who made the implementation for POV-Ray 3.5; I have adapted these changes to release 3.6.1).
Although we have found this bug in the Windows and Linux 3.6.1 releases, the patch was generated in Linux (using the source code release of “povray-3.6.1”).
To apply this patch, inside the parent folder of the directory “povray-3.6.1” execute:
patch -p0 < other_side_artifacts.patch
And the “povray-3.6.1” will be patched and you will get a console output like this:
patching file povray-3.6.1/source/lighting.cpp
patching file povray-3.6.1/source/mesh.cpp
patching file povray-3.6.1/source/render.cpp
We don’t know if this “hack” is smart enough to apply to the next release, but we think that it fixes the bug (the artifacts disappear).
Best regards and thank you very much for your great RayTracer!
|
|
289 | Light source | Possible Bug | 3.70 RC7 | Very Low | Low | area_illumination with light fading and scattering medi... | Tracked on GitHub | |
|
Task Description
With reference to http://bugs.povray.org/task/46,
there is still some issue with area illumination and light fading when interacting with media.
It seems light fading is not taken into account with scattering media; emission and absorption media seem to work fine. This occurs with all scattering types.
#version 3.7;
global_settings {
ambient_light 0
assumed_gamma 1
}
camera {
location <0, 3, -5>
look_at <0, 2, 0>
}
#declare Light = 3; // light 1 = individual lights
// light 2 = standard area light
// light 3 = area light with area illumination
#declare Fade = 1; // light fading: 1 on, 0 off
#declare Media = 1; // media 1 = scattering
// media 2 = emission
// media 3 = absorption
#declare Type = 1; // scattering media type
#switch(Light)
#case(1)
#declare Ls = light_source {
0
1/7
#if(Fade) fade_distance 2 fade_power 2 #end
}
union {
object { Ls }
object { Ls translate .5*x }
object { Ls translate x }
object { Ls translate 1.5*x }
object { Ls translate -.5*x }
object { Ls translate -x }
object { Ls translate -1.5*x }
translate y
}
#break
#case(2)
light_source{
y
1
area_light 3*x, z, 7, 1
#if(Fade) fade_distance 2 fade_power 2 #end
}
#break
#case(3)
light_source{
y
1
area_light 3*x, z, 7, 1
#if(Fade) fade_distance 2 fade_power 2 #end
area_illumination on
}
#break
#end
cylinder { <0, .01, 0>, <0, 5, 0>, 2 pigment { rgbt 1 } hollow no_shadow
interior {
media {
#if(Media = 1) scattering {Type, 30 } #end
#if(Media = 2) emission 2 #end
#if(Media = 3) absorption 2 #end
density { cylindrical turbulence 1.5 scale <1, .14, 1> }
}
}
scale <.15, 1, .4> translate 4*z
}
plane { y,0 pigment { rgb .7 } }
plane { -z,-7 pigment { gradient y color_map { [.5 rgb 1][.5 rgb 0] } } }
union {
sphere { 0,.05 }
sphere { .5*x,.05 }
sphere { x,.05 }
sphere { 1.5*x,.05 }
sphere { -.5*x,.05 }
sphere { -x,.05 }
sphere { -1.5*x,.05 }
translate y
hollow pigment { rgbt 1 } interior { media { emission 10 } }
}
|
|
287 | Light source | Definite Bug | 3.70 RC7 | Very Low | Low | area_illumination shadow calculation | Tracked on GitHub | |
Future release |
Task Description
Not sure if this is something needing further work or an intended effect.
Shadows from an area light with area_illumination on seem to follow the same shadow calculation as a standard area light, giving more weight to lights near the center of the array. I would assume the shadows would be calculated like those of individual lights arranged in the same pattern as the array, distributing the amount of shadow equally for each light. But this is not what I see.
The code sample below, when rendered with scene 1, will show shadows grouped near the center from the area light with area_illumination. If scene 1 is commented out and scene 2 is uncommented and rendered, you will see evenly distributed shadows from the individual lights. I would assume area lighting with area_illumination should give a result identical to scene 2. If scene 1 is rendered with area_illumination off, the shadow calculation is exactly the same as with area_illumination on.
Example images were rendered on 32-bit Windows XP.
#version 3.7;
global_settings {
ambient_light 0
assumed_gamma 1
}
camera {
location <0, 3, -5>
look_at <0, 2, 0>
}
background { rgb <.3, .5, .8> }
plane { y,0 pigment { rgb .7 } }
torus { 1.5,.1 rotate 90*x translate 4*z pigment { rgb .2 } }
plane { -z,-7 pigment { rgb .7 } }
/*
// scene 1
light_source{
y
1
area_light 3*x, z, 7, 1
area_illumination on
}
union {
sphere { 0,.05 }
sphere { .5*x,.05 }
sphere { x,.05 }
sphere { 1.5*x,.05 }
sphere { -.5*x,.05 }
sphere { -x,.05 }
sphere { -1.5*x,.05 }
translate y
hollow pigment { rgbt 1 } interior { media { emission 10 } }
}
// end scene 1
*/
// scene 2
#declare Light = light_source {
0
1/7
looks_like { sphere { 0,.05 hollow pigment { rgbt 1 } interior { media { emission 10 } } } }
}
union {
object { Light }
object { Light translate .5*x }
object { Light translate x }
object { Light translate 1.5*x }
object { Light translate -.5*x }
object { Light translate -x }
object { Light translate -1.5*x }
translate y
}
// end scene 2
|
|
292 | Geometric Primitives | Unimp. Feature/TODO | 3.70 RC7 | Very Low | Low | Arbitrary containing object for isosurfaces | Tracked on GitHub | |
|
Task Description
A low-priority thought for the future: isosurface currently only allows contained_by to be a sphere or a box. It would be more intuitive to allow the same objects that are allowed in clipped_by and bounded_by (although the object probably needs to be finite). It would allow much faster rendering in many cases:
1) There are a lot of cases where a sphere or a box is a very bad bound - if an object has a hole, a torus may be better, and in many cases cylindrical bounding would help a lot.
2) Sometimes a too-large contained_by object includes far-away parts of the iso-function and exposes large gradients that you want to avoid. If the bounding object fits better, you can decrease the max_gradient and speed up the render.
3) The isosurface is usually much more expensive to evaluate than any normal bounding object, so it is an improvement even if the intersection test is not as fast as a bounding box.
4) A typical case: if you use texture-like functions to make the surface realistically rough, you know almost exactly what the bounding object is - it can be the original unmodified object.
5) For isosurface terrains, a preprocessing macro could create a rough mesh-like bounding object to contain the “mountains”, thus making everything faster.
6) In case you want clipping, having contained_by set to the same object probably avoids calculating too many intersections.
The main modification is probably that the intersection with the bounding object can now be split into more than one interval - but it is probably worth it, as isosurfaces are usually a speed bottleneck.
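For illustration, the current syntax next to the kind of usage this request would allow; f_torus and f_noise3d are assumed to come from functions.inc, and the torus variant at the end is hypothetical:
#include "functions.inc"
isosurface {
  function { f_torus(x, y, z, 1.0, 0.3) - f_noise3d(x*4, y*4, z*4)*0.1 }   // torus with texture-like roughness
  contained_by { box { <-1.5, -0.5, -1.5>, <1.5, 0.5, 1.5> } }             // box or sphere only, today
  max_gradient 3
  pigment { rgb 1 }
}
// hypothetical with this request: contained_by { torus { 1.0, 0.45 } }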
|
|
58 | Parser/SDL | Unimp. Feature/TODO | 3.70 beta 32 | Defer | Low | allow SDL code to detect optional features | Tracked on GitHub | |
|
Task Description
Some features are optional in custom builds of POV-Ray (I’m thinking about OpenEXR in particular); it would be nice to have a syntax for an SDL script to check for support of such features, so it may take some fallback action if the feature is not supported.
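A hypothetical sketch of what such a check could look like in a scene file; the feature() test does not exist and is only meant to illustrate the request:
// hypothetical syntax, not valid in any current version
#if (feature("openexr"))
  #debug "OpenEXR available - using the HDR output workflow.\n"
#else
  #debug "OpenEXR not available - falling back to PNG output.\n"
#end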
|
|
27 | Other | Feature Request | 3.70 beta 32 | Very Low | Low | Add texture support to background statement | Tracked on GitHub | |
Future release |
Task Description
Adding full texture statement support to the background statement (with a scale of 1/1), aligned with the image_map direction of an image, would allow, for instance, specifying an image as the background easily.
|
|
65 | Parser/SDL | Feature Request | 3.70 beta 34 | Very Low | Low | Add support for vectors with functions | Tracked on GitHub | |
Future release |
Task Description
Being able to have functions operate on vectors would be pretty nice to have.
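For comparison, a sketch of the current idiom this request would simplify: user-defined functions can only return floats, so a vector-valued function has to be split into one scalar function per component:
#declare Fx = function(u, v) { cos(u) * (2 + cos(v)) }
#declare Fy = function(u, v) { sin(u) * (2 + cos(v)) }
#declare Fz = function(u, v) { sin(v) }

// what a single vector-valued function could return directly:
#declare P = <Fx(1.2, 0.5), Fy(1.2, 0.5), Fz(1.2, 0.5)>;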
|
|
177 | Light source | Feature Request | 3.70 beta 39 | Very Low | Low | Add support for conserve_energy to shadow computations | Tracked on GitHub | |
|
Task Description
The following scene gives a comparison of current conserve_energy handling in standard shadow computations vs. photons.
Note how the rather highly reflective slabs fail to cast shadows, except where the photons target sphere enforces computation of shadow brightness to be done by the photons algorithm.
For more realistic shadowing without the need to enable photons, I suggest adding proper conserve_energy handling to the shadow computation code (which shouldn’t be too much effort).
global_settings {
max_trace_level 10
photons { spacing 0.003 media 10 }
}
camera {
right x*image_width/image_height
location <-2,2.6,-10>
look_at <0,0.75,0>
}
light_source {
<500,300,150>
color rgb 1.3
photons {
refraction on
reflection on
}
}
sky_sphere {
pigment {
gradient y
color_map {
[0.0 rgb <0.6,0.7,1.0>]
[0.7 rgb <0.0,0.1,0.8>]
}
}
}
plane {
y, 0
texture { pigment { color rgb 0.7 } }
}
#declare M_Glass=
material {
texture {
pigment {rgbt 1}
finish {
ambient 0.0
diffuse 0
specular 0.2 // just to give a hint where the sphere is
}
}
interior { ior 1.0 }
}
#declare M_PseudoGlass=
material {
texture {
pigment {rgbt 1}
finish {
ambient 0.0
diffuse 0.5
specular 0.6
roughness 0.005
reflection { 0.3, 1.0 fresnel on }
conserve_energy
}
}
interior { ior 1.5 }
}
sphere {
<1.1,1,-1.3>, 1
material { M_Glass }
photons {
target 1.0
refraction on
reflection on
}
}
// behind target object
box {
<-0.2,0,-2.3>, <0.0,4,0.3>
material { M_PseudoGlass }
rotate z*1 // just to better see the reflection of the horizon
}
// before target object
box {
<2.4,0,-2.3>, <2.6,4,-0.3>
material { M_PseudoGlass }
photons { pass_through }
rotate z*1 // just to better see the reflection of the horizon
}
|
|
319 | Texture/Material/Finish | Feature Request | 3.70 release | Very Low | Low | Add interior to #default directive | Tracked on GitHub | |
|
Task Description
When working with predefined materials, it would be useful to have something like:
#if (!Use_photons)
#default { interior { caustics 1 } }
#end
#include "my_predefined_materials.inc"
Default medias or IORs could also be useful.
|
|
131 | Other | Feature Request | 3.70 beta 37a | Very Low | Low | Ability to change the order of editor tabs by dragging ... | Tracked on GitHub | |
Future release |
Task Description
See Notepad++ or EditPad Lite for examples.
It would be nice to be able to drag tabs in the editor window to change their order, so as to group opened files together by relevance for instance.
|
|
28 | Frontend | Feature Request | 3.70 beta 32 | Very Low | Low | #debug message not displayed. | Tracked on GitHub | |
Future release |
Task Description
The #debug message stream is only being flushed when it hits a newline character, instead of after each #debug statement. This means that some final strings don’t show up.
#debug "This line prints,\n but this line doesn't."
|
|
138 | User interface | Feature Request | 3.70 beta 37a | Very Low | Low | "Rename" option in File menu | Tracked on GitHub | |
|
Task Description
Would be great if there were a “Rename” option in the editor File menu to rename the current file. Otherwise, you have to close the file, rename it in a file manager, then open the file again, thus losing the current tab position and undo history for the file.
|
|
140 | Platform-specific | Feature Request | 3.70 beta 37a | Very Low | Low | "Reload" option in File menu | Tracked on GitHub | |
|
Task Description
Would be great to have a “Reload” option in the File menu to manually reload the current file from disk, discarding all changes since the last save.
|
|
206 | Other | Possible Bug | 3.70 RC3 | Very Low | Low | "Cannot open file" error when text output files specifi... | Tracked on GitHub | |
3.71 release |
Task Description
I created an INI file which specifies the Input_File_Name, Output_File_Name, and also the Render_File and the remaining four text outputs as double-quoted absolute paths on my disk. When I run the render, I get the following output:
Preset INI file is ‘C:\USERS\TPREAL\DOCUMENTS\POV-RAY\V3.7\INI\QUICKRES.INI’, section is ‘[512×384, No AA]’.
Preset source file is ‘D:\Ruby\POV-Rb\ini\20110521_004037_Noix.ini’.
Rendering with 2 threads.
- Cannot open file.
Render failed
- CPU time used: kernel 0.06 seconds, user 0.02 seconds, total 0.08 seconds.
Elapsed time 0.52 seconds.
And the render does not start. The five text output files are not even created, and where the output image should be, there is a file with extension pov-state. The render works as it should only when I remove all five lines defining the five text output files. The paths I specify for the files are correct (paths exist and files do not, no white-spaces or anything), read/write restrictions are disabled in POV-Ray. This used to work in 3.6 and does not work now in 3.7 RC3. The error happens no matter if I run the render using GUI or command line.
(Also please note that the error message is really not useful here: it does not say which file it failed to open, nor even whether it was an attempt to open the file for reading or for writing.)
I’d be really glad if you could correct this as it’s a critical functionality for me. I’m generating the POV-Ray code automatically and I need to parse the text output automatically to return the status to the generator.
|
|
303 | Other | Definite Bug | 3.70 RC7 | Defer | Very Low | wrong bit depth reported for OpenEXR file format | Tracked on GitHub | |
|
Task Description
When using OpenEXR output file format, POV-Ray erroneously reports it as “24 bpp EXR” in the message output, while in fact it generates a 3×16 = 48 bpp file.
|
|
323 | User interface | Possible Bug | 3.70 release | Very Low | Very Low | Tooltip for render speed status bar has wrong unit | Tracked on GitHub | |
|
Task Description
Tooltip popup for render speed always displays as “Pixels per Second” rather than matching status bar. I’ve noticed it in 3 renders so far. Most of my renders are fast enough not to see any other unit besides PPS, but I should be able to reproduce again if necessary.
|
|
133 | Geometric Primitives | Feature Request | 3.70 beta 37a | Defer | Very Low | Subdivision support | Tracked on GitHub | |
Future release |
Task Description
Someone built a version of POV-Ray with internal support for automatic subdivision of meshes. See:
http://www.cise.ufl.edu/~xwu/Pov-Sub/
I would like to see this feature added natively to POV-Ray.
|
|
20 | User interface | Feature Request | 3.70 beta 32 | Very Low | Very Low | render window behavior | Tracked on GitHub | |
|
Task Description
Changing the render window behavior “Keep above main” requires restarting the POV-Ray editor to take effect. It would be nice either to get a warning to restart, or for the change to work without restarting.
|
|
300 | Other | Feature Request | 3.70 RC7 | Defer | Very Low | Reference Documentation Support | Tracked on GitHub | |
|
Task Description
As emerged during the discussion of FS#299, an SDL / POV-Ray editor feature that allows API documentation via formal comments would be useful, e.g. in include files:
/**
* Creates a car object.
* @param a
* description of param a
* ...
*/
#macro car(a,b,c)
...
#end
In addition to the ability of (auto-)generating a documentation file from such comments, an editor window feature would be convenient that allows popup display of a macro’s (object’s / parameter’s / ...) documentation section.
|
|
99 | Refactoring/Cleanup | Unimp. Feature/TODO | 3.70 beta 36 | Defer | Very Low | Refactor engine (front- & back-end) code for Unicode su... | Tracked on GitHub | |
Future release |
Task Description
Front- & Back-end code should be refactored for full Unicode support in scene files and strings.
|
|
272 | Other | Feature Request | 3.70 RC6 | Defer | Very Low | Minor change, significant speedup in cubic polynomial s... | Tracked on GitHub | |
3.71 release |
Task Description
While familiarizing myself with the code, I found some small changes in the solve_cubic function that lead to a significant speedup.
In my experience, “pow” is by far the slowest function in math.h and replacing it with simpler functions usually makes a tremendous impact on the speed (it’s an order of magnitude slower than sqrt/exp/cbrt/log).
solve_cubic has a “pow” call that can be replaced by cbrt (cube root), which is standard in ISO C99 and should be available on all systems. Separate benchmarks of the solve_cubic function show this change almost doubles the speed and does not lower the accuracy. As solve_cubic is part of the solution of the quartic equation, this improves the speed for many primitives. Testing with a scene containing many torus intersection tests (attached below) I still observed an almost 10% speedup (Intel, 4 threads, 2 hyperthreaded cores, antialiasing on, 600×600: from 91 to 84 seconds). And this is for a torus, where a lot of time is spent in solve_quartic and the cubic solver is only called once! A similar speedup should be expected for prism, ovus, sor and blob.
I do believe the cubic solver can be done without trigonometry, but that would mean changing the algorithm, introducing new bugs and requiring a lot of testing. However, the trigonometric evaluation can still be simplified (3% speedup in full torus benchmark).
These changes don’t affect the algorithm at all, they are mathematically identical to the existing code, so the changes can be applied immediately. I also included other changes just as suggestions. Every change is commented and marked with [SC 2.2013].
This sadly does not speed up the sturm solver, which uses bisection and regula falsi and looks very optimized already.
The test scene I used has a lot of torus intersections from various directions (shadow rays, main rays, transmitted rays).
|
|
129 | Parser/SDL | Feature Request | 3.70 beta 37a | Defer | Very Low | Hash arrays | Tracked on GitHub | |
Future release |
Task Description
Currently, array items may only be referenced by their index number (an integer). It would be nice to also be able to use string values as array indexes, as in other scripting languages.
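A purely hypothetical sketch of the requested usage (not valid in any current version):
#declare Density = array;             // hypothetical: no fixed size, string keys allowed
#declare Density["water"] = 1.00;
#declare Density["iron"]  = 7.87;
#debug concat("iron: ", str(Density["iron"], 0, 2), "\n")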
|
|
237 | User interface | Definite Bug | 3.70 RC3 | Defer | Very Low | Glitch in displaying rendered pixels and percentage | Tracked on GitHub | |
|
Task Description
When rendering in multiple passes (radiosity in my case), the rendered pixels and percentage written to the terminal are first displayed like this:
Rendered 126202 of 360000 pixels (35%)
Then on the second stage the output text becomes shorter and you see:
Rendered 25344 of 360000 pixels (7%)%)
The contents of the previous status line are not erased, so the longer text persists (note the duplicate percentage sign and closing parenthesis). Such a glitch could have a more drastic effect in rare cases.
I’m running Version 3.7.0.RC3 (g++ 4.6.2 x86_64-unknown-linux-gnu) compiled for the Arch Linux package.
|