Whenever you’re baking signals into texture maps, there are quite a few issues to watch out for, regardless of the signal you’re baking. This post will try to explain the issues I ran into while implementing a baking framework.

There’s a multitude of signals that can be baked into texture maps, the most widely known one being static lighting, resulting in so-called lightmaps. Other signals that can be baked are radiosity normal maps, coefficients for precomputed radiance transfer (e.g. spherical harmonics or wavelets), radiosity form factors, ambient occlusion, etc.

In theory, baking signals into texture maps is easy, and basically follows these steps:

1. Generate a separate UV-set for all surfaces of a mesh, making sure that no UVs overlap each other.
2. Tightly pack the surfaces into a certain number of texture maps, resulting in an atlas to be used during rendering.
3. Evaluate the signal (e.g. static lighting) for each texel in UV-space at the corresponding world space position.

Steps 1 and 2 can get quite involved and won’t be handled in this post – see Ignacio Castaño’s excellent post on Lightmap Parameterization instead. Let us focus on Step 3, assuming that an appropriate UV-set has already been generated for all surfaces in the scene, using a suitable tool such as Maya’s built-in automatic unwrapping, or an external tool such as Unwrella.

While Step 3 sounds simple in theory, there are several practical problems when trying to implement it. In order to evaluate our signal in world space, we need to map texels in UV-space to their corresponding positions in world space. What we need to do is rasterize each triangle in UV-space, interpolating its attributes (e.g. the world space position) along the edges, like we used to do in the days of good ol’ software rasterization.
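As a rough sketch of what that means in code – all names here (`Triangle`, `mesh`, `EvaluateSignal`, `lightmap`, the atlas dimensions) are hypothetical placeholders of mine, not from any particular codebase – the heart of the baker might look like this; a possible implementation of `RasterizeTriangle` is sketched further below:

```cpp
// Hypothetical outer loop of the baker (a sketch, not a complete program):
// every triangle is rasterized in UV-space, and the signal is evaluated at
// the interpolated world-space position of each covered texel.
for (const Triangle& tri : mesh.triangles)
{
    RasterizeTriangle(
        tri.uv[0], tri.uv[1], tri.uv[2],        // UV-space positions, in texels
        tri.pos[0], tri.pos[1], tri.pos[2],     // world-space positions
        atlasWidth, atlasHeight,
        [&](int x, int y, const Vec3& worldPos)
        {
            // e.g. static lighting: gather incoming light at worldPos
            lightmap[y * atlasWidth + x] = EvaluateSignal(worldPos);
        });
}
```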
Each graphics API has a clearly defined set of rules for rasterizing triangles (mostly a top-left fill rule) which make sure that no pixel is touched twice, and that there are no visible seams at triangle edges. Interpolation of any attribute can easily be achieved by using barycentric coordinates, but the question remains: which texels are actually touched by our rasterizer? This article has a nice software implementation of a half-space triangle rasterizer following the rules defined by both Direct3D and OpenGL.
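To make this concrete, here is a minimal sketch of such a half-space rasterizer with a top-left fill rule, written for texel coordinates in UV-space. This is my own illustrative code, not taken from the article above, and it uses floating-point edge functions for brevity; a production rasterizer would use fixed-point coordinates so the exact-zero comparisons are watertight:

```cpp
#include <algorithm>
#include <cmath>
#include <utility>

struct Vec2 { float x, y; };
struct Vec3 { float x, y, z; };

// Signed edge function: > 0 if p lies on the interior side of edge a->b
// (given the winding enforced below), 0 if p lies exactly on the edge.
static float EdgeFunction(const Vec2& a, const Vec2& b, const Vec2& p)
{
    return (b.x - a.x) * (p.y - a.y) - (b.y - a.y) * (p.x - a.x);
}

// Top-left rule, assuming y grows downward in texel space: "top" edges run
// to the right, "left" edges run upward. For an edge shared by two
// triangles, exactly one direction of traversal passes this test.
static bool IsTopLeft(const Vec2& a, const Vec2& b)
{
    return (a.y == b.y && b.x > a.x) || (b.y < a.y);
}

// Rasterizes one triangle given in UV texel coordinates. For each covered
// texel it interpolates the world-space position via barycentric
// coordinates and hands it to the callback (e.g. the signal evaluation
// shown earlier).
template <typename TexelCallback>
void RasterizeTriangle(Vec2 uv0, Vec2 uv1, Vec2 uv2,
                       Vec3 ws0, Vec3 ws1, Vec3 ws2,
                       int width, int height, TexelCallback&& callback)
{
    // enforce a consistent winding so all edge functions are >= 0 inside
    float area = EdgeFunction(uv0, uv1, uv2);
    if (area < 0.0f)
    {
        std::swap(uv1, uv2);
        std::swap(ws1, ws2);
        area = -area;
    }
    if (area == 0.0f)
        return;                                 // degenerate in UV-space

    // bounding box of the triangle, clamped to the atlas
    const int minX = std::max(0, (int)std::floor(std::min({ uv0.x, uv1.x, uv2.x })));
    const int minY = std::max(0, (int)std::floor(std::min({ uv0.y, uv1.y, uv2.y })));
    const int maxX = std::min(width - 1,  (int)std::ceil(std::max({ uv0.x, uv1.x, uv2.x })));
    const int maxY = std::min(height - 1, (int)std::ceil(std::max({ uv0.y, uv1.y, uv2.y })));

    for (int y = minY; y <= maxY; ++y)
    {
        for (int x = minX; x <= maxX; ++x)
        {
            const Vec2 p = { x + 0.5f, y + 0.5f };  // sample at the texel center

            const float e0 = EdgeFunction(uv1, uv2, p); // weight of vertex 0
            const float e1 = EdgeFunction(uv2, uv0, p); // weight of vertex 1
            const float e2 = EdgeFunction(uv0, uv1, p); // weight of vertex 2

            // strictly inside, or exactly on a top/left edge
            const bool covered =
                (e0 > 0.0f || (e0 == 0.0f && IsTopLeft(uv1, uv2))) &&
                (e1 > 0.0f || (e1 == 0.0f && IsTopLeft(uv2, uv0))) &&
                (e2 > 0.0f || (e2 == 0.0f && IsTopLeft(uv0, uv1)));
            if (!covered)
                continue;

            // barycentric coordinates and the interpolated world position
            const float b0 = e0 / area;
            const float b1 = e1 / area;
            const float b2 = e2 / area;
            const Vec3 worldPos = {
                b0 * ws0.x + b1 * ws1.x + b2 * ws2.x,
                b0 * ws0.y + b1 * ws1.y + b2 * ws2.y,
                b0 * ws0.z + b1 * ws1.z + b2 * ws2.z,
            };
            callback(x, y, worldPos);
        }
    }
}
```

Because coverage ties are broken per edge with a consistent rule, a texel center lying exactly on an edge shared by two triangles is claimed by exactly one of them – which is what prevents both double-shading and gaps along interior edges of a chart.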
But even if you use a rasterization scheme like the one mentioned above when rasterizing triangles in UV-space, there will still be visible seams at the edges of some triangles. The problem is that UV-space and world space do not coincide: triangles that are adjacent in world space can be arbitrarily rotated relative to each other in UV-space in order to provide a better fit in the UV atlas.