True Impostors
Abstract
True Impostors offers an efficient method for adding a large number of simple models to any scene without having to render a large number of polygons. The technique utilizes modern shading hardware to perform ray casting into texture-defined volumes. To achieve this, a virtual screen is set up in texture space for each impostor and inherits the same camera-dependent orientation as the impostor. Each pixel on the impostor corresponds to a point on its virtual screen. By casting the viewing ray from this point into our texture-defined volumes, the correct color for the target pixel can be found. The technique supports self-shadowing on models, reflection, refraction, a simple animation scheme, and an efficient method for finding distances through volumes.

Keywords: per-pixel displacement mapping, image based rendering, impostor rendering, volumetric rendering, refraction
Introduction

Most interesting environments, natural as well as synthetic, tend to be densely populated with many highly detailed objects. Rendering such an environment would require a huge set of polygons, as well as a sophisticated scene graph and the overhead to run it. As an alternative, image-based techniques have been explored to offer a practical remedy to the problem. Among currently used image-based techniques, impostors are a powerful tool that has been widely used for many years in the graphics community. An impostor is a two-dimensional image rendered onto a billboard which represents highly detailed geometry. Impostors can be generated in real time or pre-computed and stored in memory; in both cases the impostor is only accurate for a specific viewing direction. As the view deviates from this ideal condition, the impostor loses accuracy. The same impostor is reused until its visual error surpasses a given arbitrary threshold, at which point it is replaced with a more accurate impostor.
True Impostors takes advantage of the latest graphics hardware to achieve an accurate geometric representation, updated every frame, for any arbitrary viewing angle. It builds on Relief Mapping of Non-Height-Field Surfaces [Policarpo et al. 2006] and Parallax Occlusion Mapping [Tatarchuk 2006], extending past the goals of traditional per-pixel displacement mapping techniques (which are to add visually accurate sub-surface features to an arbitrary surface) by generating whole 3D objects on a billboard. True Impostors distinguishes itself from previous per-pixel displacement mapping impostor methods in two major areas:
1. It contributes a technique for rendering the impostor from any viewing direction, with no viewing limitations or restrictions.

2. It offers a GPU-friendly ray tracing scheme designed to work with per-pixel displacement mapping techniques, opening up a wealth of potential ray tracing effects such as internal refraction.
Background
Parallax mapping [Kaneko et al. 2001] is a method for approximating the parallax seen on uneven surfaces. Using the view ray transformed to tangent space, parallax mapping samples a height texture to find the approximate texture coordinate offset that will give the illusion that a three-dimensional structure is being rendered. Parallax mapping is a crude approximation of displacement mapping: it cannot simulate occlusion, self-shadowing, or silhouettes. Since it requires only one additional texture read, it is a fast and therefore relevant approximation for use in video games.

View-dependent displacement mapping [Wang et al. 2003a] takes a bi-directional texture function (BTF) approach to per-pixel displacement mapping. Much like impostors, this approach involves pre-computing, for each potential view direction, an additional texture map of the same dimensions as the source texture. The goal of this method is to store the distance from the surface of our polygon to the imaginary surface that is being viewed. The method stores a five-dimensional map indexed by three position coordinates and two angular coordinates. The pre-computed data is compressed, then decompressed at runtime on the GPU. This method produces good visual results but requires significant preprocessing and storage to operate.

Relief mapping [Policarpo et al. 2005] can be considered an extension of parallax mapping. Rather than a coarse approximation to the solution, relief mapping performs a linear search, or ray march, along the view ray until it finds a collision with the surface, at which point a binary search is used to home in on the exact point of intersection.

Shell texture functions [Chen et al. 2004], much like relief mapping, attempt to render the subsurface details of a polygon. However, unlike relief mapping, which restricts itself to finding a new, more detailed surface, shell texture functions attempt to ray trace a complex volumetric data set. Shell texture functions produce high-quality visual results by accounting for sub-surface scattering. This technique is notable for the hybridization it attempts between rasterization and ray tracing. It is, however, not viable for many applications in the graphics community due to long pre-processing times and a non-interactive frame rate.

Relief mapping of non-height-field surfaces [Policarpo et al. 2006] extends the concepts laid out in relief mapping by adding multiple height-field layers into the problem, creating distinct volumes. The authors present an efficient method for determining whether the view ray passes through one of these volumes.

Practical dynamic parallax occlusion mapping [Tatarchuk 2006] offers an alternative per-pixel displacement mapping solution. Similar to relief mapping, a linear search is performed; then a single iteration of the secant method is used to fit a series of discrete linear functions to the curve. This achieves a high rendering speed by eliminating the need for branch-dependent texture lookups. Whereas the single iteration of the secant method does not achieve the same level of accuracy as a true root-finding method, in practice there is very little rendering error, making this technique suitable for real-time applications.
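As a concrete illustration of the parallax mapping approximation described above, the following HLSL fragment is a minimal sketch, not code from any of the cited papers; the sampler name and height scale are illustrative assumptions:

// Minimal parallax mapping sketch (single extra texture read).
// Assumes heightSampler stores a height-field in its red channel,
// viewTS is the normalized tangent-space view vector, and
// heightScale is an artist-chosen depth scale.
float2 ParallaxOffset(float2 texCoord, float3 viewTS, float heightScale,
                      sampler2D heightSampler)
{
    // one height lookup at the unmodified coordinates
    float height = tex2D(heightSampler, texCoord).r * heightScale;

    // shift the coordinates along the view direction in proportion to
    // the sampled height; a crude approximation with no occlusion,
    // self-shadowing, or silhouettes
    return texCoord + height * (viewTS.xy / viewTS.z);
}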
Method

Per-Pixel Displacement Mapping

All of the methods presented as background, as well as True Impostors, can be classified as per-pixel displacement mapping, so it is crucial that the reader have a solid grasp of this type of problem.

By its definition, the goal of per-pixel displacement mapping is to displace or move a pixel from one point on the screen to another, in much the same way traditional vertex-based displacement mapping displaces the position of a vertex. This simple concept becomes difficult to implement due to the nature of rasterization and the design of modern GPUs, which have no mechanism for writing a target pixel's color value to a different pixel on the screen. Thus a wealth of techniques have been proposed to solve this problem. Generally, when defining the color for a pixel, two factors must be taken into account: the color of the surface material at that pixel and the amount of light being reflected toward the viewer. Texture maps are the data structure that tends to store this information, either as color or normal values. Therefore, a practical version of our earlier problem becomes: how can we, for the current pixel, find new texture coordinates that correspond to the point the target pixel is actually representing? To solve this problem, the view vector from the camera to each pixel must be known and transformed to texture space using the polygon's normal and tangent, along with the true topology of the object being drawn. The topology can take on many forms stored in many ways; for simplicity, a single height-field texture is used to define the true shape of the surface in figure 1.

Figure 1: View ray penetrating geometry and the corresponding height-field.

Because the view ray projects as a line across the surface of our polygon, it effectively takes a 1D slice out of the 2D height-field function, reducing our problem to its true form: finding the first intersection of a ray with an arbitrary function. Once this intersection has been found, the corresponding texture coordinates can be used to find the desired color for the pixel.
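This search can be made concrete with a short sketch. The following HLSL fragment is a simplified, single height-field version of the search (the full four-layer listings appear in the appendix); the sampler name, step count, and depth convention are illustrative assumptions:

// Minimal sketch: linear search of a view ray against one height-field.
// texCoord and viewTS are the per-pixel texture coordinates and the
// normalized tangent-space view direction; heightSampler.r stores depth
// into the surface in [0,1], measured downward from the polygon.
float2 RayHeightFieldSearch(float2 texCoord, float3 viewTS,
                            sampler2D heightSampler)
{
    const int numSteps = 16;               // march resolution
    float  stepSize = 1.0 / numSteps;
    float2 uvStep   = stepSize * (viewTS.xy / viewTS.z);

    float  depth = 0.0;                    // current depth along the ray
    float2 uv    = texCoord;

    for (int i = 0; i < numSteps; i++)
    {
        // keep marching while the ray is still above the height-field;
        // a binary search would normally refine the hit point found here
        if (depth < tex2D(heightSampler, uv).r)
        {
            depth += stepSize;
            uv    += uvStep;
        }
    }
    return uv;  // texture coordinates of the (approximate) intersection
}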
A single texture can hold four height-fields, which can represent many volumes. More texture data can be added to extend this to any shape. Traditionally these height-fields would represent the surface of a polygon, and the viewing vector would be transformed into the polygon's local coordinate frame with respect to its normal. However, in this case the surface geometry and texture coordinates are transformed with respect to the view direction; in other words, a billboard is generated which can then be used as a window to view the height-fields from any given direction. This is illustrated in figure 2.
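The resulting inside/outside classification can be sketched directly from the appendix listings: sampling one RGBA texel yields all four height values at once, and the sign of the product of the four signed differences tells whether a depth along W lies inside any of the volumes they bound (hscale is the height-scaling constant used in those listings; the function wrapper itself is illustrative):

// Sample all four height-field layers at once and test whether a point
// at 'depth' along the W axis lies inside any volume they bound. The
// layers alternate between upper and lower surfaces, so the product of
// the four signed differences is positive outside every volume and
// negative inside one.
bool InsideVolume(float2 uv, float depth, float hscale,
                  sampler2D heightSampler)
{
    float4 heights = tex2D(heightSampler, uv) * hscale
                   + (1.0 - hscale) / 2.0 - 0.5;
    float4 diff = depth - heights;           // scalar minus vector
    return (diff.r * diff.g * diff.b * diff.a) < 0;
}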
Figure 2: Visual representation of a quad's texture coordinates transformed into a 3D plane and rotated around the origin (left) and the corresponding image it produces (right).

Figure 3: This illustration walks through the True Impostors method step by step; note that at cell E two separate directions can be taken, depending on the material type being rendered.
The left image shows a representation of the billboard's texture coordinates after they have been transformed into a 3D plane and rotated around the functions in the center (represented by the fish). The right image shows the perspective of the functions which would be displayed on the billboard.

To expand on this concept, please refer to figure 3. In cell A of the image, the component of this method which operates on geometry in world space is seen: the camera is viewing a quad which is rotated so that its normal is the vector produced between the camera and the center of the quad. As shown, texture coordinates are assigned to each vertex. Cell B reveals texture space; traditionally a two-dimensional space, a third dimension W is added and the UV coordinates are shifted by -0.5. In cell C the W component of our texture coordinates is set to 1. Keep in mind that the texture map only has UV components, so it is bound to two dimensions and can only be translated along the UV plane, where any value of W will reference the same point. The texture map, although bound to the UV plane, can represent volume data by treating each of the four variables comprising a pixel as points on the W axis. In cell D the projection of the texture volume into three-dimensional space is shown. The texture coordinates of each vertex are also rotated around the origin in the same way the original quad was rotated around its origin to produce the billboard. Now the view ray is introduced into the concept. The view ray is produced by casting a ray from the camera to each vertex; during rasterization, both the texture coordinates and the view rays stored in each vertex are interpolated across the fragmented surface of the quad. This is conceptually similar to ray casting/tracing, where a viewing screen of origin points is generated and each point has a corresponding vector to define a ray. It should also be noted that each individual ray projects as a straight line onto the texture map plane and therefore takes a 2D slice out of the 3D volume to evaluate for collisions. This is shown in detail in cell E. Still referring to figure 3, at this point two separate options exist depending on the desired material type.
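In pixel-shader terms, each fragment's interpolated values define its own ray on the virtual screen. The following lines, distilled from the appendix listings (only the wrapper-free presentation is new), show how a sample at distance dis along the ray maps to a depth along W and a 2D texture coordinate:

// viewOrigin: the interpolated, rotated texture-space origin of this
// pixel's ray; viewVec: the interpolated view direction; dis: the
// current march parameter along the ray
float  depth      = viewOrigin.z + dis * viewVec.z;    // position along W
float2 tex_coords = viewOrigin.xy - dis * viewVec.xy;  // position in UV
// any W value addresses the same texel, so this 2D lookup, paired with
// 'depth', evaluates the ray against the packed height-fields
float4 heights = tex2D(heightSampler, tex_coords);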
For opaque objects, ray casting is the fastest and most accurate option available; a streamlined approach to this has been laid out in Relief Mapping of Non-Height-Field Surfaces [Policarpo et al. 2006], using a ray march followed by a binary search, as shown in cells F-H. This finds the first point of intersection with the volume. Due to the nature of GPU design, it is impractical from an efficiency standpoint to exit a loop early; therefore the maximum number of ray marches and texture reads must be performed no matter how early in the search the first collision is found. Rather than ignoring this free data, a method is proposed to add rendering support for translucent material types through a localized approximation to ray tracing. Rather than performing a binary search along the viewing ray, the points along the W axis are used to define a linear equation for each height-field, as shown in cell I. In cell J these linear equations are then tested for intersection with the view ray, in a similar fashion to Parallax Occlusion Mapping [Tatarchuk 2006]. The intersection which falls between the upper and lower bounding points is kept as the final intersection point. Since the material is translucent, the normal is checked at this point and the view ray is refracted according to a dielectric value, either held constant for the shader or stored in an extra color channel in one of the texture maps. Once the new view direction is found, the ray march continues, and the process is repeated for any additional surfaces that are penetrated, as shown in cells K-M. By summing the distances traveled through each volume, the total distance traveled through the model can be known and used to compute the translucency for that pixel.

True Impostors also offers a simple yet effective animation scheme in which the animation frames are tiled on a single texture, offering quick look-ups in the pixel shader. Rather than using the entire texture to store a single model, the image is partitioned into discrete, equally sized regions, each storing a frame of the desired animation, as shown in figure 4. To accommodate the new data representation, the UV coordinates stored in each vertex span the length of a single region rather than the entire texture map. Also, the UV coordinates are no longer shifted by -0.5, but shifted so that the middle of the target region lies on the origin of the coordinate frame. The animation can be looped by passing a global time variable into the shader and using it to select the current target region. This makes True Impostors a powerful technique for rendering large herds of animals, schools of fish, and flocks of birds.
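The frame-selection arithmetic is simple enough to sketch. The HLSL fragment below is a minimal illustration, not the paper's listing; the grid layout, frame rate, and function names are assumptions:

// Select the current animation frame from a texture tiled into a
// gridSize x gridSize array of regions, driven by a global time value.
// localUV spans a single region and is centered on the origin, as
// described above.
float2 AnimationUV(float2 localUV, float time, float framesPerSecond,
                   float gridSize, float frameCount)
{
    float frame = floor(fmod(time * framesPerSecond, frameCount));
    float cell  = 1.0 / gridSize;  // size of one region in texture space
    float2 corner = float2(fmod(frame, gridSize),
                           floor(frame / gridSize)) * cell;
    // shift the frame-local coordinates into the selected region,
    // placing the region's center where the origin used to be
    return corner + cell * 0.5 + localUV;
}

Similarly, the summed distance can be turned into a per-pixel translucency. The paper does not specify the mapping; one plausible sketch assumes Beer-Lambert style attenuation with an artist-tuned density:

// Convert the total distance traveled through the model into an
// opacity value; 'density' is an assumed artist-tuned constant.
float TranslucencyFromDistance(float totalDistance, float density)
{
    return 1.0 - exp(-density * totalDistance);  // thicker = more opaque
}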
Results

True Impostors can produce nearly correct reproductions of geometry on a single billboard for any possible viewing direction. This is shown in figure 5, where multiple dogs are displayed from arbitrary directions. True Impostors is the first per-pixel displacement mapping based impostor technique to achieve complete viewing freedom, and thus the first such technique to represent objects in true 3D. In addition to viewing independence, this paper presented a new volume intersection/traversal method ideal for rendering translucent/refractive objects, which can be applied to any multi-layered per-pixel displacement mapping technique. Figure 6 shows a glass sphere rendered using True Impostors with correct double refraction. A trace-through of the view vector is also shown to confirm that True Impostors is properly rendering the object.

Figure 5: The same impostor shown from multiple arbitrary viewing directions. True Impostors is the first technique to successfully render view directions which significantly deviate from the profile, as shown in the third and last images.

Figure 6: Refraction through a glass sphere using True Impostors.

Given the lack of a true root-finding method during refractive rendering, True Impostors suffers very little error; however, there is one case where minor artifacts occur. When entering or leaving a volume such that one of the linear steps falls on a pixel in the texture map which does not define the volume being encountered, there is a chance that the line-segment intersection test returns a point that also does not lie on a pixel defining the target volume. The normal of the surface is needed to refract the view ray; in general this method rarely finds the exact point of intersection, but since the surface, and therefore the normal, is generally continuous, the small level of error is unnoticeable to the human eye. At the rare points where a normal not lying on our surface is returned, however, the resulting image shows blatant artifacts. To soften these artifacts, the normal textures are mipmapped, so each sampled point is actually the average of several neighboring points. For the most part this alleviates the rendering error; it does, however, leave a crease where two surfaces meet to form a volume. Because multiple points on the normal map are averaged, there is no smooth transition between the layers defining a volume, as shown in figure 7. This crease can be remedied through a simple art hack: when generating the normal maps for each surface, blend the color values concentrated around the edge of a volume; this should force the normals to conform at the edge and produce a smooth curve.

Figure 7: A visible crease is shown along the intersection of two surfaces, due to averaging normal values.
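One way to realize the mipmap-based softening described above is to sample the normal map with a positive mip bias, so the fetched value is already an average of neighboring texels. tex2Dbias is standard HLSL; the bias magnitude here is an illustrative assumption:

// Fetch a pre-averaged normal by biasing the mip level upward;
// a bias of +2 trades detail for smoothness near volume boundaries.
float4 SampleSoftenedNormal(sampler2D normalSampler, float2 uv)
{
    float4 n = tex2Dbias(normalSampler, float4(uv, 0, 2.0));
    n.xyz = normalize(n.xyz - 0.5);  // unpack from [0,1] to [-1,1]
    return n;
}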
In addition to refraction, True Impostors can also reflect points on a surface, achieving complex local interactions between mirrored surfaces, as shown in figure 8.

Figure 8: Reflective impostor.

True Impostors was implemented in C++ using DirectX 9. All benchmarks were taken using a GeForce 6800 Go and two GeForce 7800s running in SLI. Although the vertex processor is crucial to True Impostors, the required operations do not put a heavy workload on the vertex processing unit and do not result in noticeable drops in performance. The performance of this technique is primarily determined by the fragment processing unit, the number of pixels on the screen, the number of search steps taken for each pixel, and which rendering technique is used. When performing ray casting, performance mirrored that of Relief Mapping of Non-Height-Field Surface Details [Policarpo et al. 2006], due to the similar ray casting technique used in both methods. The Jupiter model, consisting of a quarter of a million impostors with a linear search of 10 steps and a binary search of 8 steps, rendered in a 1024x768 window at 10 frames per second on the 6800 Go and 35-40 frames per second on the 7800 SLI. The ray tracing technique was rendered in an 800x600 window using 20 search steps and achieved on average 7-8 frames per second on the 6800 Go and 25-30 frames per second on the 7800 SLI. No comparisons are made between the performance of the two techniques due to the fundamentally different rendering solutions they offer. The only generalized performance statement made about the two techniques is that both have been shown to achieve real-time frame rates on modern graphics hardware.

Discussion

The method performs similarly to other per-pixel displacement mapping techniques; however, there are concerns unique to True Impostors. Performance is fill-rate dependent: since billboards can occlude neighbors, it is crucial to perform depth sorting on the CPU to avoid overdraw. Because True Impostors is fill-rate dependent, level of detail is intrinsic, and this is a great asset. With True Impostors it is possible to represent more geometry on screen at once than could be achieved using standard polygonal means; an example is shown on the first page, where a model of Jupiter is rendered using a quarter of a million asteroids to comprise its ring.
Conclusion

True Impostors offers a quick, efficient method for rendering large numbers of animated opaque, reflective, or refractive objects on the GPU. It generates impostors with very little rendering error and offers inherent per-pixel level of detail. These results are achieved by building upon the concepts laid out in Relief Mapping of Non-Height-Field Surface Details [Policarpo et al. 2006] and Parallax Occlusion Mapping [Tatarchuk 2006]. By representing volume data as multiple height-fields stored in traditional texture maps, the vector processing nature of modern GPUs is exploited, achieving a high frame rate along with a low memory requirement. By abandoning the restriction that keeps per-pixel displacement mapping a subsurface detail technique, a new method for rendering staggering amounts of faux-geometry has been achieved, not by blurring the line between rasterization and ray tracing, but through a hybrid approach that takes advantage of the best each has to offer. This method is ideal for video games as it improves an already widely used technique.
References
BLINN, J. F. 1978. Simulation of wrinkled surfaces. In SIGGRAPH '78: Proceedings of the 5th Annual Conference on Computer Graphics and Interactive Techniques, ACM Press, New York, NY, USA, 286-292.

CHEN, Y., TONG, X., WANG, J., LIN, S., GUO, B., AND SHUM, H. 2004. Shell texture functions. ACM Transactions on Graphics (Proceedings of SIGGRAPH 2004) 23, 343-352.

COOK, R. L. 1984. Shade trees. In SIGGRAPH '84: Proceedings of the 11th Annual Conference on Computer Graphics and Interactive Techniques, ACM Press, New York, NY, USA, 223-231.

HART, J. C. 1996. Sphere tracing: A geometric method for the antialiased ray tracing of implicit surfaces. The Visual Computer 12, 10, 527-545.

HIRCHE, J., EHLERT, A., GUTHE, S., AND DOGGETT, M. 2004. Hardware accelerated per-pixel displacement mapping. In GI '04: Proceedings of Graphics Interface 2004, Canadian Human-Computer Communications Society, Waterloo, Ontario, Canada, 153-158.

KANEKO, T., TAKAHEI, T., INAMI, M., KAWAKAMI, N., YANAGIDA, Y., AND MAEDA, T. 2001. Detailed shape representation with parallax mapping. In Proceedings of ICAT 2001, 205-208.

KAUTZ, J., AND SEIDEL, H.-P. 2001. Hardware accelerated displacement mapping for image based rendering. In Proceedings of Graphics Interface 2001, Canadian Information Processing Society, Toronto, Ontario, Canada, 61-70.

KOLB, A., AND REZK-SALAMA, C. 2005. Efficient empty space skipping for per-pixel displacement mapping. In Proc. Vision, Modeling and Visualization.

MACIEL, P. W. C., AND SHIRLEY, P. 1995. Visual navigation of large environments using textured clusters. In Proceedings of the 1995 Symposium on Interactive 3D Graphics, 95-102.

MAX, N. 1988. Horizon mapping: shadows for bump-mapped surfaces. The Visual Computer 4, 2, 109-117.

MCGUIRE, M. 2005. Steep parallax mapping. I3D 2005 Poster.

OLIVEIRA, M. M., BISHOP, G., AND MCALLISTER, D. 2000. Relief texture mapping. In SIGGRAPH '00: Proceedings of the 27th Annual Conference on Computer Graphics and Interactive Techniques, ACM Press/Addison-Wesley Publishing Co., New York, NY, USA, 359-368.

PATTERSON, J. W., HOGGAR, S. G., AND LOGIE, J. R. 1991. Inverse displacement mapping. Computer Graphics Forum 10, 2, 129-139.

POLICARPO, F., OLIVEIRA, M. M., AND COMBA, J. L. D. 2005. Real-time relief mapping on arbitrary polygonal surfaces. In SI3D '05: Proceedings of the 2005 Symposium on Interactive 3D Graphics and Games, ACM Press, New York, NY, USA, 155-162.

POLICARPO, F., AND OLIVEIRA, M. M. 2006. Relief mapping of non-height-field surface details. In SI3D '06: ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games, Redwood City, CA, USA, 55-62.

PRESS, W., FLANNERY, B., TEUKOLSKY, S., AND VETTERLING, W. 2002. Root finding and non-linear sets of equations. In Numerical Recipes in C, 354-360.

SCHAUFLER, G., AND PRIGLINGER, M. 1999. Efficient displacement mapping by image warping. In Rendering Techniques '99, 175-186.

SLOAN, P., AND COHEN, M. 2000. Interactive horizon mapping.

TATARCHUK, N. 2006. Dynamic parallax occlusion mapping with approximate soft shadows. In SI3D '06: Proceedings of the 2006 Symposium on Interactive 3D Graphics and Games, Redwood City, California, 63-69.

WALSH, T. 2003. Parallax mapping with offset limiting. Infiniscape Tech Report.

WANG, L., WANG, X., TONG, X., LIN, S., HU, S., GUO, B., AND SHUM, H.-Y. 2003a. View-dependent displacement mapping. ACM Trans. Graph. 22, 3, 334-339.

WANG, X., TONG, X., LIN, S., HU, S., GUO, B., AND SHUM, H.-Y. 2003b. Generalized displacement maps. In Eurographics Symposium on Rendering, 227-233.
/////////////////////////////////////////////////////////////////////
// True Impostors                                                   //
/////////////////////////////////////////////////////////////////////
// this portion of code requires a quad with its true center at     //
// the origin and the desired center stored as the normal           //
/////////////////////////////////////////////////////////////////////

float4x4 xRot, yRot, World; // rotation matrices (declarations added)

//calculate the billboard's normal
float3 quadNormal = normalize(in.normal.xyz - g_vEyePt.xyz);

//compute rotation matrices based on the new quad normal
float2 eyeZ = normalize(float2(sqrt(pow(quadNormal.x, 2) +
                                    pow(quadNormal.z, 2)), quadNormal.y));
float2 eyeY = normalize(float2(-quadNormal.z, quadNormal.x));

//rotation around the X axis
xRot._m00 = 1; xRot._m01 = 0;       xRot._m02 = 0;      xRot._m03 = 0;
xRot._m10 = 0; xRot._m11 = eyeZ.x;  xRot._m12 = eyeZ.y; xRot._m13 = 0;
xRot._m20 = 0; xRot._m21 = -eyeZ.y; xRot._m22 = eyeZ.x; xRot._m23 = 0;
xRot._m30 = 0; xRot._m31 = 0;       xRot._m32 = 0;      xRot._m33 = 1;

//rotation around the Y axis
yRot._m00 = eyeY.x;  yRot._m01 = 0; yRot._m02 = eyeY.y; yRot._m03 = 0;
yRot._m10 = 0;       yRot._m11 = 1; yRot._m12 = 0;      yRot._m13 = 0;
yRot._m20 = -eyeY.y; yRot._m21 = 0; yRot._m22 = eyeY.x; yRot._m23 = 0;
yRot._m30 = 0;       yRot._m31 = 0; yRot._m32 = 0;      yRot._m33 = 1;

World = mul(xRot, yRot);

//update vertex positions
in.pos = mul(in.pos, World);
in.pos.xyz += in.normal.xyz;

//generate texture plane (rotation only, so a 3x3 cast suffices)
out.viewOrigin = float3(in.tex_coords.x + 0.5f, in.tex_coords.y - 0.5f, -1.0f);
out.viewOrigin = mul(out.viewOrigin, (float3x3)World);
out.viewOrigin = float3(out.viewOrigin.x + 0.5f,
                        out.viewOrigin.y + 0.5f,
                        -out.viewOrigin.z);

//output the final position and view vector for each vertex
out.pos = mul(in.pos, g_mWorldViewProj);
out.viewVec = float4(normalize(in.pos.xyz - g_vEyePt.xyz), 1);
/////////////////////////////////////////////////////////////////////
// True Impostors                                                   //
/////////////////////////////////////////////////////////////////////
// this portion of code steps through the linear and binary         //
// searches of the ray-casting algorithm                            //
/////////////////////////////////////////////////////////////////////

int linear_search_steps = 10;
float depth_step = 1.0 / linear_search_steps;
float dis = depth_step;
float depth = 0;
float4 prePixelColor = float4(0, 0, 0, 0); //for finding the collision layer

////////////////////////////////////////////////////////////
// linear search
////////////////////////////////////////////////////////////
for (int i = 1; i < linear_search_steps; i++)
{
    depth = input.viewOrigin.z + dis * viewVec.z;
    tex_coords = dis * float2(-viewVec.x, -viewVec.y);
    tex_coords += float2(input.viewOrigin.x, input.viewOrigin.y);

    pixelColor = tex2D(heightSampler, tex_coords) * hscale
               + (1 - hscale) / 2.0f - 0.5;
    pixelColor.rgba = depth - pixelColor.rgba;

    if ((pixelColor.r * pixelColor.g * pixelColor.b * pixelColor.a) > 0) //no collision
    {
        prePixelColor = pixelColor;
        dis += depth_step;
    }
}

////////////////////////////////////////////////////////////
// bisection search
////////////////////////////////////////////////////////////
for (int i = 1; i < 8; i++)
{
    tex_coords = dis * float2(-viewVec.x, -viewVec.y);
    tex_coords += float2(input.viewOrigin.x, input.viewOrigin.y);

    pixelColor = tex2D(heightSampler, tex_coords) * hscale
               + (1 - hscale) / 2.0f - 0.5;
    depth = input.viewOrigin.z + dis * viewVec.z;
    pixelColor.rgba = depth - pixelColor.rgba;

    depth_step *= 0.5f;
    if ((pixelColor.r * pixelColor.g * pixelColor.b * pixelColor.a) > 0) //no collision
    {
        dis += depth_step;
    }
    else
    {
        dis -= depth_step;
    }
}

Ray-Casting Pseudo-code.
/////////////////////////////////////////////////////////////////////
// True Impostors                                                   //
/////////////////////////////////////////////////////////////////////
// this portion of code contains the main loop which marches        //
// through the volume, refracting the view vector at collisions     //
/////////////////////////////////////////////////////////////////////

for (int i = 1; i < linear_search_steps; i++)
{
    depth = in.viewOrigin.z + dis * viewVec.z;
    tex_coords = dis * float2(-viewVec.x, -viewVec.y);
    tex_coords += float2(in.viewOrigin.x, in.viewOrigin.y);

    pixelColor = tex2D(heightSampler, tex_coords) * hscale;
    pixelColor += (1 - hscale) / 2.0f - 0.5;

    float4 tempPixelColor = depth - pixelColor.rgba;
    oldPolarity = newPolarity;
    newPolarity = tempPixelColor.r * tempPixelColor.g * tempPixelColor.b * tempPixelColor.a;
    float collision = oldPolarity * newPolarity;

    //if we are entering or leaving the volume
    if (collision < 0)
    {
        newPolarity = normalize(newPolarity); //make sure it is either 1 or -1

        //do a secant step to approximate the intersection;
        //we actually have to do 4 tests with 4 height-fields,
        //then find the one that intersects within the bounds

        //first we need the 2D distances of the old and new sample points
        float oldPointDistance = oldDis * sqrt(pow(viewVec.x, 2) + pow(viewVec.y, 2));
        float newPointDis = dis * sqrt(pow(viewVec.x, 2) + pow(viewVec.y, 2));

        run = newPointDis - oldPointDistance;
        rise1 = pixelColor.r - oldPixelColor.r;
        rise2 = pixelColor.g - oldPixelColor.g;
        rise3 = pixelColor.b - oldPixelColor.b;
        rise4 = pixelColor.a - oldPixelColor.a;

        slope1 = rise1 / run;
        slope2 = rise2 / run;
        slope3 = rise3 / run;
        slope4 = rise4 / run;

        //find the 2D slope of the view vector
        float Xv = sqrt((viewVec.x * viewVec.x) + (viewVec.y * viewVec.y));

        //now perform the line tests
        //     Y1 + S*Xo - Yo - S*X1
        // t = _____________________
        //          Yv - S*Xv
        t1 = (pixelColor.r + (slope1 * 0) - in.viewOrigin.z - (slope1 * newPointDis));
        t1 = t1 / (viewVec.z - (slope1 * Xv));
        t2 = (pixelColor.g + (slope2 * 0) - in.viewOrigin.z - (slope2 * newPointDis));
        t2 = t2 / (viewVec.z - (slope2 * Xv));
        t3 = (pixelColor.b + (slope3 * 0) - in.viewOrigin.z - (slope3 * newPointDis));
        t3 = t3 / (viewVec.z - (slope3 * Xv));
        t4 = (pixelColor.a + (slope4 * 0) - in.viewOrigin.z - (slope4 * newPointDis));
        t4 = t4 / (viewVec.z - (slope4 * Xv));

        //this should return 0 if t is not between the two points, 1 if it is
        tempT1 = min(ceil(max((t1 - oldDis), 0) * max(-(t1 - dis), 0)), 1);
        tempT2 = min(ceil(max((t2 - oldDis), 0) * max(-(t2 - dis), 0)), 1);
        tempT3 = min(ceil(max((t3 - oldDis), 0) * max(-(t3 - dis), 0)), 1);
        tempT4 = min(ceil(max((t4 - oldDis), 0) * max(-(t4 - dis), 0)), 1);

        //get the final distance from the view origin
        IntersectionTDistance = (tempT1 * t1) + (tempT2 * t2) + (tempT3 * t3) + (tempT4 * t4);

        //now find the normal data for the intersection point
        float2 normal_tex_coords = IntersectionTDistance * -viewVec.xy;
        normal_tex_coords += float2(in.viewOrigin.x, in.viewOrigin.y);

        n1 = tex2D(normalSampler1, normal_tex_coords);
        n2 = tex2D(normalSampler2, normal_tex_coords);
        n3 = tex2D(normalSampler3, normal_tex_coords);
        n4 = tex2D(normalSampler4, normal_tex_coords);

        newNormal = (tempT1 * n1) + (tempT2 * n2) + (tempT3 * n3) + (tempT4 * n4);
        newNormal.xyz = normalize(newNormal.xyz - 0.5f);

        viewVec = refract(normalize(viewVec.xyz),
                          normalize(oldPolarity) * newNormal.xyz, dielectric);

        //need to update the in.viewOrigin vector to be the new point
        in.viewOrigin.z = depth;
        in.viewOrigin.xy = tex_coords.xy;
        dis = 0;
    }

    oldDis = dis;
    dis += depth_step;
    oldPixelColor.rgba = pixelColor.rgba; //stores the previous position
    oldTextureCoordinates = tex_coords;
}