Exact texture overlay

Views: 1,105  Published: 2022-09-18  Tags: c++ directx

Problem description


I'm trying to set up a two-stage render of objects in a 3D engine I'm working on, written in C++ with DirectX9, to facilitate transparency (and other things). I thought it was all working nicely until I noticed some dodginess on the edges of objects rendered before objects using this two-stage method.

The two-stage method is simple:

Draw the model to an off-screen ("side") texture of the same size using the same z-buffer (no MSAA is used anywhere); a rough setup sketch for this step follows the list

Draw off-screen ("side") texture over the top of the main render target with a suitable blend and no alpha test or write
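
For context, the code for step 1 isn't shown below; the side pass is set up roughly like this (sideTex and mainDepthStencil are placeholder names here, not the ones actually used in the engine):

// Illustrative sketch of step 1 only -- placeholder names, not the engine's actual code.
LPDIRECT3DSURFACE9 sideSurface = NULL;
sideTex->GetSurfaceLevel(0, &sideSurface);            // level-0 surface of the off-screen texture
dxDevice->SetRenderTarget(0, sideSurface);            // render into the side texture
dxDevice->SetDepthStencilSurface(mainDepthStencil);   // reuse the z-buffer from the main pass
dxDevice->Clear(0, NULL, D3DCLEAR_TARGET, D3DCOLOR_ARGB(0, 0, 0, 0), 1.0f, 0); // clear colour only, keep depth
// ... draw the model, then hand over to drawSideOver() (step 2, shown further down)
sideSurface->Release();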

In the image below, the left view is with the two-stage render of the gray object (a lamppost), with the body in front of it rendered directly to the target texture. The right view is with the two-stage render disabled, so both are rendered directly onto the target surface.

On close inspection it is as if the side texture is offset by exactly 1 pixel "down" and 1 pixel "right" when rendered over the target surface (but is rendered correctly in-place). This can be seen in an overlay of the off-screen texture (which I get my program to write out to a bitmap file via D3DXSaveTextureToFile) over a screenshot below.

One last image so you can see where the edge in the side texture is coming from (it's because rendering to the side texture does use the z test). Left is the screenshot, right is the side texture (as overlaid above).

All this leads me to believe that my "overlaying" isn't very effective. The code that renders the side texture over the main render target is shown below (note that the same viewport is used for all scene rendering (on and off screen)). The "effect" object is an instance of a thin wrapper over LPD3DXEFFECT, with the "effect" field (sorry about shoddy naming) being a LPD3DXEFFECT itself.

void drawSideOver(LPDIRECT3DDEVICE9 dxDevice, drawData* ddat)
{ // "ddat" drawdata contains lots of render state information, but all we need here is the handles for the targetSurface and sideSurface
    D3DXMATRIX idMat;
    D3DXMatrixIdentity(&idMat); // create identity matrix
    dxDevice->SetRenderTarget(0, ddat->targetSurface); // switch to targetSurface

    dxDevice->SetRenderState(D3DRS_ZENABLE, false); // disable z test and z write
    dxDevice->SetRenderState(D3DRS_ZWRITEENABLE, false);

    vertexOver overVerts[4]; // create square
    overVerts[0] = vertexOver(-1, -1, 0, 0, 1);
    overVerts[1] = vertexOver(-1, 1, 0, 0, 0);
    overVerts[2] = vertexOver(1, -1, 0, 1, 1);
    overVerts[3] = vertexOver(1, 1, 0, 1, 0);

    effect.setTexture(ddat->sideTex); // use side texture as shader texture ("tex")
    effect.effect->SetTechnique("over"); // change to "over" technique
    effect.setViewProj(&idMat); // set viewProj to identity matrix so 1/-1 map directly
    effect.effect->CommitChanges();

    setAlpha(dxDevice); // this sets up the alpha blending which works fine

    UINT numPasses, pass;
    effect.effect->Begin(&numPasses, 0);
    effect.effect->BeginPass(0);

    dxDevice->SetVertexDeclaration(vertexDecOver);
    dxDevice->DrawPrimitiveUP(D3DPT_TRIANGLESTRIP, 2, overVerts, sizeof(vertexOver));

    effect.effect->EndPass();
    effect.effect->End();

    dxDevice->SetRenderState(D3DRS_ZENABLE, true); // revert these so we don't mess everything up drawn after this
    dxDevice->SetRenderState(D3DRS_ZWRITEENABLE, true);
}

The C++ side definition for the VertexOver struct and constructor (HLSL side shown below somewhere):

struct vertexOver
{
public:
    float x;
    float y;
    float z;
    float w;
    float tu;
    float tv;

    vertexOver() { }
    vertexOver(float xN, float yN, float zN, float tuN, float tvN)
    {
        x = xN;
        y = yN;
        z = zN;
        w = 1.0;
        tu = tuN;
        tv = tvN;
    }
};
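
The vertexDecOver declaration passed to SetVertexDeclaration isn't reproduced here; for the float4 position plus float2 texcoord layout above it would be built along these lines (illustrative only, offsets assume the struct is tightly packed):

// Illustrative layout only -- the real vertexDecOver isn't shown in this post.
const D3DVERTEXELEMENT9 overElements[] =
{
    // x, y, z, w at byte offset 0  -> float4 POSITION0
    { 0,  0, D3DDECLTYPE_FLOAT4, D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_POSITION, 0 },
    // tu, tv at byte offset 16     -> float2 TEXCOORD0
    { 0, 16, D3DDECLTYPE_FLOAT2, D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_TEXCOORD, 0 },
    D3DDECL_END()
};
// dxDevice->CreateVertexDeclaration(overElements, &vertexDecOver);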

The inefficiency of re-creating and passing the vertices down to the GPU each draw aside, what I really want to know is why this method doesn't quite work, and whether there are any better methods for overlaying textures like this with an alpha blend that won't exhibit this issue.

I figured that the texture sampling might matter here, but messing about with the options didn't seem to help much (for example, using a LINEAR filter just makes it fuzzy, as you might expect, implying that the offset isn't as clear-cut as a 1-pixel discrepancy). Shader code:

struct VS_Input_Over
{
    float4 pos : POSITION0;
    float2 txc : TEXCOORD0;
};

struct VS_Output_Over
{
    float4 pos : POSITION0;
    float2 txc : TEXCOORD0;
    float4 altPos : TEXCOORD1;
};

struct PS_Output
{
    float4 col : COLOR0;
};

Texture tex;
sampler texSampler = sampler_state { texture = <tex>; magfilter = NONE; minfilter = NONE; mipfilter = NONE; AddressU = mirror; AddressV = mirror; };

// side/over shaders (these make up the "over" technique, pixel shader version 2.0)
VS_Output_Over VShade_Over(VS_Input_Over inp)
{
    VS_Output_Over outp = (VS_Output_Over)0;
    outp.pos = mul(inp.pos, viewProj);
    outp.altPos = outp.pos;
    outp.txc = inp.txc;
    return outp;
}

PS_Output PShade_Over(VS_Output_Over inp)
{
    PS_Output outp = (PS_Output)0;

    outp.col = tex2D(texSampler, inp.txc);

    return outp;
}

I've looked about for a "Blended Blit" or something but I can't find anything, and other related searches have only brought up forums implying that rendering a quad with an orthographic projection is the way to go about doing this.

Sorry if I've given far too much detail for this issue, but it's both interesting and infuriating, and any feedback would be greatly appreciated.

Solution

It looks to me like your problem is the mapping of texels to pixels. You must offset a screen-aligned quad by half a pixel to map the texels directly to the screen pixels. This issue is explained here: Directly Mapping Texels to Pixels (MSDN).
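
A minimal sketch of that fix using the question's vertexOver quad (rtWidth and rtHeight are placeholders for the render-target size in pixels, not names from the question). In D3D9 one pixel spans 2/width of clip space horizontally and 2/height vertically, so half a pixel is 1/width and 1/height; the quad is shifted half a pixel left and up so that texel centers land on pixel centers:

// Sketch only: rtWidth/rtHeight are assumed render-target dimensions, not the poster's variables.
void buildOverQuad(vertexOver (&verts)[4], float rtWidth, float rtHeight)
{
    const float dx = 1.0f / rtWidth;  // half a pixel in clip-space x (one pixel = 2 / rtWidth)
    const float dy = 1.0f / rtHeight; // half a pixel in clip-space y (one pixel = 2 / rtHeight)

    // Same corner order and texture coordinates as in drawSideOver, but shifted
    // half a pixel left (-x) and up (+y in clip space).
    verts[0] = vertexOver(-1.0f - dx, -1.0f + dy, 0.0f, 0.0f, 1.0f);
    verts[1] = vertexOver(-1.0f - dx,  1.0f + dy, 0.0f, 0.0f, 0.0f);
    verts[2] = vertexOver( 1.0f - dx, -1.0f + dy, 0.0f, 1.0f, 1.0f);
    verts[3] = vertexOver( 1.0f - dx,  1.0f + dy, 0.0f, 1.0f, 0.0f);
}

Equivalently, the half-pixel correction can be applied once in the vertex shader (e.g. outp.pos.xy += float2(-1.0f / rtWidth, 1.0f / rtHeight) * outp.pos.w;), which avoids rebuilding the vertex data each frame. Note this offset is only needed on D3D9; D3D10 and later align pixel and texel centers.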
