Draft date: July 13, 2021.

JP Lee is creating Game | Patreon


Preface

After leaving the company where I worked for a long time, I am going to research some rendering techniques and post them on GitHub. Writing these pages should help me refresh some old memories. So... I want to say up front that I didn't want to do anything too difficult from the start; it seemed I would just tire myself out by tackling hard things first. 😢 Back to the main text: I will try to explain each effect section as simply as possible. Since I have spent the past several years focused on team management, my ability to explain technical details has honestly declined, so I will treat this as a way of reminding myself. Most of the explanations are aimed not at programmers but at artists interested in shading. Rather than relying entirely on rendering programmers or technical artists, artists who understand the implementation side will be able to organize their thoughts before communicating with them.



In this example, we did not write the shader in the Multi-pass(1) format.


<aside> 💡 Rendering here is basic forward rendering, and everything was created in the URP environment.

</aside>

What we can learn from this content.

Desmos | Graphing Calculator


Basic preparation

As an example, I used a character from XRD obtained from the Internet. You can probably guess that the normals were edited in a DCC tool; we are not going to use the normal information Unity computes itself. There are a few things to check in the mesh's inspector information. Whenever possible, I'll use Import (the importer's Normals setting, so the mesh keeps its authored normals).


https://s3-us-west-2.amazonaws.com/secure.notion-static.com/9e505e84-4d10-4184-84ac-914d13369bf3/Untitled.png

Since we will not be building our shader with multi-pass shading, we will need two materials.


  1. OutlineMat.mat for outline rendering.
  2. ToonShadingMat.mat for the actual character shading.

Add these two materials to the Assets directory. When you are ready, assign the two materials to one mesh as shown in the picture below.


https://s3-us-west-2.amazonaws.com/secure.notion-static.com/b08a2927-e88e-484a-ac9a-3c30f445a68c/Untitled.png

The order of the materials doesn't matter, as the shader applied to OutlineMat will render with Cull Front.

Implementing multi-pass in one shader is equivalent to applying two materials like this; conceptually you could simply call it multi-pass, since the same mesh entity gets rendered twice either way. Personally, I prefer this method over multi-pass when shading characters or similar effects.


<aside> 💡 Let's pick up some good information from TA Jongpil Jeong's very friendly URP shader course.

</aside>

URP Shaders for Artists Shader #1

URP Shaders for Artists Shader #2 - Renaming a shader (code)

URP Shaders for Artists Shader #3 - Properties 1 (Shader Graph)

URP Shaders for Artists Shader #3 - Properties 2 (Shader Graph)

Let's first go through that really good series on the basics of writing URP shaders, and then come back to my topic.


Creating the outline rendering shader.

Simply put, there are three major outline processing techniques.

  1. Offset the vertices of the mesh in the normal vector direction (the direction the normal points) and fill the extruded shell with color.
  2. Apply the rim light technique.
  3. Use post-processing (edge detection using depth/normal information plus a Sobel filter).


You can categorize them like this. Once you know these are the usual approaches, the rest follows; I'll implement method 1 here.

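To make method 1 concrete, here is a minimal numeric sketch in plain Python (illustrative only, not shader code; `extrude` is a hypothetical helper). The whole trick is pushing each vertex out along its unit normal by a small width; the pushed copy is then rendered with front faces culled so only the silhouette shell shows.

```python
def extrude(vertex, normal, width):
    # vertex, normal: (x, y, z) tuples; normal is assumed normalized.
    # Move the vertex along its normal by `width` units.
    return tuple(v + n * width for v, n in zip(vertex, normal))

# A vertex on a unit sphere: its normal equals its position.
p = (1.0, 0.0, 0.0)
n = (1.0, 0.0, 0.0)
print(extrude(p, n, 0.25))  # -> (1.25, 0.0, 0.0)
```

In the shader this same offset happens in object space in the vertex stage, before the projection transform.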

Debug shading: Light-Space Outline width variant result debug.


I'm going to create something like the one above. For the intermediate auxiliary theory I will add external links; the internet is full of good resources.


Implementation.

URP Toon Outline.shader

Shader "LightSpaceToon2/Outline LightSpace"
{
    Properties
    {
        
        [Space(8)]
        [Enum(UnityEngine.Rendering.CompareFunction)] _ZTest ("ZTest", Int) = 4
        [Enum(UnityEngine.Rendering.CullMode)] _Cull ("Culling", Float) = 1

        [Header(Outline)]
        _Color ("Color", Color) = (0,0,0,1)
        _Border ("Width", Float) = 3
        [Toggle(_COMPENSATESCALE)]
        _CompensateScale            ("     Compensate Scale", Float) = 0
        [Toggle(_OUTLINEINSCREENSPACE)]
        _OutlineInScreenSpace       ("     Calculate width in Screen Space", Float) = 0
        _OutlineZFallBack ("     Calculate width Z offset", Range(-20 , 0)) = 0

    }
    SubShader
    {
        Tags
        {
            "RenderPipeline" = "UniversalPipeline"
            "RenderType"="Opaque"
            "Queue"= "Geometry+1"
        }
        Pass
        {
            Name "StandardUnlit"
            Tags{"LightMode" = "UniversalForward"}

            Blend SrcAlpha OneMinusSrcAlpha
            Cull[_Cull]
            ZTest [_ZTest]
        //  Make sure we do not get overwritten
            ZWrite On

            HLSLPROGRAM
            // Required to compile gles 2.0 with standard srp library
            #pragma prefer_hlslcc gles
            #pragma exclude_renderers d3d11_9x
            #pragma target 2.0

            #pragma shader_feature_local _COMPENSATESCALE
            #pragma shader_feature_local _OUTLINEINSCREENSPACE

            // -------------------------------------
            // Lightweight Pipeline keywords

            // -------------------------------------
            // Unity defined keywords
            #pragma multi_compile_fog

            //--------------------------------------
            // GPU Instancing
            #pragma multi_compile_instancing
            // #pragma multi_compile _ DOTS_INSTANCING_ON // needs shader target 4.5
            
            #pragma vertex vert
            #pragma fragment frag

            // Core.hlsl also pulls in the URP input buffers (including _MainLightPosition)
            #include "Packages/com.unity.render-pipelines.universal/ShaderLibrary/Core.hlsl"

            CBUFFER_START(UnityPerMaterial)
                half4 _Color;
                half _Border;
                half _OutlineZFallBack;
            CBUFFER_END

            struct VertexInput
            {
                float4 vertex : POSITION;
                float3 normal : NORMAL;
                UNITY_VERTEX_INPUT_INSTANCE_ID
            };

            struct VertexOutput
            {
                float4 position : SV_POSITION;
                half fogCoord : TEXCOORD0;

                UNITY_VERTEX_INPUT_INSTANCE_ID
                UNITY_VERTEX_OUTPUT_STEREO
            };

            VertexOutput vert (VertexInput v)
            {
                VertexOutput o = (VertexOutput)0;
                UNITY_SETUP_INSTANCE_ID(v);
                UNITY_TRANSFER_INSTANCE_ID(v, o);
                UNITY_INITIALIZE_VERTEX_OUTPUT_STEREO(o);

                half ndotlHalf = dot(v.normal , _MainLightPosition.xyz) * 0.5 + 0.5;

            //  Extrude
                #if !defined(_OUTLINEINSCREENSPACE)
                    #if defined(_COMPENSATESCALE)
                        float3 scale;
                        scale.x = length(float3(UNITY_MATRIX_M[0].x, UNITY_MATRIX_M[1].x, UNITY_MATRIX_M[2].x));
                        scale.y = length(float3(UNITY_MATRIX_M[0].y, UNITY_MATRIX_M[1].y, UNITY_MATRIX_M[2].y));
                        scale.z = length(float3(UNITY_MATRIX_M[0].z, UNITY_MATRIX_M[1].z, UNITY_MATRIX_M[2].z));
                    #endif
                    v.vertex.xyz += v.normal * 0.001 * (_Border * ndotlHalf)
                    #if defined(_COMPENSATESCALE)
                        / scale
                    #endif
                    ;
                #endif

                o.position = TransformObjectToHClip(v.vertex.xyz);
                o.fogCoord = ComputeFogFactor(o.position.z);

            //  Extrude
                #if defined(_OUTLINEINSCREENSPACE)
                    if (_Border > 0.0h) {
                        float3 normal = mul(UNITY_MATRIX_MVP, float4(v.normal, 0)).xyz; // to clip space
                        float2 offset = normalize(normal.xy);
                        float2 ndc = _ScreenParams.xy * 0.5;
                        o.position.xy += ((offset * (_Border * ndotlHalf)) / ndc * o.position.w);
                    }
                #endif

                
                o.position.z += _OutlineZFallBack * 0.0001;
                return o;
            }

            half4 frag (VertexOutput input ) : SV_Target
            {
                UNITY_SETUP_INSTANCE_ID(input);
                UNITY_SETUP_STEREO_EYE_INDEX_POST_VERTEX(input);
            //  _Color lives in a cbuffer, so copy it before applying fog.
                half4 color = _Color;
                color.rgb = MixFog(color.rgb, input.fogCoord);
                return color;
            }
            ENDHLSL
        }
    }
    FallBack "Hidden/InternalErrorShader"
}

The code turned out longer than I expected...
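The `ndotlHalf` term above is what makes the outline width "light-space": it is dot(N, L) remapped from [-1, 1] to [0, 1] (the half-Lambert trick) and multiplied into `_Border`. A quick numeric check, as a hypothetical plain-Python sketch rather than shader code:

```python
def ndotl_half(normal, light_dir):
    # dot(N, L) remapped from [-1, 1] to [0, 1], as stored in ndotlHalf.
    d = sum(n * l for n, l in zip(normal, light_dir))
    return d * 0.5 + 0.5

light = (0.0, 0.0, 1.0)
# Facing the light -> weight 1 -> full _Border width.
print(ndotl_half((0.0, 0.0, 1.0), light))   # -> 1.0
# Facing away -> weight 0 -> the line width collapses to zero.
print(ndotl_half((0.0, 0.0, -1.0), light))  # -> 0.0
# Perpendicular to the light -> half the width.
print(ndotl_half((1.0, 0.0, 0.0), light))   # -> 0.5
```

So the outline thickness sweeps smoothly from full width to nothing as the surface turns away from the main light direction.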

Creating an outline rendering mask.

Concept: nothing special to say here. Usually the outline thickness is painted into one of the vertex color channels, which is then used to make the line thicker or to suppress it entirely.


struct Attributes//appdata
{
    float4 vertex : POSITION;
    float3 normal : NORMAL;
    float4 color  : COLOR;// The mesh's vertex color rgba attribute.
    UNITY_VERTEX_INPUT_INSTANCE_ID
};

struct Varyings//v2f
{
    float4 position  : SV_POSITION;
    float4 color     : COLOR;// Vertex color rgba data passed on from the vertex stage.
    half fogCoord    : TEXCOORD0;
    half4 debugColor : TEXCOORD1;
    half3 dotBlend   : TEXCOORD2;
    UNITY_VERTEX_INPUT_INSTANCE_ID
};

Let's review the code briefly.

struct Attributes//appdata
{
    float4 vertex : POSITION;
    float3 normal : NORMAL;
    float4 color  : COLOR;// The mesh's vertex color rgba attribute.
    UNITY_VERTEX_INPUT_INSTANCE_ID
};

struct stands for structure. Attributes holds the information passed from the modeled mesh into the shader.


https://s3-us-west-2.amazonaws.com/secure.notion-static.com/a96f6e88-18dc-4a08-8e04-c78e87fa422b/Untitled.png

It's like this.

In Unity3D, we create a structure like this to receive mesh data such as vertex, normal, and UV information.

<aside> 💡 When talking to programmers, using these vertex attribute names means we are speaking an industry standard, so there is no problem communicating with each other, no matter the engine...

</aside>

If you look at Attributes, do you see a color? What color does a mesh have? If you think about it, you know the vertex color we are familiar with, right? That's the one. You need to put the vertex color attribute into the structure so it can be delivered to the vertex stage. Easy, right? Creating the structure defines which attributes go into the Attributes package, and packaging them this way is what actually lets them be transferred to the shader stage.

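As an analogy only (plain Python, hypothetical; the HLSL struct itself can't run standalone here), the Attributes "package" is just a bundle of per-vertex fields, with the vertex color riding along as one of them:

```python
from dataclasses import dataclass

# Hypothetical stand-in for the HLSL Attributes struct: the bundle of
# per-vertex data the mesh hands to the vertex stage.
@dataclass
class Attributes:
    vertex: tuple  # POSITION (x, y, z)
    normal: tuple  # NORMAL
    color: tuple   # COLOR rgba; the alpha channel will become our outline mask

v = Attributes(vertex=(0.0, 1.0, 0.0),
               normal=(0.0, 1.0, 0.0),
               color=(1.0, 1.0, 1.0, 0.0))
print(v.color[3])  # -> 0.0, the painted mask value the outline will read
```

The point is simply that whatever you want the vertex stage to see must be declared as a field of this package first.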

For a brief explanation, please take a look at the picture below.

Source: Shader Development https://shaderdev.com/


  1. Let's see how the vertex shader and pixel shader go through the process of drawing an image to the screen. Assume there is a rectangular mesh as shown in the picture. For it to be drawn as an image on screen, it must first pass through the vertex attribute processing unit, that is, the vertex shader. You can also add a ripple effect to the vertices using sin() in the vertex shader. To make this easier to understand conceptually, think of each stage as a process unit in a factory. The vertex attributes are packaged into a packet (a bundle) in the vertex shader stage; at this point new position values and other information are packetized, and the position value among each vertex's attributes is always transmitted as a required attribute.
  2. The packetized data then goes from the vertex output through the rasterizer stage. Simply put, you can think of the rasterizer stage as a pixelation stage. More precisely, image information consists of pixels in a two-dimensional array, and an image is expressed by combining these dots at regular intervals. In other words, it is a set of consecutive pixels per line, and the unit that processes this is called the rasterizer. If a triangle is drawn as shown in the figure above, the rasterizer collects the three (XYZ) vertex positions one by one to form the triangle and then finds the pixels that fall inside it.
  3. Then the rasterizer output is sent to the fragment stage, and finally the pixel shader performs the calculation that determines the final color.
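A toy version of step 2, assuming the common edge-function approach (a sketch, not what any particular GPU actually does): take the triangle's three positions and test which pixel centers fall inside.

```python
def edge(a, b, p):
    # Twice the signed area of triangle (a, b, p); the sign tells which
    # side of the directed edge a->b the point p lies on.
    return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])

def inside(tri, p):
    # p is inside (or on an edge) when it is on the same side of all
    # three edges. Real rasterizers add fill rules for shared edges.
    a, b, c = tri
    w0, w1, w2 = edge(a, b, p), edge(b, c, p), edge(c, a, p)
    return (w0 >= 0 and w1 >= 0 and w2 >= 0) or \
           (w0 <= 0 and w1 <= 0 and w2 <= 0)

tri = ((0, 0), (8, 0), (0, 8))
# Test the centers of an 8x8 pixel grid against the triangle.
covered = [(x, y) for y in range(8) for x in range(8)
           if inside(tri, (x + 0.5, y + 0.5))]
print(len(covered))  # -> 36 pixels belong to this triangle
```

Each covered pixel is then handed to the fragment stage with interpolated attributes, which is where the pixel shader computes its final color.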

I also recommend referring to the PPT that I have prepared separately.

이펙트 팀 세이더 기초 강의 커리큘럼.pptx

Chinese version.


What you create at this point is the struct Varyings structure.

struct Varyings//v2f
{
    float4 position  : SV_POSITION;
    float4 color     : COLOR;// Vertex color rgba data passed on from the vertex stage.
    half fogCoord    : TEXCOORD0;
    half4 debugColor : TEXCOORD1;
    half3 dotBlend   : TEXCOORD2;
    UNITY_VERTEX_INPUT_INSTANCE_ID
};

Creating a struct means you can use it as a type. (This is a bit difficult, isn't it? At this point it's easier to just memorize it.) It means you can declare the vertex stage with the Varyings type. In any case, since a structure defines a type like this, give the vertex shader function the Varyings type when you create it, as shown below, and declare its argument list as an input of type Attributes. Let's interpret Varyings vert (Attributes input)...


<aside> 💡 It can be read as: an input list of type Attributes is passed to the vert function, which returns type Varyings.

</aside>

Varyings vert (Attributes input)
{
    Varyings o = (Varyings)0;
    UNITY_SETUP_INSTANCE_ID(input);
    UNITY_TRANSFER_INSTANCE_ID(input, o);
    UNITY_INITIALIZE_VERTEX_OUTPUT_STEREO(o);
    float ndotlLine = dot(input.normal , _MainLightPosition.xyz);

    //Color mask
    half vertexColorMask = input.color.a;// Put the A channel of the color received from the attributes here.
    input.vertex.xyz += input.normal * 0.001 * lerp(_BorderMin , _BorderMax , 1);
    input.vertex.xyz *= vertexColorMask;// Multiply by the vertex color mask.
    o.position = TransformObjectToHClip(input.vertex.xyz);
    o.position.z += _OutlineZSmooth * 0.0001;
    return o;
}

Add a variable called half vertexColorMask and put input.color.a into it. If this value is then multiplied in with input.vertex.xyz *= vertexColorMask, the 0-to-1 value stored in vertexColorMask scales the outline thickness, so wherever the vertex color was painted 0 the resulting thickness is 0 too, right?


https://s3-us-west-2.amazonaws.com/secure.notion-static.com/b87e75b6-bcb2-4c36-95dd-a13d5966b9a5/Untitled.png

Grasp the characteristics of the outer contour line thickness:

input.vertex.xyz += input.normal * 0.001 * lerp(_BorderMin * vertexColorMask , _BorderMax , 1);

In the format above, you can multiply _BorderMin by vertexColorMask or multiply _BorderMax by vertexColorMask, depending on your purpose.

Applied results, as a test.



Light Space Outline Width implementation.

Concept

<aside> 💡 When drawing a picture with lines, as in cartooning, plaster cast drawing, or line drawing with various writing instruments, the lines on the parts receiving light are drawn thin or omitted, and conversely the shadowed sides are drawn darker or thicker; that is how you express a three-dimensional feel of your own. The same idea applies here: the line work is processed according to the lighting direction.

</aside>

https://s3-us-west-2.amazonaws.com/secure.notion-static.com/89c9786a-e572-4d37-b23c-edecda420bf0/Untitled.png

The reference image above is from a book called How to Draw, sold on Amazon.

https://s3-us-west-2.amazonaws.com/secure.notion-static.com/9559b068-9983-4f64-8721-8541556035d6/Untitled.png

Here are some cartoon character drawing references that are much easier to understand! Anyway, this is how it is expressed.

Implementation

Let's take a look at which part of the code above relates to the Light Space outline. This is what the implementation looks like.


https://s3-us-west-2.amazonaws.com/secure.notion-static.com/9a637c2d-4736-4f64-8571-a058a0df649b/Untitled.png

Naturally, the outline is being processed in the vertex stage. Let's look at the code below first.


Varyings vert (Attributes input)
{
    Varyings o = (Varyings)0;
    UNITY_SETUP_INSTANCE_ID(input);
    UNITY_TRANSFER_INSTANCE_ID(input, o);
    UNITY_INITIALIZE_VERTEX_OUTPUT_STEREO(o);
    float ndotlLine = dot(input.normal , _MainLightPosition.xyz);
    //Light Space Outline mask here.
    o.dotBlend = ndotlLine;
    //Color mask
    half vertexColorMask = input.color.a; //Put the A channel among the colors received from the attributes here.
    input.vertex.xyz += input.normal * 0.001 * lerp(_BorderMin * vertexColorMask , _BorderMax , ndotlLine);
    o.position = TransformObjectToHClip(input.vertex.xyz);
    o.position.z += _OutlineZSmooth * 0.0001;
    return o;
}

If you look at the code, it still contains some junk code left over from getting things to work. Far more often than you might think, you will use the ndotl operation to create a mask or a weight value.


Source: https://darkcatgame.tistory.com/14


Further down this topic there is a more detailed article about NdotL.

Don't think of the image above as a 3D rendering; think of it as a Quick Mask in Photoshop. It is easy to understand if you treat values closer to white as having a weight of 1, while values closer to black converge to 0. In the end, if you put this result into the blending weight of the lerp function, which is a linear interpolation, lerp(A, B, blendingWeight) returns a value according to that weight. Interpreting the figure above, the value of A approaches 1 as you move left across the circular shape, and the value of B approaches 1 as you move right, because the ndotl value is used as the blending weight. Let's look at the code below again.

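Before that, the lerp weighting just described can be checked with a tiny plain-Python sketch (illustrative only; the shader's lerp behaves the same way):

```python
def lerp(a, b, w):
    # Linear interpolation: w = 0 returns a, w = 1 returns b,
    # values in between blend proportionally, just like HLSL lerp().
    return a + (b - a) * w

# ndotl acts as the blending weight: toward the light the weight moves
# toward 1 (returning B), away from the light toward 0 (returning A).
print(lerp(10.0, 20.0, 0.0))   # -> 10.0
print(lerp(10.0, 20.0, 1.0))   # -> 20.0
print(lerp(10.0, 20.0, 0.25))  # -> 12.5
```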