High-Level Shading Language

From Wikipedia, the free encyclopedia

High Level Shading Language (HLSL) is the programming language developed for DirectX that is used to program shader units. Occasionally, the term HLSL is also applied to the whole group of high-level programming languages for shaders.

Purpose

In computer graphics, shading refers to the modification of individual vertices or fragments within the graphics pipeline. This work is preferably done directly on the graphics hardware, which for a long time made it necessary to program shaders in assembly language. Assembly programming, however, is impractical, error-prone and dependent on the hardware manufacturer. High-level shading languages are intended to remedy this: they provide high-level language constructs that simplify programming and thus allow the programmer to concentrate on the goal. A compiler translates the high-level code into machine language for the graphics processor. The DirectX-specific high-level language HLSL is translated by the DirectX library, with the aid of the graphics driver, into the assembly language suitable for the installed graphics hardware while the application is running. Separate shaders for Nvidia or ATI/AMD graphics cards are therefore no longer necessary.

Language elements

Unlike many other languages, HLSL offers no object-oriented constructs. It is strongly oriented towards C, but has built-in data types and operations optimized for shader programming.

Global shader parameters

Parameters passed to a shader are globally available throughout the HLSL code. They are declared outside of functions or structs, usually at the beginning of the code.

 float4x4 World; // Defines a 4x4 floating-point matrix, here the world matrix
 float4x4 WorldViewProj; // The world-view-projection matrix, computed as World*View*Proj
 float3 lightDir; // A 3-element vector
 float4 LightColor = {0.5,0.5,0.5,1}; // Light color (vector with a predefined value)
 float4 Ambient = {0.5,0.5,0.5,1}; // Color of the ambient light
 float4 LightDir = {0,0,-1, 0}; // Direction of the sunlight (here: straight down)
 float OffsetBrightness = 0.0; // Brightness offset, used by the pixel shader below
 float OffsetContrast = 0.0; // Contrast offset, used by the pixel shader below

 Texture2D Tex0; // A texture

SamplerState DefaultSampler // The "sampler" defines the parameters for texture mapping
{
 filter = MIN_MAG_MIP_LINEAR; // Interpolation filter for texture stretching
 AddressU = Clamp; // Clamp texture coordinates outside [0..1]
 AddressV = Clamp;
};

For the meaning of the above matrices, see the article on the graphics pipeline.
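The listing above assumes that the combined world-view-projection matrix has been precomputed on the CPU. As a sketch of how the individual matrices relate, the transformation could also be carried out step by step in the shader; the separate View and Proj matrices and the helper function below are illustrative assumptions, not part of the listing above:

```hlsl
// Sketch (hypothetical variant): supplying the three matrices separately
// and applying them one after the other instead of using a combined matrix.
float4x4 World; // object space -> world space
float4x4 View;  // world space  -> camera space
float4x4 Proj;  // camera space -> clip space

float4 TransformToClipSpace(float4 pos)
{
    float4 worldPos = mul(pos, World);
    float4 viewPos  = mul(worldPos, View);
    return mul(viewPos, Proj);
}
```

In practice the combined matrix is preferred, since it reduces three matrix multiplications per vertex to one.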

Input and output of the vertex shader

Each parameter could of course be written individually in the parameter list of a shader function; in practice, however, uniform structures are common, which save typing and provide greater clarity. In principle, any values and vectors can be passed in the input structure, but a position is almost always included.

 // Input for the vertex shader
 struct MyShaderIn
 {
     float4 Position : POSITION; // Tells the compiler what the variable "means". Here: this is a position
     float4 Normal : NORMAL0; // The vertex normal, used for lighting
     float2 TexCoords : TEXCOORD0; // Texture coordinates
 };

 struct MyShaderOut
 {
     float4 Position : POSITION;
     float2 TexCoords : TEXCOORD0;
     float3 Normal : TEXCOORD1;
 };

The "in" struct specifies the data structure as it is passed from the mesh to the vertex shader. The vertex shader processes this data and returns an "out" struct. That structure is then passed on to the pixel shader, which in the end returns only a float4 (or similar) with the final pixel color.

Vertex and pixel shader functions

A function must exist for the vertex shader and for the pixel shader. It takes a data structure and processes it accordingly. The vertex shader is called once for each vertex, the pixel shader once for each pixel (fragment) to be rendered.

 MyShaderOut MyVertexShader(MyShaderIn In)
 {
     MyShaderOut Output = (MyShaderOut)0;
     // The next line is the projection multiplication. It multiplies the position of the current
     // vertex by the combined world, view and projection 4x4 matrix to obtain the screen coordinates
     Output.Position = mul(In.Position, WorldViewProj);
     Output.TexCoords = In.TexCoords; // In this simple example, texture coordinates are simply passed through
     Output.Normal = normalize(mul((float3)In.Normal, (float3x3)World)); // The normal is rotated
     return Output;
 }

 // A helper function
 float DotProduct(float3 lightPos, float3 pos3D, float3 normal)
 {
     float3 lightDir = normalize(pos3D - lightPos);
     return dot(-lightDir, normal);
 }


  // The pixel shader merely returns a color (possibly with alpha) as its return value
 float4 MyPixelShader(MyShaderOut In) : COLOR0
 {
     // Illumination of the surface (the dot product of the negative light vector and the
     // surface normal is > 0 when the surface faces the light source)
     float sunLight = dot((float3)-LightDir, In.Normal);
     float4 sunLightColor = float4(sunLight, sunLight, sunLight, 1); // Set the alpha channel
     sunLightColor *= LightColor; // Apply the light color
     sunLightColor = saturate(sunLightColor); // Clamp the color values to [0..1]
     // Fetch the texture color at the position to be drawn. We do not need to worry about
     // the interpolation of the texture coordinates; the hardware and compiler take care of that.
     float4 baseColor = Tex0.Sample(DefaultSampler, In.TexCoords);
     float4 brightnessColor = baseColor * (sunLightColor + Ambient); // Factor in brightness and contrast
     brightnessColor = (brightnessColor + OffsetBrightness) * (1.0 + OffsetContrast);
     return brightnessColor;
 }

Geometry shader

The implementation of a geometry shader is optional and makes it possible to map one primitive to 0 to n new primitives. However, the type of the output primitives and the maximum number of vertices produced must be declared at compile time. The implementation is procedural and uses data structures introduced specifically for this purpose (PointStream<T>, LineStream<T> and TriangleStream<T>). There is also the option of accessing the neighboring triangles or lines; this is achieved with the input modifiers triangleadj and lineadj. Typical applications of geometry shaders are the generation of point sprites and rendering into cube map textures. Here is a simple geometry shader that shrinks every triangle it is applied to towards its centroid:

 [maxvertexcount(3)]
 void GS(triangle MyShaderOut input[3], inout TriangleStream<MyShaderOut> OutputStream)
 {
     MyShaderOut v; // "point" and "centroid" are HLSL keywords, so other names are used here
     float4 center = (input[0].Position + input[1].Position + input[2].Position) / 3.0;
     v = input[0];
     v.Position = lerp(center, input[0].Position, 0.9);
     OutputStream.Append(v);

     v = input[1];
     v.Position = lerp(center, input[1].Position, 0.9);
     OutputStream.Append(v);

     v = input[2];
     v.Position = lerp(center, input[2].Position, 0.9);
     OutputStream.Append(v);

     OutputStream.RestartStrip();
 }
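The generation of point sprites mentioned above can be sketched in a similar way: each input point is expanded into a screen-aligned quad, emitted as a four-vertex triangle strip. The global parameter SpriteSize and the function name are assumptions for illustration, not part of the example above:

```hlsl
// Sketch (hypothetical): expanding one point into a quad (point sprite).
// SpriteSize is an assumed global parameter controlling the sprite's extent.
float SpriteSize = 0.1;

[maxvertexcount(4)]
void PointSpriteGS(point MyShaderOut input[1],
                   inout TriangleStream<MyShaderOut> OutputStream)
{
    // Corner offsets of the quad, in the order required for a triangle strip
    float2 offsets[4] = { float2(-1,-1), float2(-1,1), float2(1,-1), float2(1,1) };
    for (int i = 0; i < 4; i++)
    {
        MyShaderOut v = input[0];
        // Offset in clip space; scaling by w keeps the sprite size
        // constant in screen space after the perspective divide
        v.Position.xy += offsets[i] * SpriteSize * v.Position.w;
        OutputStream.Append(v);
    }
    OutputStream.RestartStrip();
}
```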

Techniques

Finally, the defined functions must be assigned to techniques and passes so that the compiler can map them accordingly. The shader syntax changed slightly with DirectX 10, so the target version is also specified explicitly in the technique.

 technique10 MyTechnique // For DirectX 10+
 {
     pass Pass0
     {
         VertexShader = compile vs_4_0 MyVertexShader();
         PixelShader = compile ps_4_0 MyPixelShader();
     }
 }
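For comparison, under DirectX 9 the keyword is simply technique, without a version suffix, and older shader model profiles are used; a sketch of the same assignment in that syntax:

```hlsl
// Sketch: the same technique in Direct3D 9 effect syntax (shader model 3.0)
technique MyTechnique
{
    pass Pass0
    {
        VertexShader = compile vs_3_0 MyVertexShader();
        PixelShader  = compile ps_3_0 MyPixelShader();
    }
}
```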

Alternatives