Shader editor


Graphical shader editor


Shaders can be created in Nebula by writing shader code directly, or by using the shader editor. The editor can be found in the content browser, and is accessed through Window -> Show Shader Editor…

The shader editor provides a graphically guided method of designing shaders using a node-based network. This is the first screen: [image: shady]

The green node in the center cannot be deleted, and is referred to as a super node. To the right is the toolbox containing shader functions, sorted by the type of operation they provide.


Constants refer to variables which cannot be changed at runtime. Context allows for fetching variables from the shader context, such as UV coordinates and vertex colors. The value of a constant can be changed by clicking its node and editing its properties in the ‘Node properties’ window.


Contains functions for converting data, for example extracting only the alpha channel from a four-component color vector, or assembling a three-component vector from three separate values.
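In generated GLSL, such conversions amount to simple swizzles and constructors. A sketch of both operations (all variable names here are illustrative, not part of the editor):

```glsl
vec4 color = vec4(1.0, 0.5, 0.25, 0.8);

// Extracting only the alpha channel from a four-component vector:
float alpha = color.w;

// Assembling a three-component vector from three separate values:
float r = 0.2;
float g = 0.4;
float b = 0.6;
vec3 assembled = vec3(r, g, b);
```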


Contains typical mathematical operations, such as add, subtract, etc.


Like constants, but they allow runtime modification, which means their actual value can be changed outside of the editor.


Contains variables exposed by the Nebula shading system in a global context, such as a per-frame random number, the current time, and global light information.


Contains functions for texture sampling, such as using a fixed mip level, custom UV coordinates, and cube map samplers.
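In GLSL terms, these sampling nodes roughly correspond to the following built-in functions (the sampler and function names below are illustrative):

```glsl
uniform sampler2D AlbedoMap;
uniform samplerCube EnvironmentMap;

vec4 SampleExamples(vec2 uv, vec3 dir)
{
    vec4 regular = texture(AlbedoMap, uv);          // sample with custom UV coordinates
    vec4 fixedMip = textureLod(AlbedoMap, uv, 2.0); // sample a fixed mip level (2)
    vec4 env = texture(EnvironmentMap, dir);        // cube map sample along a direction
    return regular + fixedMip + env;
}
```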



When the shader is done, you may want to build it. Building generates a shader file in your project which matches the name of the project, so the project must be saved before the build can take place. When the build completes, the generated code appears in the window labeled ‘Shader code’. If curiosity strikes and you want to see how the code is generated, click the ‘Debug code generation’ button in the toolbar to visualize the process graphically. This shows the traversal of the graph and how the resulting code is produced. The ‘Real-time visualize’ button, also found in the toolbar, will when toggled animate the node graph if an animated variable such as time is used. Disabling this is recommended if the node graph becomes slow or unresponsive.


Creating just a shader isn’t all that needs to be done before it can be used. In order to tell the Nebula rendering system when to render and with which shader (shadows, picking, unlit, etc.), a material has to be defined. To create a new material using your new shader, open the material editor by going to Material -> Open material editor. This brings up the following window:


Materials are not defined in files as single materials, but rather as lists containing several. The material editor creates an unnamed list by default, but it cannot be used until it is saved. To open an existing list, go to File -> Open list…


In this window I have created a testing material to demonstrate how to use this tool. To the right of ‘Material’ is a drop-down box, which contains all materials defined in this list file. Creating a new material in this list file is done by pressing the ‘Create new’ button, which creates a new empty material. Deleting a material is done by pressing the ‘Delete selected’ button. When a material is selected in the drop-down box, the UI will populate with the information defined by the material.


Materials can be defined as virtual, which prohibits them from being created as a Surface. The purpose of virtual materials is to create a stub material which can be implemented and used by other materials.


If your material is complicated and requires some type of explanation, you can add one using the ‘Edit description’ button.


Inheriting a material causes all of the variables and passes from the parent material to be copied into this material. Inheriting several materials is possible, and is done by separating the material names with the pipe character |. For example, Mat1|Mat2 will inherit both material Mat1 and Mat2. If an inherited material defines a variable with the same name as your material, or if any inherited materials have a name conflict, the last defined material replaces all previous definitions. Using the previous example, if Mat1, Mat2 and your material each define a conflicting name, the order of precedence is Mat1 -> Mat2 -> Yours.


To tell Nebula when and where to render an object, one must define a pass. This is done by clicking the ‘Add pass’ button. By default, this creates a ‘default pass’ which renders the shader you created earlier in the common pass FlatGeometryLit. To modify a pass after it’s been added, just click one of the three fields.


Batch denotes WHEN during the frame the object should render. Clicking this brings up a window which looks like this: [image: shady-mat-frame]

Here, we can select which frame shader we want to add the pass from. The drop-down lists all EXPORTED frame shaders; if your frame shader doesn’t appear in the list, it has not been exported yet. This window gives an overview of what is being done during a frame, excluding post effects, compute shaders and algorithms. Clicking on a pass expands it to show all batches executed within that pass.


Double-clicking on a batch here will bring you back to the previous window, but now the batch has become the one you selected.


Selecting the shader box will bring up a drop-down selection of available shaders. Select the shader you want to use for the batch.


Shaders can contain more than one ‘pair’ of vertex and pixel shaders. Shaders generated by the shader editor have only one, called ‘Generated’, but shaders written by hand may have several. Select from the list which variation within that shader file should be used.


Materials also define which variables within the shader code are supposed to be changeable. Pressing the ‘Add variable…’ button adds a new variable. If you want to bind this variable to one from the shader being edited in the shader editor, right-click on the item and select ‘Set to parameter’. This brings up a menu of all parameters in the shader, and sets the type accordingly. If you later change the type of a variable in the shader, you must also reset the variable in this editor, using ‘Set to parameter’ again.


For programmers

This chapter describes ways to extend the shader editor beyond the default tools. This allows for adding your own code, super nodes and even ordinary nodes.

Creating custom nodes

The shader editor uses XML files to declare nodes, which are called ‘variations’. The super node XML declarations are called ‘super variations’ because of their singleton nature. Variations can be found in <toolkit>/work/shady or <project>/work/shady. Variations are grouped by their parent folder, so a new category of variations is created by placing them in a new folder. This is how the previously mentioned nodes have been created, grouped under for example Context, System, Texture, etc.


Variations take the form of an XML document. The files have to be saved with the extension .ndv for the shader editor to find them. The XML declaration needs to be contained within a Shady tag. Here follows an example:

    <Variation desc="Adds two variables together to produce a sum. The two inputs must have the same type" simulationCommand="add">
        <Input type="any" name="input1"/>
        <Input type="any" name="input2"/>
        <Output type="any" name="product"/>
        <Source type="GLSL">
            <Code output="product">
                [output] = input1 + input2;
            </Code>
        </Source>
    </Variation>


The variation is opened with a Variation tag, containing a description and a simulation command. If no simulation command is defined, the node will not have a live preview. The simulation commands are hard-coded, since they are run through the engine code. However, using the command ‘copy’ will simply pass the result through the node, keeping its preview from turning black.

Inputs and outputs

Variations define inputs and outputs; these determine what goes into the node and what comes out of it. The type can be the special type we see here, called ‘any’, or a concretely defined type. Using ‘any’ lets the shader editor resolve what the ‘any’ type refers to based on the highest common denominator: the most complex type (the type with the most components) takes precedence.

The types available for use are:

  • float
  • vec2
  • vec3
  • vec4
  • ivec2
  • ivec3
  • ivec4
  • bvec2
  • bvec3
  • bvec4
  • bool
  • int

So if we define two ‘any’ inputs and an ‘any’ output, and plug a vec4 into one input, then the second input will be expanded to vec4, and the output will be vec4. The name field used for inputs and outputs will dictate the graphical appearance in the editor.


A single variation can have several implementations. This allows the shader editor to target several platforms using the same node graph. Within the Source tag we must define a piece of code for each output; this means that if an output goes unused, it produces no code during generation. Here, we target an output by referencing its name (in this case ‘product’) and write some code. In our example we want the output to be input1 + input2, so [output] is used as a special name denoting the output declared by this Code tag.
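As an illustration, a variation with several outputs would declare one Code tag per output. The node below is a hypothetical example following the schema shown above; it does not ship with the editor:

```xml
<Variation desc="Splits a color into its RGB part and its alpha" simulationCommand="copy">
    <Input type="vec4" name="color"/>
    <Output type="vec3" name="rgb"/>
    <Output type="float" name="alpha"/>
    <Source type="GLSL">
        <Code output="rgb">
            [output] = color.xyz;
        </Code>
        <Code output="alpha">
            [output] = color.w;
        </Code>
    </Source>
</Variation>
```

If only ‘rgb’ is connected in a graph, only its Code block would contribute to the generated shader.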

Super variations

Super variations are more like code path controllers than anything else. They need a template code file (per source type) which is used to generate a new shader, and can also be supplied a header file. The intention of the header file is to hold generic content, such as functions and variables, which will be used by the template. The template itself contains the shader stages to create (Vertex, Hull, Domain, Geometry and/or Fragment), but only serves as a stub. The intended way to use super variations is by declaring an input with a specific define. If that input is used, the define is automatically put into the target code, and thus that input can produce a code path which generates an input value. If no graph is attached to the input, the define is not set, and thus a ‘default’ code path can be used. Here is the XML for Default.ndv:

	<SuperVariation desc="Typical deferred shader. Outputs to diffuse, emissive, specular, roughness and godray projection." defines="">
		<Input type="vec4" name="Diffuse" result="pixel" defines="USE_CUSTOM_DIFFUSE"/>
		<Input type="vec4" name="Normal" result="pixel" defines="USE_CUSTOM_NORMAL"/>
		<Input type="vec4" name="Emissive" result="pixel" defines="USE_CUSTOM_EMISSIVE"/>
		<Input type="vec4" name="Specular" result="pixel" defines="USE_CUSTOM_SPECULAR"/>
		<Input type="float" name="Roughness" result="pixel" defines="USE_CUSTOM_ROUGHNESS"/>
		<Source type="GLSL" target="gl44" template="toolkit:work/shady/supervariations/templates/glsltemplate.fxh" header="toolkit:work/shady/supervariations/templates/glslheader.fxh">
			<Include path="toolkit:work/shaders/gl"/>
			<Include path="proj:work/shaders/gl"/>
		</Source>
	</SuperVariation>


Super variations have no outputs, since they are the end point of a finite graph. Instead, they define a set of typed inputs (which may not use the ‘any’ type), the shader stage each input results in, and what to define if that input is used.


In this image, we see graphs connecting to ‘Diffuse’, ‘Emissive’ and ‘Specular’. This causes the aforementioned inputs to declare their defines in the generated shader code. By using preprocessor directives in this manner, we can control the code being generated. An example of the output from the above image is the following:

#ifdef USE_CUSTOM_DIFFUSE
vec4 GetDiffuse(PixelShaderParameters params)
{
    vec4 Local0;
    Local0 = texture(Param0, params.uv);
    const float Local1 = 2.000000f;
    vec4 Local2;
    Local2 = Local0 * vec4(Local1);

    return Local2;
}
#endif

Notice how the USE_CUSTOM_DIFFUSE define seen in ‘Default.ndv’ is used. If we scroll down the generated code, we find this snippet, which is declared in the template file:

#ifndef USE_CUSTOM_DIFFUSE
vec4 GetDiffuse(const PixelShaderParameters params)
{
	return vec4(0, 0, 0, 0);
}
#endif

Notice that if we don’t define USE_CUSTOM_DIFFUSE, this function is used instead. This is one way inputs can control the resulting shader code, allowing for ‘sane’ defaults for these values. For example, if nothing is passed to Normal, we can safely assume the vertex normal should be used instead of sampling from a normal map.


To generate code, we need to define a source type and target language. This is done using the Source tag, which is also responsible for loading the template and, if available, a header. The Source tag can also declare default include paths, which are passed to the target language compiler when building the shader. This allows for using a hand-coded shader library and calling its functions when designing shaders in the shader editor.
