As a part of my studies, I’ve been developing a very simple programming language for effects, similar in spirit to Microsoft FX. The difference between AnyFX and Microsoft FX is that AnyFX is generic, meaning it will work with any back-end implementation. The language works by supplying everything we need to render BESIDES the shader code itself. This means that we actually put back-end specific code in the shader bodies. Why, you may ask? Well, it would be extremely dangerous and poorly optimized if we were to define our own language for intrinsics, function calling conventions etc., and translate it directly to graphics assembly. Instead, we rely on the vendor-specific back-end compilers to do the heavy work for us. As such, we can very easily port our old HLSL/FX or GLSL shaders by simply copying the function bodies straight into an AnyFX file. However, this means we potentially have to provide several files in order to support different shader libraries, and yes, that is a real limitation. We could design the language to allow several function bodies per function, one for each implementation, but it wouldn’t look like C anymore, and the code could get messy in a hurry. Sounds confusing? Well, here’s an example:
//------------------------------------------------------------------------------
// demo.fx
// (C) 2013 Gustav Sterbrant
//------------------------------------------------------------------------------
// This is an example file to be used with the AnyFX parser and API.
profile = glsl4;
// A couple of example variable declarations
sampler2D DiffuseTexture;
sampler2D NormalTexture;
state OpaqueState;
state AlphaState
{
    DepthEnabled = true;
    BlendEnabled[0] = true;
    SrcBlend[0] = One;
    DstBlend[0] = One;
};
// A variable block containing a set of variables; this will be instantiated only once in the effects system.
// This block of variables will be shared by all other .fx files compiled during runtime that use the same name and the [shared] qualifier.
varblock Transforms
{
    mat4 View;
    mat4 Projection;
};
mat4 Model;
varblock Material
{
    float SpecularIntensity = float(1.0f);
    vec4 MaterialColor = vec4(1.0f, 0.0f, 0.0f, 1.0f);
};
//------------------------------------------------------------------------------
/**
Simple vertex shader which transforms basic geometry.
The function header complies (and has to comply) with the AnyFX standard, although the function body is written in a specific target language.
In this case, that language is GLSL.
*/
void
vsStatic(in vec3 position, in vec2 uv, out vec2 UV)
{
    gl_Position = Projection * View * Model * vec4(position, 1.0f);
    UV = uv;
}
//------------------------------------------------------------------------------
/**
Simple pixel shader which writes a diffuse color.
Output attributes such as [color0] are how multiple render targets are addressed.
We also apply a function attribute which tells OpenGL to perform early depth testing.
*/
[earlydepth]
void
psStatic(in vec2 uv, [color0] out vec4 Color)
{
    Color = texture(DiffuseTexture, uv);
}
//------------------------------------------------------------------------------
/**
Two programs: they share shaders but not render states, and also provide an API-accessible data field.
*/
program Solid [ string Mask = "Static"; ]
{
    vs = vsStatic();
    ps = psStatic();
    state = OpaqueState;
};
program Alpha [ string Mask = "Alpha"; ]
{
    vs = vsStatic();
    ps = psStatic();
    state = AlphaState;
};
So, what’s fancy here? Well, first of all, we can define variables once and use them in several shader programs (yay!). A program combines vertex and pixel shaders, possibly hull, domain and geometry shaders, together with a render state. A render state defines everything required to prepare the graphics card for rendering: depth testing, blending, multisampling, alpha-to-coverage, stencil testing and so on. Basically, for you DX folks out there, this is a Rasterizer, DepthStencil and BlendState combined into one simple object.

You may notice that we write all the variable types with the GLSL type names. However, we could just as well use float1-4, matrix1-4x1-4, i.e. the HLSL style; the compiler treats them equally. You may also notice the 'profile = glsl4' line, which simply tells the compiler to target GLSL when generating code. By generating code I mainly mean the vertex input methodology (which differs between most implementations); the profile is also used to translate the [earlydepth] qualifier into the appropriate GLSL counterpart. We can also define variable blocks, called 'varblock', which handle groups of variables as buffers. In OpenGL this is known as a Uniform Buffer Object, and in DirectX it's a Constant Buffer. We also have fancy annotations, which allow us to attach meta-data straight to our objects of interest. We can, for example, insert strings describing what type of UI handle we want for a specific variable, a feature mask for our programs, etc.

Since textures are very, very special in both GLSL and HLSL, we define a combined object called sampler2D. We can also define samplers, which DirectX handles as objects defined in shader code, and OpenGL as CPU-side settings. In GLSL we don't need both a texture and a sampler to sample from a texture, but in HLSL4+ we do, so in that case the generated code will quite simply put the sampler object into the code. Finally, we can add qualifiers to variables, such as the [color0] you see in the pixel shader, which means that the output goes to the 0th render target. AnyFX currently supports a plethora of qualifiers, but only one qualifier per input/output.
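To make the render state part concrete: below is a rough sketch of the kind of OpenGL calls a GL4 back-end could issue when applying the AlphaState block from the example above. This is purely illustrative and not the actual generated code; the calls the AnyFX runtime really makes may differ.

// Illustrative only: a plausible GL4 translation of the AlphaState block.
// Assumes a GL 4.x context and a loaded function pointer set.
void ApplyAlphaStateSketch()
{
    glEnable(GL_DEPTH_TEST);         // DepthEnabled = true
    glEnablei(GL_BLEND, 0);          // BlendEnabled[0] = true (render target 0 only)
    glBlendFunci(0, GL_ONE, GL_ONE); // SrcBlend[0] = One, DstBlend[0] = One (additive)
}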
Anyways, to use this, we simply do the following:
this->effect = AnyFX::EffectFactory::Instance()->CreateEffectFromFile("compiled");
this->opaqueProgram = this->effect->GetProgramByName("Solid");
this->alphaProgram = this->effect->GetProgramByName("Alpha");
this->viewVar = this->effect->GetVariableByName("View");
this->projVar = this->effect->GetVariableByName("Projection");
this->modelVar = this->effect->GetVariableByName("Model");
this->matVar = this->effect->GetVariableByName("MaterialColor");
this->specVar = this->effect->GetVariableByName("SpecularIntensity");
this->texVar = this->effect->GetVariableByName("DiffuseTexture");
Then:
// This marks the use of AnyFX: first we apply the program, which enables shaders and render states
this->opaqueProgram->Apply();
// Then we update our variables; since variables are global in the API but local internally, Apply has to come first
this->viewVar->SetMatrix(&this->view[0][0]);
this->projVar->SetMatrix(&this->projection[0][0]);
this->modelVar->SetMatrix(&this->model[0][0]);
this->matVar->SetFloat4(color);
this->specVar->SetFloat(1.0f);
this->texVar->SetTexture(this->texture);
// finally, we tell AnyFX to commit all changes done to the variables
this->opaqueProgram->Commit();
Aaaand render. We have some restrictions, however. First, we must run Apply on our program before we are allowed to set the variables. This fits nicely into many game engines, since we first apply all of our shader settings, then set our per-object variables, and lastly render. We also run the Commit command, which updates all variable buffers in a batched manner. This way, we don’t need to update a variable block once per variable, which could seriously stress memory bandwidth. When all of this is said and done, we can perform the rendering. Apply has to come first because each variable may have different binding points in different shaders: in OpenGL, each uniform has a location in a program, and since different programs may use any subset of the declared variables, the locations are likely to differ. In HLSL4+, everything lives in constant buffers, so Commit is vital, since those buffers have to be updated at some point.
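To show how these restrictions fit into a frame, here is a minimal sketch of the intended ordering, using the same program and variable objects as above. The object list, its members and the DrawGeometry() call are hypothetical placeholders for whatever your engine actually uses:

// Sketch: apply the program once, then update, commit and draw per object.
this->opaqueProgram->Apply();                          // shaders + render state
this->viewVar->SetMatrix(&this->view[0][0]);           // per-frame variables
this->projVar->SetMatrix(&this->projection[0][0]);
for (Object* obj : this->objects)                      // hypothetical object list
{
    this->modelVar->SetMatrix(&obj->transform[0][0]);  // per-object variables (hypothetical members)
    this->texVar->SetTexture(obj->texture);
    this->opaqueProgram->Commit();                     // batch-update varblocks once per object
    DrawGeometry(obj);                                 // placeholder for the actual draw call
}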
All in all, the language lets us move some functionality to compile time. For OpenGL, we can perform compile-time linking by simply testing whether our shaders will link together. We can also obfuscate the GLSL code, so that nobody can simply read the raw shader code and manipulate it to cheat. However, during startup we still need to compile the actual shaders before we can perform any rendering. In newer versions of OpenGL we can pre-compile program binaries and load them later at runtime. This could easily be implemented straight into AnyFX if needed, but I’d rather have the shaders compiled by the driver at runtime so that the vendor can perform its specific optimizations. Microsoft seems to be discontinuing FX (for reasons unknown), but the system is still really clever and useful.
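For reference, the program binary path mentioned above is plain OpenGL (GL 4.1 / ARB_get_program_binary), not part of the AnyFX API. A minimal sketch of how such caching could work, with error handling omitted:

#include <vector>

// Assumes 'program' was linked after calling
// glProgramParameteri(program, GL_PROGRAM_BINARY_RETRIEVABLE_HINT, GL_TRUE).
std::vector<char> SaveProgramBinary(GLuint program, GLenum& format)
{
    GLint length = 0;
    glGetProgramiv(program, GL_PROGRAM_BINARY_LENGTH, &length);
    std::vector<char> binary(length);
    glGetProgramBinary(program, length, nullptr, &format, binary.data());
    return binary;  // write this blob to disk alongside the compiled effect
}

GLuint LoadProgramBinary(const std::vector<char>& binary, GLenum format)
{
    GLuint program = glCreateProgram();
    glProgramBinary(program, format, binary.data(), (GLsizei)binary.size());
    return program; // check GL_LINK_STATUS and fall back to source compilation on failure
}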
And also, as you may or may not have figured out, this is the first step towards finishing the OpenGL4 render module.
When everything is done, integrated and proven to work in Nebula, I will write down a full spec of the language grammar and qualifiers, and release it as open source.