External content – Textures creation and import

This page is out of date!
This page gives an overview of the Texture import pipeline: what you can import, some guidelines for texture content creation, and the important points of how to import it.
This page focuses on a quick overview of getting content into the level editor using the content browser's Texture Import window for individual texture files. This is useful for quickly getting the occasional texture file into your project, as well as for tweaking texture settings. When the Batch Exporter runs, all texture files in the project are batched using the settings saved when assets were tweaked through the Texture Import window.
For a detailed description of the Texture Import window and its options, see its reference page. For a continued high-level discussion of the various types of content, how they are used by the game, and specific workflows, continue reading the manual.


Importing Textures

1. In the Content Browser, go to File > Import Texture to bring up the window, then select the asset to import by finding it in the browser.


2. The Texture Importer displays information about the texture and lets you modify the import settings. Pay attention to the format and to whether the texture should be read as sRGB or linear. For instance, a diffuse map should be read as sRGB and use the default DXT settings, while a normal map should be read as linear and be set to DXT5nm. The DXT settings differ mainly in how they handle alpha and in how many bits per channel they use to store data.


3. If you ever want to change any of these options on an asset, right-click the asset in the Content Library and choose Reconfigure Texture.




The texture is now ready to be applied to a shader's material slots.



General guidelines for texture content:

  • Textures should be one of these types: png, jpg, bmp, psd, tga or dds. On import, all formats are converted to .dds textures. (You can use NVIDIA's Texture Tools to edit .dds textures directly in Photoshop.)
  • All textures should be in power of 2 resolutions: 64×64, 128×128, 256×256…
  • Nebula currently does not do any sRGB conversion of textures – it reads everything as linear data. Diffuse (albedo) and specular (reflectivity) maps are authored as sRGB data, however, so until this function is added to the texture importer they have to be linearised manually; everything else can usually be treated as linear already. (Simply gamma down the texture in Photoshop by 0.45.)
  • To speed things up, the importer automatically detects certain naming conventions and sets the import settings accordingly. For instance, textures ending with any of these suffixes: *_norm, *_normal, *_bump are interpreted as normal maps.
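The power-of-2 and manual-linearisation guidelines above can be sketched in a few lines of Python. This is illustrative only: the function names are not part of Nebula's importer, and the gamma curve is the same approximation as the Photoshop step, not the exact piecewise sRGB transfer function.

```python
# Sketch of two checks/conversions from the guidelines above.
# Hypothetical helpers, not Nebula importer code.

GAMMA = 0.45  # ~1/2.2, the value suggested for Photoshop's gamma slider


def is_power_of_two(n: int) -> bool:
    """True for 64, 128, 256, ... - the resolutions textures should use."""
    return n > 0 and (n & (n - 1)) == 0


def gamma_down(value_8bit: int, gamma: float = GAMMA) -> int:
    """Approximate the manual 'gamma down by 0.45' linearisation step
    on a single 0-255 channel value (darkens midtones)."""
    normalized = value_8bit / 255.0
    return round((normalized ** (1.0 / gamma)) * 255.0)
```

Black and white stay fixed under the curve; midtones get noticeably darker, which is exactly the "quite a bit darker" effect described in the Photoshop workflow below.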



Texture import – guidelines

Textures that should be read as linear (roughness maps and other “data” textures)

Easy as peas. Just:

1. Create your texture

2. Import it with the default settings.

3. Use it in a material!


Textures that should be read as sRGB (diffuse, specular maps and other “visual” textures)

As Nebula currently does not support sRGB interpretation of data, visual textures such as diffuse/albedo maps need to be "manually" linearised before being imported into Nebula. You can approximate this easily enough in Photoshop:

1. Create your texture as usual

2. Add an Exposure adjustment layer and set the gamma to 0.45 – this should make it quite a bit darker

3. Import to Nebula. The default DXT1c compression format will do.

4. Apply the texture to a model and view the texture in the viewport – now it will look “correct” again, with the same relative brightness values as when you authored it.


Normal maps

Creating a fully synced normal-map workflow, and describing which settings are optimal for Nebula's render engine, is on the to-do list. For now, Nebula uses Maya-like settings (no flipped green channel), but on import with the DXT5nm format the channels of the RGB texture are swapped: the red channel is moved to alpha, green stays in G, and the output's R and B channels are unused.

1. Create your normal map – if it looks good in Maya Viewport 2.0, it will look good in Nebula.

2. Import it with DXT5nm compression and as linear (default). You might want to raise the Quality setting a bit, as normal maps are quite sensitive to compression artifacts.

3. In the preview window you should see the texture turn black and white. Apply it to a material's normal map slot to see it in effect.
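The channel swap described above, and the reconstruction a shader performs afterwards, can be sketched as follows. Plain tuples stand in for pixels; nothing here is Nebula API, and the reconstruction is the standard two-channel technique (Z rebuilt from X and Y of a unit normal).

```python
# Sketch of the DXT5nm swizzle and normal reconstruction.
# Hypothetical helpers, not engine code.
import math


def swizzle_dxt5nm(r, g, b):
    """Pack a tangent-space normal for DXT5nm: R -> A, G stays in G.
    The output's R and B channels carry no data."""
    return (0, g, 0, r)  # (R, G, B, A)


def reconstruct_normal(g, a):
    """Rebuild the unit normal from the two stored 0-255 channels."""
    x = a / 255.0 * 2.0 - 1.0
    y = g / 255.0 * 2.0 - 1.0
    z = math.sqrt(max(0.0, 1.0 - x * x - y * y))
    return (x, y, z)
```

Storing X in the alpha channel exploits the fact that DXT5 compresses alpha separately and at higher quality than color, which is why the format suits normal maps.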



Textures with an alpha channel

Make sure to pick an appropriate DXT compression type to preserve alpha channel data.

1. Create a diffuse texture with the mask stored as transparency or an alpha channel.

2. Import with DXT settings appropriate to the type of alpha needed – 1-bit alpha for alpha testing (DXT1), 4-bit explicit alpha with slightly more gradient (DXT3), or a soft, full gradient (DXT5).

3. Apply the Alpha shader to a model (or another shader using alpha) and apply your texture to the diffuse material slot. You can adjust the Alpha Sensitivity slider to control which ranges are considered fully transparent. The Alpha Blend Factor slider adds transparency to the entire model.



Cube maps

A cube map is a panorama built from six images that are mapped onto the faces of a cube. Nebula uses cube map textures for environment and irradiance mapping – casting reflections and diffuse light onto an object. They are a powerful component of current-gen rendering. Cube maps (specifically, their use) are also discussed in the Light and rendering section of the manual.


To create a cube map, we can use Modified CubeMapGen. CubeMapGen can prefilter our cube maps and save the filtered results in the cube map's mips. This is necessary for Nebula's image-based reflections to work properly, as the roughness of a material determines which mip of the environment map is used to simulate a more diffuse reflection. The (HDR) images fed into CubeMapGen can be created however you like – the process below uses a Maya scene as a starting point:
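The mip-selection idea described above can be sketched like this. It is a generic illustration of the technique (linear roughness-to-mip mapping), not Nebula's actual shader code, and the function names are hypothetical:

```python
# Sketch: rougher materials sample blurrier, lower-resolution mips
# of the prefiltered environment map. Illustrative only.
import math


def mip_count(face_size):
    """Number of mips in a full chain for a square cube map face."""
    return int(math.log2(face_size)) + 1


def mip_for_roughness(roughness, face_size):
    """Pick a mip level from a 0-1 roughness value (linear mapping)."""
    return round(roughness * (mip_count(face_size) - 1))
```

This is why CubeMapGen must bake the filtered results into the mip chain: the shader has no time to blur at runtime, so each mip must already contain a progressively blurrier version of the environment.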

1. Create a scene with lighting

2. Create six square renders, one for each face of a cube. With your base camera pointing down the negative Z (depth) axis of your scene, render the X-, X+, Y+, Y-, Z+ and Z- faces either by rotating the camera, or by using a multi-purpose tool such as this set of camera lens shaders (which also lets you render lat-long images).

          2.b. If doing it manually, set the Angle of View of the cameras to 90 degrees, and the Camera Aperture x and y to 1.0. If using the shaders linked above, you may run into blurry image artefacts (discussed on the linked page under "Grey Blurry Line"), which you can reduce by setting the camera's FoV very, very high.

          2.c. Note that CubeMapGen’s Z axis is inverted compared to Maya’s.

          2.d. Instead of rendering images out of Maya, you can use HDR Shop v1 to convert a lat-long image to a vertical cross cube map format.

3. Bring your six face images (or single vertical-cross image) into Modified CubeMapGen. If the images you bring in are linear (such as .hdr files), you're fine; if you bring in sRGB images (such as .jpg), set the Input Degamma slider to 2.2.

4. Set Cosine Power to 8192 and Output Cube Size to 256 (depending on your needs – 256 is usually a good starting point, and 512 or higher is good for sky domes). Filtering still produces a fairly blurry mip 0; check the Exclude Base option to preserve those details. Generate the maps with Filter Cubemap.

5. To get HDR, higher-than-1 values, use one of the Float formats, such as Float 16 RGBA. There is rarely much visual difference between a 16-bit and a 32-bit image, but there is a considerable difference in file size.

6. Export as .dds. Make sure to save the Mipchain.

          6.b. To export irradiance maps, check the Irradiance cubemap option and export as .dds. You don’t need any mipchain and can set the size quite low.

7. Import into Nebula. Disable mipmap generation for the environment map.

8. Apply to a shader as environment/irradiance map.
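The vertical-cross layout mentioned in step 2.d can be sketched as below. This is an assumption based on the common cube-map cross convention, not something verified against CubeMapGen – check face orientation in the tool itself, especially given its inverted Z axis (step 2.c).

```python
# Where each face sits in a 3-column x 4-row vertical-cross image,
# following the common convention. Illustrative only; verify against
# CubeMapGen before relying on it.
VERTICAL_CROSS = {
    "Y+": (1, 0),                            # top
    "X-": (0, 1), "Z+": (1, 1), "X+": (2, 1),  # middle row
    "Y-": (1, 2),                            # bottom
    "Z-": (1, 3),                            # back face, at the foot
}


def face_pixel_origin(face, face_size):
    """Top-left pixel of a face inside a vertical-cross image."""
    col, row = VERTICAL_CROSS[face]
    return (col * face_size, row * face_size)
```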



Light Maps

Light maps are textures containing raw light data, such as shadows and ambient occlusion. The light mapping workflow is not fully implemented in Nebula yet, but you can still bake light in Maya and import the results to Nebula – thus saving on computation.

1. Set up your model/scene in Maya, with lights

2. Create a secondary UV set for lightmapping

3. Bake light to texture – see the Maya reference

4. Import the models to Nebula, and make sure to include Secondary UVs

5. Import the texture

6. Assign a Lightmapped shader to the model and apply the texture.

