a-simple-triangle / Part 25 - Vulkan shader pipeline

Vulkan - Shader Pipeline

The first category of asset integration for our Vulkan application will be the shader pipeline. You may remember that when we wrote the OpenGL application we modelled a class named OpenGLPipeline whose responsibility was to load and compile our shader scripts into a shader program and use that program to render mesh data.

In this article we will implement an equivalent version of this pipeline for our Vulkan application. In particular we will: author Vulkan compatible vertex and fragment shaders, compile them into SPIR-V binary assets, add the ability to load binary asset files, refactor the way our scenes declare the assets they need, and begin writing the VulkanPipeline class itself.


Vulkan shaders

In OpenGL we write vertex and fragment shader files, then at runtime we load them up and ask OpenGL to compile and stitch them together into a shader program. One of the main issues with this approach is that OpenGL shaders are compiled and interpreted at runtime - meaning that if there is something wrong with a shader script in either the vertex or fragment shader files you won’t know about it at compile time. It also means that different vendor driver implementations might interpret shaders in different ways or have bugs related to running shaders, resulting in non-deterministic behaviour across different drivers and hardware.

The Vulkan specification allows us to precompile and verify our shader scripts ahead of time into an intermediate standard file format, though it is actually possible to compile them at runtime as well. Precompiling and verifying mitigates the runtime penalty and the vendor interpretation risk, as it results in a consistent, stable standard shader format. The format which Vulkan shaders are compiled into is SPIR-V:

SPIR-V = Standard Portable Intermediate Representation - Vulkan.

You can learn about the format here: https://www.khronos.org/spir/. The section About SPIR and SPIR-V summarises nicely the benefits of using SPIR-V.

Note: Although it is possible to compile Vulkan shaders at runtime we will use the precompile and verify approach to take advantage of the performance and verification benefits of doing so.

When writing our actual vertex and fragment shader scripts we can target GLSL 4.6: https://www.khronos.org/registry/OpenGL/specs/gl/GLSLangSpec.4.60.pdf. This is a significant deviation from the OpenGL ES compatible shaders we wrote for our OpenGL renderer.

Creating Vulkan shader scripts

The Vulkan shader files that we will bundle into our application must have been precompiled using a Vulkan command line tool that ships with the Vulkan SDK. Within our project, the tool can be found here:

For Windows:

: root
  + third-party
    + vulkan-windows
      + Bin
        glslangValidator.exe

For MacOS:

: root
  + third-party
    + vulkan-mac
      + macOS
        + bin
          glslangValidator

The glslangValidator program is what we will use to take in the vertex and fragment shader script files and compile them into binary SPIR-V formatted files. The SPIR-V binary files are what we will then include in our assets folder in our application. This means we need a new place to store our source Vulkan shader script files which will be used to generate the SPIR-V Vulkan shader files.

Create a new folder named vulkan_shader_source in our project as the place to keep our source shader scripts along with two text files named default.frag and default.vert:

: root
  + project
    + main
      + vulkan_shader_source
        default.frag
        default.vert

Vertex shader

Edit default.vert which represents our vertex shader:

#version 460

layout(push_constant) uniform PushConstants {
    mat4 mvp;
} pushConstants;

layout(location = 0) in vec3 inPosition;
layout(location = 1) in vec2 inTexCoord;

layout(location = 0) out vec2 outTexCoord;

void main() {
    gl_Position = pushConstants.mvp * vec4(inPosition, 1.0f);

    // The following two lines account for Vulkan having a different
    // coordinate system to OpenGL. See this link for a nice explanation:
    // https://matthewwellings.com/blog/the-new-vulkan-coordinate-system/
    gl_Position.y = -gl_Position.y;
    gl_Position.z = (gl_Position.z + gl_Position.w) / 2.0f;
    
    outTexCoord = inTexCoord;
}

We have specified #version 460 to indicate that our shader targets the GLSL 4.6 profile.

The push_constant block of code probably seems very mysterious - push constants are a way to pass small pieces of application data into a Vulkan shader very quickly. In this shader we will be passing a push constant named mvp which is a 4x4 matrix describing the model, view and projection of the geometry being fed into the shader. You might remember that in our OpenGL shader program we passed the mvp data via a shader uniform (which was named u_mvp). For more complicated or larger data structures we would need to use shader uniforms in Vulkan too, but for our application a push constant offers enough storage (guaranteed to be at least 128 bytes) to avoid needing them. Don’t worry too much about the push constant right now - we will talk more about it later in this article.
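As a taste of what that will eventually look like, here is a hedged sketch of how the mvp matrix gets delivered to the shader at draw time via a command buffer. The commandBuffer, pipelineLayout and matrix arguments are assumptions for illustration only; we will write the real version when we implement the pipeline rendering code (the project itself would include its own wrapper headers rather than the raw ones shown here):

#include <glm/glm.hpp>
#include <vulkan/vulkan.hpp>

// Sketch only: push the mvp matrix into the push constant block declared in the
// vertex shader. The shader stage flags must match the pipeline layout definition
// we will author later in this article.
void pushMvp(const vk::CommandBuffer& commandBuffer,
             const vk::PipelineLayout& pipelineLayout,
             const glm::mat4& mvp)
{
    commandBuffer.pushConstants(pipelineLayout,
                                vk::ShaderStageFlagBits::eAllGraphics,
                                0,
                                static_cast<uint32_t>(sizeof(glm::mat4)),
                                &mvp);
}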

Our vertex shader will have two inputs, one for the position and one for the texture coordinate of the incoming vertex. The position is bound to location 0 while the texture coordinate is bound to location 1. Both inputs are annotated with the in keyword to declare that they are vertex shader inputs:

layout(location = 0) in vec3 inPosition;
layout(location = 1) in vec2 inTexCoord;

The only output is the texture coordinate, which we effectively just pass through to the fragment shader - note that it has the out keyword:

layout(location = 0) out vec2 outTexCoord;

The main shader function sets the gl_Position shader property to the mvp data encapsulated by our push constant, combined with the current vertex inPosition vector. This applies the correct transformation to each vertex. The outTexCoord is then set to the data in the inTexCoord vector, pretty much just passing it through to be carried forward into the fragment shader.

The following two lines are very important:

gl_Position.y = -gl_Position.y;
gl_Position.z = (gl_Position.z + gl_Position.w) / 2.0f;

The reason for these lines is that all of our model loading code and scene code where we specify coordinates to place vertices and objects aligns with the default OpenGL coordinate system, which differs from the Vulkan coordinate system. In particular, Vulkan y coordinates are flipped so positive y is down and negative y is up, whereas in OpenGL the reverse is true, and Vulkan expects normalised depth values in the 0 to 1 range rather than OpenGL’s -1 to 1 range, which is what the second line remaps. The following blog explains what’s going on here: https://matthewwellings.com/blog/the-new-vulkan-coordinate-system/.

We are applying the fix in our shader code, though you could also tackle it in the main application code instead. If we didn’t apply this fix, all of our scene objects would end up at the opposite y position to where they should be.
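For reference, here is a hedged sketch of what the application side fix might look like, using a ‘clip correction’ matrix that is pre-multiplied into the mvp before it reaches the shader. The projection, view and model parameters are assumed to be the glm matrices you would already have on hand when computing the mvp - this is not code we are adding to the project:

#include <glm/glm.hpp>

// Sketch only: glm matrices are column major, so each group of four values below is
// a column. Pre-multiplying by this matrix flips y and remaps z from -1..1 into 0..1,
// making the two shader lines above unnecessary.
glm::mat4 vulkanClipCorrectedMvp(const glm::mat4& projection,
                                 const glm::mat4& view,
                                 const glm::mat4& model)
{
    const glm::mat4 clipCorrection{
        1.0f, 0.0f, 0.0f, 0.0f,  // Column 0
        0.0f, -1.0f, 0.0f, 0.0f, // Column 1: flip the y axis
        0.0f, 0.0f, 0.5f, 0.0f,  // Column 2: halve z
        0.0f, 0.0f, 0.5f, 1.0f}; // Column 3: shift z by half of w

    return clipCorrection * projection * view * model;
}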

Fragment shader

Next up, edit the default.frag file with the following:

#version 460

layout(binding = 0) uniform sampler2D texSampler;

layout(location = 0) in vec2 inTexCoord;

layout(location = 0) out vec4 outColor;

void main() {
    outColor = texture(texSampler, inTexCoord);
}

Again we specify the shader version as #version 460. We then declare the special uniform sampler2D texSampler which will be used to look up the colour to emit based on the inTexCoord vector. Recall that our vertex shader passed outTexCoord through to the fragment shader, where it arrives as the inTexCoord input.

The only output is the colour to paint the current fragment (pixel), represented by outColor. The main function uses the GLSL texture function to determine what colour to choose, using the texSampler uniform combined with the inTexCoord vector describing where in the texture sampler to look.

Compiling shader scripts

We will need to invoke the glslangValidator tool on each vertex and fragment shader script file to produce the equivalent SPIR-V binary representation.

Our goal is to author a command line script that runs glslangValidator for each .vert and .frag file in the vulkan_shader_source folder. We can then use this command line script during the setup of each platform target as well as invoking it whenever we make changes to any Vulkan shader source scripts to regenerate the SPIR-V assets.

Each .vert and .frag file will be compiled and validated and the resulting SPIR-V files placed into the project/main/assets/shaders/vulkan folder so they are bundled with our other assets - remembering that we do not want to bundle the Vulkan shader source files in the assets folder, only the compiled versions.

I will be walking through how to accomplish our goal for both MacOS and Windows as the tooling is available on both and to be honest I’ve been growing rather fond of working on my Windows laptop even though this is a MacOS focussed series - in fact I’ll show how to author the Windows script first as I am writing this article on my Windows machine at the moment :)

Windows

On Windows we will author a new PowerShell script to compile our shader files. Create a new text file named compile_shaders.ps1 in the vulkan_shader_source folder. Edit the file with the following:

# Don't allow our script to continue if any errors are observed
$ErrorActionPreference = "Stop"

# Check that we have a 'vulkan' shader asset folder
Push-Location -Path "..\assets\shaders"
if (!(Test-Path "vulkan")) {
    New-Item -ItemType Directory -Path "vulkan"
}
Pop-Location

# Grab all the files in the current directory ending with 'vert' or 'frag'
# and iterate them one at a time, invoking the Vulkan shader compiler for each.
Get-ChildItem -Name -Include *.vert,*.frag | Foreach-Object {
    $outputFileName = "..\assets\shaders\vulkan\" + $_
    Write-Host "Compiling Vulkan shader file"$_"..."

    ..\..\..\third-party\vulkan-windows\Bin\glslangValidator.exe -V --target-env vulkan1.0 -o $outputFileName $_

    # Check if the compilation exit code was successful.
    if($LASTEXITCODE -eq 0)
    {
        Write-Host "Compiled"$_" into "$outputFileName" ..."
    } 
    else 
    {
        Write-Host "Error! $_ failed to validate!"
        exit 1
    }
}

The script starts by checking that we have a vulkan folder under the existing assets\shaders folder and creates it if there isn’t one.

We then write a loop by finding all the child objects in the current folder with the filename pattern of *.vert / *.frag:

Get-ChildItem -Name -Include *.vert,*.frag | Foreach-Object {

Inside the loop we declare what the output file should be for each input file, pointing at our assets\shaders\vulkan folder and attaching the same filename as the input file’s name:

$outputFileName = "..\assets\shaders\vulkan\" + $_

Then we invoke the Vulkan tool to compile and validate the input shader file into the output file we declared:

..\..\..\third-party\vulkan-windows\Bin\glslangValidator.exe -V --target-env vulkan1.0 -o $outputFileName $_

Lastly we check the exit code of the glslangValidator invocation, terminating the script if it did not succeed:

if($LASTEXITCODE -eq 0)
{
    Write-Host "Compiled"$_" into "$outputFileName" ..."
} 
else 
{
    Write-Host "Error! $_ failed to validate!"
    exit 1
}

Save the script then run it within a PowerShell terminal to see it working:

.\compile_shaders.ps1
Compiling Vulkan shader file default.frag...
default.frag
Compiled default.frag into ..\assets\shaders\vulkan\default.frag ...
Compiling Vulkan shader file default.vert...
default.vert
Compiled default.vert into ..\assets\shaders\vulkan\default.vert ...

Check the assets folder and you should find the two SPIR-V formatted binary files:

+ main
  + assets
    + shaders
      + vulkan
        default.vert
        default.frag

We can test that the validation works by making some kind of error in our shader, for example, change line 7 in vulkan_shader_source\default.vert from this:

layout(location = 0) in vec3 inPosition;

to this:

xlayout(location = 0) in vec3 inPosition;

Save the shader file and rerun the PowerShell script:

.\compile_shaders.ps1

Compiling Vulkan shader file default.frag...
default.frag
Compiled default.frag into ..\assets\shaders\vulkan\default.frag ...
Compiling Vulkan shader file default.vert...
default.vert
ERROR: default.vert:7: '' :  syntax error, unexpected IDENTIFIER
ERROR: 1 compilation errors.  No code generated.
ERROR: Linking vertex stage: Missing entry point: Each stage requires one entry point
SPIR-V is not generated for failed compile or link
Error! default.vert failed to validate!

We can see that if the Vulkan tool cannot compile the shader we get some error feedback and we terminate the script.

We should invoke our new script as part of our Windows setup so the shaders are compiled when bootstrapping the project on a developer’s machine. Edit windows\setup.ps1 and append the following to the bottom of the script:

# Compile Vulkan shaders into SPIR-V binary assets.
Push-Location -Path "../main/vulkan_shader_source"
    .\compile_shaders.ps1
Pop-Location

This will navigate into the vulkan_shader_source folder then execute our compile script. You can also run the compile script any time you edit Vulkan shader source files to refresh the Vulkan shader assets.

MacOS

For all of our non Windows targets we will write a shell script that performs pretty much the same kind of job as the Windows PowerShell script. Start off by creating a new file named compile_shaders.sh in the main/vulkan_shader_source folder. Run chmod +x compile_shaders.sh to allow it to be executable. Enter the following into the script:

#!/bin/sh

# Halt the script immediately if any command returns a failure exit code.
set -e

# Check that we have a 'vulkan' shader asset folder
pushd ../assets/shaders
    if [ ! -d "vulkan" ]; then
        mkdir vulkan
    fi
popd

# Grab all the files in the current directory ending with 'vert' or 'frag'
# and iterate them one at a time, invoking the Vulkan shader compiler for each.
for FILE_PATH in *.vert *.frag; do
    FILE_NAME=$(basename $FILE_PATH)
    ../../../third-party/vulkan-mac/macOS/bin/glslangValidator \
        -V \
        --target-env vulkan1.0 \
        -o ../assets/shaders/vulkan/${FILE_NAME} \
        ${FILE_NAME}
done

The script starts off by checking there is an assets/shaders/vulkan folder then iterates all .vert and .frag files, invoking the Vulkan glslangValidator tool on each of them just like the Windows script did.

One minor difference to the Windows script is that we don’t need to manually check exit codes - because of the set -e at the top of the script, if any of the shader files cannot be compiled by glslangValidator the whole script will bomb out (which is what we want).

We can now run this script directly from the main/vulkan_shader_source folder whenever we like but we should also attach it to our existing setup scripts for each platform that needs Vulkan shaders. To make it a bit simpler we’ll add a new shared script - edit shared_scripts.sh and place the following at the bottom:

compile_vulkan_shaders() {
    pushd ../main/vulkan_shader_source
        ./compile_shaders.sh
    popd
}

Now add the following line at the bottom of console/setup.sh, ios/setup.sh, android/setup.sh and macos/setup.sh:

compile_vulkan_shaders

For Android on Windows, add the following near the bottom of the android\setup.ps1 PowerShell script, which simply navigates into the vulkan_shader_source folder and executes its compile_shaders.ps1 script:

# Compile all Vulkan shaders
Push-Location "..\main\vulkan_shader_source"
    .\compile_shaders.ps1
Pop-Location

Note: We are not compiling the Vulkan shaders for the Emscripten platform as it will never need them.

Hop into the console folder in Terminal and run ./setup.sh to see our Vulkan shaders compile as part of the overall setup configuration:

$ ./setup.sh

<snip other setup stuff>

Compiling Vulkan shaders ...
Compiling Vulkan shader: default.vert
default.vert
Compiling Vulkan shader: default.frag
default.frag

Tidy up

Since our Vulkan shader assets are generated on demand, there is no reason to commit them to version control. Add a new .gitignore file into the main folder with the following content:

assets/shaders/vulkan

This will prevent any of the SPIR-V generated files from being tracked by Git.

Important: Don’t forget to run the compile shaders script any time you make a change to the Vulkan shader source files! It is easy to forget, and if you do you’ll be scratching your head wondering why your shaders aren’t updating when you run your application. I did this on more than one occasion and thought I had shader bugs!


Loading binary files

Before we dive into the more complex part of creating a Vulkan shader pipeline we need to revisit our asset handling code to allow for loading files in binary formats. We already have a way to load text files via the loadTextFile function in the assets class, but it won’t help us load our SPIR-V files.

Edit core/assets.hpp and add the following header:

#include <vector>

Then introduce a new function definition to allow us to load a binary asset into a vector of bytes represented by the char type:

namespace ast::assets
{
    ...

    std::vector<char> loadBinaryFile(const std::string& path);
} // namespace ast::assets

Edit core/assets.cpp to implement the function:

std::vector<char> ast::assets::loadBinaryFile(const std::string& path)
{
    // Open a file operation handle to the asset file.
    SDL_RWops* file{SDL_RWFromFile(path.c_str(), "rb")};

    // Determine how big the file is.
    size_t fileLength{static_cast<size_t>(SDL_RWsize(file))};

    // Ask SDL to load the content of the file into a data pointer.
    char* data{static_cast<char*>(SDL_LoadFile_RW(file, nullptr, 1))};

    // Make a copy of the data as a vector of characters.
    std::vector<char> result(data, data + fileLength);

    // Let SDL free the data memory (we took a copy into a vector).
    SDL_free(data);

    // Hand back the resulting vector which is the content of the file.
    return result;
}

The code is similar to the loadTextFile function, the main difference being that we use "rb" as the file operation mode and we construct a std::vector<char> from the result instead of a std::string.

We won’t be calling this function until later in this article but at least it’s now available to us.
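As a quick illustration of how it will eventually be called - the asset path here is just an example:

// Example only: read the compiled SPIR-V vertex shader into a vector of bytes.
const std::vector<char> shaderCode{
    ast::assets::loadBinaryFile("assets/shaders/vulkan/default.vert")};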


Shader multisampling support

A bit later in our Vulkan pipeline code we will need to know if the physical device we are running on can support performing multisampling at the shader level: https://www.khronos.org/registry/vulkan/specs/1.1-extensions/html/vkspec.html#primsrast-sampleshading. We can find out if the current physical device can support this by querying its features: https://www.khronos.org/registry/vulkan/specs/1.1-extensions/man/html/VkPhysicalDeviceFeatures.html.

We will expose a new function in our Vulkan physical device class to find out if this feature is supported. Edit vulkan/vulkan-physical-device.hpp and add a new function definition:

namespace ast
{
    struct VulkanPhysicalDevice
    {
        ...

        bool isShaderMultiSamplingSupported() const;

Hop into vulkan/vulkan-physical-device.cpp and add a new free function in the anonymous namespace:

namespace
{
    ...

    bool getShaderMultiSamplingSupport(const vk::PhysicalDevice& physicalDevice)
    {
        return physicalDevice.getFeatures().sampleRateShading;
    }
}

Then add a new member field to the Internal class, initialising it via the free function:

struct VulkanPhysicalDevice::Internal
{
    ...
    const bool shaderMultiSamplingSupported;

    Internal(const vk::Instance& instance)
        : ...
          shaderMultiSamplingSupported(::getShaderMultiSamplingSupport(physicalDevice)) {}
};

Finally, add the public function implementation to the bottom of the file which simply returns the shaderMultiSamplingSupported field to the caller:

bool VulkanPhysicalDevice::isShaderMultiSamplingSupported() const
{
    return internal->shaderMultiSamplingSupported;
}

Cool, so now we can tell if our physical device supports the shader multisampling feature, but we also have to deal with a Vulkan constraint: any feature we want to use must have been enabled on the logical device at the time the logical device was created. So, even though we can now tell if shader multisampling is supported, Vulkan will not allow us to actually use it unless we had already enabled it in our logical device. Let’s fix that problem now.

Edit vulkan/vulkan-device.cpp and locate the existing createDevice free function in the anonymous namespace. Recall that the deviceCreateInfo configuration object currently passes nullptr into the argument for which physical device features to enable:

namespace
{
    ...

    vk::UniqueDevice createDevice(const ast::VulkanPhysicalDevice& physicalDevice,
                                  const QueueConfig& queueConfig)
    {
        ...

        vk::DeviceCreateInfo deviceCreateInfo{
            ...
            nullptr                                         // Physical device features
        };

Our job is to pass in an instance of a vk::PhysicalDeviceFeatures object which in our scenario will have enabled the sampleRateShading feature if it is available. Update the deviceCreateInfo code to include a new vk::PhysicalDeviceFeatures object like so:

namespace
{
    ...
    
    vk::UniqueDevice createDevice(const ast::VulkanPhysicalDevice& physicalDevice,
                                  const QueueConfig& queueConfig)
    {
        ...

        // Specify which physical device features to expose in our logical device
        vk::PhysicalDeviceFeatures physicalDeviceFeatures;

        // If shader based multisampling is available we will activate it.
        if (physicalDevice.isShaderMultiSamplingSupported())
        {
            physicalDeviceFeatures.sampleRateShading = true;
        }

        // Take the queue and extension name configurations and form the device creation definition.
        vk::DeviceCreateInfo deviceCreateInfo{
            vk::DeviceCreateFlags(),                        // Flags
            static_cast<uint32_t>(queueCreateInfos.size()), // Queue create info list count
            queueCreateInfos.data(),                        // Queue create info list
            0,                                              // Enabled layer count
            nullptr,                                        // Enabled layer names
            static_cast<uint32_t>(extensionNames.size()),   // Enabled extension count
            extensionNames.data(),                          // Enabled extension names
            &physicalDeviceFeatures                         // Physical device features
        };

We can now be sure that later in our Vulkan pipeline code we can safely activate the sample rate shading feature if it is supported.

Note: If you want to use other physical device features you should follow the same approach of enabling them during the logical device creation if they are available.
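For example, here is a hedged sketch of how anisotropic texture filtering could be switched on in the same block of code. Note that isAnisotropicFilteringSupported is a hypothetical helper which does not exist in our code base; you would add it to VulkanPhysicalDevice in the same way we just added isShaderMultiSamplingSupported:

// Hypothetical: 'isAnisotropicFilteringSupported' would query the 'samplerAnisotropy'
// member of vk::PhysicalDeviceFeatures, mirroring our shader multisampling helper.
if (physicalDevice.isAnisotropicFilteringSupported())
{
    physicalDeviceFeatures.samplerAnisotropy = true;
}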


Refactor scene assets

The Vulkan shader pipeline we will be writing has a tightly coupled relationship with the Vulkan render pass that currently resides in our VulkanRenderContext class. A side effect of this relationship is that if the render pass ever needs to be recreated at runtime, then our shader pipeline does too. This was not a concern for our OpenGL shader pipeline but it causes us a bit of a problem in our current Vulkan implementation.

Recall in our VulkanContext class we have the following lifecycle related code which is invoked whenever Vulkan reports an out of date swapchain or suboptimal presentation during the render loop:

void recreateRenderContext()
{
    device.getDevice().waitIdle();
    renderContext = renderContext.recreate(window, physicalDevice, device, surface, commandPool);
}

Due to a Vulkan pipeline relying on a render pass - which lives inside the render context object - we need to not only recreate the render context, but also recreate any loaded assets such as pipelines too. To help us solve this problem we will need to revisit the way we load assets in our scenes.

At the moment we have the prepare function in our scene which takes an asset manager and allows the scene to use it however it likes:

namespace ast
{
    struct Scene
    {
        ...
        virtual void prepare(ast::AssetManager& assetManager) = 0;

An example of its usage can be seen in the scene-main class:

void prepare(ast::AssetManager& assetManager)
{
    assetManager.loadPipelines({Pipeline::Default});
    assetManager.loadStaticMeshes({StaticMesh::Crate, StaticMesh::Torus});
    assetManager.loadTextures({Texture::Crate, Texture::RedCrossHatch});

    ... other code to prepare the scene ...

In our application at the moment, if we needed to reload any pipelines in the asset manager the only option we have is to call the prepare function on the scene again, but this could trigger code that has nothing to do with reloading pipelines and could put our scene in a bad state. Ideally our scenes shouldn’t even be aware that their assets had been reloaded at all.

We are going to change the way our scene classes load their assets such that they don’t actually interact with an asset manager but instead return a manifest of what assets the scene needs. By encapsulating the list of assets into a manifest we can avoid blindly calling unwanted code in a scene’s prepare function.

Having a manifest of all the assets that a scene needs will also give us opportunities later on to do some asset optimisation that is only possible if we know what the universe of assets is. This could include things like merging all static meshes into a single memory buffer instead of many small buffers, or merging many texture buffers into single buffers to reduce draw calls. For now though, it helps us to solve a problem that will become immediate during this article: allowing Vulkan to regenerate any assets that are contextual to the Vulkan lifecycle.

The changes we will make will also affect our OpenGL implementation which is not such a bad thing because it means we can verify all the way through to rendering our static meshes in OpenGL that everything still works as it should.

The asset manifest

The first thing we’ll do is create a new structure to represent our asset manifest. It is pretty simple and really is just a bag of properties describing all the asset types to be loaded or generated.

Add a new file into the core folder named asset-manifest.hpp and enter the following:

#pragma once

#include "asset-inventory.hpp"
#include <vector>

namespace ast
{
    struct AssetManifest
    {
        const std::vector<ast::assets::Pipeline> pipelines;

        const std::vector<ast::assets::StaticMesh> staticMeshes;

        const std::vector<ast::assets::Texture> textures;
    };
} // namespace ast

There is no implementation file - the header is enough. When an asset manifest is constructed we will populate it with a list of pipelines, static meshes and textures we need to have available in our scene.

Scene base class

Our scene base class previously had the following function which gave the scene a lot of flexibility and control over how assets were loaded by taking in an asset manager then basically calling whatever functions it liked on it:

virtual void prepare(ast::AssetManager& assetManager) = 0;

We are going to constrain a scene by no longer allowing it to freestyle use the asset manager directly, but instead return a manifest of what assets it would like to have loaded. Replace the code in scene.hpp with the following:

#pragma once

#include "../core/asset-manifest.hpp"
#include "../core/renderer.hpp"

namespace ast
{
    struct Scene
    {
        Scene() = default;

        virtual ~Scene() = default;

        virtual ast::AssetManifest getAssetManifest() = 0;

        virtual void prepare() = 0;

        virtual void update(const float& delta) = 0;

        virtual void render(ast::Renderer& renderer) = 0;
    };
} // namespace ast

Notice that we no longer include the asset manager class at all, instead opting to split scene preparation from asset loading through the following two functions that differ from the original code:

virtual ast::AssetManifest getAssetManifest() = 0;

virtual void prepare() = 0;

With these changes applied, we now need to update the main scene class to adopt them. Edit scene-main.hpp and fix it up to mirror the changes:

#pragma once

#include "../core/internal-ptr.hpp"
#include "scene.hpp"

namespace ast
{
    struct SceneMain : public ast::Scene
    {
        SceneMain(const float& screenWidth, const float& screenHeight);

        ast::AssetManifest getAssetManifest() override;

        void prepare() override;

        void update(const float& delta) override;

        void render(ast::Renderer& renderer) override;

    private:
        struct Internal;
        ast::internal_ptr<Internal> internal;
    };
} // namespace ast

Then hop over to scene-main.cpp and find the existing prepare function that looks a bit like this:

struct SceneMain::Internal
{
    ...

    void prepare(ast::AssetManager& assetManager)
    {
        assetManager.loadPipelines({Pipeline::Default});
        assetManager.loadStaticMeshes({StaticMesh::Crate, StaticMesh::Torus});
        assetManager.loadTextures({Texture::Crate, Texture::RedCrossHatch});

        staticMeshes.push_back(ast::StaticMeshInstance{
            StaticMesh::Crate,           // Mesh
            Texture::Crate,              // Texture
            glm::vec3{0.4f, 0.6f, 0.0f}, // Position
            glm::vec3{0.6f, 0.6f, 0.6f}, // Scale
            glm::vec3{0.0f, 0.4f, 0.9f}, // Rotation axis
            0.0f});                      // Initial rotation

        ...

Remove all the asset manager related code, leaving the prepare function like this:

struct SceneMain::Internal
{
    ...

    void prepare()
    {
        staticMeshes.push_back(ast::StaticMeshInstance{
            StaticMesh::Crate,           // Mesh
            Texture::Crate,              // Texture
            glm::vec3{0.4f, 0.6f, 0.0f}, // Position
            glm::vec3{0.6f, 0.6f, 0.6f}, // Scale
            glm::vec3{0.0f, 0.4f, 0.9f}, // Rotation axis
            0.0f});                      // Initial rotation

        ...

Instead of using the asset manager (which we no longer have access to) we will introduce a new function in our Internal structure to return the asset manifest required by the scene - notice that the manifest contains the same set of assets we were loading previously within the prepare function:

struct SceneMain::Internal
{
    ...

    ast::AssetManifest getAssetManifest()
    {
        return ast::AssetManifest{
            {Pipeline::Default},
            {StaticMesh::Crate, StaticMesh::Torus},
            {Texture::Crate, Texture::RedCrossHatch}};
    }

    void prepare()
    {
        ...

Scroll to the bottom of the implementation file, change the prepare function implementation and add a new getAssetManifest implementation which delegates to the internal implementation, like so:

ast::AssetManifest SceneMain::getAssetManifest()
{
    return internal->getAssetManifest();
}

void SceneMain::prepare()
{
    internal->prepare();
}

Refactor OpenGL asset manager

The second part of changing how assets are loaded for our scenes is to update the AssetManager classes themselves. In fact since we no longer need to pass a polymorphic asset manager into our scene and since the OpenGL and Vulkan asset manager implementations will need to diverge to accommodate the Vulkan lifecycle requirements, we can completely delete the core/asset-manager.hpp file altogether and not worry about having an abstract base class. Go ahead and delete it now.

Naturally this will cause us to have broken code to fix. First up, open opengl-asset-manager.hpp and replace it with the following:

#pragma once

#include "../../core/asset-manifest.hpp"
#include "../../core/internal-ptr.hpp"
#include "opengl-mesh.hpp"
#include "opengl-pipeline.hpp"
#include "opengl-texture.hpp"

namespace ast
{
    struct OpenGLAssetManager
    {
        OpenGLAssetManager();

        void loadAssetManifest(const ast::AssetManifest& assetManifest);

        const ast::OpenGLPipeline& getPipeline(const ast::assets::Pipeline& pipeline) const;

        const ast::OpenGLMesh& getStaticMesh(const ast::assets::StaticMesh& staticMesh) const;

        const ast::OpenGLTexture& getTexture(const ast::assets::Texture& texture) const;

    private:
        struct Internal;
        ast::internal_ptr<Internal> internal;
    };
} // namespace ast

Notice that we no longer have the loadPipelines, loadStaticMeshes or loadTextures functions but instead have a loadAssetManifest function only.

Open opengl-asset-manager.cpp to adopt the changes. Add a new function in the Internal structure that can take an asset manifest and load it using all the existing functions:

struct OpenGLAssetManager::Internal
{
    ...

    void loadAssetManifest(const ast::AssetManifest& assetManifest)
    {
        loadPipelines(assetManifest.pipelines);
        loadStaticMeshes(assetManifest.staticMeshes);
        loadTextures(assetManifest.textures);
    }

Move down to the bottom of the file and delete the loadPipelines, loadStaticMeshes and loadTextures public function implementations, replacing them with the public implementation of the loadAssetManifest function which delegates to the internal implementation:

void OpenGLAssetManager::loadAssetManifest(const ast::AssetManifest& assetManifest)
{
    internal->loadAssetManifest(assetManifest);
}

OpenGL Application

We need to revisit the OpenGL application class to change the way it creates its scenes. Edit opengl-application.cpp and find the anonymous free function that creates the main scene, which looks like this:

namespace
{
    ...

    std::unique_ptr<ast::Scene> createMainScene(ast::AssetManager& assetManager)
    {
        std::pair<uint32_t, uint32_t> displaySize{ast::sdl::getDisplaySize()};
        std::unique_ptr<ast::Scene> scene{std::make_unique<ast::SceneMain>(
            static_cast<float>(displaySize.first),
            static_cast<float>(displaySize.second))};
        scene->prepare(assetManager);
        return scene;
    }
}

Replace the function with the following, noting that we now pass in the concrete OpenGLAssetManager and perform the manifest loading step after constructing the scene but before the scene’s prepare function is called:

namespace
{
    ...

    std::unique_ptr<ast::Scene> createMainScene(ast::OpenGLAssetManager& assetManager)
    {
        std::pair<uint32_t, uint32_t> displaySize{ast::sdl::getDisplaySize()};
        std::unique_ptr<ast::Scene> scene{std::make_unique<ast::SceneMain>(
            static_cast<float>(displaySize.first),
            static_cast<float>(displaySize.second))};

        assetManager.loadAssetManifest(scene->getAssetManifest());
        scene->prepare();

        return scene;
    }
}

With these changes in place the OpenGL implementation should now work with our updated asset manager system. Of course you won’t be able to compile yet because we still need to fix the existing Vulkan asset manager.


Refactor Vulkan asset manager

Until now the Vulkan asset manager has been stubbed out with empty implementations. In order to load assets in our Vulkan application we will need to provide a few of our Vulkan components to the asset manager, along with a way to reload any assets which are contextual in nature - such as shader pipelines. Edit vulkan-asset-manager.hpp, recalling that it currently looks like this:

#pragma once

#include "../../core/asset-manager.hpp"
#include "../../core/internal-ptr.hpp"

namespace ast
{
    struct VulkanAssetManager : public ast::AssetManager
    {
        VulkanAssetManager();

        void loadPipelines(const std::vector<ast::assets::Pipeline>& pipelines) override;

        void loadStaticMeshes(const std::vector<ast::assets::StaticMesh>& staticMeshes) override;

        void loadTextures(const std::vector<ast::assets::Texture>& textures) override;

    private:
        struct Internal;
        ast::internal_ptr<Internal> internal;
    };
} // namespace ast

None of the function definitions above are sufficient to successfully load assets into a Vulkan instance, which is where the major divergence from the OpenGL implementation becomes apparent. Replace the header with the following:

#pragma once

#include "../../core/asset-manifest.hpp"
#include "../../core/internal-ptr.hpp"
#include "vulkan-render-context.hpp"

namespace ast
{
    struct VulkanAssetManager
    {
        VulkanAssetManager();

        void loadAssetManifest(const ast::VulkanPhysicalDevice& physicalDevice,
                               const ast::VulkanDevice& device,
                               const ast::VulkanRenderContext& renderContext,
                               const ast::AssetManifest& assetManifest);

        void reloadContextualAssets(const ast::VulkanPhysicalDevice& physicalDevice,
                                    const ast::VulkanDevice& device,
                                    const ast::VulkanRenderContext& renderContext);

    private:
        struct Internal;
        ast::internal_ptr<Internal> internal;
    };
} // namespace ast

We have a loadAssetManifest function which takes an asset manifest along with a few core Vulkan components that will be needed to load assets. We also have a reloadContextualAssets function which is almost the same except we don’t pass in an asset manifest - the expectation is that this function will simply reload whatever assets have already been loaded and are sensitive to Vulkan lifecycle changes.

Open vulkan-asset-manager.cpp and replace its code with the following - for the moment we will leave many of the functions as stubs but later in this article we will fill in the pipeline code:

#include "vulkan-asset-manager.hpp"
#include "../../core/assets.hpp"

using ast::VulkanAssetManager;

struct VulkanAssetManager::Internal
{
    Internal() {}

    void loadAssetManifest(const ast::VulkanPhysicalDevice& physicalDevice,
                           const ast::VulkanDevice& device,
                           const ast::VulkanRenderContext& renderContext,
                           const ast::AssetManifest& assetManifest)
    {
        // TODO: Load everything in the asset manifest.
    }

    void reloadContextualAssets(const ast::VulkanPhysicalDevice& physicalDevice,
                                const ast::VulkanDevice& device,
                                const ast::VulkanRenderContext& renderContext)
    {
        // TODO: Reload any context sensitive assets that are already cached.
    }
};

VulkanAssetManager::VulkanAssetManager() : internal(ast::make_internal_ptr<Internal>()) {}

void VulkanAssetManager::loadAssetManifest(const ast::VulkanPhysicalDevice& physicalDevice,
                                           const ast::VulkanDevice& device,
                                           const ast::VulkanRenderContext& renderContext,
                                           const ast::AssetManifest& assetManifest)
{
    internal->loadAssetManifest(physicalDevice, device, renderContext, assetManifest);
}

void VulkanAssetManager::reloadContextualAssets(const ast::VulkanPhysicalDevice& physicalDevice,
                                                const ast::VulkanDevice& device,
                                                const ast::VulkanRenderContext& renderContext)
{
    internal->reloadContextualAssets(physicalDevice, device, renderContext);
}

Observe that we now have the two key functions loadAssetManifest and reloadContextualAssets to help us both initialise and regenerate assets. Since our Vulkan asset manager has had some changes to its interface we need to revisit the main Vulkan application code to get it working again.

Update Vulkan application

Open up vulkan-application.cpp and observe our existing asset manager which is held in a std::shared_ptr field:

struct VulkanApplication::Internal
{
    const std::shared_ptr<ast::VulkanAssetManager> assetManager;

The assetManager field is currently passed into our scene class in the prepare function which is now incorrect. In our OpenGL application we only needed to make some minor adjustments to get the asset manager working properly again but on Vulkan we (yet again) face a dilemma. We know that our Vulkan asset manager will need to be able to reload itself using new instances of other Vulkan components, such as the render pass. The problem is that the render pass and other volatile Vulkan objects reside inside the VulkanRenderContext class which itself is nested in the VulkanContext class, not the VulkanApplication class. This means we can’t really tell our asset manager to reload itself directly from our Vulkan application because it won’t have access to the things it needs.

The solution we will apply is to remove the asset manager altogether from the Vulkan application class and shift it into the Vulkan context class. We will expose a new function on the Vulkan context class to load an asset manifest so the role of the Vulkan application will be to delegate to this function with the scene’s asset manifest. I feel like I’m not explaining this terribly well so perhaps the code will explain it better.

Edit vulkan-context.hpp and replace its content with the following:

#pragma once

#include "../../core/asset-manifest.hpp"
#include "../../core/internal-ptr.hpp"
#include "../../core/renderer.hpp"

namespace ast
{
    struct VulkanContext : public ast::Renderer
    {
        VulkanContext();

        void loadAssetManifest(const ast::AssetManifest& assetManifest);

        bool renderBegin();

        void render(
            const ast::assets::Pipeline& pipeline,
            const std::vector<ast::StaticMeshInstance>& staticMeshInstances) override;

        void renderEnd();

    private:
        struct Internal;
        ast::internal_ptr<Internal> internal;
    };
} // namespace ast

The main changes are the removal of the Vulkan asset manager in the constructor and the addition of the new loadAssetManifest function.

Now edit vulkan-context.cpp to apply the changes. Add the Vulkan asset manager include statement since it was removed from the header file but is still required in our implementation:

#include "vulkan-asset-manager.hpp"

In the Internal structure, update the member fields to look like the following:

struct VulkanContext::Internal
{
    const vk::UniqueInstance instance;
    const ast::VulkanPhysicalDevice physicalDevice;
    const ast::SDLWindow window;
    const ast::VulkanSurface surface;
    const ast::VulkanDevice device;
    const ast::VulkanCommandPool commandPool;
    ast::VulkanRenderContext renderContext;
    ast::VulkanAssetManager assetManager;

    Internal()
          ...
          assetManager(ast::VulkanAssetManager())

Notice that we no longer have a std::shared_ptr<ast::VulkanAssetManager> assetManager field; instead we have a simple ast::VulkanAssetManager assetManager which is initialised in the Internal constructor. The Internal constructor also no longer takes an asset manager as an argument. This means the VulkanContext class is now both the creator and owner of the asset manager instance, whereas before our Vulkan application was the creator and owner but passed it around as a shared pointer.

Note: The assetManager initialiser is now the last in the list in the Internal constructor.

Next add a new function to the Internal structure to expose the ability to load an asset manifest:

struct VulkanContext::Internal
{
    ...

    void loadAssetManifest(const ast::AssetManifest& assetManifest)
    {
        assetManager.loadAssetManifest(physicalDevice, device, renderContext, assetManifest);
    }

Update the existing recreateRenderContext function to also call assetManager.reloadContextualAssets after it regenerates the render context:

struct VulkanContext::Internal
{
    ...

    void recreateRenderContext()
    {
        device.getDevice().waitIdle();
        renderContext = renderContext.recreate(window, physicalDevice, device, surface, commandPool);
        assetManager.reloadContextualAssets(physicalDevice, device, renderContext);
    }

Finally update the public constructor and implement the loadAssetManifest public function at the bottom of the file, changing it from this:

VulkanContext::VulkanContext(std::shared_ptr<ast::VulkanAssetManager> assetManager)
    : internal(ast::make_internal_ptr<Internal>(assetManager)) {}

To this:

VulkanContext::VulkanContext() : internal(ast::make_internal_ptr<Internal>()) {}

void VulkanContext::loadAssetManifest(const ast::AssetManifest& assetManifest)
{
    internal->loadAssetManifest(assetManifest);
}

That should be it for the Vulkan context. Now edit vulkan-application.cpp to remove the asset manager field and to update the scene creation code. Here is the complete code for the change:

#include "vulkan-application.hpp"
#include "../../core/graphics-wrapper.hpp"
#include "../../core/sdl-wrapper.hpp"
#include "../../scene/scene-main.hpp"
#include "vulkan-context.hpp"

using ast::VulkanApplication;

namespace
{
    std::unique_ptr<ast::Scene> createMainScene(ast::VulkanContext& context)
    {
        std::pair<uint32_t, uint32_t> displaySize{ast::sdl::getDisplaySize()};
        std::unique_ptr<ast::Scene> scene{std::make_unique<ast::SceneMain>(
            static_cast<float>(displaySize.first),
            static_cast<float>(displaySize.second))};

        context.loadAssetManifest(scene->getAssetManifest());
        scene->prepare();

        return scene;
    }
} // namespace

struct VulkanApplication::Internal
{
    ast::VulkanContext context;
    std::unique_ptr<ast::Scene> scene;

    Internal() : context(ast::VulkanContext()) {}

    ast::Scene& getScene()
    {
        if (!scene)
        {
            scene = ::createMainScene(context);
        }

        return *scene;
    }

    void update(const float& delta)
    {
        getScene().update(delta);
    }

    void render()
    {
        if (context.renderBegin())
        {
            getScene().render(context);
            context.renderEnd();
        }
    }
};

VulkanApplication::VulkanApplication() : internal(ast::make_internal_ptr<Internal>()) {}

void VulkanApplication::update(const float& delta)
{
    internal->update(delta);
}

void VulkanApplication::render()
{
    internal->render();
}

Observe that we no longer have an asset manager member field in the Internal structure, and that the createMainScene function now takes the Vulkan context as an argument, which it uses to load the scene’s asset manifest:

namespace
{
    std::unique_ptr<ast::Scene> createMainScene(ast::VulkanContext& context)
    {
        ...

        context.loadAssetManifest(scene->getAssetManifest());
        scene->prepare();

        ...
    }

Phew, this is getting pretty dense and we haven’t even started writing the Vulkan pipeline yet. Grab a coffee and run the application again to prove that everything is still functioning after the refactoring. Next we’ll write the pipeline code!


The Vulkan pipeline

Ok so we finally made it to the point where we can fill in our asset manager to be able to construct Vulkan pipelines, which are the components that are responsible for stitching together all the things required to interact with our shaders and renderer. The core of what we need to do is detailed here: https://www.khronos.org/registry/vulkan/specs/1.1-extensions/man/html/VkGraphicsPipelineCreateInfo.html.

Basically we need to formulate an instance of a vk::GraphicsPipelineCreateInfo object, populating it with all the juicy Vulkan componentry we’ve been collating during this series of articles. The key pieces of information we need on hand to create a pipeline are the vertex and fragment shader modules, a description of the vertex data that will be fed in, the viewport and scissor to render within, the rasterisation, multisampling, depth and colour blending state, a pipeline layout and the render pass the pipeline will be used with.

To help us implement all the gory details we’ll introduce a new class into our Vulkan application. Create vulkan-pipeline.hpp and vulkan-pipeline.cpp in the application/vulkan folder. Edit the header file first with the following:

#pragma once

#include "../../core/graphics-wrapper.hpp"
#include "../../core/internal-ptr.hpp"
#include "../../core/static-mesh-instance.hpp"
#include "vulkan-device.hpp"
#include "vulkan-physical-device.hpp"
#include <string>
#include <vector>

namespace ast
{
    struct VulkanAssetManager;

    struct VulkanPipeline
    {
        VulkanPipeline(const ast::VulkanPhysicalDevice& physicalDevice,
                       const ast::VulkanDevice& device,
                       const std::string& shaderName,
                       const vk::Viewport& viewport,
                       const vk::Rect2D& scissor,
                       const vk::RenderPass& renderPass);

        void render(
            const ast::VulkanAssetManager& assetManager,
            const std::vector<ast::StaticMeshInstance>& staticMeshInstances) const;

    private:
        struct Internal;
        ast::internal_ptr<Internal> internal;
    };
} // namespace ast

The constructor takes in a bunch of arguments that are needed to be able to define the pipeline. In particular notice the presence of the viewport, scissor and renderPass arguments - these are all Vulkan objects that get regenerated in our Vulkan lifecycle handling code and are the reason we had to perform much of the refactoring earlier in this article.

Now edit vulkan-pipeline.cpp and fill in a skeleton implementation like so:

#include "vulkan-pipeline.hpp"
#include "../../core/assets.hpp"
#include "../../core/vertex.hpp"

using ast::VulkanPipeline;

namespace
{
    // The default shader will have one descriptor for texture mapping which
    // will be made available in the fragment shader pipeline stage only. Note
    // that this pipeline does not include a descriptor set for vertex data as
    // we will use Vulkan push constants for this instead of uniform buffers.
    vk::UniqueDescriptorSetLayout createDescriptorSetLayout(const ast::VulkanDevice& device)
    {
        // TODO: Implement me.
    }

    vk::UniquePipelineLayout createPipelineLayout(const ast::VulkanDevice& device,
                                                  const vk::DescriptorSetLayout& descriptorSetLayout)
    {
        // TODO: Implement me.
    }

    vk::UniquePipeline createPipeline(const ast::VulkanPhysicalDevice& physicalDevice,
                                      const ast::VulkanDevice& device,
                                      const vk::PipelineLayout& pipelineLayout,
                                      const std::string& shaderName,
                                      const vk::Viewport& viewport,
                                      const vk::Rect2D& scissor,
                                      const vk::RenderPass& renderPass)
    {
        // TODO: Implement me.
    }
} // namespace

struct VulkanPipeline::Internal
{
    const vk::UniqueDescriptorSetLayout descriptorSetLayout;
    const vk::UniquePipelineLayout pipelineLayout;
    const vk::UniquePipeline pipeline;

    Internal(const ast::VulkanPhysicalDevice& physicalDevice,
             const ast::VulkanDevice& device,
             const std::string& shaderName,
             const vk::Viewport& viewport,
             const vk::Rect2D& scissor,
             const vk::RenderPass& renderPass)
        : descriptorSetLayout(::createDescriptorSetLayout(device)),
          pipelineLayout(::createPipelineLayout(device, descriptorSetLayout.get())),
          pipeline(::createPipeline(physicalDevice,
                                    device,
                                    pipelineLayout.get(),
                                    shaderName,
                                    viewport,
                                    scissor,
                                    renderPass)) {}

    void render(const ast::VulkanAssetManager& assetManager,
                const std::vector<ast::StaticMeshInstance>& staticMeshInstances) const
    {
        // TODO: Implement me.
    }
};

VulkanPipeline::VulkanPipeline(const ast::VulkanPhysicalDevice& physicalDevice,
                               const ast::VulkanDevice& device,
                               const std::string& shaderName,
                               const vk::Viewport& viewport,
                               const vk::Rect2D& scissor,
                               const vk::RenderPass& renderPass)
    : internal(ast::make_internal_ptr<Internal>(physicalDevice, device, shaderName, viewport, scissor, renderPass)) {}

void VulkanPipeline::render(const ast::VulkanAssetManager& assetManager,
                            const std::vector<ast::StaticMeshInstance>& staticMeshInstances) const
{
    internal->render(assetManager, staticMeshInstances);
}

In our Internal structure we will be creating and holding onto the following components: a descriptor set layout describing what will be bound into our shaders, a pipeline layout combining that descriptor set layout with our push constant definition, and the graphics pipeline object itself.

Note that the Internal constructor is calling ::createDescriptorSetLayout, ::createPipelineLayout and ::createPipeline which don’t yet exist - we’ll author them now.


Descriptor set layout

In our default shader we will be using push constants to feed the MVP (model, view and projection) matrix in. The push constants will be delivered to the vertex shader through the following shader structure:

layout(push_constant) uniform PushConstants {
    mat4 mvp;
} pushConstants;

We don’t include push constants in the descriptor set layout - they will be configured in the pipeline layout later. If we used a uniform buffer instead of push constants to set the mvp field then the descriptor set layout would need to know about it. For our use case the only element we need to describe in a descriptor set layout is the following in our fragment shader:

layout(binding = 0) uniform sampler2D texSampler;

This texSampler object is something we will bind into the fragment shader ourselves later on when we wire up our texture asset loading code. We must tell Vulkan about it so it can pass it into our shader in the correct place. The descriptor set layout will define this requirement. Edit the ::createDescriptorSetLayout free function with the following:

namespace
{
    // The default shader will have one descriptor for texture mapping which
    // will be made available in the fragment shader pipeline stage only. Note
    // that this pipeline does not include a descriptor set for vertex data as
    // we will use Vulkan push constants for this instead of uniform buffers.
    vk::UniqueDescriptorSetLayout createDescriptorSetLayout(const ast::VulkanDevice& device)
    {
        vk::DescriptorSetLayoutBinding textureBinding{
            0,                                         // Binding
            vk::DescriptorType::eCombinedImageSampler, // Descriptor type
            1,                                         // Descriptor count
            vk::ShaderStageFlagBits::eFragment,        // Shader stage flags
            nullptr};                                  // Immutable samplers

        vk::DescriptorSetLayoutCreateInfo info{
            vk::DescriptorSetLayoutCreateFlags(), // Flags
            1,                                    // Binding count
            &textureBinding};                     // Bindings

        return device.getDevice().createDescriptorSetLayoutUnique(info);
    }

    ...
} // namespace

The textureBinding variable declares that at binding location 0 - which is reflected in the fragment shader as layout(binding = 0) - there should be a combined image sampler represented by the vk::DescriptorType::eCombinedImageSampler descriptor type, which should be made available only in the fragment shader reflected by the vk::ShaderStageFlagBits::eFragment shader stage flags.

A new descriptor set layout is then constructed through our logical device and returned.

Note: As mentioned, we are using push constants instead of uniform buffers. If we were using uniform buffers we would need to add another descriptor set layout binding entry to model it as well.
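For reference only, here is a hedged sketch of what such an extra binding might look like if we ever did move the mvp into a uniform buffer. Binding 1 is simply a suggestion since binding 0 is already occupied by the texture sampler - this is not code we are adding to the project:

// Hypothetical only: a uniform buffer holding the mvp would need its own binding,
// visible to the vertex shader stage.
vk::DescriptorSetLayoutBinding mvpUniformBinding{
    1,                                  // Binding
    vk::DescriptorType::eUniformBuffer, // Descriptor type
    1,                                  // Descriptor count
    vk::ShaderStageFlagBits::eVertex,   // Shader stage flags
    nullptr};                           // Immutable samplers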


Pipeline layout

The pipeline layout takes the descriptor set layout and combines it with the required definition of the push constant we declared in the vertex shader - which looks like this:

layout(push_constant) uniform PushConstants {
    mat4 mvp;
} pushConstants;

Edit the ::createPipelineLayout free function with the following:

namespace
{
    ...

    vk::UniquePipelineLayout createPipelineLayout(const ast::VulkanDevice& device,
                                                  const vk::DescriptorSetLayout& descriptorSetLayout)
    {
        // We use push constants for delivering the vertex MVP data to the shader. This is
        // a lightweight alternative to using Vulkan uniform buffers with the caveat that
        // we can only push data up to a certain size, which the minimum guaranteed Vulkan
        // spec claims is 128 bytes. Note that this definition directly correlates to the
        // push constant in the shader code and must match up. A 4x4 matrix will consume
        // 16 floating point values, each of which is 4 bytes making a total of 64 bytes
        // which is half of our 128 byte quota. If we ever needed to push more than 128
        // bytes of data into our shader we must instead implement Vulkan uniform buffers
        // which require more setup work and are less performant than push constants.
        vk::PushConstantRange pushConstantRange{
            vk::ShaderStageFlagBits::eAllGraphics, // Flags
            0,                                     // Offset
            sizeof(glm::mat4)};                    // Size

        // We use the descriptor set layout combined with our definition of how we intend
        // to use push constants during the pipeline to produce an overall pipeline layout.
        vk::PipelineLayoutCreateInfo info{
            vk::PipelineLayoutCreateFlags(), // Flags
            1,                               // Layout count
            &descriptorSetLayout,            // Layouts,
            1,                               // Push constant range count,
            &pushConstantRange               // Push constant ranges
        };

        return device.getDevice().createPipelineLayoutUnique(info);
    }
} // namespace

We first create a variable pushConstantRange which describes to Vulkan any push constants we will be passing into the pipeline shaders. We specify that the push constant is available for all graphics operations via vk::ShaderStageFlagBits::eAllGraphics and specify how many bytes to expect based on the size of the data we will be passing into the push constant. We are passing a single glm::mat4 object, which will represent the model, view, projection matrix for each primitive, so we use sizeof(glm::mat4) to calculate how many bytes to expect.

We then use the previously created descriptor set layout along with the push constant range to create a new pipeline layout object which is returned to the caller.
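
As a preview of how this will eventually be used - this is not part of this article’s code - pushing the matrix during command buffer recording might look roughly like the following sketch, assuming a commandBuffer that is currently being recorded, our pipeline layout as a raw vk::PipelineLayout and an mvp matrix already computed for the mesh being drawn:

glm::mat4 mvp{/* projection * view * model for the current mesh */};

commandBuffer.pushConstants(pipelineLayout,
                            vk::ShaderStageFlagBits::eAllGraphics, // Must match the push constant range
                            0,                                     // Offset
                            sizeof(glm::mat4),                     // Size
                            &mvp);                                 // Data

This is just a sketch to show where the pushConstantRange definition eventually gets used - we will wire it up properly when we implement rendering through the pipeline.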


Creating a shader module

In Vulkan we need to load SPIR-V files and generate shader modules from them. In a way this is similar to our OpenGL program where we had to load and compile the shader script files into shader programs. The Vulkan logical device provides a way to create a shader module given an input of SPIR-V bytes to parse. Before proceeding with our Vulkan pipeline code, we need to take a small detour into our VulkanDevice class to expose a way to create a shader module. Edit vulkan-device.hpp and add a createShaderModule signature like so:

namespace ast
{
    struct VulkanDevice
    {
        ...
        vk::UniqueShaderModule createShaderModule(const std::vector<char>& shaderCode) const;

Now edit vulkan-device.cpp and add a new free function into the anonymous namespace that creates a shader module from the input data:

namespace
{
    ...

    vk::UniqueShaderModule createShaderModule(const vk::Device& device, const std::vector<char>& shaderCode)
    {
        vk::ShaderModuleCreateInfo info{
            vk::ShaderModuleCreateFlags(),                         // Flags
            shaderCode.size(),                                     // Code size
            reinterpret_cast<const uint32_t*>(shaderCode.data())}; // Code

        return device.createShaderModuleUnique(info);
    }
} // namespace

We build up a vk::ShaderModuleCreateInfo object to describe the shader code to parse then return a shader module object via the logical device. See the official docs for more: https://www.khronos.org/registry/vulkan/specs/1.1-extensions/man/html/VkShaderModuleCreateInfo.html.

Scroll to the bottom of vulkan-device.cpp and add the public function implementation - note that we don’t need to add anything new to our Internal struct this time - the public function simply hands the logical device to the free function:

vk::UniqueShaderModule VulkanDevice::createShaderModule(const std::vector<char>& shaderCode) const
{
    return ::createShaderModule(internal->device.get(), shaderCode);
}

Cool, we now have the ability to create Vulkan shader modules given a list of SPIR-V bytes. Return to vulkan-pipeline.cpp to continue our pipeline implementation.


Pipeline

Ok, this is the big one: creating a pipeline requires us to use the pipeline layout we generated (which itself required the descriptor set layout we defined), along with the vertex and fragment shader modules and a collection of configuration settings.

I’ll paste the entire createPipeline function below but we’ll walk through each part as well:

namespace
{
    ...

    vk::UniquePipeline createPipeline(const ast::VulkanPhysicalDevice& physicalDevice,
                                      const ast::VulkanDevice& device,
                                      const vk::PipelineLayout& pipelineLayout,
                                      const std::string& shaderName,
                                      const vk::Viewport& viewport,
                                      const vk::Rect2D& scissor,
                                      const vk::RenderPass& renderPass)
    {
        // Create a vertex shader module from asset file.
        vk::UniqueShaderModule vertexShaderModule{
            device.createShaderModule(ast::assets::loadBinaryFile("assets/shaders/vulkan/" + shaderName + ".vert"))};

        // Describe how to use the vertex shader module in the pipeline.
        vk::PipelineShaderStageCreateInfo vertexShaderInfo{
            vk::PipelineShaderStageCreateFlags(), // Flags
            vk::ShaderStageFlagBits::eVertex,     // Shader stage
            vertexShaderModule.get(),             // Shader module
            "main",                               // Name
            nullptr};                             // Specialisation info

        // Define the data format that will be passed into the vertex shader.
        vk::VertexInputBindingDescription vertexBindingDescription{
            0,                           // Binding
            sizeof(ast::Vertex),         // Stride
            vk::VertexInputRate::eVertex // Input rate
        };

        // Define the shape of the vertex position (x, y, z) attribute.
        vk::VertexInputAttributeDescription vertexPositionDescription{
            0,                                // Location
            0,                                // Binding
            vk::Format::eR32G32B32Sfloat,     // Format
            offsetof(ast::Vertex, position)}; // Offset

        // Define the shape of the texture coordinate (u, v) attribute.
        vk::VertexInputAttributeDescription textureCoordinateDescription{
            1,                                // Location
            0,                                // Binding
            vk::Format::eR32G32Sfloat,        // Format
            offsetof(ast::Vertex, texCoord)}; // Offset

        // Collate all the vertex shader attributes that will be used in the pipeline.
        std::array<vk::VertexInputAttributeDescription, 2> vertexAttributeDescriptions{
            vertexPositionDescription,
            textureCoordinateDescription};

        // Bundle up the collated descriptions defining how vertex data will be passed into the shader.
        vk::PipelineVertexInputStateCreateInfo vertexInputState{
            vk::PipelineVertexInputStateCreateFlags(),                 // Flags
            1,                                                         // Vertex binding description count
            &vertexBindingDescription,                                 // Vertex binding description
            static_cast<uint32_t>(vertexAttributeDescriptions.size()), // Vertex attribute descriptions
            vertexAttributeDescriptions.data()};                       // Vertex attribute descriptions

        // Create a fragment shader module from asset file.
        vk::UniqueShaderModule fragmentShaderModule{
            device.createShaderModule(ast::assets::loadBinaryFile("assets/shaders/vulkan/" + shaderName + ".frag"))};

        // Describe how to use the fragment shader module in the pipeline.
        vk::PipelineShaderStageCreateInfo fragmentShaderInfo{
            vk::PipelineShaderStageCreateFlags(), // Flags
            vk::ShaderStageFlagBits::eFragment,   // Shader stage
            fragmentShaderModule.get(),           // Shader module
            "main",                               // Name
            nullptr};                             // Specialisation info

        // Collate both vertex and fragment shaders into the list of pipeline shaders to use.
        std::array<vk::PipelineShaderStageCreateInfo, 2> stages{
            vertexShaderInfo,
            fragmentShaderInfo};

        // Define what variety of data will be sent into the pipeline - for us a triangle list.
        vk::PipelineInputAssemblyStateCreateInfo inputAssemblyState{
            vk::PipelineInputAssemblyStateCreateFlags(), // Flags
            vk::PrimitiveTopology::eTriangleList,        // Topology
            VK_FALSE};                                   // Primitive restart enable

        // Declare the viewport and scissor to apply to the shader output.
        vk::PipelineViewportStateCreateInfo viewportState{
            vk::PipelineViewportStateCreateFlags(), // Flags
            1,                                      // Viewport count
            &viewport,                              // Viewports
            1,                                      // Scissor count
            &scissor};                              // Scissors

        // Define how the pipeline should process output during rendering.
        vk::PipelineRasterizationStateCreateInfo rasterizationState{
            vk::PipelineRasterizationStateCreateFlags(), // Flags
            VK_FALSE,                                    // Depth clamp enable
            VK_FALSE,                                    // Rasterizer discard enable
            vk::PolygonMode::eFill,                      // Polygon mode
            vk::CullModeFlagBits::eBack,                 // Cull mode
            vk::FrontFace::eCounterClockwise,            // Front face
            VK_FALSE,                                    // Depth bias enable
            0.0f,                                        // Depth bias constant factor
            0.0f,                                        // Depth bias clamp
            0.0f,                                        // Depth bias slope factor
            1.0f};                                       // Line width

        // Define how to apply multi sampling to the shader output.
        vk::PipelineMultisampleStateCreateInfo multisampleState{
            vk::PipelineMultisampleStateCreateFlags(), // Flags
            physicalDevice.getMultiSamplingLevel(),    // Rasterization samples
            VK_FALSE,                                  // Sample shading enabled
            0.0f,                                      // Min sample shading
            nullptr,                                   // Sample mask
            VK_FALSE,                                  // Alpha to coverage enable
            VK_FALSE};                                 // Alpha to one enable

        // If our physical device can do multisampling at the shader level, enable it.
        if (physicalDevice.isShaderMultiSamplingSupported())
        {
            multisampleState.sampleShadingEnable = VK_TRUE;
            multisampleState.minSampleShading = 0.2f;
        }

        // Determine the way that depth testing will be performed for the pipeline.
        vk::PipelineDepthStencilStateCreateInfo depthStencilState{
            vk::PipelineDepthStencilStateCreateFlags(), // Flags
            VK_TRUE,                                    // Depth test enable
            VK_TRUE,                                    // Depth write enable
            vk::CompareOp::eLess,                       // Depth compare operation
            VK_FALSE,                                   // Depth bounds test enable
            VK_FALSE,                                   // Stencil test enable
            vk::StencilOpState(),                       // Stencil front operation
            vk::StencilOpState(),                       // Stencil back operation
            0.0f,                                       // Min depth bounds
            0.0f};                                      // Max depth bounds

        // Define how colors should be written during blending.
        vk::ColorComponentFlags colorWriteMask{
            vk::ColorComponentFlagBits::eR |
            vk::ColorComponentFlagBits::eG |
            vk::ColorComponentFlagBits::eB |
            vk::ColorComponentFlagBits::eA};

        // Define how colors should blend together during rendering.
        vk::PipelineColorBlendAttachmentState colorBlendAttachment{
            VK_TRUE,                            // Blend enable
            vk::BlendFactor::eSrcAlpha,         // Source color blend factor
            vk::BlendFactor::eOneMinusSrcAlpha, // Destination color blend factor
            vk::BlendOp::eAdd,                  // Color blend operation
            vk::BlendFactor::eOne,              // Source alpha blend factor
            vk::BlendFactor::eZero,             // Destination alpha blend factor
            vk::BlendOp::eAdd,                  // Alpha blend operation
            colorWriteMask};                    // Color write mask

        // Take the blending attachment and collate it into a pipeline state definition.
        vk::PipelineColorBlendStateCreateInfo colorBlendState{
            vk::PipelineColorBlendStateCreateFlags(), // Flags
            VK_FALSE,                                 // Logic operation enable
            vk::LogicOp::eClear,                      // Logic operation
            1,                                        // Attachment count
            &colorBlendAttachment,                    // Attachments
            {{0, 0, 0, 0}}};                          // Blend constants

        // Collate all the components into a single graphics pipeline definition.
        vk::GraphicsPipelineCreateInfo pipelineCreateInfo{
            vk::PipelineCreateFlags(),            // Flags
            static_cast<uint32_t>(stages.size()), // Stage count (vertex + fragment)
            stages.data(),                        // Stages
            &vertexInputState,                    // Vertex input state
            &inputAssemblyState,                  // Input assembly state
            nullptr,                              // Tesselation state
            &viewportState,                       // Viewport state
            &rasterizationState,                  // Rasterization state
            &multisampleState,                    // Multi sample state
            &depthStencilState,                   // Depth stencil state
            &colorBlendState,                     // Color blend state
            nullptr,                              // Dynamic state
            pipelineLayout,                       // Pipeline layout
            renderPass,                           // Render pass
            0,                                    // Subpass
            vk::Pipeline(),                       // Base pipeline handle
            0};                                   // Base pipeline index

        return device.getDevice().createGraphicsPipelineUnique(nullptr, pipelineCreateInfo);
    }
} // namespace

Function signature

We start off with the function signature itself, notice that it takes a number of Vulkan components as arguments including the pipeline layout we made and the asset name for the shader files:

vk::UniquePipeline createPipeline(const ast::VulkanPhysicalDevice& physicalDevice,
                                  const ast::VulkanDevice& device,
                                  const vk::PipelineLayout& pipelineLayout,
                                  const std::string& shaderName,
                                  const vk::Viewport& viewport,
                                  const vk::Rect2D& scissor,
                                  const vk::RenderPass& renderPass)

Vertex shader - load shader

Near the start of this article we introduced the loadBinaryFile function to our assets namespace. This will be the first time we use that function to load up the SPIR-V Vulkan shader file for the vertex shader, using the shaderName argument to know which asset file to load. Note the use of the createShaderModule function which we just added to our VulkanDevice class:

vk::UniqueShaderModule vertexShaderModule{
    device.createShaderModule(ast::assets::loadBinaryFile("assets/shaders/vulkan/" + shaderName + ".vert"))};

Reminder: Be sure to have compiled the Vulkan shader files via our setup scripts or through the compile_shaders script we wrote earlier so our SPIR-V shader files are in the assets folder!

Vertex shader - stage configuration

The vertex shader module we loaded doesn’t really know whether it is a vertex shader or a fragment shader, so we need to define a shader stage configuration object to associate it with the right stage within our pipeline.

vk::PipelineShaderStageCreateInfo vertexShaderInfo{
    vk::PipelineShaderStageCreateFlags(), // Flags
    vk::ShaderStageFlagBits::eVertex,     // Shader stage
    vertexShaderModule.get(),             // Shader module
    "main",                               // Name
    nullptr};                             // Specialisation info

We specify vk::ShaderStageFlagBits::eVertex to tell Vulkan that this is a vertex shader. We also supply the name of the entry point function within the shader to run, which for us - and likely most of the time - will be "main". Here is our vertex shader script’s main function - note that void main() is what maps to the "main" configuration field. If we had void banana() then we would specify "banana":

void main() {
    gl_Position = pushConstants.mvp * vec4(inPosition, 1.0f);
    outTexCoord = inTexCoord;
}

We also add the vertex shader module itself with vertexShaderModule.get().

Vertex shader - input binding

Next up we need to tell Vulkan how it will expect to receive vertex data from our main application:

vk::VertexInputBindingDescription vertexBindingDescription{
    0,                           // Binding
    sizeof(ast::Vertex),         // Stride
    vk::VertexInputRate::eVertex // Input rate
};

We register the vertexBindingDescription to be bound to slot 0, which we reference later when describing the vertex position attribute. The stride is how many bytes to step to move from one vertex data structure to the next, and the input rate specifies that a new set of vertex data should be consumed for every vertex of the incoming geometry.
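
For reference, the layout being described maps onto a vertex structure shaped roughly like the sketch below - the real ast::Vertex definition lives in our existing core code, but its shape is implied by the shader attributes and the glm types we use:

struct Vertex
{
    glm::vec3 position; // Read by 'inPosition' at location 0
    glm::vec2 texCoord; // Read by 'inTexCoord' at location 1
};

The sizeof(ast::Vertex) expression gives us the stride between consecutive vertices, and offsetof gives us where each attribute begins inside the structure - we use both below.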

Vertex shader - vertex position attribute

Vulkan needs us to specify the actual data to expect in each shader attribute, starting with the vertex position attribute. Recall that in our vertex shader script we have the following attribute:

layout(location = 0) in vec3 inPosition;

So the inPosition attribute should be given a vec3 which is basically 3 floating point values. We describe the attribute using the vk::VertexInputAttributeDescription type https://www.khronos.org/registry/vulkan/specs/1.1-extensions/man/html/VkVertexInputAttributeDescription.html:

vk::VertexInputAttributeDescription vertexPositionDescription{
    0,                                // Location
    0,                                // Binding
    vk::Format::eR32G32B32Sfloat,     // Format
    offsetof(ast::Vertex, position)}; // Offset

The location relates to the layout(location = 0) component in our shader script, whereas the binding relates to the binding slot we chose in the vertexBindingDescription we wrote. The format looks a bit odd since it appears to be a colour format (RGB), but it works because R32, G32 and B32 simply represent three 32 bit floating point numbers. The offset tells Vulkan where to find those three floating point numbers within the data structure passed in for every vertex - remember that our ast::Vertex class contains more than just a vec3 representing a position, so we must tell Vulkan where inside each vertex object it should read the position values from.

Vertex shader - texture coordinate attribute

The second attribute in our vertex shader script represents the UV texture coordinates for the current vertex and looks like this:

layout(location = 1) in vec2 inTexCoord;

Note that this time the location is 1 and the data is a vec2 which means two floating point numbers. The configuration for this attribute is reflected in the textureCoordinateDescription object:

vk::VertexInputAttributeDescription textureCoordinateDescription{
    1,                                // Location
    0,                                // Binding
    vk::Format::eR32G32Sfloat,        // Format
    offsetof(ast::Vertex, texCoord)}; // Offset

This configuration object is fairly similar to the vertex position description, except that the location is 1 instead of 0, the format is R32G32 because there are only two floating point numbers, and the offset points at where inside our ast::Vertex class the texture coordinate data should be read from.

Vertex shader - input state

We then combine all the vertex shader input configuration objects into a single vk::PipelineVertexInputStateCreateInfo object:

std::array<vk::VertexInputAttributeDescription, 2> vertexAttributeDescriptions{
    vertexPositionDescription,
    textureCoordinateDescription};

vk::PipelineVertexInputStateCreateInfo vertexInputState{
    vk::PipelineVertexInputStateCreateFlags(),                 // Flags
    1,                                                         // Vertex binding description count
    &vertexBindingDescription,                                 // Vertex binding description
    static_cast<uint32_t>(vertexAttributeDescriptions.size()), // Vertex attribute descriptions
    vertexAttributeDescriptions.data()};                       // Vertex attribute descriptions

Fragment shader - load shader

Just like we did for the vertex shader, we need to load the SPIR-V fragment shader file from our assets and process it into a Vulkan shader module:

vk::UniqueShaderModule fragmentShaderModule{
    device.createShaderModule(ast::assets::loadBinaryFile("assets/shaders/vulkan/" + shaderName + ".frag"))};

Fragment shader - stage configuration

For a fragment shader we specify the vk::ShaderStageFlagBits::eFragment flag to indicate the pipeline stage it should participate in. Similar to the vertex shader, we also pass in the shader module - this time the fragment shader module - and specify the entry point for the fragment shader, which is also "main":

vk::PipelineShaderStageCreateInfo fragmentShaderInfo{
    vk::PipelineShaderStageCreateFlags(), // Flags
    vk::ShaderStageFlagBits::eFragment,   // Shader stage
    fragmentShaderModule.get(),           // Shader module
    "main",                               // Name
    nullptr};                             // Specialisation info

Combine vertex and fragment shaders

We take the stage configuration for both the vertex and fragment shaders and combine them in an array which we’ll later use in the pipeline construction.

std::array<vk::PipelineShaderStageCreateInfo, 2> stages{
    vertexShaderInfo,
    fragmentShaderInfo};

Pipeline input assembly state

Vulkan needs to know the topology of the geometry data it will receive. We are going to specify vk::PrimitiveTopology::eTriangleList though more specialised use cases might make use of the other types. Read here for more info: https://www.khronos.org/registry/vulkan/specs/1.1-extensions/man/html/VkPipelineInputAssemblyStateCreateInfo.html:

vk::PipelineInputAssemblyStateCreateInfo inputAssemblyState{
    vk::PipelineInputAssemblyStateCreateFlags(), // Flags
    vk::PrimitiveTopology::eTriangleList,        // Topology
    VK_FALSE};                                   // Primitive restart enable

Pipeline viewport state

The pipeline takes the size of the viewport we will be rendering to, along with the scissor region defining where to clip the rendering output. Recall that both the viewport and scissor are data fields we create in our VulkanRenderContext class - and they are among the fields that can change at runtime due to Vulkan lifecycle scenarios. We take the viewport and scissor arguments to create the viewport state object:

vk::PipelineViewportStateCreateInfo viewportState{
    vk::PipelineViewportStateCreateFlags(), // Flags
    1,                                      // Viewport count
    &viewport,                              // Viewports
    1,                                      // Scissor count
    &scissor};                              // Scissors

Rasterization state

We can influence the way in which Vulkan draws geometry through the following configuration object:

vk::PipelineRasterizationStateCreateInfo rasterizationState{
    vk::PipelineRasterizationStateCreateFlags(), // Flags
    VK_FALSE,                                    // Depth clamp enable
    VK_FALSE,                                    // Rasterizer discard enable
    vk::PolygonMode::eFill,                      // Polygon mode
    vk::CullModeFlagBits::eBack,                 // Cull mode
    vk::FrontFace::eCounterClockwise,            // Front face
    VK_FALSE,                                    // Depth bias enable
    0.0f,                                        // Depth bias constant factor
    0.0f,                                        // Depth bias clamp
    0.0f,                                        // Depth bias slope factor
    1.0f};                                       // Line width

You can read more about this at https://www.khronos.org/registry/vulkan/specs/1.1-extensions/man/html/VkPipelineRasterizationStateCreateInfo.html.

Pay attention in particular to the front face setting. Your 3D modelling software will export mesh objects using either clockwise or counter clockwise winding. If the winding mode you configure differs from that of your mesh vertex data, your models will appear to render inside out. You can read about winding here (it’s an OpenGL document but the principle is the same): https://www.khronos.org/opengl/wiki/Face_Culling.
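
For example - purely as a hedged sketch - if a mesh authored with the opposite winding appears inside out, the rasterization state could be adjusted before the pipeline is created, either by flipping the front face or by disabling culling while investigating:

// Flip the expected winding if the mesh data is clockwise wound.
rasterizationState.frontFace = vk::FrontFace::eClockwise;

// Or temporarily disable culling altogether while debugging winding issues.
rasterizationState.cullMode = vk::CullModeFlagBits::eNone;

For our own models we will stick with counter clockwise winding and back face culling as shown in the configuration above.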

Multisampling state

The multisampling state configuration uses the multisampling level we computed much earlier via our VulkanPhysicalDevice class as the number of rasterization samples. We also check whether shader based multisampling is available and, if so, update the state configuration object to enable it:

vk::PipelineMultisampleStateCreateInfo multisampleState{
    vk::PipelineMultisampleStateCreateFlags(), // Flags
    physicalDevice.getMultiSamplingLevel(),    // Rasterization samples
    VK_FALSE,                                  // Sample shading enabled
    0.0f,                                      // Min sample shading
    nullptr,                                   // Sample mask
    VK_FALSE,                                  // Alpha to coverage enable
    VK_FALSE};                                 // Alpha to one enable

if (physicalDevice.isShaderMultiSamplingSupported())
{
    multisampleState.sampleShadingEnable = VK_TRUE;
    multisampleState.minSampleShading = 0.2f;
}

Depth testing state

The depthStencilState object describes how the pipeline should perform depth testing. Some special types of shaders might change this to implement various visual effects.

vk::PipelineDepthStencilStateCreateInfo depthStencilState{
    vk::PipelineDepthStencilStateCreateFlags(), // Flags
    VK_TRUE,                                    // Depth test enable
    VK_TRUE,                                    // Depth write enable
    vk::CompareOp::eLess,                       // Depth compare operation
    VK_FALSE,                                   // Depth bounds test enable
    VK_FALSE,                                   // Stencil test enable
    vk::StencilOpState(),                       // Stencil front operation
    vk::StencilOpState(),                       // Stencil back operation
    0.0f,                                       // Min depth bounds
    0.0f};                                      // Max depth bounds

Colour state

Next up we define how colours should be written and what to do if the renderer needs to blend colours together:

vk::ColorComponentFlags colorWriteMask{
    vk::ColorComponentFlagBits::eR |
    vk::ColorComponentFlagBits::eG |
    vk::ColorComponentFlagBits::eB |
    vk::ColorComponentFlagBits::eA};

vk::PipelineColorBlendAttachmentState colorBlendAttachment{
    VK_TRUE,                            // Blend enable
    vk::BlendFactor::eSrcAlpha,         // Source color blend factor
    vk::BlendFactor::eOneMinusSrcAlpha, // Destination color blend factor
    vk::BlendOp::eAdd,                  // Color blend operation
    vk::BlendFactor::eOne,              // Source alpha blend factor
    vk::BlendFactor::eZero,             // Destination alpha blend factor
    vk::BlendOp::eAdd,                  // Alpha blend operation
    colorWriteMask};                    // Color write mask

vk::PipelineColorBlendStateCreateInfo colorBlendState{
    vk::PipelineColorBlendStateCreateFlags(), // Flags
    VK_FALSE,                                 // Logic operation enable
    vk::LogicOp::eClear,                      // Logic operation
    1,                                        // Attachment count
    &colorBlendAttachment,                    // Attachments
    {{0, 0, 0, 0}}};                          // Blend constants

Pipeline creation object

Finally we stitch all these configuration objects together into a single pipeline configuration object which is used to generate a new Vulkan pipeline. Take note that we need to include the renderPass argument - recall that the render pass is something from our VulkanRenderContext which is regenerated on Vulkan lifecycle changes:

vk::GraphicsPipelineCreateInfo pipelineCreateInfo{
    vk::PipelineCreateFlags(),            // Flags
    static_cast<uint32_t>(stages.size()), // Stage count (vertex + fragment)
    stages.data(),                        // Stages
    &vertexInputState,                    // Vertex input state
    &inputAssemblyState,                  // Input assembly state
    nullptr,                              // Tesselation state
    &viewportState,                       // Viewport state
    &rasterizationState,                  // Rasterization state
    &multisampleState,                    // Multi sample state
    &depthStencilState,                   // Depth stencil state
    &colorBlendState,                     // Color blend state
    nullptr,                              // Dynamic state
    pipelineLayout,                       // Pipeline layout
    renderPass,                           // Render pass
    0,                                    // Subpass
    vk::Pipeline(),                       // Base pipeline handle
    0};                                   // Base pipeline index

return device.getDevice().createGraphicsPipelineUnique(nullptr, pipelineCreateInfo);

Using the pipeline class

We now have our VulkanPipeline class ready to be used in our Vulkan asset manager to load up shader pipelines. Earlier we updated the VulkanAssetManager class to stub out the ability to load an asset manifest - now we will revisit its implementation to use the new pipeline code.

Edit vulkan-asset-manager.cpp and update the include statements to match the following:

#include "vulkan-asset-manager.hpp"
#include "../../core/assets.hpp"
#include "../../core/log.hpp"
#include "vulkan-pipeline.hpp"
#include <unordered_map>

Create a new anonymous namespace with the following free function in it to act as a factory for creating pipeline instances:

...

using ast::VulkanAssetManager;

namespace
{
    ast::VulkanPipeline createPipeline(const ast::assets::Pipeline& pipeline,
                                       const ast::VulkanPhysicalDevice& physicalDevice,
                                       const ast::VulkanDevice& device,
                                       const ast::VulkanRenderContext& renderContext)
    {
        const std::string pipelinePath{ast::assets::resolvePipelinePath(pipeline)};

        ast::log("ast::VulkanAssetManager::createPipeline", "Creating pipeline: " + pipelinePath);

        return ast::VulkanPipeline(physicalDevice,
                                   device,
                                   pipelinePath,
                                   renderContext.getViewport(),
                                   renderContext.getScissor(),
                                   renderContext.getRenderPass());
    }
} // namespace

struct VulkanAssetManager::Internal
...

Not much going on here: we take in a pipeline enum then resolve which asset name it should map to through the ast::assets::resolvePipelinePath invocation. Apart from a small amount of logging, we simply construct a new ast::VulkanPipeline object and return it. We do, however, have some compiler errors in this code:

renderContext.getViewport(),
renderContext.getScissor(),
renderContext.getRenderPass());

Our VulkanRenderContext currently owns the viewport, scissor and renderPass but doesn’t expose them. Edit vulkan-render-context.hpp and add the following three function signatures:

namespace ast
{
    struct VulkanRenderContext
    {

        ...

        const vk::Viewport& getViewport() const;

        const vk::Rect2D& getScissor() const;

        const vk::RenderPass& getRenderPass() const;

        ...

Update vulkan-render-context.cpp and add the function implementations at the bottom of the file like so:

const vk::Viewport& VulkanRenderContext::getViewport() const
{
    return internal->viewport;
}

const vk::Rect2D& VulkanRenderContext::getScissor() const
{
    return internal->scissor;
}

const vk::RenderPass& VulkanRenderContext::getRenderPass() const
{
    return internal->renderPass.getRenderPass();
}

Save your changes and return to vulkan-asset-manager.cpp - the compiler errors should now disappear.

Next we will update the Internal structure to implement the pipeline cache and loading mechanisms. First off we will add a new member field named pipelineCache to the Internal structure, holding a hash map of our pipeline enumerations and their loaded instances. This is the same approach we used in the OpenGL asset manager to cache assets:

struct VulkanAssetManager::Internal
{
    std::unordered_map<ast::assets::Pipeline, ast::VulkanPipeline> pipelineCache;

The previously stubbed loadAssetManifest function can now be updated to load all of the pipelines specified in the asset manifest. Nothing too tricky here: we iterate the pipeline enumerations in the manifest and, for each one that doesn’t yet have an entry in our pipeline cache, construct a new pipeline object and insert it into our hash map cache:

struct VulkanAssetManager::Internal
{
    ...

    void loadAssetManifest(const ast::VulkanPhysicalDevice& physicalDevice,
                           const ast::VulkanDevice& device,
                           const ast::VulkanRenderContext& renderContext,
                           const ast::AssetManifest& assetManifest)
    {
        for (const auto& pipeline : assetManifest.pipelines)
        {
            if (pipelineCache.count(pipeline) == 0)
            {
                pipelineCache.insert(std::make_pair(
                    pipeline,
                    ::createPipeline(pipeline, physicalDevice, device, renderContext)));
            }
        }
    }

We can also update the previously stubbed reloadContextualAssets function, which simply iterates all the pipelines already in our pipeline cache, recreating and replacing them along the way. Remember that we need to do this because Vulkan lifecycle changes cause the viewport, scissor, render pass and other properties captured in previously cached pipeline instances to become invalid. It may not be necessary to reload other types of assets such as static meshes if they have no relationship with the Vulkan lifecycle.

struct VulkanAssetManager::Internal
{
    ...

    void reloadContextualAssets(const ast::VulkanPhysicalDevice& physicalDevice,
                                const ast::VulkanDevice& device,
                                const ast::VulkanRenderContext& renderContext)
    {
        for (auto& element : pipelineCache)
        {
            element.second = ::createPipeline(element.first, physicalDevice, device, renderContext);
        }
    }

Drum roll please!

Ok, if everything went according to plan you should be able to run the program and see the following log output, indicating that our Vulkan asset manager has taken the asset manifest from the main scene and loaded its pipelines successfully:

ast::VulkanAssetManager::createPipeline: Creating pipeline: default

You can also resize the window while the program is running and you will see the asset manager log message appear again, indicating that the cached pipelines have been recreated due to Vulkan lifecycle changes. This is important to note as it was the reason for a lot of the refactoring work we did in this article.


Summary

It is likely that a complex application would have multiple shader pipelines running within a single scene. I hope that by walking through the implementation of our default shader pipeline we have covered enough material for you to create additional shader pipelines should you need them.

In the next article we will start filling out the Vulkan asset implementation in preparation for rendering them through our pipeline.

The code for this article can be found here.

Continue to Part 26: Vulkan load meshes.

End of part 25