a-simple-triangle / Part 24 - Vulkan render loop

Vulkan - Render Loop

We are finally ready to wire up our core Vulkan render loop, using a number of the Vulkan components we’ve worked so hard to implement.

The goal of this article is to:

  1. Implement the core Vulkan render loop within our render context and Vulkan context classes.
  2. Wire a stubbed Vulkan asset manager and our main scene into the Vulkan application.
  3. Prove the loop works by clearing the screen to a solid red colour each frame.
  4. Handle the swapchain becoming invalid - for example when the window is resized - by recreating the render context.

We will start using a few new Vulkan components in this article as well, including semaphores and fences.

The (awesome) Vulkan Tutorial site has a great walkthrough of the key parts we will be implementing: https://vulkan-tutorial.com/Drawing_a_triangle/Drawing/Rendering_and_presentation and is worth a read especially to get familiar with semaphores and fences.

Some of our implementation will follow cues given in that tutorial site.


The flow of the render loop

Before diving into code I’ll highlight the key parts that we’ll be implementing to give us our render loop. The render loop will have a few prerequisites that we haven’t yet implemented, including the following:

  1. A list of command buffers - one per swapchain image.
  2. Semaphores and fences to synchronise the graphics and presentation operations.
  3. A scissor area and viewport derived from the swapchain extent.
  4. Access to the presentation queue from our Vulkan device.
  5. Colour and depth clear values to wipe each frame with.

The loop itself would then look like:

start render loop

    calculate which swapchain image index we should be using as a frame index.
        - if we detect the swapchain is out of date, recreate it.

    reset and begin the command buffer at the frame index position within the command buffers list.
    set the viewport of the command buffer to our precomputed viewport.
    create a render pass info object with our render pass instance, frame buffer and colour / depth clear attributes.
    request the command buffer to begin the render pass with the render pass info

        [ future state - iterate all our models in the 3d world, drawing them using the command buffer ]

    request the command buffer to end its render pass and its command recording
    for the current frame index, issue a submit object with the command buffers to the graphics queue
    for the current frame index, issue another submit object with the swapchain to the presentation queue
        - if we detect the swapchain is out of date or sub-optimal, recreate it.

    wait for the presentation submission to be completed
    increment the current frame index, wrapping it to 0 when needed

end render loop

Prerequisite: Command buffers

During each pass of the render loop we will record commands into the command buffer associated with the current swapchain image index, to render the scene for that swapchain image. So, if we had 3 swapchain images, we need a list of 3 command buffers to cycle through as well. We will hold the list of command buffers in the VulkanRenderContext.

Something we don’t yet expose is how many swapchain images there are for a given swapchain. Edit vulkan-swapchain.hpp and add a new function to allow us to access this information:

uint32_t getImageCount() const;

Now update vulkan-swapchain.cpp and add the implementation at the bottom of the file:

uint32_t VulkanSwapchain::getImageCount() const
{
    return static_cast<uint32_t>(internal->imageViews.size());
}

Note: We are using the length of the imageViews list to tell us how many images the swapchain has so the source of truth is the swapchain itself rather than the minimum / maximum image count values we computed when creating the swapchain originally.

Cool, now whenever we have a swapchain instance we can query it to know how many images it has. Creating a list of command buffers will require the use of our VulkanCommandPool class as command buffers are provisioned from the command pool. We will add a factory function to our VulkanCommandPool class to provide a way to construct a list of command buffers returned as a std::vector. Edit vulkan-command-pool.hpp and add the header for std::vector:

#include <vector>

Then add a new function definition to let a consumer create a list of count command buffers:

std::vector<vk::UniqueCommandBuffer> createCommandBuffers(const ast::VulkanDevice& device,
                                                          const uint32_t& count) const;

Add a free function to vulkan-command-pool.cpp which can create a list of command buffers:

namespace
{
    ...

    std::vector<vk::UniqueCommandBuffer> createCommandBuffers(const vk::Device& device,
                                                              const vk::CommandPool& commandPool,
                                                              const uint32_t& count)
    {
        vk::CommandBufferAllocateInfo info{
            commandPool,                      // Command pool
            vk::CommandBufferLevel::ePrimary, // Level
            count                             // Command buffer count
        };

        return device.allocateCommandBuffersUnique(info);
    }
}

To allocate command buffers, we create a vk::CommandBufferAllocateInfo object, tell it which command pool to draw from and how many command buffers to create, resulting in a list which is handed back for ownership to the caller.

Add the public function implementation to the bottom of the class file - it uses the internal command pool instance to perform the operation:

std::vector<vk::UniqueCommandBuffer> VulkanCommandPool::createCommandBuffers(const ast::VulkanDevice& device,
                                                                             const uint32_t& count) const
{
    return ::createCommandBuffers(device.getDevice(), internal->commandPool.get(), count);
}

We can now return to our VulkanRenderContext class to provision the list of command buffers. Add a new member field to the Internal struct and initialise it in the constructor, noting that we pass in the swapchain image count as the number of command buffers to create:

struct VulkanRenderContext::Internal
{
    ...
    const std::vector<vk::UniqueCommandBuffer> commandBuffers;

    Internal(...)
        : ...
          commandBuffers(commandPool.createCommandBuffers(device, swapchain.getImageCount())) {}
};

Prerequisite: Semaphores

Semaphores in Vulkan are signals that can be associated with different operations within the rendering pipeline. We use semaphores as sentinels to wait for when performing Vulkan commands and to signal when commands are completed. By using them this way we can be sure that the order of operations is always respected.

Note: Semaphores are used to instruct Vulkan to wait until something has been signalled as having happened, or to signal that something has happened, within the GPU. We do not use semaphores in our own CPU driven code to wait for or signal something in Vulkan - if we find ourselves needing to wait for something related to the GPU in our own code we would instead use fences, which I’ll talk about soon. Think of semaphores as being a GPU <-> GPU relationship, whereas fences are a GPU <-> CPU relationship, with our own application code residing on the CPU side.

You can read the official docs here: https://www.khronos.org/registry/vulkan/specs/1.1-extensions/man/html/VkSemaphore.html.

As mentioned earlier, a fantastic resource about this topic can be found here: https://vulkan-tutorial.com/Drawing_a_triangle/Drawing/Rendering_and_presentation.

Also of help is this discussion thread, which also talks about fences which we will need to use in this article as well: https://www.reddit.com/r/vulkan/comments/47tc3s/differences_between_vkfence_vkevent_and/.
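To make these wait / signal relationships a bit more concrete before we build them out piece by piece, here is a condensed, hypothetical sketch of how one frame's synchronisation objects will chain together in our render loop. The function and its handle parameters are placeholders for illustration only - they are not code we will actually add:

#include <vulkan/vulkan.hpp>

#include <limits>

// Illustrative only: assumes all handles already exist and are valid.
uint32_t drawOneFrame(const vk::Device& device,
                      const vk::SwapchainKHR& swapchain,
                      const vk::Queue& graphicsQueue,
                      const vk::Queue& presentationQueue,
                      const vk::CommandBuffer& commandBuffer,
                      const vk::Fence& fence,
                      const vk::Semaphore& imageAvailable,
                      const vk::Semaphore& renderFinished)
{
    constexpr uint64_t timeout{std::numeric_limits<uint64_t>::max()};

    // CPU <-> GPU: block until the GPU has finished the previous use of this frame slot.
    device.waitForFences(1, &fence, VK_TRUE, timeout);
    device.resetFences(1, &fence);

    // GPU <-> GPU: 'imageAvailable' is signalled once the swapchain image is ready to use.
    uint32_t imageIndex{device.acquireNextImageKHR(swapchain, timeout, imageAvailable, nullptr).value};

    // The graphics submission waits on 'imageAvailable', then signals 'renderFinished' and 'fence'.
    const vk::PipelineStageFlags waitStage{vk::PipelineStageFlagBits::eColorAttachmentOutput};
    vk::SubmitInfo submitInfo{1, &imageAvailable, &waitStage, 1, &commandBuffer, 1, &renderFinished};
    graphicsQueue.submit(1, &submitInfo, fence);

    // Presentation waits on 'renderFinished' before showing the image.
    vk::PresentInfoKHR presentInfo{1, &renderFinished, 1, &swapchain, &imageIndex, nullptr};
    presentationQueue.presentKHR(presentInfo);

    return imageIndex;
}

Notice that the only GPU <-> CPU synchronisation point is the fence - the two semaphores never touch our application code, they simply order the GPU's own work.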

We will need two lists of semaphores to control the maximum number of frames we should allow to be in flight before waiting. For our application we will allow 2 frames.

Note: The maximum number of render frames is not related to the number of swapchain images - it is totally up to us to decide what a good number is.

Add a new field to hold the maximum value - no need to initialise it in the constructor as we can just inline it:

struct VulkanRenderContext::Internal
{
    ...
    const uint32_t maxRenderFrames{2};

The reason we need two lists is that we will have a different collection of semaphores to control the synchronisation of the graphics commands versus the presentation commands. This will become more obvious when we start writing the render loop code.

Next we’ll need the two lists of vk::UniqueSemaphore objects, each containing one semaphore for each render frame we will allow to be in flight. In our application the lists will both have 2 semaphores in them because we decided on a maximum of 2 render frames. To help us provision lists of semaphores we will give our VulkanDevice a new function which can produce them. Edit vulkan-device.hpp and add the std::vector header:

#include <vector>

Next add a new function definition to create a list of semaphores with a given count:

std::vector<vk::UniqueSemaphore> createSemaphores(const uint32_t& count) const;

Add the following free function to vulkan-device.cpp to implement it:

namespace
{
    ...

    std::vector<vk::UniqueSemaphore> createSemaphores(const vk::Device& device,
                                                      const uint32_t& count)
    {
        std::vector<vk::UniqueSemaphore> semaphores;
        vk::SemaphoreCreateInfo info;

        for (uint32_t i = 0; i < count; i++)
        {
            semaphores.push_back(device.createSemaphoreUnique(info));
        }

        return semaphores;
    }
}

To create a semaphore we just need a basic default vk::SemaphoreCreateInfo object - no additional properties are needed. Then we simply loop through the count adding new semaphores to a list which is then returned.

Implement the public function at the bottom of the vulkan-device.cpp file to delegate to our new free function:

std::vector<vk::UniqueSemaphore> VulkanDevice::createSemaphores(const uint32_t& count) const
{
    return ::createSemaphores(internal->device.get(), count);
}

Hop back to vulkan-render-context.cpp again and use the new device function to initialise our lists of semaphores for graphics and presentation:

struct VulkanRenderContext::Internal
{
    ...
    const std::vector<vk::UniqueSemaphore> graphicsSemaphores;
    const std::vector<vk::UniqueSemaphore> presentationSemaphores;

    Internal(...)
        : ...
          graphicsSemaphores(device.createSemaphores(maxRenderFrames)),
          presentationSemaphores(device.createSemaphores(maxRenderFrames)) {}
};

Note that we are using our maxRenderFrames to determine how many semaphores should be created.


Prerequisite: Fences

Fences are used to allow our CPU to wait for the GPU to have completed some kind of operation. In our rendering code we will need to use a fence to wait until we can safely acquire the next swapchain image to be used in our render loop. After waiting for the fence, we will reset it.

Again, this will be shown in context when we start writing the render code. To create lists of fences we will take a similar approach to our semaphore code by adding to our VulkanDevice class. Edit vulkan-device.hpp and add a new function to let us produce a list of fences:

std::vector<vk::UniqueFence> createFences(const uint32_t& count) const;

Then add a new free function into vulkan-device.cpp to implement the construction of a list of fences:

namespace
{
    ...

    std::vector<vk::UniqueFence> createFences(const vk::Device& device, const uint32_t& count)
    {
        std::vector<vk::UniqueFence> fences;
        vk::FenceCreateInfo info{vk::FenceCreateFlagBits::eSignaled};

        for (uint32_t i = 0; i < count; i++)
        {
            fences.push_back(device.createFenceUnique(info));
        }

        return fences;
    }
}

We are specifying the eSignaled state for each fence so it starts in that state - this is important because our render loop will wait on a fence before submitting any work that could signal it, so if the fences didn’t start out signalled, the very first frame would block forever. Find more info about this here: https://www.khronos.org/registry/vulkan/specs/1.1-extensions/man/html/VkFenceCreateFlagBits.html.

Add the public function implementation to the bottom of the file to delegate to the free function:

std::vector<vk::UniqueFence> VulkanDevice::createFences(const uint32_t& count) const
{
    return ::createFences(internal->device.get(), count);
}

We can now return to vulkan-render-context.cpp and add a new member field which holds a list of fences. Note that we only need one list here as we will only use it to wait for the graphics commands in our rendering code and not specifically presentation:

struct VulkanRenderContext::Internal
{
    ...
    const std::vector<vk::UniqueFence> graphicsFences;

    Internal(...)
        : ...
          graphicsFences(device.createFences(maxRenderFrames)) {}

Observe again that we are using the maxRenderFrames to define how many fences to create.


Prerequisite: Scissor area

The scissor area defines what the clipping region should be when processing the graphics output and is basically a rectangle which would normally mirror the dimensions of the swapchain extent. We will need to hand this to our command buffer during the render loop.

Scissors and viewports are explained really well here: https://vulkan-tutorial.com/Drawing_a_triangle/Graphics_pipeline_basics/Fixed_functions#page_Viewports-and-scissors.

Add a free function into vulkan-render-context.cpp to create our scissor object:

namespace
{
    ...

    vk::Rect2D createScissor(const ast::VulkanSwapchain& swapchain)
    {
        vk::Offset2D offset{0, 0};

        return vk::Rect2D{
            offset,
            swapchain.getExtent()};
    }
}

Creating a scissor is pretty straightforward - we just need to define a rectangular area with an x/y offset and a width/height extent, for which we take the swapchain extent here.

Add another member field to hold our scissor:

struct VulkanRenderContext::Internal
{
    ...
    const vk::Rect2D scissor;

    Internal(...)
        : ...
          scissor(::createScissor(swapchain)) {}

Prerequisite: Viewport

The viewport is another structure that the command buffer will need to know about during the render loop which defines the area to draw into - not to be confused with scissors which define what area to clip.

The doco can be found here: https://www.khronos.org/registry/vulkan/specs/1.1-extensions/man/html/VkViewport.html.

Enter the following free function to create our viewport:

namespace
{
    ...

    vk::Viewport createViewport(const ast::VulkanSwapchain& swapchain)
    {
        const vk::Extent2D extent{swapchain.getExtent()};
        const float viewportWidth{static_cast<float>(extent.width)};
        const float viewportHeight{static_cast<float>(extent.height)};

        return vk::Viewport{
            0.0f,           // X
            0.0f,           // Y
            viewportWidth,  // Width
            viewportHeight, // Height
            0.0f,           // Min depth
            1.0f};          // Max depth
    }
}

Observe that we are using the width and height of the swapchain extent.

Add another member field to hold our viewport:

struct VulkanRenderContext::Internal
{
    ...
    const vk::Viewport viewport;

    Internal(...)
        : ...
          viewport(::createViewport(swapchain)) {}

Prerequisite: Presentation queue

We have already exposed the graphics queue from our VulkanDevice class, but we haven’t yet exposed the presentation queue. The presentation queue is where we submit the result of our graphics operations during rendering so they can be presented to the display hardware. Recall a while ago we wrote code to choose the best presentation mode that is supported by the physical device - the presentation queue is where we will start submitting the output of the graphics commands.

We will keep a reference to our presentation queue inside the VulkanDevice class, add the following function definition to vulkan-device.hpp:

const vk::Queue& getPresentationQueue() const;

Edit vulkan-device.cpp and add a new member field to hold the presentation queue, initialising it with our existing getQueue function - notice that we are passing in queueConfig.presentationQueueIndex to determine which queue we are looking for:

struct VulkanDevice::Internal
{
    ...
    const vk::Queue presentationQueue;

    Internal(...)
        : ...
          presentationQueue(::getQueue(device.get(), queueConfig.presentationQueueIndex)) {}

And also add the public function implementation:

const vk::Queue& VulkanDevice::getPresentationQueue() const
{
    return internal->presentationQueue;
}

Prerequisite: Colour and depth clear values

On each render frame before rendering geometry we will clear the existing frame to a colour and also clear its depth values. This is somewhat similar to the OpenGL code we wrote a while ago like this:

glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

We will create an array containing two vk::ClearValue objects to help with this - one for the colour to clear to and one for the depth to clear. We will set the colour to be the Vulkan red colour so we can see the screen turn red as a result of our rendering code working, though in a proper application you would probably set it to black or similar.

Add a new free function to vulkan-render-context.cpp to create the array of clear values:

namespace
{
    ...

    std::array<vk::ClearValue, 2> createClearValues()
    {
        vk::ClearValue color;
        color.color = vk::ClearColorValue(std::array<float, 4>{
            164.0f / 256.0f, // Red
            30.0f / 256.0f,  // Green
            34.0f / 256.0f,  // Blue
            1.0f});          // Alpha

        vk::ClearValue depth;
        depth.depthStencil = vk::ClearDepthStencilValue{1.0f, 0};

        return std::array<vk::ClearValue, 2>{color, depth};
    }
}

Not too much going on here - the color value represents what colour to clear the background to, with RGB 164, 30, 34 being a Vulkan-y red colour. The depth value specifies to clear to depth 1.0f which is the limit of our depth buffer.

Note: We divide typical RGB values by 256.0f to give the colour in a range from 0..1f.
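If it helps, here is a tiny hypothetical helper (not part of the article's code base) showing the same normalisation in one place - for example makeClearColour(164.0f, 30.0f, 34.0f) would produce the same red used above:

#include <vulkan/vulkan.hpp>

#include <array>

// Hypothetical helper: convert 0..255 style channel values into a normalised clear colour.
vk::ClearColorValue makeClearColour(float red, float green, float blue, float alpha = 1.0f)
{
    return vk::ClearColorValue(std::array<float, 4>{red / 256.0f,
                                                    green / 256.0f,
                                                    blue / 256.0f,
                                                    alpha});
}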

Add a new member field to hold the clear values and instantiate it in the constructor:

struct VulkanRenderContext::Internal
{
    ...
    const std::array<vk::ClearValue, 2> clearValues;

    Internal(...)
        : ...
          clearValues(::createClearValues()) {}

Render loop plumbing

Now that we have our prerequisite Vulkan components in place we can update our Vulkan application plumbing to support the introduction of our render loop.

Vulkan asset manager

In the OpenGL application we authored an OpenGL implementation of the ast::AssetManager contract which is used when creating scenes and which is also provided to the OpenGL renderer implementation. We will need to introduce the Vulkan equivalent before creating our scene.

Note: Creating a Vulkan asset manager was in our minds previously when we were authoring the OpenGL asset manager. As we reach the same point in our Vulkan application we will start to see the value of our earlier decision to abstract the asset types instead of hard coding them in an OpenGL specific way.

Create new files vulkan-asset-manager.hpp and vulkan-asset-manager.cpp in the vulkan/application folder. Edit the header file with the following:

#pragma once

#include "../../core/asset-manager.hpp"
#include "../../core/internal-ptr.hpp"

namespace ast
{
    struct VulkanAssetManager : public ast::AssetManager
    {
        VulkanAssetManager();

        void loadPipelines(const std::vector<ast::assets::Pipeline>& pipelines) override;

        void loadStaticMeshes(const std::vector<ast::assets::StaticMesh>& staticMeshes) override;

        void loadTextures(const std::vector<ast::assets::Texture>& textures) override;

    private:
        struct Internal;
        ast::internal_ptr<Internal> internal;
    };
} // namespace ast

You can see we are adopting the AssetManager contract. Enter the following into vulkan-asset-manager.cpp:

#include "vulkan-asset-manager.hpp"
#include "../../core/assets.hpp"

using ast::VulkanAssetManager;

struct VulkanAssetManager::Internal
{
    Internal() {}
};

VulkanAssetManager::VulkanAssetManager() : internal(ast::make_internal_ptr<Internal>()) {}

void VulkanAssetManager::loadPipelines(const std::vector<ast::assets::Pipeline>& pipelines)
{
    // TODO: Implement me
}

void VulkanAssetManager::loadStaticMeshes(const std::vector<ast::assets::StaticMesh>& staticMeshes)
{
    // TODO: Implement me
}

void VulkanAssetManager::loadTextures(const std::vector<ast::assets::Texture>& textures)
{
    // TODO: Implement me
}

We won’t implement the functions in this article as they need at least a full article of their own to work through. Leaving the functions stubbed is fine for now as they contractually meet their obligations even if they don’t do anything real. Remember that the goal of this article is to see a red screen to prove our rendering loop works.

Close the Vulkan asset manager class and return to vulkan-application.cpp.

Render loop support classes

Our VulkanApplication class is responsible for responding to each render request by our underlying engine. At the moment we have the following code in vulkan-application.cpp which shows that our render function is just a stub:

struct VulkanApplication::Internal
{
    const ast::VulkanContext context;

    Internal() : context(ast::VulkanContext()) {}

    void update(const float& delta) {}

    void render() {}
};

If we cast our minds back to the OpenGL application implementation we see that we used a scene and handed it a renderer object on every frame:

void render()
{
    SDL_GL_MakeCurrent(window.getWindow(), context);

    glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    getScene().render(renderer);

    SDL_GL_SwapWindow(window.getWindow());
}

The renderer object was of the type ast::OpenGLRenderer which in turn implemented the ast::Renderer contract. We will follow the same concept for our Vulkan application, meaning we need to choose a class that can fulfill the ast::Renderer contract for the Vulkan implementation.

A convenient choice for us is to use the existing ast::VulkanContext, updating it to fulfill the ast::Renderer contract which then through polymorphism allows it to become the object that is passed to a scene to be rendered.

Open vulkan-context.hpp and make it implement the contract and take in a shared Vulkan asset manager. We will also include two additional functions which aren’t part of the contract - renderBegin and renderEnd - so our main Vulkan application class can properly prepare and finish the rendering loop each frame.

Note: The renderBegin function will return a bool to indicate whether we were able to successfully begin the render loop for the current frame. There are scenarios where we won’t be able to begin a frame correctly and we need to deal with them gracefully. Although the renderEnd could internally hit the same scenarios, a consumer of our VulkanContext class won’t need to care which is why we aren’t returning a bool for the renderEnd function too.

#pragma once

#include "../../core/internal-ptr.hpp"
#include "../../core/renderer.hpp"
#include "vulkan-asset-manager.hpp"

namespace ast
{
    struct VulkanContext : public ast::Renderer
    {
        VulkanContext(std::shared_ptr<ast::VulkanAssetManager> assetManager);

        bool renderBegin();

        void render(
            const ast::assets::Pipeline& pipeline,
            const std::vector<ast::StaticMeshInstance>& staticMeshInstances) override;

        void renderEnd();

    private:
        struct Internal;
        ast::internal_ptr<Internal> internal;
    };
} // namespace ast

Note the addition of the public ast::Renderer and the declaration of the render function as an override. We are also taking in a shared_ptr for the asset manager.

Edit vulkan-context.cpp and update the constructor to pass in the asset manager and hold it in the Internal struct. We will also stub out the render function for now:

...

struct VulkanContext::Internal
{
    const std::shared_ptr<ast::VulkanAssetManager> assetManager;
    ...

    Internal(std::shared_ptr<ast::VulkanAssetManager> assetManager)
        : assetManager(assetManager),
          ...
    {
        ast::log("ast::VulkanContext", "Initialized Vulkan context successfully.");
    }

    bool renderBegin()
    {
        // TODO: Implement me
        return true;
    }

    void render(const ast::assets::Pipeline& pipeline,
                const std::vector<ast::StaticMeshInstance>& staticMeshInstances)
    {
        // TODO: Implement me
    }

    void renderEnd()
    {
        // TODO: Implement me
    }
};

VulkanContext::VulkanContext(std::shared_ptr<ast::VulkanAssetManager> assetManager)
    : internal(ast::make_internal_ptr<Internal>(assetManager)) {}

bool VulkanContext::renderBegin()
{
    return internal->renderBegin();
}

void VulkanContext::render(const ast::assets::Pipeline& pipeline,
                           const std::vector<ast::StaticMeshInstance>& staticMeshInstances)
{
    internal->render(pipeline, staticMeshInstances);
}

void VulkanContext::renderEnd()
{
    internal->renderEnd();
}

Points of note:

  1. The constructor now takes a std::shared_ptr to the Vulkan asset manager and stores it in the Internal struct.
  2. The renderBegin, render and renderEnd functions are stubbed out for now - we will fill in renderBegin and renderEnd shortly.
  3. The public class functions simply delegate to the internal implementation.

These changes to the Vulkan context class will now allow it to be used as the ast::Renderer within our Vulkan application.

Update Vulkan application

We can now revisit our Vulkan application and wire up a scene to be rendered in a similar way to our OpenGL application. Edit vulkan-application.cpp and update it to look like so:

#include "vulkan-application.hpp"
#include "../../core/graphics-wrapper.hpp"
#include "../../core/sdl-wrapper.hpp"
#include "../../scene/scene-main.hpp"
#include "vulkan-asset-manager.hpp"
#include "vulkan-context.hpp"

using ast::VulkanApplication;

namespace
{
    std::shared_ptr<ast::VulkanAssetManager> createAssetManager()
    {
        return std::make_shared<ast::VulkanAssetManager>(ast::VulkanAssetManager());
    }

    std::unique_ptr<ast::Scene> createMainScene(ast::AssetManager& assetManager)
    {
        std::pair<uint32_t, uint32_t> displaySize{ast::sdl::getDisplaySize()};
        std::unique_ptr<ast::Scene> scene{std::make_unique<ast::SceneMain>(
            static_cast<float>(displaySize.first),
            static_cast<float>(displaySize.second))};
        scene->prepare(assetManager);
        return scene;
    }
} // namespace

struct VulkanApplication::Internal
{
    const std::shared_ptr<ast::VulkanAssetManager> assetManager;
    ast::VulkanContext context;
    std::unique_ptr<ast::Scene> scene;

    Internal() : assetManager(::createAssetManager()),
                 context(ast::VulkanContext(assetManager)) {}

    ast::Scene& getScene()
    {
        if (!scene)
        {
            scene = ::createMainScene(*assetManager);
        }

        return *scene;
    }

    void update(const float& delta)
    {
        getScene().update(delta);
    }

    void render()
    {
        if (context.renderBegin())
        {
            getScene().render(context);
            context.renderEnd();
        }
    }
};

VulkanApplication::VulkanApplication() : internal(ast::make_internal_ptr<Internal>()) {}

void VulkanApplication::update(const float& delta)
{
    internal->update(delta);
}

void VulkanApplication::render()
{
    internal->render();
}

A fair bit of the code looks the same as our OpenGL application - the main differences being the use of the Vulkan implementations of the asset manager and renderer, and the renderBegin and renderEnd calls wrapping the scene’s render function. We could do some more work to deduplicate this code but I’ll leave it as a future optimisation.

One point of interest is that inside the render function we only proceed with rendering the scene and ending the render loop if the call to context.renderBegin returns true.


Begin render loop

We can now focus on the render function in our application. If you refresh your memory of our OpenGL render function it looked like this:

void render()
{
    SDL_GL_MakeCurrent(window.getWindow(), context);

    glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    getScene().render(renderer);

    SDL_GL_SwapWindow(window.getWindow());
}

We would perform some OpenGL commands before asking the scene to render itself, followed by a swap window operation to present the rendered frame. We will need to do something similar for Vulkan:

  1. Begin the next render frame within the render context, preparing it for use.
  2. Delegate to the current scene’s render function supplying our renderer to it.
  3. End the render frame within the render context to present the render.

In Vulkan we will actually need to deal with a couple of error flows typically caused by the swapchain entering an invalid state - the most common cause of this is the window size changing while the application is running. When we find ourselves in this state we can’t continue rendering the current frame - instead we need to recreate the swapchain and all the other Vulkan components that relate to its use. We deliberately created the VulkanRenderContext class to encapsulate all the volatile Vulkan components within it for this reason - allowing us to simply destroy the render context and make a new one when the invalid state condition is detected.

Why am I telling you this at this point? Well, the two places within a typical render loop where this could happen are the begin and end phases, so we’ll need to introduce functions into our render context which perform those phases and return whether they succeeded or not. Edit vulkan-render-context.hpp and introduce two new functions which return a boolean result - these will be used by the host VulkanContext class to know if / when it should recreate its render context instance:

namespace ast
{
    struct VulkanRenderContext
    {
        ...

        bool renderBegin(const ast::VulkanDevice& device);

        bool renderEnd(const ast::VulkanDevice& device);

Render begin function

We will begin our render loop with the following steps:

start render loop

    calculate which swapchain image index we should be using as a frame index.
        - if we detect the swapchain is out of date, recreate it.

Edit vulkan-render-context.cpp and add the renderBegin function to the Internal struct then the public function implementation which simply delegates to the internal implementation:

struct VulkanRenderContext::Internal
{
    ...

    bool renderBegin(const ast::VulkanDevice& device)
    {
        return true;
    }
};

...

bool VulkanRenderContext::renderBegin(const ast::VulkanDevice& device)
{
    return internal->renderBegin(device);
}

The first thing we will need to do is work out which swapchain image index we should target for the current render frame. Remember that our swapchain has a number of images which are rotated through during rendering - allowing frames to be prepared while other frames are being presented. Once we know the swapchain image index we use it to choose the correct frame buffer and command buffer in the lists we created earlier in this article.

So how do we figure out what swapchain index to use? We do this:

  1. We track the value of a current frame index field which will cycle from 0 through to maxRenderFrames, incrementing on each render loop.
  2. We ask our device to wait until the graphics fence for the current frame index is in a signalled state after which we will reset it so it is ready for use for the next frame.
  3. We ask our device to acquire the next swapchain image for us, providing it with the graphics semaphore for the current frame index. The semaphore will be signalled when the swapchain image has been acquired. This is important because later we will use the same semaphore as the signal to wait on before presenting the current frame - this gives Vulkan enough information to understand how to synchronise the ordering of the commands it receives related to graphics/presentation.
  4. We return the value of the acquired swapchain image index which is the swapchain image index for the current render frame.

We don’t yet have a currentFrameIndex property so we should add that first as a member field, initialising it to 0. We will also keep the current swapchain image index as a member field because it needs to hold its value between the begin and end render functions:

struct VulkanRenderContext::Internal
{
    ...
    uint32_t currentFrameIndex{0};
    uint32_t currentSwapchainImageIndex{0};

Note: These fields cannot be const because we will be updating them on every render loop.

Now add a new free function to compute the correct swapchain image index to use:

namespace
{
    ...

    uint32_t acquireNextImageIndex(const vk::Device& device,
                                   const vk::SwapchainKHR& swapchain,
                                   const vk::Fence& fence,
                                   const vk::Semaphore& semaphore)
    {
        static constexpr uint64_t timeOut{std::numeric_limits<uint64_t>::max()};

        device.waitForFences(
            1,        // Number of fences to wait for
            &fence,   // Fences to wait for
            VK_TRUE,  // Wait for all fences
            timeOut); // Timeout while waiting

        // The fence should now be reset, ready for the next use case.
        device.resetFences(1, &fence);

        vk::ResultValue nextImageIndex{device.acquireNextImageKHR(
            swapchain, // Swapchain to acquire from
            timeOut,   // Timeout while waiting
            semaphore, // Which semaphore to signal
            nullptr)}; // Which fence to signal

        return nextImageIndex.value;
    }
}

The first part of this function performs a waitForFences operation.

The documentation for this operation can be found here: https://www.khronos.org/registry/vulkan/specs/1.1-extensions/man/html/vkWaitForFences.html.

After waiting for the graphics fence, we will reset it so it is ready for the next time it is used. We must do this because Vulkan does not reset fences automatically after they’ve been signalled.

The last part of the function asks the device to acquire the next available image from the swapchain, providing it with which semaphore to signal when it has successfully acquired it. We aren’t passing a fence in as the semaphore is enough to signal further Vulkan operations later in our render loop - specifically around presentation. The doco is here: https://khronos.org/registry/vulkan/specs/1.1-extensions/man/html/vkAcquireNextImageKHR.html.

Finally we return the value of the result, which will be the actual swapchain image index itself.

Let’s now use this function in our render loop:

bool renderBegin(const ast::VulkanDevice& device)
{
    // Get the appropriate graphics fence and semaphore for the current render frame.
    const vk::Fence& graphicsFence{graphicsFences[currentFrameIndex].get()};
    const vk::Semaphore& graphicsSemaphore{graphicsSemaphores[currentFrameIndex].get()};

    try
    {
        // Attempt to acquire the next swapchain image index to target.
        currentSwapchainImageIndex = ::acquireNextImageIndex(device.getDevice(),
                                                             swapchain.getSwapchain(),
                                                             graphicsFence,
                                                             graphicsSemaphore);
    }
    catch (vk::OutOfDateKHRError outOfDateError)
    {
        // We cannot render with the current swapchain - it needs to be recreated.
        return false;
    }

    return true;
}

First off, we use the currentFrameIndex to find out which graphics fence and semaphore should be associated with the graphics operations for the current render loop frame.

We then call our acquireNextImageIndex function to get the currentSwapchainImageIndex, catching any vk::OutOfDateKHRError errors - these errors tell us that something about the swapchain has caused it to become invalid and we should recreate it. The VulkanContext will handle the recreation of the VulkanRenderContext if it receives a result of false from the renderBegin function. We’ll implement the recreation of the render context later in this article.

Assuming we were able to acquire the next swapchain image index, we can then proceed to configure the other Vulkan components ready for rendering.

Begin the command buffer

reset and begin the command buffer at the frame index position within the command buffers list.
set the viewport of the command buffer to our precomputed viewport.

The command buffer is the destination for all of our drawing commands during the render loop. We already have our list of command buffers - one for each swapchain image stored in the commandBuffers field. Now that we have the swapchain image index, we can identify which command buffer should be used for the remainder of the current render frame:

bool renderBegin(const ast::VulkanDevice& device)
{
    ...

    // Grab the command buffer to use for the current swapchain image index.
    const vk::CommandBuffer& commandBuffer{commandBuffers[currentSwapchainImageIndex].get()};

    // Reset the command buffer to a fresh state.
    commandBuffer.reset(vk::CommandBufferResetFlagBits::eReleaseResources);

    // Begin the command buffer.
    vk::CommandBufferBeginInfo commandBufferBeginInfo{vk::CommandBufferUsageFlagBits::eOneTimeSubmit, nullptr};
    commandBuffer.begin(&commandBufferBeginInfo);

    // Configure the scissor.
    commandBuffer.setScissor(
        0,         // Which scissor to start at
        1,         // How many scissors to apply
        &scissor); // Scissor data

    // Configure the viewport.
    commandBuffer.setViewport(
        0,          // Which viewport to start at
        1,          // How many viewports to apply
        &viewport); // Viewport data

    return true;
}

The flow of this function is:

  1. Look up the command buffer associated with the current swapchain image index.
  2. Reset it so it is in a fresh state, releasing any resources it was holding.
  3. Begin recording into it, using the eOneTimeSubmit usage flag.
  4. Apply our precomputed scissor and viewport to it.

Begin the render pass

create a render pass info object with our render pass instance, frame buffer and colour / depth clear attributes.
request the command buffer to begin the render pass with the render pass info

We will now record a command into the command buffer to begin the render pass. This is the point where we will use our VulkanRenderPass instance along with the frame buffers we created in the previous article as well as the clear values for wiping out the display to a colour and depth buffer at the start of the render. The command will be of the type vk::RenderPassBeginInfo - you can read about it in the doco: https://www.khronos.org/registry/vulkan/specs/1.1-extensions/man/html/VkRenderPassBeginInfo.html.

After creating the render pass info object, we instruct our command buffer to beginRenderPass, passing it the info object and specifying vk::SubpassContents::eInline for the subpass content type.

Read up on the render pass info object here: https://www.khronos.org/registry/vulkan/specs/1.1-extensions/man/html/vkCmdBeginRenderPass.html.

The subpass content type is described here: https://www.khronos.org/registry/vulkan/specs/1.1-extensions/man/html/VkSubpassContents.html. We are using eInline as we are not using any secondary command buffers in our application.

bool renderBegin(const ast::VulkanDevice& device)
{
    ...

    // Define the render pass attributes to apply.
    vk::RenderPassBeginInfo renderPassBeginInfo{
        renderPass.getRenderPass(),                     // Render pass to use
        framebuffers[currentSwapchainImageIndex].get(), // Current frame buffer
        scissor,                                        // Render area
        2,                                              // Clear value count
        clearValues.data()};                            // Clear values

    // Record the begin render pass command.
    commandBuffer.beginRenderPass(&renderPassBeginInfo, vk::SubpassContents::eInline);

    return true;
}

That completes the renderBegin function of our render context class.

We can start to call this function now within our VulkanContext class. Edit vulkan-context.cpp and update its renderBegin internal function to look like the following - note we have introduced a new recreateRenderContext function and have removed the const keyword from the renderContext member field as it needs to be mutable for the recreateRenderContext function:

struct VulkanContext::Internal
{
    ...
    ast::VulkanRenderContext renderContext;

    ...
    void recreateRenderContext()
    {
        device.getDevice().waitIdle();
        renderContext = ast::VulkanRenderContext(window, physicalDevice, device, surface, commandPool);
    }

    bool renderBegin()
    {
        if (!renderContext.renderBegin(device))
        {
            recreateRenderContext();
            return false;
        }

        return true;
    }

The new recreateRenderContext function is what we invoke if we can’t successfully begin a render frame - it waits for the logical device to become idle then completely regenerates our renderContext field. We know that renderBegin failed if we get back a false result from renderContext.renderBegin, and if this happens we propagate the false result back to the Vulkan application class to deal with.

If the renderBegin invocation succeeded, we simply return true to the Vulkan application class. There is actually a bug in this code which will cause our application to crash - we will resolve this bug toward the end of this article but for the moment we’ll focus on completing our render loop.


End render loop

The renderEnd function complements the renderBegin function. The basic responsibilities of this function are to end the command buffer that was used and orchestrate the submission of the commands to the graphics queue for processing and then to the presentation queue for displaying to the screen. We will also need to accommodate the swapchain being in an invalid state in a similar way to our renderBegin function did.

From our notes at the beginning of this article, the end of the render loop will perform the following operations from the renderEnd implementation point:

begin render loop
    renderBegin implementation ...
    ------------------------------
    renderEnd implementation ...

    request the command buffer to end its render pass and its command recording
    for the current frame index, issue a submit object with the command buffers to the graphics queue
    for the current frame index, issue another submit object with the swapchain to the presentation queue
        - if we detect the swapchain is out of date or sub-optimal, recreate it.

    wait for the presentation submission to be completed
    increment the current frame index, wrapping it to 0 when needed

end render loop

Before stepping through the implementation, edit vulkan-render-context.cpp and add the renderEnd function to the Internal struct then the public function implementation which simply delegates to it:

struct VulkanRenderContext::Internal
{
    ...

    bool renderEnd(const ast::VulkanDevice& device)
    {
        return true;
    }
};

...

bool VulkanRenderContext::renderEnd(const ast::VulkanDevice& device)
{
    return internal->renderEnd(device);
}

Request the current frame command buffer to end

The first thing we will do inside the renderEnd function is ask the command buffer that had been used for the current frame to record Vulkan commands to end the current render pass, then end its recording of commands. This places the command buffer in a state where it can be submitted to our graphics queue to be executed. Notice that we need to use the currentSwapchainImageIndex to look up the same command buffer that had been used during the renderBegin function:

bool renderEnd(const ast::VulkanDevice& device)
{
    // Grab the command buffer to use for the current swapchain image index.
    const vk::CommandBuffer& commandBuffer{commandBuffers[currentSwapchainImageIndex].get()};

    // Request the command buffer to end its recording phase.
    commandBuffer.endRenderPass();
    commandBuffer.end();

    return true;
}

Submit the command buffer to the graphics queue

Our command buffer can now be submitted into the graphics queue for Vulkan to begin processing it:

bool renderEnd(const ast::VulkanDevice& device)
{
    ...

    // Get the appropriate graphics fence and semaphores for the current render frame.
    const vk::Fence& graphicsFence{graphicsFences[currentFrameIndex].get()};
    const vk::Semaphore& graphicsSemaphore{graphicsSemaphores[currentFrameIndex].get()};
    const vk::Semaphore& presentationSemaphore{presentationSemaphores[currentFrameIndex].get()};
    const vk::PipelineStageFlags pipelineStageFlags{vk::PipelineStageFlagBits::eColorAttachmentOutput};

    // Build a submission object for the graphics queue to process.
    vk::SubmitInfo submitInfo{
        1,                       // Wait semaphore count
        &graphicsSemaphore,      // Wait semaphores
        &pipelineStageFlags,     // Pipeline stage flags
        1,                       // Command buffer count
        &commandBuffer,          // Command buffer
        1,                       // Signal semaphore count
        &presentationSemaphore}; // Signal semaphores

    // Submit our command buffer and configuration to the graphics queue.
    device.getGraphicsQueue().submit(1, &submitInfo, graphicsFence);

    return true;
}

Let’s walk through this. Firstly we grab a reference to the graphicsFence, graphicsSemaphore, presentationSemaphore and pipelineStageFlags, using the currentFrameIndex to locate the correct ones. These fields are then fed into a vk::SubmitInfo object which is how we submit requests to perform operations in Vulkan. The fields used in the submit info are:

  1. Wait semaphores: the graphicsSemaphore, which will have been signalled once the swapchain image for this frame was acquired.
  2. Pipeline stage flags: which stage of the pipeline should wait on each of the wait semaphores - described below.
  3. Command buffers: the command buffer we just finished recording.
  4. Signal semaphores: the presentationSemaphore, which will be signalled once the submitted commands have completed.

pWaitDstStageMask is a pointer to an array of pipeline stages at which each corresponding semaphore wait will occur.

We are selecting eColorAttachmentOutput which the documentation describes as:

VK_PIPELINE_STAGE_COLOR_ATTACHMENT_OUTPUT_BIT specifies the stage of the pipeline after blending where the final color values are output from the pipeline. This stage also includes subpass load and store operations and multisample resolve operations for framebuffer attachments with a color or depth/stencil format.

Submitting to the graphics queue

The submitInfo object is forwarded into the graphics queue awaiting Vulkan to pick it up and process it:

device.getGraphicsQueue().submit(1, &submitInfo, graphicsFence);

We supply the graphicsFence to the submission to inform Vulkan that once it has completed its tasks it should signal the graphicsFence in addition to signalling the presentation semaphore. This is important because the first thing our acquireNextImageIndex function does is wait for the current graphics fence to be signalled before proceeding.
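To illustrate why this matters, here is a rough timeline (not code) showing how the fences cycle with our maxRenderFrames of 2 - remembering that each fence starts life in the signalled state:

    render frame 0 -> wait on fence[0] (already signalled at creation), reset it, submit work that signals fence[0]
    render frame 1 -> wait on fence[1] (already signalled at creation), reset it, submit work that signals fence[1]
    render frame 2 -> wait on fence[0] again - this only unblocks once the GPU has finished the work submitted in frame 0
    render frame 3 -> wait on fence[1] again ... and so on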

Presenting the current frame

for the current frame index, issue another submit object with the swapchain to the presentation queue
    - if we detect the swapchain is out of date or sub-optimal, recreate it.

While the graphics queue will be used to process all our rendering commands it won’t actually deliver the final rendered images to the display hardware. This is the role of the presentation queue. To submit our rendering to the presentation queue we create a vk::PresentInfoKHR object, which is a bit similar to the vk::SubmitInfo object we created before, only this is specifically for the purposes of presentation:

bool renderEnd(const ast::VulkanDevice& device)
{
    ...

    // Construct an info object to describe what to present to the screen.
    vk::PresentInfoKHR presentationInfo{
        1,                           // Semaphore count
        &presentationSemaphore,      // Wait semaphore
        1,                           // Swapchain count
        &swapchain.getSwapchain(),   // Swapchain
        &currentSwapchainImageIndex, // Image indices
        nullptr};                    // Results

    try
    {
        // Attempt to submit our graphics output to the presentation queue for display.
        // If we receive an out of date error, or the result comes back as sub optimal
        // we will return false as it indicates our swapchain should be recreated.
        if (device.getPresentationQueue().presentKHR(presentationInfo) == vk::Result::eSuboptimalKHR)
        {
            return false;
        }
    }
    catch (vk::OutOfDateKHRError outOfDateError)
    {
        return false;
    }

    return true;
}

The fields in the presentation info object are:

  1. Wait semaphores: the presentationSemaphore, which will have been signalled once our graphics commands have completed.
  2. Swapchains: the swapchain holding the image to present.
  3. Image indices: the currentSwapchainImageIndex identifying which swapchain image to present.
  4. Results: an optional array to receive per-swapchain results, which we leave as nullptr.

Submit presentation info

The call to device.getPresentationQueue().presentKHR(...) is wrapped with a try / catch because it is another potential source of a vk::OutOfDateKHRError indicating the swapchain cannot be used for rendering any more. The actual submission of the presentation info is done through the presentKHR function of the presentation queue. This invocation can also potentially produce a vk::Result::eSuboptimalKHR result which is another cue for us to regenerate our swapchain. If the presentation throws the out of date error, or returns the suboptimal result we will return false from the renderEnd function, which will subsequently cause our render context to be regenerated.

Wait for presentation to complete

wait for the presentation submission to be completed

To wait for the presentation submission to complete we simply ask the presentation queue to wait until it is idle. If we didn’t do this we may end up queueing presentation commands faster than they can be executed which would cause us memory consumption issues:

bool renderEnd(const ast::VulkanDevice& device)
{
    ...

    // We now wait for the presentation to have been completed before continuing.
    device.getPresentationQueue().waitIdle();

    return true;
}

Increment render frame index

increment the current frame index, wrapping it to 0 when needed

Before leaving the renderEnd function we will increment our current render frame index, setting it back to 0 if it exceeds our maximum number of render frames. The final line returns true as we have successfully reached the end of our function:

bool renderEnd(const ast::VulkanDevice& device)
{
    ...

    // Increment our current frame index, wrapping it when it hits our maximum.
    currentFrameIndex = (currentFrameIndex + 1) % maxRenderFrames;

    return true;
}

Update context to end frame

With our renderEnd function implemented in the Vulkan render context class, we can hop back to our VulkanContext class to use it. Edit vulkan-context.cpp and update the renderEnd function with the following:

struct VulkanContext::Internal
{
    ...

    void renderEnd()
    {
        if (!renderContext.renderEnd(device))
        {
            recreateRenderContext();
        }
    }
};

If the renderContext.renderEnd invocation returns false it means our swapchain is in a bad state and we need to recreate the render context again via the recreateRenderContext function.

If you run your application now you should finally see a blank red screen!!!


Almost there!

I’m sure you are incredibly excited about our suave Vulkan red screen however we have a bug to fix before we are done. To see this bug happen, try to resize the window of your application while it is running. The application will crash with a Vulkan validation message similar to this:

UNASSIGNED-CoreValidation-DrawState-SwapchainAlreadyExists(ERROR / SPEC): msgNum: 0 - vkCreateSwapChainKHR(): surface has an existing swapchain other than oldSwapchain
    Objects: 1
       [0] 0x1245b780080, type: 3, name: (null)
Validation(ERROR): msg_code: 0:  [ UNASSIGNED-CoreValidation-DrawState-SwapchainAlreadyExists ]  [ UNASSIGNED-CoreValidation-DrawState-SwapchainAlreadyExists ] Object: 0x1245b780080 (Type = 3) | vkCreateSwapChainKHR(): surface has an existing swapchain other than oldSwapchain

We can get a hint about this problem via the part of the validation message reading:

vkCreateSwapChainKHR(): surface has an existing swapchain other than oldSwapchain

Way back in part 21 of this series we created the Vulkan swapchain class which internally had a free function named vk::UniqueSwapchainKHR createSwapchain(...) whose responsibility was to actually create a swapchain instance.

Within the createSwapchain function we defined a vk::SwapchainCreateInfoKHR object whose last argument was the old swapchain. As you can see we simply passed in a new instance of an empty swapchain as the old swapchain:

namespace
{
    ...

    vk::UniqueSwapchainKHR createSwapchain(...)
    {
        vk::SwapchainCreateInfoKHR createInfo{
            ...
            vk::SwapchainKHR()};                      // Old swapchain

The issue is that when we are recreating a swapchain, we should actually pass in the old swapchain that was being used before. Because we hard coded a blank, empty swapchain as the old swapchain, Vulkan can’t know how to transition from the old swapchain to the new one and therefore crashes when it tries to migrate to a new swapchain instance.

We will fix this bug using the following approach:

  1. Add a new constructor argument to our VulkanSwapchain class representing the old swapchain, passing it through to the createSwapchain free function.
  2. Add a new constructor argument to our VulkanRenderContext class for the old swapchain, giving it a default value so existing callers don’t need to change.
  3. Add a recreate function to the render context class which constructs a new render context using its own internal swapchain as the old swapchain.
  4. Update the VulkanContext class to use the recreate function when regenerating its render context.

New swapchain constructor argument

Open up vulkan-swapchain.hpp and add a new constructor argument representing the old swapchain to use:

namespace ast
{
    struct VulkanSwapchain
    {
        VulkanSwapchain(...
                        const vk::SwapchainKHR& oldSwapchain);

Edit vulkan-swapchain.cpp, adding an oldSwapchain argument to the existing createSwapchain free function in the anonymous namespace. Also change the last parameter in the createInfo object definition to use the oldSwapchain - this is the actual fix to the bug:

namespace
{
    ...

    vk::UniqueSwapchainKHR createSwapchain(
        ...
        const vk::SwapchainKHR& oldSwapchain)
    {
        vk::SwapchainCreateInfoKHR createInfo{
            ...
            oldSwapchain};                            // Old swapchain

Update the Internal struct constructor to expect the oldSwapchain argument and pass it into the createSwapchain function:

struct VulkanSwapchain::Internal
{
    ...

    Internal(...
             const vk::SwapchainKHR& oldSwapchain)
        : ...
          swapchain(::createSwapchain(physicalDevice, device, surface, format, presentationMode, extent, transform, oldSwapchain)),
          ...

Finally motor down to the bottom of the file and update the public constructor implementation, adding the oldSwapchain argument and passing it into the Internal constructor:

VulkanSwapchain::VulkanSwapchain(const ast::SDLWindow& window,
                                 const ast::VulkanPhysicalDevice& physicalDevice,
                                 const ast::VulkanDevice& device,
                                 const ast::VulkanSurface& surface,
                                 const vk::SwapchainKHR& oldSwapchain)
    : internal(ast::make_internal_ptr<Internal>(window, physicalDevice, device, surface, oldSwapchain)) {}

New render context constructor argument

Next up we will tweak our VulkanRenderContext class to allow an old swapchain to be passed into its constructor. We will actually make the new argument have a default value if it isn’t supplied by the caller. Edit vulkan-render-context.hpp and update the signature of the constructor to include the oldSwapchain argument:

namespace ast
{
    struct VulkanRenderContext
    {
        VulkanRenderContext(const ast::SDLWindow& window,
                            const ast::VulkanPhysicalDevice& physicalDevice,
                            const ast::VulkanDevice& device,
                            const ast::VulkanSurface& surface,
                            const ast::VulkanCommandPool& commandPool,
                            const vk::SwapchainKHR& oldSwapchain = vk::SwapchainKHR());

Note that the oldSwapchain argument has a default value of = vk::SwapchainKHR() which is what we had originally in the createSwapchain function.

Next we will update vulkan-render-context.cpp to accommodate the new argument. Edit the Internal constructor to take the old swapchain, then pass it into the ast::VulkanSwapchain constructor:

struct VulkanRenderContext::Internal
{
    ...

    Internal(...
             const vk::SwapchainKHR& oldSwapchain)
        : swapchain(ast::VulkanSwapchain(window, physicalDevice, device, surface, oldSwapchain)),

Also update the public constructor implementation to take the old swapchain and forward it to the internal constructor:

VulkanRenderContext::VulkanRenderContext(...
                                         const vk::SwapchainKHR& oldSwapchain)
    : internal(ast::make_internal_ptr<Internal>(window, physicalDevice, device, surface, commandPool, oldSwapchain)) {}

Add recreate function

The trick about recreating our render context is that the old swapchain will already exist inside the render context we want to dispose of. So how can we create a new render context that somehow needs the private swapchain field of the old render context? The approach we will take is to add a new function named recreate to the render context class itself, allowing us to ask a render context to create a new render context using itself as the source of the old swapchain. Clear as mud?

Edit vulkan-render-context.hpp and add a new function signature to allow us to ask a render context to create another render context:

namespace ast
{
    struct VulkanRenderContext
    {
        ...

        ast::VulkanRenderContext recreate(const ast::SDLWindow& window,
                                          const ast::VulkanPhysicalDevice& physicalDevice,
                                          const ast::VulkanDevice& device,
                                          const ast::VulkanSurface& surface,
                                          const ast::VulkanCommandPool& commandPool);

The recreate function looks very similar to the constructor, except it does not take in an old swapchain - that will be handled internally in the implementation. Open vulkan-render-context.cpp and add the implementation at the bottom of the file:

ast::VulkanRenderContext VulkanRenderContext::recreate(const ast::SDLWindow& window,
                                                       const ast::VulkanPhysicalDevice& physicalDevice,
                                                       const ast::VulkanDevice& device,
                                                       const ast::VulkanSurface& surface,
                                                       const ast::VulkanCommandPool& commandPool)
{
    return ast::VulkanRenderContext(window,
                                    physicalDevice,
                                    device,
                                    surface,
                                    commandPool,
                                    internal->swapchain.getSwapchain());
}

Pretty much all that’s happening here is the construction of a new VulkanRenderContext, with the only real interesting thing being that the oldSwapchain argument is sourced from the current render context’s internal swapchain instance.

Update context class

All that’s left to do is update our VulkanContext class again to change how it creates and recreates the render context field. Edit vulkan-context.cpp and update the recreateRenderContext function within the Internal struct from this:

void recreateRenderContext()
{
    device.getDevice().waitIdle();
    renderContext = ast::VulkanRenderContext(window, physicalDevice, device, surface, commandPool);
}

to this:

void recreateRenderContext()
{
    device.getDevice().waitIdle();
    renderContext = renderContext.recreate(window, physicalDevice, device, surface, commandPool);
}

Run the application again and you should be able to resize the window as much as you like and our application will no longer crash - each time you resize the window the entire renderContext field is being recreated and the older instance destroyed.


Summary

For quite some time I struggled with some of the moving parts in getting a render loop working. In particular I found semaphores and fences difficult to understand, but once the relationship between waiting and signalling clicked in my brain it started to make a lot more sense!

Our Vulkan application now has a very basic render loop up and running which seems a little demoralizing considering how much effort it took to get this far, but which gives us a foundation to grow our renderer with the ability to draw our static meshes and apply texture mapping to them. In the next article we will start filling in the implementation of the Vulkan asset classes which we only stubbed out in this article.

The code for this article can be found here.

Continue to Part 25: Vulkan shader pipeline.

End of part 24