Now that loading mesh data is out of the way we can implement the final asset type for our Vulkan application - textures. There is actually no such thing as a built-in ‘texture’ object in Vulkan; rather, the expectation is that the developer implements a data structure that has the characteristics of a ‘texture’ that Vulkan can read from during rendering.
Specifically in this article we will:
- Check whether anisotropic filtering is available on the physical device and activate it in the logical device.
- Create a new Vulkan texture class composed of an image, an image view and a texture sampler.
- Update the Vulkan asset manager to load and cache textures.
Anisotropic filtering is a device feature that we can enable if it is available which can help to dramatically improve the visual quality of textures that are rendered at an angle from the 3D camera - which in a 3D scene is pretty much always going to happen.
When we author our texture class we will be creating a texture sampler - this is the mechanism through which our Vulkan shader will calculate what pixel from a texture bitmap to apply to a fragment during rendering. A texture sampler defines whether anisotropic filtering should be applied to the texture as it is being rendered, so we need a way to find out if the feature is actually available at run time when constructing the sampler.
To check if anisotropic filtering is available in the current physical device we will take a similar approach to when we checked if shader multisampling was available in an earlier article. Additionally we will need to activate the anisotropic filtering feature in our logical device configuration if we want to use it in the Vulkan instance.
Edit vulkan-physical-device.hpp and add a new function signature so we can find out if the feature is available:
namespace ast
{
struct VulkanPhysicalDevice
{
...
bool isAnisotropicFilteringSupported() const;
Next edit vulkan-physical-device.cpp and add a new free function in the anonymous namespace that calculates if the feature is available:
namespace
{
...
bool getAnisotropicFilteringSupport(const vk::PhysicalDevice& physicalDevice)
{
return physicalDevice.getFeatures().samplerAnisotropy;
}
} // namespace
We will initialise a new field in our Internal struct named anisotropicFilteringSupported to store whether the feature is available and assign its value using the getAnisotropicFilteringSupport free function in the constructor:
struct VulkanPhysicalDevice::Internal
{
...
const bool anisotropicFilteringSupported;
Internal(...)
...
anisotropicFilteringSupported(::getAnisotropicFilteringSupport(physicalDevice)) {}
And lastly we add the public function implementation at the bottom of the file:
bool VulkanPhysicalDevice::isAnisotropicFilteringSupported() const
{
return internal->anisotropicFilteringSupported;
}
So now that our physical device can be queried to discover the availability of the feature we need to update our logical device class to activate it. Edit vulkan-device.cpp and find the code in the createDevice free function that looks like this:
// If shader based multisampling is available we will activate it.
if (physicalDevice.isShaderMultiSamplingSupported())
{
physicalDeviceFeatures.sampleRateShading = true;
}
Add the physicalDevice.isAnisotropicFilteringSupported conditional block of code underneath it to check and activate anisotropic filtering:
// If shader based multisampling is available we will activate it.
if (physicalDevice.isShaderMultiSamplingSupported())
{
physicalDeviceFeatures.sampleRateShading = true;
}
// If anisotropic filtering is available we will activate it.
if (physicalDevice.isAnisotropicFilteringSupported())
{
physicalDeviceFeatures.samplerAnisotropy = true;
}
That’s it - if the feature is available it will be activated in the logical device and we will now be able to safely add anisotropic filtering to our texture sampler code, which we’ll write very soon.
As I mentioned at the start of this article, Vulkan does not have a ‘texture’ class itself - it is up to us to create something that can be used as a texture. The key components that need to be in our texture class are:
- a Vulkan image holding the texture’s pixel data, including its mipmaps
- an image view wrapping the image
- a texture sampler describing how the image should be read during rendering
More about texture samplers
Now is a good time to reflect back on our default Vulkan shader - specifically the fragment shader. Review the fragment shader script in vulkan_shader_source/default.frag which looks like this:
#version 460
layout(binding = 0) uniform sampler2D texSampler;
layout(location = 0) in vec2 inTexCoord;
layout(location = 0) out vec4 outColor;
void main() {
outColor = texture(texSampler, inTexCoord);
}
The important parts of the fragment shader that relate to our texture class are:
- layout(binding = 0) uniform sampler2D texSampler: This represents a texture sampler which we create ourselves and provide to Vulkan during the rendering pipeline as a source for where to look up the colour of the texture for the current texture coordinates.
- layout(location = 0) in vec2 inTexCoord: This is the U,V coordinate of where in the texture sampler to look to choose the colour for the current fragment (pixel). We feed the texture coordinates into the rendering pipeline as part of our ast::Vertex data structure within the mesh vertices data.
- outColor = texture(texSampler, inTexCoord): Inside the main function we assign the output colour of the fragment using the GLSL texture shader API function, with texSampler as the sampler source and inTexCoord as the location in the sampler source to choose the colour from.
We already have the texture coordinates in our ast::VulkanMesh class within its list of vertices. What we don’t yet have is the texture sampler that needs to be bound into the texSampler uniform.
Something else of interest that is probably not obvious is a segment of code we wrote when authoring the ast::VulkanPipeline class (vulkan-pipeline.cpp) which looks like this:
namespace
{
vk::UniqueDescriptorSetLayout createDescriptorSetLayout(const ast::VulkanDevice& device)
{
vk::DescriptorSetLayoutBinding textureBinding{
0, // Binding
vk::DescriptorType::eCombinedImageSampler, // Descriptor type
1, // Descriptor count
vk::ShaderStageFlagBits::eFragment, // Shader stage flags
nullptr}; // Immutable samplers
vk::DescriptorSetLayoutCreateInfo info{
vk::DescriptorSetLayoutCreateFlags(), // Flags
1, // Binding count
&textureBinding}; // Bindings
return device.getDevice().createDescriptorSetLayoutUnique(info);
}
The descriptor set layout that is created in this function defines a textureBinding object, which has its binding slot as 0, descriptor type as eCombinedImageSampler and shader stage as eFragment. In actual fact this texture binding definition, which is then baked into the descriptor set layout, is the exact thing that will allow us to bind a texture sampler object to our pipeline during rendering, and it is directly associated with populating the layout(binding = 0) uniform sampler2D texSampler uniform in the fragment shader.
I won’t show how to take a texture sampler and bind it to this descriptor set layout just yet - we will cover that in the next article - but I felt it was important to give some context about what a texture sampler is for and how it will be used before we steam ahead creating one.
We’ll start our texture class implementation with the image it needs in order to operate. Create vulkan-texture.hpp and vulkan-texture.cpp in the Vulkan source folder. Edit the header file with the following:
#pragma once
#include "../../core/asset-inventory.hpp"
#include "../../core/bitmap.hpp"
#include "../../core/graphics-wrapper.hpp"
#include "../../core/internal-ptr.hpp"
#include "vulkan-command-pool.hpp"
#include "vulkan-device.hpp"
#include "vulkan-image-view.hpp"
#include "vulkan-physical-device.hpp"
namespace ast
{
struct VulkanTexture
{
VulkanTexture(const ast::assets::Texture& textureId,
const ast::VulkanPhysicalDevice& physicalDevice,
const ast::VulkanDevice& device,
const ast::VulkanCommandPool& commandPool,
const ast::Bitmap& bitmap);
const ast::assets::Texture& getTextureId() const;
const ast::VulkanImageView& getImageView() const;
const vk::Sampler& getSampler() const;
private:
struct Internal;
ast::internal_ptr<Internal> internal;
};
} // namespace ast
The constructor takes a few Vulkan components needed to carry out the internal Vulkan implementation along with the bitmap which will be the source image for our texture. The idea is that the bitmap data will be loaded through our existing assets system agnostically of Vulkan (and OpenGL) and translated into something Vulkan specific. We did the same thing in OpenGL for its texture class (opengl-texture.hpp).
The getImageView and getSampler functions allow access to the Vulkan image view and sampler, while the getTextureId function lets a visitor know what the unique key is for the texture - these will be required in the next article in order to generate a descriptor set for the texture to feed into our pipeline.
We’ll build up the texture class a chunk at a time. Edit vulkan-texture.cpp and paste the following code:
#include "vulkan-texture.hpp"
#include "vulkan-buffer.hpp"
#include "vulkan-image.hpp"
#include <cmath>
using ast::VulkanTexture;
namespace
{
void generateMipMaps(const ast::VulkanDevice& device,
const ast::VulkanCommandPool& commandPool,
const ast::VulkanImage& image)
{
// TODO: Implement me.
}
ast::VulkanImage createImage(const ast::VulkanPhysicalDevice& physicalDevice,
const ast::VulkanDevice& device,
const ast::VulkanCommandPool& commandPool,
const ast::Bitmap& bitmap)
{
uint32_t imageWidth{bitmap.getWidth()};
uint32_t imageHeight{bitmap.getHeight()};
uint32_t mipLevels{static_cast<uint32_t>(std::floor(std::log2(std::max(imageWidth, imageHeight)))) + 1};
vk::DeviceSize bufferSize{imageWidth * imageHeight * 4};
ast::VulkanBuffer stagingBuffer{
physicalDevice,
device,
bufferSize,
vk::BufferUsageFlagBits::eTransferSrc,
vk::MemoryPropertyFlagBits::eHostVisible | vk::MemoryPropertyFlagBits::eHostCoherent,
bitmap.getPixelData()};
ast::VulkanImage image{
commandPool,
physicalDevice,
device,
imageWidth,
imageHeight,
mipLevels,
vk::SampleCountFlagBits::e1,
vk::Format::eR8G8B8A8Unorm,
vk::ImageTiling::eOptimal,
vk::ImageUsageFlagBits::eTransferDst | vk::ImageUsageFlagBits::eTransferSrc | vk::ImageUsageFlagBits::eSampled,
vk::MemoryPropertyFlagBits::eDeviceLocal,
vk::ImageLayout::eUndefined,
vk::ImageLayout::eTransferDstOptimal};
vk::UniqueCommandBuffer commandBuffer{commandPool.beginCommandBuffer(device)};
vk::ImageSubresourceLayers imageSubresource{
vk::ImageAspectFlagBits::eColor, // Aspect mask
0, // Mip level
0, // Base array layer
1}; // Layer count
vk::Extent3D imageExtent{
imageWidth, // Width
imageHeight, // Height
1}; // Depth
vk::BufferImageCopy bufferImageCopy{
0, // Buffer offset
0, // Buffer row length
0, // Buffer image height
imageSubresource, // Image subresource
vk::Offset3D(), // Image offset
imageExtent}; // Image extent
commandBuffer->copyBufferToImage(stagingBuffer.getBuffer(),
image.getImage(),
vk::ImageLayout::eTransferDstOptimal,
1,
&bufferImageCopy);
commandPool.endCommandBuffer(commandBuffer.get(), device);
::generateMipMaps(device, commandPool, image);
return image;
}
} // namespace
struct VulkanTexture::Internal
{
const ast::assets::Texture textureId;
const ast::VulkanImage image;
Internal(const ast::assets::Texture& textureId,
const ast::VulkanPhysicalDevice& physicalDevice,
const ast::VulkanDevice& device,
const ast::VulkanCommandPool& commandPool,
const ast::Bitmap& bitmap)
: textureId(textureId),
image(::createImage(physicalDevice, device, commandPool, bitmap)) {}
};
VulkanTexture::VulkanTexture(const ast::assets::Texture& textureId,
const ast::VulkanPhysicalDevice& physicalDevice,
const ast::VulkanDevice& device,
const ast::VulkanCommandPool& commandPool,
const ast::Bitmap& bitmap)
: internal(ast::make_internal_ptr<Internal>(textureId,
physicalDevice,
device,
commandPool,
bitmap)) {}
We begin with the textureId field in the Internal structure which simply stores the textureId constructor argument. We can also add the public function implementation to get the texture id at the bottom of the file:
const ast::assets::Texture& VulkanTexture::getTextureId() const
{
return internal->textureId;
}
Next is the image field in the Internal structure which has a data type of ast::VulkanImage. The createImage free function constructs this field and starts off like so:
namespace
{
ast::VulkanImage createImage(const ast::VulkanPhysicalDevice& physicalDevice,
const ast::VulkanDevice& device,
const ast::VulkanCommandPool& commandPool,
const ast::Bitmap& bitmap)
{
uint32_t imageWidth{bitmap.getWidth()};
uint32_t imageHeight{bitmap.getHeight()};
The bitmap argument is first queried to find out how wide and tall the image should be.
Mipmap calculation
The next line of code performs a calculation to determine how many mipmaps should be generated:
uint32_t mipLevels{static_cast<uint32_t>(std::floor(std::log2(std::max(imageWidth, imageHeight)))) + 1};
Mipmapping gives us improved quality in our rendering by creating multiple versions of a texture image at different sizes, such that the distance from the camera determines the optimal size of the texture to use to avoid visual anomalies. A summary description can be found here: https://en.wikipedia.org/wiki/Mipmap.
In our OpenGL application the generation of mipmaps for a texture was trivial (see opengl-texture.cpp):
GLuint createTexture(const ast::Bitmap& bitmap)
{
...
glGenerateMipmap(GL_TEXTURE_2D);
However, as usual, Vulkan doesn’t really come with an automatic way to generate mipmaps or even determine how many to generate. The formula in our Vulkan calculation operates by choosing the larger of the width or height of the source bitmap, then taking the log base 2 of it, which tells us how many times the image can be halved before reaching 1 pixel. For example if the image size is ( 1024 x 512 ): max(1024, 512) = 1024, log2(1024) = 10, then we add 1 to account for the original full sized image, giving 10 + 1 = 11. This tells us that we need to specify a mipmap level count of 11.
Staging buffer
Creating an image requires us to use buffers to stage the image data so we need to know ahead of time how many bytes of data we are dealing with. We will take the imageWidth multiplied by the imageHeight then multiply the result by 4 because we know that our source bitmap object is always in an RGBA (red, green, blue, alpha) format, meaning there are 4 bytes representing each pixel:
vk::DeviceSize bufferSize{imageWidth * imageHeight * 4};
The next operation should seem quite familiar if you recall the createDeviceLocalBuffer function we wrote in the previous article - basically we must create a staging buffer which we copy our bitmap data into, then ask Vulkan to move it into a proper image object on our behalf. The staging buffer is created like so:
ast::VulkanBuffer stagingBuffer{
physicalDevice,
device,
bufferSize,
vk::BufferUsageFlagBits::eTransferSrc,
vk::MemoryPropertyFlagBits::eHostVisible | vk::MemoryPropertyFlagBits::eHostCoherent,
bitmap.getPixelData()};
Note that we specify that the staging buffer is used as a transfer source and that it should be stored in host visible, coherent memory. We also supply bitmap.getPixelData() as the content to copy into the staging buffer memory.
We then create a Vulkan image object using our existing ast::VulkanImage class like so:
ast::VulkanImage image{
commandPool,
physicalDevice,
device,
imageWidth,
imageHeight,
mipLevels,
vk::SampleCountFlagBits::e1,
vk::Format::eR8G8B8A8Unorm,
vk::ImageTiling::eOptimal,
vk::ImageUsageFlagBits::eTransferDst | vk::ImageUsageFlagBits::eTransferSrc | vk::ImageUsageFlagBits::eSampled,
vk::MemoryPropertyFlagBits::eDeviceLocal,
vk::ImageLayout::eUndefined,
vk::ImageLayout::eTransferDstOptimal};
We are passing in the basic image properties such as width, height and mipmap levels. The multisampling level is set to e1 and the colour format to 8 bit RGBA via vk::Format::eR8G8B8A8Unorm. One of the interesting image usage flags here is vk::ImageUsageFlagBits::eSampled which allows this image to be used as the source for a texture sampler - which we’ll need. The image is also marked to be stored in device local memory through vk::MemoryPropertyFlagBits::eDeviceLocal, which is also why we need the staging buffer, as we cannot manipulate device local memory directly ourselves.
Update image transition layouts
The final arguments declare that the image will transition from eUndefined to eTransferDstOptimal, which actually requires us to make a small change to our ast::VulkanImage class. Hop over to vulkan-image.cpp and locate the transitionLayout free function, which is where we model all the transition states that we allow in our application. If you browse the scenarios we have defined in that function you’ll see that we do not have one that takes undefined and transitions to transfer destination optimal. We need to add this, otherwise our application would crash when attempting to create our texture image. Add the following conditional block under the existing ones:
namespace
{
...
void transitionLayout(...)
{
...
// Scenario: undefined -> transfer destination optimal
if (oldLayout == vk::ImageLayout::eUndefined && newLayout == vk::ImageLayout::eTransferDstOptimal)
{
barrier.dstAccessMask = vk::AccessFlagBits::eTransferWrite;
return ::applyTransitionLayoutCommand(device,
commandPool,
vk::PipelineStageFlagBits::eTopOfPipe,
vk::PipelineStageFlagBits::eTransfer,
barrier);
}
Transfer staging buffer to image
You can now close vulkan-image.cpp and return to vulkan-texture.cpp where the next step is to perform a Vulkan command to transfer our staging buffer into the new image object. This is achieved through the copyBufferToImage Vulkan function on the command buffer, taking the staging buffer to copy from and the image to copy into. The copy operation also targets mip level 0 via the imageSubresource configuration, so really we are just copying the full sized image as the entry for mip level 0. We will be generating the remaining mipmap images afterward:
vk::UniqueCommandBuffer commandBuffer{commandPool.beginCommandBuffer(device)};
vk::ImageSubresourceLayers imageSubresource{
vk::ImageAspectFlagBits::eColor, // Aspect mask
0, // Mip level
0, // Base array layer
1}; // Layer count
vk::Extent3D imageExtent{
imageWidth, // Width
imageHeight, // Height
1}; // Depth
vk::BufferImageCopy bufferImageCopy{
0, // Buffer offset
0, // Buffer row length
0, // Buffer image height
imageSubresource, // Image subresource
vk::Offset3D(), // Image offset
imageExtent}; // Image extent
commandBuffer->copyBufferToImage(stagingBuffer.getBuffer(),
image.getImage(),
vk::ImageLayout::eTransferDstOptimal,
1,
&bufferImageCopy);
commandPool.endCommandBuffer(commandBuffer.get(), device);
Generate remaining mipmaps
Now that we have a Vulkan image that is seeded with the original full sized bitmap data we need to generate all the remaining mipmap sub images then return the fully formed image object:
::generateMipMaps(device, commandPool, image);
return image;
}
Populate the stubbed generateMipMaps free function with the following:
void generateMipMaps(const ast::VulkanDevice& device,
const ast::VulkanCommandPool& commandPool,
const ast::VulkanImage& image)
{
vk::ImageSubresourceRange barrierSubresourceRange{
vk::ImageAspectFlagBits::eColor, // Aspect mask
0, // Base mip level
1, // Level count
0, // Base array layer
1}; // Layer count
vk::ImageMemoryBarrier barrier{
vk::AccessFlags(), // Source access mask
vk::AccessFlags(), // Destination access mask
vk::ImageLayout::eUndefined, // Old layout
vk::ImageLayout::eUndefined, // New layout
VK_QUEUE_FAMILY_IGNORED, // Source queue family index
VK_QUEUE_FAMILY_IGNORED, // Destination queue family index
image.getImage(), // Image
barrierSubresourceRange}; // Subresource range
vk::UniqueCommandBuffer commandBuffer{commandPool.beginCommandBuffer(device)};
int32_t mipWidth{static_cast<int32_t>(image.getWidth())};
int32_t mipHeight{static_cast<int32_t>(image.getHeight())};
uint32_t mipLevels{image.getMipLevels()};
for (uint32_t mipLevel = 1; mipLevel < mipLevels; mipLevel++)
{
barrier.subresourceRange.baseMipLevel = mipLevel - 1;
barrier.oldLayout = vk::ImageLayout::eTransferDstOptimal;
barrier.newLayout = vk::ImageLayout::eTransferSrcOptimal;
barrier.srcAccessMask = vk::AccessFlagBits::eTransferWrite;
barrier.dstAccessMask = vk::AccessFlagBits::eTransferRead;
commandBuffer->pipelineBarrier(vk::PipelineStageFlagBits::eTransfer,
vk::PipelineStageFlagBits::eTransfer,
vk::DependencyFlags(),
0, nullptr,
0, nullptr,
1, &barrier);
vk::ImageSubresourceLayers sourceSubresource{
vk::ImageAspectFlagBits::eColor, // Aspect mask
mipLevel - 1, // Mip level
0, // Base array layer
1}; // Layer count
std::array<vk::Offset3D, 2> sourceOffsets{
vk::Offset3D{0, 0, 0},
vk::Offset3D{mipWidth, mipHeight, 1}};
vk::ImageSubresourceLayers destinationSubresource{
vk::ImageAspectFlagBits::eColor, // Aspect mask
mipLevel, // Mip level
0, // Base array layer
1}; // Layer count
std::array<vk::Offset3D, 2> destinationOffsets{
vk::Offset3D{0, 0, 0},
vk::Offset3D{mipWidth > 1 ? mipWidth / 2 : 1, mipHeight > 1 ? mipHeight / 2 : 1, 1}};
vk::ImageBlit blit{
sourceSubresource, // Source subresource
sourceOffsets, // Source offsets
destinationSubresource, // Destination subresource
destinationOffsets}; // Destination offsets
commandBuffer->blitImage(image.getImage(), vk::ImageLayout::eTransferSrcOptimal,
image.getImage(), vk::ImageLayout::eTransferDstOptimal,
1, &blit,
vk::Filter::eLinear);
barrier.oldLayout = vk::ImageLayout::eTransferSrcOptimal;
barrier.newLayout = vk::ImageLayout::eShaderReadOnlyOptimal;
barrier.srcAccessMask = vk::AccessFlagBits::eTransferRead;
barrier.dstAccessMask = vk::AccessFlagBits::eShaderRead;
commandBuffer->pipelineBarrier(vk::PipelineStageFlagBits::eTransfer,
vk::PipelineStageFlagBits::eFragmentShader,
vk::DependencyFlags(),
0, nullptr,
0, nullptr,
1, &barrier);
if (mipWidth > 1)
{
mipWidth /= 2;
}
if (mipHeight > 1)
{
mipHeight /= 2;
}
}
barrier.subresourceRange.baseMipLevel = mipLevels - 1;
barrier.oldLayout = vk::ImageLayout::eTransferDstOptimal;
barrier.newLayout = vk::ImageLayout::eShaderReadOnlyOptimal;
barrier.srcAccessMask = vk::AccessFlagBits::eTransferWrite;
barrier.dstAccessMask = vk::AccessFlagBits::eShaderRead;
commandBuffer->pipelineBarrier(
vk::PipelineStageFlagBits::eTransfer,
vk::PipelineStageFlagBits::eFragmentShader,
vk::DependencyFlags(),
0, nullptr,
0, nullptr,
1, &barrier);
commandPool.endCommandBuffer(commandBuffer.get(), device);
}
The code in this function is heavily influenced by the very well written article here: https://vulkan-tutorial.com/Generating_Mipmaps. The overview of what happens for each mip level in the loop is:
- Transition the previous mip level so it can act as the source of a transfer operation.
- Perform a blitImage operation specifying where and what region of the source image to copy from, and where and what region of the destination image to copy to.
- Transition the previous mip level into the eFragmentShader state, which is the state the main image needs to end up in.
After the loop, the last mip level is transitioned the same way, since it is never used as a blit source.
As we’ve learned in previous articles, a Vulkan image is often coupled with an image view which behaves like a wrapper to interact with the underlying image. We will now add an image view to our texture class that wraps the image we just created.
Add a new anonymous free function to create an image view like so:
namespace
{
...
ast::VulkanImageView createImageView(const ast::VulkanDevice& device, const ast::VulkanImage& image)
{
return ast::VulkanImageView(device.getDevice(),
image.getImage(),
image.getFormat(),
vk::ImageAspectFlagBits::eColor,
image.getMipLevels());
}
} // namespace
Not too complicated - in fact it really just instantiates an ast::VulkanImageView with our previously created image object. Revisit the Internal structure and add a new field named imageView, initialising it with the createImageView function:
struct VulkanTexture::Internal
{
const ast::assets::Texture textureId;
const ast::VulkanImage image;
const ast::VulkanImageView imageView;
Internal(const ast::assets::Texture& textureId,
const ast::VulkanPhysicalDevice& physicalDevice,
const ast::VulkanDevice& device,
const ast::VulkanCommandPool& commandPool,
const ast::Bitmap& bitmap)
: textureId(textureId),
image(::createImage(physicalDevice, device, commandPool, bitmap)),
imageView(::createImageView(device, image)) {}
};
We can also now fill in the public getImageView function at the bottom of the file:
const ast::VulkanImageView& VulkanTexture::getImageView() const
{
return internal->imageView;
}
The final component our texture class needs is the texture sampler I mentioned earlier. Add the createSampler anonymous free function to help us create our texture sampler:
namespace
{
...
vk::UniqueSampler createSampler(const ast::VulkanPhysicalDevice& physicalDevice,
const ast::VulkanDevice& device,
const ast::VulkanImage& image)
{
float maxLod{static_cast<float>(image.getMipLevels())};
vk::Bool32 anisotropyEnabled = physicalDevice.isAnisotropicFilteringSupported() ? VK_TRUE : VK_FALSE;
vk::SamplerCreateInfo info{
vk::SamplerCreateFlags(), // Flags
vk::Filter::eLinear, // Mag filter
vk::Filter::eLinear, // Min filter
vk::SamplerMipmapMode::eLinear, // Mipmap mode
vk::SamplerAddressMode::eRepeat, // Address mode U
vk::SamplerAddressMode::eRepeat, // Address mode V
vk::SamplerAddressMode::eRepeat, // Address mode W
0.0f, // Mip LOD bias
anisotropyEnabled, // Anisotropy enabled
anisotropyEnabled ? 8.0f : 1.0f, // Max anisotropy
VK_FALSE, // Compare enable
vk::CompareOp::eNever, // Compare op
0.0f, // Min LOD
maxLod, // Max LOD
vk::BorderColor::eIntOpaqueBlack, // Border color
VK_FALSE}; // UnnormalizedCoordinates
return device.getDevice().createSamplerUnique(info);
}
} // namespace
We start the function by figuring out the maxLod, or maximum level of detail:
float maxLod{static_cast<float>(image.getMipLevels())};
This correlates to the number of mipmap levels we baked into the texture image, but is required to be specified as a float so it can be passed into the vk::SamplerCreateInfo structure.
There is a slightly curious data type used to store whether anisotropic filtering is enabled, in the form of a vk::Bool32:
vk::Bool32 anisotropyEnabled = physicalDevice.isAnisotropicFilteringSupported() ? VK_TRUE : VK_FALSE;
We can’t use a regular bool here or the info object won’t accept it. In any case, this is the point where we use the isAnisotropicFilteringSupported function we wrote at the beginning of the article to know if anisotropic filtering is available.
Note: I am setting the maximum anisotropy to 8 if the feature is supported. For higher quality we could go to 16, however be aware that it does affect performance, and even 8 might be high for mobile devices with weaker GPUs.
The info object then defines all the attributes of the texture sampler we want to create. Some of the interesting attributes are related to what kind of filtering effects to apply to the sampled texture. You can read about the vk::SamplerCreateInfo type here: https://www.khronos.org/registry/vulkan/specs/1.1-extensions/man/html/VkSamplerCreateInfo.html.
Once again we’ll revisit our Internal structure to add a field for the texture sampler, constructing it via our createSampler function:
struct VulkanTexture::Internal
{
const ast::assets::Texture textureId;
const ast::VulkanImage image;
const ast::VulkanImageView imageView;
const vk::UniqueSampler sampler;
Internal(const ast::assets::Texture& textureId,
const ast::VulkanPhysicalDevice& physicalDevice,
const ast::VulkanDevice& device,
const ast::VulkanCommandPool& commandPool,
const ast::Bitmap& bitmap)
: textureId(textureId),
image(::createImage(physicalDevice, device, commandPool, bitmap)),
imageView(::createImageView(device, image)),
sampler(::createSampler(physicalDevice, device, image)) {}
};
We can also fill in the public getSampler function at the bottom of the file:
const vk::Sampler& VulkanTexture::getSampler() const
{
return internal->sampler.get();
}
That completes our Vulkan texture class for now. The next step is to start using it in our Vulkan asset manager.
Our Vulkan asset manager will load and cache textures in much the same way as for mesh objects. We also need a way to ask the asset manager for a texture from the outside. Open vulkan-asset-manager.hpp and add the following header:
#include "vulkan-texture.hpp"
Then add a new public function signature to get a texture from the asset manager:
namespace ast
{
struct VulkanAssetManager
{
...
const ast::VulkanTexture& getTexture(const ast::assets::Texture& texture) const;
Now jump into vulkan-asset-manager.cpp and add a new anonymous free function to help us create a texture:
namespace
{
...
ast::VulkanTexture createTexture(const ast::assets::Texture& texture,
const ast::VulkanPhysicalDevice& physicalDevice,
const ast::VulkanDevice& device,
const ast::VulkanCommandPool& commandPool,
const ast::VulkanRenderContext& renderContext)
{
std::string texturePath{ast::assets::resolveTexturePath(texture)};
ast::log("ast::VulkanAssetManager::createTexture", "Creating texture from " + texturePath);
ast::Bitmap bitmap{ast::assets::loadBitmap(texturePath)};
return ast::VulkanTexture(texture,
physicalDevice,
device,
commandPool,
bitmap);
}
} // namespace
The createTexture function first fetches the path by using resolveTexturePath in our asset utilities:
std::string texturePath{ast::assets::resolveTexturePath(texture)};
Then after some logging output the bitmap data is loaded through our loadBitmap asset function, recalling that ast::Bitmap is agnostic to Vulkan and OpenGL and represents an RGBA encoded view of the byte data to create a texture from. Afterward we simply construct a new ast::VulkanTexture object and return it.
Now go to the Internal structure and add a new hash map to act as our texture cache, alongside the other ones we already have:
struct VulkanAssetManager::Internal
{
std::unordered_map<ast::assets::Pipeline, ast::VulkanPipeline> pipelineCache;
std::unordered_map<ast::assets::StaticMesh, ast::VulkanMesh> staticMeshCache;
std::unordered_map<ast::assets::Texture, ast::VulkanTexture> textureCache;
We can now update the existing loadAssetManifest function to load our textures as well as our pipelines and meshes. Add the following code to iterate the textures in the manifest argument, after the pipeline and mesh loading code:
void loadAssetManifest(...)
{
...
for (const auto& texture : assetManifest.textures)
{
if (textureCache.count(texture) == 0)
{
textureCache.insert(std::make_pair(
texture,
::createTexture(texture, physicalDevice, device, commandPool, renderContext)));
}
}
}
We have no need to update the reloadContextualAssets function because the loaded textures won’t be influenced by Vulkan lifecycle changes.
To finish off, add the public function implementation to get a texture to the bottom of the file:
const ast::VulkanTexture& VulkanAssetManager::getTexture(const ast::assets::Texture& texture) const
{
return internal->textureCache.at(texture);
}
All done! Run the application now and you should see the logging output indicating that our textures have loaded successfully:
Sweet! We are sooooooo close now - in the next article we will update our Vulkan rendering code to at very long last render our 3D scene!
The code for this article can be found here.
Continue to Part 28: Vulkan render scene.
End of part 27