Render rewrite: Clyde goes HRT

Well since we have a forum for these topics now, might as well.

Current status

Our current renderer, Clyde, is an old OpenGL 3 renderer I wrote when I didn’t know what I was doing. It is plagued by many issues:

  • Internals are extremely messy.
  • Advanced rendering features have to be implemented directly in Robust; the renderer isn’t flexible enough for content to do them.
  • The shader parsing is a pretty crappy handrolled thing.
  • OpenGL is a broken API and causes tons of platform integration issues.

The problem space

Platform native graphics APIs are a mess:

  • Direct3D 11 is decently nice to use, has good hardware support, but is Windows only.
  • Direct3D 12 is still Windows only and very complicated.
  • OpenGL is utterly broken and terrible in every way, but “works everywhere” for some definition of “works”.
  • Vulkan is only native on Linux. On Windows it is supported by drivers, but I don’t trust that enough to rely on. MoltenVK is also ehhh. And it’s still too complicated, like D3D12.
  • Metal is Apple-only but apparently pretty nice.

We would like to avoid having to maintain multiple graphics backends to support all these platforms. Furthermore, we’d need to design a powerful API that content can use to render custom graphics without the constraints of our existing API.

The plan

The plan (that I started working on 2 years ago and didn’t finish) is to move the renderer to WebGPU. Despite being designed web-first and having a lot of stupid moments in their standards process, it is looking like a decent option all things considered.

One of the core ideas I have is that we should directly expose WebGPU’s API to content. It should be safe given that it is after all designed to run in web browsers. It’ll be a very powerful API suitable for many things we currently have to hardcode into the engine.

The entire plan will involve a lot of Rust code to tie everything into wgpu. This is something I want to do anyways.

For shaders, it’s probably best to use WESL or something.
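For context, WESL builds on WGSL, WebGPU’s shading language. Purely as an illustrative sketch (this isn’t tied to any actual engine API of ours), a minimal content shader in plain WGSL would look something like:

```
// A trivial vertex/fragment pair in plain WGSL.
// WESL layers imports and conditional compilation on top of this syntax.
@vertex
fn vs_main(@location(0) pos: vec2<f32>) -> @builtin(position) vec4<f32> {
    return vec4<f32>(pos, 0.0, 1.0);
}

@fragment
fn fs_main() -> @location(0) vec4<f32> {
    // Solid red; real content shaders would sample textures, read uniforms, etc.
    return vec4<f32>(1.0, 0.0, 0.0, 1.0);
}
```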

Possible issues

Native library stability

WebGPU and wgpu are not yet ABI stable. This means future upgrades may break older engine versions. Likely the sanest way to solve this is to have the launcher dynamically download the native library build like it does the engine, but it will require signing the binaries for me to be comfortable with that. We’d kinda want this for better HWID in the future anyways though.
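One cheap mitigation regardless of how the download works: have the native library export an ABI version number the managed side checks before calling anything else, so a mismatched download fails fast instead of crashing mid-frame. A minimal sketch of the native side (all names here are hypothetical, not actual wgpu-native exports):

```rust
/// Hypothetical ABI version constant baked into the native library.
/// The launcher/engine compares this against the version it was built
/// against and refuses to load on mismatch instead of crashing later.
const NATIVE_ABI_VERSION: u32 = 3;

#[no_mangle]
pub extern "C" fn clyde_native_abi_version() -> u32 {
    NATIVE_ABI_VERSION
}

fn main() {
    // Simulate the check the C# side would do via P/Invoke.
    let expected = 3;
    let actual = clyde_native_abi_version();
    assert_eq!(actual, expected, "native library ABI mismatch");
    println!("ABI version {actual} OK");
}
```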

Hardware support

wgpu supports OpenGL 3.3+, D3D12 FL 11_1, Vulkan, and Metal. The tightest constraint is that this may mean dropping 10+ year old Intel iGPUs. According to Steam’s HW Survey, around 1% of people do not have a proper D3D12-capable GPU. Some of those may be able to play with the D3D12 backend regardless (due to the way feature levels work), but the Intel players can’t, because Intel removed the support from their driver due to a vulnerability.

I honestly don’t know if we want to drop this or not.

No offense, but wouldn’t it be easier to refactor Clyde to simply use newer GL features? Most modern vendors already support OpenGL 4.5 with its direct state access, which could significantly speed up performance depending on how it’s used.

Whichever way you’ll choose I still think it’ll be a huge improvement to the engine (& the game), so you do you :stuck_out_tongue:

As I clearly stated in this very topic, OpenGL is a broken API that causes tons of platform integration issues. It is literally impossible for us to continue using it.

Broken vsync. Graphical corruption. Shitass multi-window support. Terrible multithreading.

Please do not ignore the contents of the main post if you don’t know what you’re talking about.

wgpu on desktop uses Vulkan, DX12, or Metal under the hood, so it’s not really “using WebGPU”, just something that looks like it. Not saying that’s a bad thing, but something to keep in mind.

Neither wgpu’s API nor the Rust ABI in general is stable, and the latter changes even between compiler versions. You’ll either have to write a Rust library that exposes a C interface that C# can access, or write bindings to C# in Rust (I can’t find any tools on the Rust side for doing that, but I don’t know the .NET ecosystem that well).
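For the first option, the shape of a Rust library exposing a C interface for C# to P/Invoke is roughly this (a sketch; the function names are made up for illustration, not anything wgpu-native actually exports):

```rust
use std::ffi::CString;
use std::os::raw::c_char;

/// Hypothetical C-ABI entry point C# could bind with [DllImport].
/// Returns a heap-allocated C string the caller must release via the
/// matching free function below.
#[no_mangle]
pub extern "C" fn renderer_backend_name() -> *mut c_char {
    CString::new("webgpu").unwrap().into_raw()
}

/// Frees a string previously returned by renderer_backend_name.
#[no_mangle]
pub extern "C" fn renderer_string_free(s: *mut c_char) {
    if !s.is_null() {
        // Reconstruct the CString so Rust drops the allocation.
        unsafe { drop(CString::from_raw(s)) };
    }
}

fn main() {
    // Exercise the exports the way a P/Invoke caller would.
    let raw = renderer_backend_name();
    let name = unsafe { std::ffi::CStr::from_ptr(raw) }
        .to_str()
        .unwrap()
        .to_owned();
    renderer_string_free(raw);
    println!("{name}");
}
```

The ownership rule (every allocating export has a paired free function) is what keeps the boundary sane; C# would wrap both in a `SafeHandle` or similar.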

Is there a higher-level rendering library for C# that may work better than trying to interface with the underlying GPU APIs directly? The modern APIs are intentionally very complicated so that rendering engine authors can squeeze out maximum performance, by doing things asynchronously and leaving synchronization to the application.

I am not sure what the point of this remark is. It’s unnecessary and pedantic. We are “using WebGPU” as far as the C# code sees. We are not using “the native platform API” as that’s a given for the OS anyways.

This already exists

Doubtful, as this is often the thing the game engine handles. And we’re the game engine here.

Wasn’t aware of wgpu-native. That should hopefully work. Seems like they have some .NET bindings to it already as well.

Re: the Intel issue, the notice you link to mentions that it only applies to 4th generation Intel processors. I don’t think other processors are affected; newer processors list that they support DX12 and Vulkan.

Correct. This is what I was saying in the original post.

The only wobble with using WebGPU over Vulkan/DX12/etc. is developer familiarity in the video game space. That said, I did eyeball where it’s currently being used before sticking my oar in: both Three.js and Cocos support it. For shaders, maximising compatibility with what tech artists are likely to be familiar with is best.

For SS14, as a freeware game with light hardware requirements, it’s much more likely to be played by people on decrepit hardware than is normal for games. I think if you wanted to answer this question with confidence you’d have to gather hardware survey data from players. But realistically, what is the alternative?

The kind of developer that is experienced enough with Vulkan/D3D12 would have no trouble picking up WebGPU, which cannot be said in reverse.

Realistically, there’s not much that can be done here. The closest we could get is to “use HLSL”, and I’m really not sure that’s a good idea.

Yeah that’s the rub really. I either make a WebGPU backend or I make both a D3D11 and WebGPU backend.

According to TechPowerUp, the iGPUs included in the cutoff do appear to support OpenGL 4.3 at minimum (see: the HD Graphics 4200). However, it also states that DX12 is supported, which is outdated information.

It’s likely that these iGPUs would still be supported just fine under WGPU’s OpenGL backend, given that OpenGL implementations are often well-isolated from DirectX implementations in GPU drivers. This would undoubtedly need to be tested by someone with the affected iGPUs lying around. Everything would need testing, in fact, given the bugginess of older Intel drivers.

Generally though, letting users select their backend would be a good way to throw a bone for compatibility with legacy hardware (it wouldn’t necessarily ensure it, but it’d definitely be going above and beyond by today’s standards of hardware compatibility). The OpenGL 3.3 spec was released in 2010 (right alongside OpenGL 4.0’s spec), meaning it’s far more likely to be compatible with legacy hardware than DX12, which was released in 2015 (same year as Windows 10, the minimum Windows version that .NET can run on).
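To make the fallback idea concrete, here’s a toy decision function. This is not wgpu’s actual API (wgpu selects backends through its own instance configuration); it just illustrates the preference order plus user override being suggested:

```rust
/// Illustrative only: the backend families wgpu can target.
/// Not wgpu's real types.
#[derive(Debug, PartialEq, Clone, Copy)]
enum Backend {
    D3D12,
    Vulkan,
    Metal,
    Gl,
}

/// Pick a backend from what the machine reports, preferring modern APIs
/// and falling back to GL for legacy hardware. An explicit user override
/// (e.g. from a launcher dropdown) wins outright.
fn pick_backend(
    user_override: Option<Backend>,
    has_d3d12: bool,
    has_vulkan: bool,
    has_metal: bool,
) -> Backend {
    if let Some(b) = user_override {
        return b;
    }
    if has_metal {
        Backend::Metal
    } else if has_d3d12 {
        Backend::D3D12
    } else if has_vulkan {
        Backend::Vulkan
    } else {
        // Legacy path: e.g. old Intel iGPUs with GL 3.3+ but no D3D12.
        Backend::Gl
    }
}

fn main() {
    // An old Intel iGPU: no D3D12, no Vulkan, no Metal.
    println!("{:?}", pick_backend(None, false, false, false));
    // A user forcing GL despite D3D12 being available.
    println!("{:?}", pick_backend(Some(Backend::Gl), true, false, false));
}
```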

All this being said though, there’s the caveat that WGPU’s GL backend is explicitly labelled experimental and unsupported within the documentation. It’s entirely possible that the GL backend could be dropped outright, or the minimum GL version could change to something that’d exclude the targeted legacy hardware.

There is a cost to trying to maximize compatibility, though. Remember the merge of that new radial menu shader that was just broken on legacy mode? And that was just one checkbox in a menu that wasn’t tested.

What RT itself supports is one thing, the test suite requirements for SS14 development are a different topic.

Tbh that one is just due to OpenGL’s shader model being terrible, and it’s something that would be avoided by a better approach to handling shaders.


Have you considered SDL 3’s GPU API? It is basically an easy-to-use layer over Vulkan, DX12, and Metal. From my limited testing it supports most modern rendering techniques other than highly complex stuff like ray tracing, tessellation, or mesh shaders. Additionally, SDL_shadercross can take in SPIR-V or HLSL and output DXBC, DXIL, SPIR-V, MSL, or HLSL.

For various reasons (sandbox safety, general vibe) I’d personally rather bet on wgpu than sdl_gpu.