Category Archives: Development

Iteration

It is amazing how useful having a blog is.  Simply trying to explain something to someone else can give insight into how complicated or confusing the topic may be.

Case in point: my blog post about the Void Engine. It bothered me that despite most things being automatic, the API looked and sounded overcomplicated. As a result I’ve refactored the API so that the ‘physicals’ layer is no longer exposed to the user. The functionality that used to be exposed is now either required only privately by the implementation or has been surfaced in a cleaner way, such as allowing any resource to be cloned and adding functions to get parent objects from every API object.

Suffice it to say, the user-facing API is now about half the size it used to be, with no loss of functionality.

Iteration can be time consuming but it is necessary. Even more important is to iterate early, before too much has been built on top of immature systems.

The Void Engine

As promised here is an overview of ‘the Void Engine’.

The Void Engine API does not look very much like the DirectX or OpenGL APIs, but it does share some of the concepts.

The engine design was heavily influenced by my experience with the PlayStation 4, and while the current implementation sits on the DirectX 11 API it actually maps better to DirectX 12. Given the design basis of the engine, I also expect that it should be *relatively* straightforward to port to the PS4, though I do have some concerns about the reflection of the PS4 shaders.

The main sections of the API are: the factory; shaders; bindings; resources; and physicals. Every object in the API is handled through a reference-counted smart pointer.

Core to the system are shaders. A shader in the Void Engine represents the entire configuration of the GPU required to perform any action on the GPU. There are no separate shader stages and there is no distinction between graphics and compute shaders.

Shaders are created by a shader library object, which represents a blob of binary data containing compiled shaders and all their configuration and binding details. The binary shader library data is constructed by a bespoke tool which runs as part of the general build process. The shader library tool uses reflection to validate the shader configuration and to construct a compact representation of the operations required to set up the GPU and to bind resources to the shader. At run time, shader libraries are created by the factory object from the binary blob data.
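To give a flavour of the reflection side (this is plain DirectX 11 shader reflection with made-up shader code, not the library tool itself), something like the following is enough to enumerate the name, type and slot of every resource a compiled shader expects:

```cpp
// Minimal sketch: enumerate shader resource bindings via D3D11 reflection.
// Illustrative only; the actual shader library tool is not shown here.
#include <d3d11shader.h>
#include <d3dcompiler.h>
#include <cstdio>
#include <cstring>
#pragma comment(lib, "d3dcompiler.lib")

int main()
{
    const char* src =
        "Texture2D    gAlbedo  : register(t0);\n"
        "SamplerState gSampler : register(s0);\n"
        "cbuffer Tint : register(b0) { float4 gTint; };\n"
        "float4 main(float2 uv : TEXCOORD0) : SV_Target\n"
        "{ return gAlbedo.Sample(gSampler, uv) * gTint; }\n";

    ID3DBlob* code = nullptr;
    ID3DBlob* errors = nullptr;
    if (FAILED(D3DCompile(src, strlen(src), "example.hlsl", nullptr, nullptr,
                          "main", "ps_5_0", 0, 0, &code, &errors)))
    {
        if (errors) printf("%s\n", (const char*)errors->GetBufferPointer());
        return 1;
    }

    // Reflect the compiled blob and list every resource the shader expects.
    ID3D11ShaderReflection* reflect = nullptr;
    if (FAILED(D3DReflect(code->GetBufferPointer(), code->GetBufferSize(),
                          __uuidof(ID3D11ShaderReflection), (void**)&reflect)))
        return 1;

    D3D11_SHADER_DESC desc = {};
    reflect->GetDesc(&desc);
    for (UINT i = 0; i < desc.BoundResources; ++i)
    {
        D3D11_SHADER_INPUT_BIND_DESC bind = {};
        reflect->GetResourceBindingDesc(i, &bind);
        // bind.Name and bind.Type are the kind of name/type pair the
        // engine's bindings are built from; bind.BindPoint is the slot.
        printf("%-10s type=%d slot=%u\n", bind.Name, (int)bind.Type, bind.BindPoint);
    }

    reflect->Release();
    code->Release();
    return 0;
}
```

The name and type pairs reported here are the sort of information the bindings described below are built from.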

Bindings are created by the shader library, either by name or automatically for an instance of a particular shader. Bindings represent resources referenced by the shader code or the shader configuration. Bindings are typed and named (the name comes from the shader source code), so two bindings can share a name as long as they have different types.

Bindings can be bound to a shader instance and act as the interface for selecting specific resources. Binding types represent the different ways that data can be connected to the GPU: the various resource view types, constant buffers, vertex streams, stream-out streams and states.

To automate most binding, bindings can also be held in anonymous binding groups, which can be bound en masse to a shader.

For every binding type, there is a resource type. Resources are constructed by the factory and represent the configuration of a resource including which physical resource to select. For instance, a stream resource configures the offset and stride of the stream and references the physical buffer holding the stream data. In addition to holding configuration data, resources simplify the renaming of physical buffers as data is updated.

Resources can be registered by name into a resource registry. This is a mechanism for sharing resources and for simplifying their assignment. Resource registries can be searched for specific resources using a combination of name and type; they can also automatically create bindings for a specific shader or create a binding group for future use.
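As a standalone sketch of the name-plus-type idea (the real registry is part of the engine and looks different, so treat all of the names here as stand-ins), a registry keyed on both fields allows two resources to share a name as long as their types differ:

```cpp
// Standalone sketch of a name + type keyed registry of reference-counted
// resources. Only the idea is real: a name is unique per type, not globally.
#include <map>
#include <memory>
#include <string>
#include <cstdio>

enum class ResourceType { Texture, ConstantBuffer, VertexStream, State };

struct Resource
{
    std::string name;
    ResourceType type;
};

class ResourceRegistry
{
public:
    void add(const std::shared_ptr<Resource>& r)
    {
        entries_[{r->name, r->type}] = r;
    }

    // Lookup uses the name *and* the type, so "Lighting" the constant buffer
    // and "Lighting" the texture can coexist.
    std::shared_ptr<Resource> find(const std::string& name, ResourceType type) const
    {
        auto it = entries_.find({name, type});
        return it != entries_.end() ? it->second : nullptr;
    }

private:
    std::map<std::pair<std::string, ResourceType>, std::shared_ptr<Resource>> entries_;
};

int main()
{
    ResourceRegistry registry;
    registry.add(std::make_shared<Resource>(Resource{"Lighting", ResourceType::ConstantBuffer}));
    registry.add(std::make_shared<Resource>(Resource{"Lighting", ResourceType::Texture}));

    if (auto cb = registry.find("Lighting", ResourceType::ConstantBuffer))
        printf("found %s as a constant buffer\n", cb->name.c_str());
    return 0;
}
```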

Physicals represent physical memory buffers (including textures, render targets and depth-stencils), views and states. Physicals are created by the factory object. Dynamic physical objects can be updated directly. Physical objects need to be bound to a resource to be used. The relationship between physicals and resources is complicated as some resources access physical memory indirectly via ‘view’ physicals and different parts of a single buffer may be referenced by multiple resources. I am considering moving some of the view and state handling API from the physicals to become directly part of the resource system, but currently the API operates in a consistent manner for all physical resources. If this change does go ahead, it will be mostly cosmetic as the current run-time data pipeline is very effective.
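On the DirectX 11 back end, the direct update of a dynamic physical boils down to a map-with-discard, which is also where the buffer renaming mentioned above actually happens. A plain D3D11 sketch (not the engine's wrapper):

```cpp
// Plain D3D11 sketch of updating a dynamic constant buffer. The engine's
// 'physicals' wrap this kind of call; D3D11_MAP_WRITE_DISCARD is where the
// driver renames (re-allocates) the underlying memory.
#include <d3d11.h>
#include <cstring>

struct FrameConstants
{
    float time;
    float padding[3]; // constant buffer sizes must be multiples of 16 bytes
};

bool UpdateDynamicBuffer(ID3D11DeviceContext* context,
                         ID3D11Buffer* buffer,
                         const FrameConstants& data)
{
    D3D11_MAPPED_SUBRESOURCE mapped = {};
    if (FAILED(context->Map(buffer, 0, D3D11_MAP_WRITE_DISCARD, 0, &mapped)))
        return false;

    memcpy(mapped.pData, &data, sizeof(data));
    context->Unmap(buffer, 0);
    return true;
}
```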

OK, that is pretty much it. It looks quite complicated when laid out in full, but in practice most cases are handled automatically. A shader process that uses specific resources named in the shader source code will have all its bindings, resources and physicals assigned or created automatically. It is only if a custom resource setup is required that specific bindings, resources or physicals need to be worked with.

In the simplest cases, the only code required is to request a shader instance and execute it.
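As a sketch of what that simplest case looks like from the caller's side (every name below is a stand-in, not the engine's real API; the stubs just make the shape of the code concrete):

```cpp
// Hypothetical sketch of the 'simplest case': ask the shader library for a
// shader instance and execute it. None of these names are the real API.
#include <memory>
#include <cstdio>

struct Shader
{
    void execute() { printf("dispatch with automatically bound resources\n"); }
};

struct ShaderLibrary
{
    // In the real engine this step resolves bindings, resources and physicals
    // automatically from the reflected data packed into the library blob.
    std::shared_ptr<Shader> createShader(const char* name)
    {
        printf("creating instance of '%s'\n", name);
        return std::make_shared<Shader>();
    }
};

int main()
{
    ShaderLibrary library;                    // loaded from the binary blob
    auto blur = library.createShader("Blur"); // bindings set up automatically
    blur->execute();                          // run the full GPU configuration
    return 0;
}
```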

Detour on the detour

While I was working on the engine update, a friend pointed out that some of the app behaviour in Windows was not as expected. So in a brief detour I have fixed a number of the issues and updated the downloadable demo. The engine resource handling update is still in progress.

Anyway, the fixes in the latest version are:

The standard Windows keyboard shortcuts for positioning of a window (Win+cursor key) now work.

The app no longer exits if it is minimised.

Handling of window movement and re-sizing in general should be more stable.

Alt+F4 closes the app.

Alt+space now correctly brings up the window menu.

General mouse handling is more consistent.

Engine work

While adding some lookup textures for the shader noise functions, it became apparent that I needed to do some work on the engine that I’d been hoping to leave for later. As I type, I’m about halfway through this work.

So what am I doing and why is it needed?

The engine underlying Locality (‘the Void engine’) abstracts the GPU and resources to allow easy porting to new devices and APIs and to automate most of the hardware settings such as resource bindings and shader chains. There is actually significantly more code in the tools which prepare the data for the run-time engine than in the engine itself.

In theory the Void engine has fully automatic management of physical resources (textures, render targets, constant buffers and so on); unfortunately, in practice I left the creation of physical resources up to an older engine that I already had. Doing this allowed me to move forward with Locality-specific work at an earlier stage in development.

Having two systems operating at the same time has now become a problem. The new system expects to be fully managing resources, but the requirements of the old system mean I have to bundle duplicate data with every new resource and manually manage it, even though the new system is automatically managing caching and garbage collection of ‘proxies’ for these resources.

This has been workable for a while, but now is starting to create a lot of additional work and is cluttering up layers of the engine with dependencies that should not be needed. On top of that is the need to keep a clear understanding of two significantly different architectures in my head all the time and watch out for interactions which don’t map well between the two.

I’m not sure if anyone is reading this blog or can take anything useful away from the above explanation, but anyway, once this update is complete I’ll try to do a new post explaining the architecture and abstractions used by the Void engine.

Objects

The last week has been mostly spent writing generators for the various objects required by the game. If anyone is interested, the objects are: assorted pressure pads, a simple switch, a ‘capture’ switch, a door target selector, a power selector, a sphere, a pole, a bridge, assorted boxes and stair generators.

There is still a bit of work to do on some of them, and I have some shader work to do before others operate properly.

The biggest advantage of using generators rather than building and loading models is the ability to have variations on a theme and dynamic configuration at little cost. For instance, the number of sides of a switch can be adjusted on demand and the iconography dynamically selected to match whatever is needed.
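As a toy example of the idea (not one of the actual generators), the cap of an n-sided switch is just a triangle fan whose side count is a runtime parameter:

```cpp
// Illustrative only: generating a regular n-sided cap for a 'switch' style
// object, to show why a generator makes the side count a runtime parameter
// rather than a baked-in model. The real generators are more involved.
#include <cmath>
#include <cstdio>
#include <vector>

struct Vertex { float x, y, z; };

std::vector<Vertex> GenerateCap(int sides, float radius, float height)
{
    std::vector<Vertex> verts;
    verts.push_back({0.0f, height, 0.0f});            // centre of the cap
    const float step = 6.2831853f / static_cast<float>(sides);
    for (int i = 0; i <= sides; ++i)                   // repeat the first rim vertex to close the fan
    {
        const float a = step * static_cast<float>(i);
        verts.push_back({radius * std::cos(a), height, radius * std::sin(a)});
    }
    return verts;                                      // render as a triangle fan
}

int main()
{
    auto hexSwitch = GenerateCap(6, 0.5f, 0.1f);       // six sides today...
    auto octSwitch = GenerateCap(8, 0.5f, 0.1f);       // ...eight tomorrow, no new assets
    printf("%zu and %zu vertices\n", hexSwitch.size(), octSwitch.size());
    return 0;
}
```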

Anyway, as I said, there is still a bit of work required on this and I need to add some additional support for procedural textures, including decent Perlin and simplex noise generators in the shaders.
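For reference, the classic way to drive Perlin-style noise in a shader is a small lookup texture built from a shuffled permutation table; a generic sketch of building that table on the CPU (not necessarily the implementation I will end up with) looks like this:

```cpp
// Generic sketch of building the permutation data for a Perlin-style noise
// lookup texture. This is the textbook approach, not the engine's code.
#include <algorithm>
#include <cstdint>
#include <cstdio>
#include <numeric>
#include <random>
#include <vector>

// Returns 512 bytes: a shuffled 0..255 permutation, duplicated so a shader
// can index perm[perm[x] + y] without wrapping manually.
std::vector<std::uint8_t> BuildPermutationTable(std::uint32_t seed)
{
    std::vector<std::uint8_t> base(256);
    std::iota(base.begin(), base.end(), 0);
    std::mt19937 rng(seed);
    std::shuffle(base.begin(), base.end(), rng);

    std::vector<std::uint8_t> perm;
    perm.reserve(512);
    perm.insert(perm.end(), base.begin(), base.end());
    perm.insert(perm.end(), base.begin(), base.end());
    return perm; // upload as a 512x1 R8 texture sampled with point filtering
}

int main()
{
    const auto perm = BuildPermutationTable(1234u);
    printf("%zu entries, first is %u\n", perm.size(), (unsigned)perm[0]);
    return 0;
}
```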

New controls, old hardware

Well, getting the keyboard and mouse controls in and playing nicely with both Windows and my existing debugging input was a lot fiddlier than I had expected. It is in now though, and you can check out the changes by getting the latest demo from the downloads page.

Also new is in-demo controls information which you can bring up at any time by pressing the ‘tab’ key.

Other changes are not visible to the user but include some architectural tidying up, optimisations and bug fixes.

Putting in the mouse controls raised an interesting issue. I’m developing on a 2011 HP laptop, which is generally pretty good. Unfortunately there is an issue with Radeon graphics drivers from around 2011 which causes them to stutter, chewing through CPU time and locking the GPU when discarding constant and vertex buffers. Even more unfortunately, I can’t update the drivers to a later version that is free of the problem.

With the mouse controls in, the stuttering became even more apparent, though I’ve partly dealt with the issue by scaling the mouse input by the inverse of the update time.
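Roughly, the workaround looks like this; it is a reconstruction of the idea rather than the actual code, but the point is that a long, stuttery frame no longer turns into an over-sized look movement:

```cpp
// Sketch of the stutter workaround described above (my reconstruction, not
// the game's code): the mouse delta accumulated over the frame is scaled by
// the inverse of the frame's update time before it is applied.
#include <cstdio>

struct MouseLook
{
    float sensitivity = 0.0025f; // tuning value, not taken from the game

    // deltaX/deltaY are mouse counts accumulated over the frame and
    // updateTime is the frame's duration in seconds.
    void apply(float deltaX, float deltaY, float updateTime,
               float& yaw, float& pitch) const
    {
        if (updateTime <= 0.0f)
            return;
        const float scale = sensitivity / updateTime; // inverse of update time
        yaw   += deltaX * scale;
        pitch += deltaY * scale;
    }
};

int main()
{
    float yaw = 0.0f, pitch = 0.0f;
    MouseLook look;
    // The same hand speed, captured over a 10 ms frame and a stuttery 100 ms
    // frame, now produces the same per-frame look change.
    look.apply(5.0f, 0.0f, 0.010f, yaw, pitch);
    look.apply(50.0f, 0.0f, 0.100f, yaw, pitch);
    printf("yaw after both frames: %f\n", yaw);
    return 0;
}
```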

Having older hardware is a double-edged sword: it can be frustrating and slow down development progress, but the positives are also significant. It makes fixing these kinds of issues (which might not affect players with newer hardware) a priority and means there are less likely to be nasty surprises once the game is released. With older hardware you also have to pay attention to performance early in development, which means better support for slower hardware, and as a result the game is likely to be accessible to more people.


Behind the scenes

Most of the work for the last week or so has been behind-the-scenes changes: fixes, optimisations and prep work for the main physics.

I have added support for door frames and enabled them on some of the doors. I’ve also added outlining to the geometry, which proved to be a useful test of the geometry building code and was also used to validate the collision data. The outlining will also prove useful when it comes to dealing with anti-aliasing, but that’s still quite a way off.

Tomorrow I’ve got a little shader code to tidy up, but then I think it’s about time I hooked up keyboard and mouse controls. A lot of the input work has already been done, so hopefully it should just be a case of hooking it all up.

Anyway, the demo on the downloads page has been updated, so if you are interested in seeing the changes head on over and get it.


Geometry construction

Well I’ve got most of my paperwork done and the update to the geometry building is almost complete.

Like many things in programming, the best approach to the geometry building was not immediately obvious, but the end result feels quite elegant.

The base of the construct deals with pure geometry and works out where all the convex and concave edges and vertices are. Within this, the construct keeps a note of which usage (such as collision detection) or shader each triangle is going to have, and keeps a separate list of mappings of vertex data to the underlying geometry.
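The convex/concave classification itself uses the standard test (sketched generically below rather than as the construct's exact code): take the plane of one triangle and check which side of it the far vertex of the neighbouring triangle lies on.

```cpp
// Generic sketch of classifying a shared edge as convex or concave by testing
// which side of triangle A's plane the 'far' vertex of triangle B lies on.
#include <cstdio>

struct Vec3 { float x, y, z; };

static Vec3 Sub(Vec3 a, Vec3 b)   { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3 Cross(Vec3 a, Vec3 b) { return {a.y * b.z - a.z * b.y,
                                            a.z * b.x - a.x * b.z,
                                            a.x * b.y - a.y * b.x}; }
static float Dot(Vec3 a, Vec3 b)  { return a.x * b.x + a.y * b.y + a.z * b.z; }

enum class EdgeKind { Convex, Concave, Planar };

// a0, a1, a2: triangle A; the shared edge is a0-a1 and bFar is the vertex of
// triangle B that is not on that edge.
EdgeKind ClassifyEdge(Vec3 a0, Vec3 a1, Vec3 a2, Vec3 bFar, float epsilon = 1e-5f)
{
    const Vec3 normalA = Cross(Sub(a1, a0), Sub(a2, a0));
    const float side = Dot(normalA, Sub(bFar, a0));
    if (side >  epsilon) return EdgeKind::Concave; // B folds towards A's front side
    if (side < -epsilon) return EdgeKind::Convex;  // B folds away behind A's plane
    return EdgeKind::Planar;
}

int main()
{
    // Two triangles meeting along the x axis, the second tilted upwards.
    EdgeKind k = ClassifyEdge({0,0,0}, {1,0,0}, {0,0,-1}, {0.5f, 1.0f, 0.5f});
    printf("%s\n", k == EdgeKind::Concave ? "concave" : "convex or planar");
    return 0;
}
```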

The construct then has a layering manager to allow some items to be separated (for instance pure collision geometry, solid rendering, transparent rendering and so on).

It’s all pretty abstract at that level, but above the layering there are interfaces, functions and enumerations which translate it all into a more human-friendly form.

I still need to update the portal geometry generation to generate the optional door frames.

This was supposed to be just a quick update, so I’ll end here for now and get back to actual work!

Another week begins

Well, I think I’ve finished fiddling with this site for a while, though I intend to be blogging every day.

Today I have some paperwork to complete, I need to finally update my LinkedIn page and then I’m back to updating my geometry generation tool.

The geometry tool is being updated to allow better identification of shaders and to auto-populate some additional data about convexity which one of the new shaders will need. In other words I’m adding some extra functionality to the material handling.