At Coatsink, we really love a challenge.
Shadow Point, our launch title for the Oculus Quest, combined real-time shadow-casting, mirrors, gravity manipulation and reality-spanning portals into a mind-bending puzzle game – all on a mobile form factor.
In this article, Matt Hubery, Senior Programmer at Coatsink, dives into the challenges of making portals a reality on mobile VR.
In a nutshell, how do portals work in Shadow Point?
We approached rendering portals in a very similar way to rendering a mirror. In fact, every mirror in Shadow Point is effectively a portal that references itself with a flipped Z-axis. To render a portal, we take the player’s position relative to it, apply that same transformation on the opposite side, and draw the scene from the resulting position.
What were the limitations?
Since one portal requires the whole scene to be redrawn, each visible portal roughly doubles the rendering cost, on top of the extra game logic needed whenever the player interacts with it.
Due to our target being a mobile VR device, we decided early on to set limits on how the player could interact with portals, and strictly capped the size and complexity of the surrounding scenes.
Most importantly, we placed a hard limit on the number of portals we could render at any one time, with a maximum of three, constrained further still in rare instances to two or one. We also decided we couldn’t support recursion — the ability to see a portal through another portal — which simplified implementation and avoided problems like infinite recursion (the classic ‘endless hallway’ effect).
With portals being such a crucial part of Shadow Point, we had to be meticulous in the planning and implementation of this feature. The rest of this article gets into some very technical, in-depth detail to explain how we created portals in Shadow Point.
Part 1. Calculating what to draw
To calculate the point-of-view from which to render the portal, we took the position of the player’s view into the portal’s local space by applying the ‘world to local’ transformation matrix (the combined translation, rotation and scale) of the portal to the camera view matrices.
We could then apply custom transformations depending on the portal’s gameplay purpose: Z-flip for a normal mirror, or X- and Z-flip for ‘pass-through’ mirrors. Finally we applied the ‘local to world’ matrix to move back into world space, but this time on the opposite side of the portal.
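As a sketch, the matrix chain above can be modelled in a few lines. This is a minimal, hypothetical Python model using plain 4x4 lists in the column-vector convention (world point = M @ local point), not the game's actual code:

```python
# Hypothetical sketch of the portal camera transform: 'world to local' of
# the entry portal, the gameplay flip, then 'local to world' of the exit
# portal. A mirror is simply a portal that references itself.

def mat_mul(a, b):
    return [[sum(a[r][k] * b[k][c] for k in range(4)) for c in range(4)]
            for r in range(4)]

def translate(x, y, z):
    return [[1, 0, 0, x], [0, 1, 0, y], [0, 0, 1, z], [0, 0, 0, 1]]

FLIP_Z = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, -1, 0], [0, 0, 0, 1]]

def portal_camera(camera_l2w, in_portal_w2l, out_portal_l2w, flip):
    # world -> entry portal local space, apply the flip, then back to
    # world space on the other side.
    return mat_mul(out_portal_l2w,
                   mat_mul(flip, mat_mul(in_portal_w2l, camera_l2w)))

# Example: a mirror plane at z = 5, camera at the world origin.
mirror_l2w = translate(0, 0, 5)
mirror_w2l = translate(0, 0, -5)  # inverse of the translation above
camera_l2w = translate(0, 0, 0)

virtual_cam = portal_camera(camera_l2w, mirror_w2l, mirror_l2w, FLIP_Z)
# virtual_cam places the camera at (0, 0, 10): its position reflected
# in the mirror plane, with the Z axis flipped as a mirror requires.
```

The flip matrix is the only part that changes per portal type: `FLIP_Z` for a mirror, or a combined X- and Z-flip for the 'pass-through' mirrors described below.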
For any objects moving through the portals, including the player, we applied the same process, with the exception of transformations that would require scaling. These were excluded because they were incompatible with other systems, which is why all the ‘pass-through’ mirrors in Shadow Point are also flipped on the X-axis.
Part 2. Removing what we didn’t want to draw
Having established the correct point-of-view, the next task was to remove all the geometry behind the portal, which would otherwise be drawn over the top of the new scene. Besides looking strange, this ‘convergence’ (multiple objects occupying the same space) can also cause discomfort when viewed in VR.
Fortunately, this was a problem with several well-established solutions, the same solutions we employed when implementing reflections. The first was to modify the projection matrices to create an oblique ‘near plane’ aligned with the surface of the portal. This ‘clips’ (stops rendering) the objects behind the portal’s surface. This is a straightforward solution, the only requirement being a change to the projection matrices.
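For reference, the standard oblique near-plane modification (Eric Lengyel's technique, which is also what Unity exposes as `Camera.CalculateObliqueMatrix`) can be sketched as follows. This is an illustrative pure-Python version using OpenGL-style matrices, not our shipping code:

```python
import math

def mat_mul_vec(m, v):
    return [sum(m[r][c] * v[c] for c in range(4)) for r in range(4)]

def mat_inverse(m):
    # Gauss-Jordan elimination on an augmented 4x8 matrix.
    aug = [row[:] + [1.0 if i == j else 0.0 for j in range(4)]
           for i, row in enumerate(m)]
    for col in range(4):
        pivot = max(range(col, 4), key=lambda r: abs(aug[r][col]))
        aug[col], aug[pivot] = aug[pivot], aug[col]
        p = aug[col][col]
        aug[col] = [x / p for x in aug[col]]
        for r in range(4):
            if r != col:
                f = aug[r][col]
                aug[r] = [x - f * y for x, y in zip(aug[r], aug[col])]
    return [row[4:] for row in aug]

def perspective(fov_deg, aspect, near, far):
    f = 1.0 / math.tan(math.radians(fov_deg) / 2.0)
    return [[f / aspect, 0.0, 0.0, 0.0],
            [0.0, f, 0.0, 0.0],
            [0.0, 0.0, (far + near) / (near - far),
             2.0 * far * near / (near - far)],
            [0.0, 0.0, -1.0, 0.0]]

def sgn(x):
    return (x > 0) - (x < 0)

def make_oblique(proj, plane):
    """Replace the near plane of `proj` with the camera-space `plane`
    (a, b, c, d). The camera must lie on the plane's negative side and
    the visible volume on its positive side."""
    q = mat_mul_vec(mat_inverse(proj),
                    [float(sgn(plane[0])), float(sgn(plane[1])), 1.0, 1.0])
    k = 2.0 / sum(p * qi for p, qi in zip(plane, q))
    out = [row[:] for row in proj]
    out[2] = [k * p - w for p, w in zip(plane, proj[3])]  # rewrite the z row
    return out

# Example: clip everything nearer than the plane through (0, 0, -5)
# with camera-space normal (0, -0.6, -0.8):
oblique = make_oblique(perspective(90.0, 1.0, 0.1, 100.0),
                       [0.0, -0.6, -0.8, -4.0])
```

Points on the plane now land exactly on the near plane (NDC z = -1); the price, as described next, is that the far plane tilts along with it.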
Unfortunately, projection matrices can’t represent six independent planes, so modifying the near plane inevitably affects the far plane. As the viewing angle of the portal shallows, the far plane rotates, intersecting the near plane off-screen. This causes problems with depth precision which, although acceptable for normal reflections, become obvious when the player gets too close.
To counteract this, we used Clip/Cull Distances, a vertex shader feature that allows extra clipping against arbitrary planes. Unfortunately, this involves modifying every supporting vertex shader and can impact performance (as it requires calculating each vertex’s world position). In practice, however, these problems largely solved themselves: the shared shader code and the relatively small number of unique vertex shaders kept maintenance low, and most of the shaders required the world-space position anyway.
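The per-vertex quantity those shaders compute is just a signed plane distance. A trivial illustrative sketch (the names are invented, and real code lives in the vertex shader, not Python):

```python
def clip_distance(plane, world_pos):
    # The value a vertex shader would write to gl_ClipDistance[0]
    # (SV_ClipDistance in HLSL): positive values keep the geometry,
    # negative values clip it.
    a, b, c, d = plane
    x, y, z = world_pos
    return a * x + b * y + c * z + d

# A portal surface in the plane z = 5, keeping only geometry beyond it:
portal_plane = (0.0, 0.0, 1.0, -5.0)
```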
Part 3. Displaying the portal to the player
Having established what we’re going to draw — the world and camera position — the next challenge was deciding how and where we were going to make it visible to the player.
The simplest rendering target for effects like portals and mirrors is a separate, screen-sized texture that can then be applied to the portal’s surface much like a regular texture. This allows a wide variety of effects, such as distortion and color modification. But, as usual, it presented a handful of new issues in the development of Shadow Point.
Firstly, the size of the texture (and the fact that, as a render target, it cannot use compression) makes it expensive to sample from, especially when the portal fills the player’s view. So while this approach worked on PC, we had to do something different on mobile: we chose to render the portal contents directly to the screen, using the stencil buffer.
To do this, we used Unity’s Scriptable Render Pipeline feature and wrote our own render pipeline. This allowed us to define our own render passes and use single-pass lighting (applying all lighting calculations with fewer draw calls), a much more efficient way to implement both portals and lighting.
The render passes (draw sequence) we used were as follows:
- Opaques from the player’s point-of-view.
- Portal surfaces with unique stencil value per portal.
- Portal surfaces again, with the portal stencil test, written to the depth buffer to clear it within the portal region.
- Opaques from the portal’s point-of-view (with stencil test and clip planes).
- Transparents from the portal’s point-of-view (with stencil test and clip planes).
- Portal surface effects with portal stencil test and write depth.
- Finally, transparents from the player’s point-of-view.
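To make the interplay of stencil, depth and draw order concrete, here is a toy software model of the first four passes on a one-dimensional, eight-pixel 'framebuffer'. It is purely illustrative; the real work happens in fixed-function GPU state, not code like this:

```python
FAR = 1.0
W = 8
color = ["sky"] * W    # toy colour buffer
depth = [FAR] * W      # toy depth buffer (smaller = nearer)
stencil = [0] * W      # toy stencil buffer

def draw(x0, x1, col, z, stencil_ref=None):
    """Draw a flat 'object' covering pixels [x0, x1) at depth z, with a
    depth test and an optional stencil test."""
    for x in range(x0, x1):
        if stencil_ref is not None and stencil[x] != stencil_ref:
            continue
        if z <= depth[x]:
            depth[x] = z
            color[x] = col

# 1. Opaques from the player's point-of-view.
draw(0, 8, "wall", 0.5)

# 2. Portal surface (pixels 2-5, at depth 0.4): write its unique
#    stencil value wherever the surface passes the depth test.
for x in range(2, 6):
    if 0.4 <= depth[x]:
        stencil[x] = 1

# 3. Stencil-tested pass that clears the depth buffer inside the portal,
#    so the portal's scene can be drawn 'through' the wall.
for x in range(2, 6):
    if stencil[x] == 1:
        depth[x] = FAR

# 4. Opaques from the portal's point-of-view, stencil-tested so they
#    only appear inside the portal's screen region.
draw(0, 8, "portal_world", 0.7, stencil_ref=1)
```

After these four steps the portal region shows the portal's scene while the rest of the frame keeps the player's view; the transparent and surface-effect passes follow the same masking pattern.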
Because multiple portals could be visible at once (though obviously not through each other!) each portal had a unique stencil value. Drawing all portal passes between the main opaque and transparent passes maintained the front-to-back ordering for opaques (overdraw reduction) and back-to-front ordering for transparents (for blending correctness). Any objects going through the portal had additional draws added on each side, for a total of four.
Part 4. Handling (literal) edge cases
Because we have two eyes separated by a nose, it’s possible for players to stand inside a portal with one eye on either side. The two eyes cannot be handled separately, as we wanted to use single-pass VR rendering (multiview on the Quest), which draws the same objects to both eyes simultaneously.
We solved this by drawing an inside-out box behind the portal for the eye that has passed through, and by not writing portal stencil values from the back face of the portal’s surface. This created an inverse stencil effect, where only the pixels not covered by the portal surface received the portal’s stencil value. We also disabled the portal clip plane for the intersecting eye (as it can now see ‘behind’ the portal), and applied the inverse of the portal clip plane to the main render pass instead.
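Detecting this situation is a per-frame plane test against each eye position. A hypothetical sketch (names invented for illustration, not an API from the game):

```python
def plane_side(plane, point):
    # Signed distance of a point from the plane (a, b, c, d).
    a, b, c, d = plane
    x, y, z = point
    return a * x + b * y + c * z + d

def portal_eye_state(plane, left_eye, right_eye):
    """Classify the two eye positions against the portal plane
    (positive side = in front of the portal)."""
    in_front = [plane_side(plane, e) >= 0.0 for e in (left_eye, right_eye)]
    if all(in_front):
        return "in_front"
    if not any(in_front):
        return "behind"
    return "straddling"  # one eye each side: use the inverse-stencil path
```

When the result is `"straddling"`, the renderer switches the intersecting eye to the inside-out box and inverse stencil described above.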
Here, the right eye is inside the portal. The orange shows portal stencil area, and the wireframe shows the ‘portal box’. The skybox is visible as the clip plane has cut out everything behind the portal. The orange area will be rendered over by the portal pass.
Part 5. Making portals run on Quest
So we’ve figured out what to do, how to do it and what the player will see. Now all that remains is to ensure the smoothest experience possible.
Drawing the scene from the portal point-of-view using the same camera as the main view ensures everything lines up visually. However, as the frustum (view field) of the portal camera is the same as the main camera’s, it could contain objects that are likely to be obscured when the portal is at a distance. This means numerous objects could be sent all the way to the GPU only to be discarded. To counteract this, we first told the culling system to use the portal plane as the near clip plane (the same plane used for Clip/Cull Distances). Then we calculated the screen rectangle (the area of the screen the portal is visible on) and modified the side culling planes to tightly fit it.
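The screen rectangle can be found by projecting the portal's corner points and taking their bounding box in normalized device coordinates. A simplified Python sketch (it ignores corners behind the camera, which real code must clamp or clip against the near plane):

```python
import math

def perspective(fov_deg, aspect, near, far):
    f = 1.0 / math.tan(math.radians(fov_deg) / 2.0)
    return [[f / aspect, 0.0, 0.0, 0.0],
            [0.0, f, 0.0, 0.0],
            [0.0, 0.0, (far + near) / (near - far),
             2.0 * far * near / (near - far)],
            [0.0, 0.0, -1.0, 0.0]]

def portal_screen_rect(proj, corners_cam):
    """NDC bounding rectangle (min_x, min_y, max_x, max_y) of the
    portal's corner points, given in camera space, clamped to the
    screen. Side culling planes are then fitted to this rectangle."""
    xs, ys = [], []
    for cx, cy, cz in corners_cam:
        clip = [sum(row[i] * v for i, v in enumerate((cx, cy, cz, 1.0)))
                for row in proj]
        xs.append(clip[0] / clip[3])
        ys.append(clip[1] / clip[3])
    clamp = lambda v: max(-1.0, min(1.0, v))
    return (clamp(min(xs)), clamp(min(ys)), clamp(max(xs)), clamp(max(ys)))

# A 2x2 portal centred on the view axis, 4 units in front of the camera:
rect = portal_screen_rect(
    perspective(90.0, 1.0, 0.1, 100.0),
    [(-1.0, -1.0, -4.0), (1.0, -1.0, -4.0),
     (1.0, 1.0, -4.0), (-1.0, 1.0, -4.0)])
# rect covers the central quarter of NDC: (-0.25, -0.25, 0.25, 0.25)
```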
While this certainly helped, more work was needed. We were already using the built-in occlusion culling provided by Unity for the main camera, and wanted to apply this to the portals. However, simply enabling it for the portal cameras wasn’t enough because it required a culling matrix rather than a set of planes.
The Screen Rectangle was simple to apply as a scale and offset matrix. The near plane, however, required creating a culling matrix with an oblique near plane, which caused similar issues as when used for the projection matrix… but much worse.
Unfortunately, with occlusion culling, the near plane adjustment is essential, as objects behind the portal would otherwise be counted as occluders for objects in front. To get around this, we ran two culling passes. The first, which used occlusion culling, ran from the far edge of the portal (from the player’s point-of-view) to the far clip plane.
The second had no occlusion culling and instead used explicit culling planes to fill the gap between the first pass and the portal surface. This ensured anything very close to the portal was drawn correctly. Although a few objects ended up in both of these volumes (and were consequently drawn twice) this didn’t cause any peculiar visual artefacts or impact performance.
Imagine the invisible
This isn’t a comprehensive list of all the obstacles we encountered creating the portals of Shadow Point. As the different gameplay systems interacted and the puzzles became more complicated, so did the logic required to implement them: carrying objects through portals, passing objects through mirrors, changing an object when viewed through a mobile portal… every combination and utility proved a unique challenge, and it was a massive learning experience.
It’s strange, then, that the success of the portals in Shadow Point could perhaps be measured by how much the player doesn’t see. That portals feel intuitive and natural — often invisible. That, for all the work and energy bringing portals to mobile VR, the player only sees the fantasy.