(Ab)using portals for responsive content generation

25th April 2019
Hello! We’re Triangular Pixels and we’re prototyping a Responsive Content Generation Tool. The goal is a system that will generate a VR world specific to a player’s device, location (play area), and physical and mental abilities. That way games will adapt to players rather than players having to adapt to games. The prototype will focus on the physical accessibility of immersive content.
In 2016 we created Unseen Diplomacy, a room-scale VR game designed as a physical assault course. In it we created a system we call Environmental Redirection, which was a great success in terms of accessibility and immersion.
But it wasn’t without its problems. As it was made for an installation, it has quite high space requirements – much larger than the average person’s home! Due to the limited time we had to create it, there’s only a limited set of accessibility options for users. And to top it off, the content pipeline was very hacked together, so making the content was error-prone and very time-consuming compared to a more traditional VR content pipeline. These are some of the things we want to consider when making our prototype.
So let’s start with the content pipeline – why was that a problem? Most VR content is built like a school or a hospital – a series of rooms and spaces connected together, mostly horizontally. But for our environmental redirection technique it has to overlap. Instead imagine a block of flats, but compress it down to a single floor – rooms and spaces all overlapping in a crazy jumble of walls and geometry. It’s a nightmare to visualize and hard to author new content for.
Fortunately, we have a solution – portals. So we’ve been waist-deep in rendering code writing a portal system as part of the prototype. This will be the foundation for the new content pipeline and what we build the rest of the prototype on top of.
Portals, 90s Style
The idea of portals for rendering has been around for a while now. This paper written in 1991 is the originator of the idea, and Portals And Mirrors (1995) develops the idea further by dynamically determining the potentially visible set rather than precalculating it.
The basic idea is:
- Think of the room you’re currently in. Draw the walls and stuff inside it.
- The doorway you can see? That’s a portal to the next room. Since you can see it, draw that room too.
- Windows? Also portals. Draw the ‘room’ (outside) you can see through it.
- Can you see another doorway through that first doorway? Draw that one too.
- Keep going until you run out of visible portals.
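The steps above can be sketched as a little recursive walk. This is a toy model, not engine code – the view frustum is squashed down to a 1D screen interval so the idea stays visible, and the `Room`/`Portal` classes are illustrative assumptions:

```python
# Toy sketch of 90s-style portal visibility: draw the room you're in,
# then recurse into every portal that still overlaps the frustum,
# clipping the frustum down as you go.

class Portal:
    def __init__(self, dest, span):
        self.dest = dest    # room on the other side
        self.span = span    # (left, right) extent on the screen

class Room:
    def __init__(self, name):
        self.name = name
        self.portals = []

def visible_rooms(start, frustum=(0.0, 1.0), max_depth=8):
    """Depth-first walk collecting the rooms that would get drawn."""
    drawn = []

    def visit(room, frustum, depth):
        drawn.append(room.name)
        if depth >= max_depth:
            return
        for p in room.portals:
            # Clip the frustum to the portal's on-screen extent.
            left = max(frustum[0], p.span[0])
            right = min(frustum[1], p.span[1])
            if left < right:            # portal is actually visible
                visit(p.dest, (left, right), depth + 1)

    visit(start, frustum, 0)
    return drawn

# lounge -> hall (wide doorway) -> kitchen (narrow doorway the lounge
# can still partly see through the first one).
lounge, hall, kitchen = Room("lounge"), Room("hall"), Room("kitchen")
lounge.portals.append(Portal(hall, (0.3, 0.7)))
hall.portals.append(Portal(kitchen, (0.6, 0.9)))

print(visible_rooms(lounge))   # ['lounge', 'hall', 'kitchen']
```

If the kitchen’s doorway sat entirely outside the clipped frustum, it (and everything behind it) would simply never be drawn – which is the whole point for that thousand-room building.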
This is cool because if you’re in a massive FPS level set in a big building with thousands of rooms the game can very quickly figure out which 4 or so rooms you can actually see right now and just draw those.
Then things got weird.
If the game stores a matrix alongside each portal, the ‘room’ you can see doesn’t have to be where you think it is. Flip the matrix and the game can turn a doorway into a mirror by drawing the original room backwards. Add an offset and you can turn a doorway into a teleporter you can see through. Everyone probably knows the game Portal by Valve, which shows some super cool ways of using this, but the original Unreal (1998) was BSP and portal based, and one of its levels has an ‘impossible’ teleporter:
(If you want to see this yourself, run the original Unreal and type: load DmRadikus)
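The matrix-per-portal trick can be sketched in 2D with plain 3x3 homogeneous matrices (nested lists, no engine types – purely illustrative). An identity matrix is an ordinary doorway, a translation turns it into a see-through teleporter, and a reflection turns it into a mirror:

```python
# Each portal stores a transform; to render 'through' the portal you
# push the camera through that transform and draw from there.

def mat_mul(a, b):
    """Row-major 3x3 matrix product, for chaining portal transforms."""
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def apply(m, p):
    """Transform a 2D point by a 3x3 homogeneous matrix."""
    x, y = p
    return (m[0][0] * x + m[0][1] * y + m[0][2],
            m[1][0] * x + m[1][1] * y + m[1][2])

def translate(dx, dy):
    return [[1, 0, dx], [0, 1, dy], [0, 0, 1]]

def mirror_x():
    # Negate x: the 'flip' that draws the original room backwards.
    return [[-1, 0, 0], [0, 1, 0], [0, 0, 1]]

cam = (1.0, 2.0)
# Doorway + translation = a teleporter you can see through.
print(apply(translate(10.0, 0.0), cam))   # (11.0, 2.0)
# Doorway + reflection = a mirror.
print(apply(mirror_x(), cam))             # (-1.0, 2.0)
```

Because transforms compose, portals seen through other portals just multiply their matrices together, which is what makes the ‘impossible’ geometry come out consistent.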
But portals kind of fell out of fashion. They worked great indoors, but sucked outdoors. It’s a pain to make objects move through portals cleanly, especially as physics in games became more detailed. And as shadows became standard, they were hard to pair with portals too.
William Scarboro, who programmed the Prey engine, said later:
“There are many ugly problems in maintaining such an engine […] In hindsight, portal tricks such as these should be used as tricks, not as an engine paradigm.”
Unreal moved to ‘anti-portals’ (now occluders) and the rest of the industry moved too.
First-person games started having much more mixed indoor and outdoor environments. Portals also required lots of manual markup to place the portals and the sectors between them, which was tedious and slow. As computers got faster, dynamic culling methods like occlusion culling became more practical while requiring less human work.
Portals might have become ‘tricks’ only suitable for weird effects and largely forgotten. But weird tricks are still interesting tricks, and a handful of games made them a core part of gameplay. Portal, Antichamber and Prey (2006) all used them in interesting ways to melt your brain.
Unlike these games, we don’t want the obvious ‘weirdness’ that portals bring. Instead we’re going to use them as the foundation of our content pipeline. We can design content in isolation, then splice it together with portals at runtime. That compressed tower block of flats no longer has to be a mad jumble of rooms, but instead a series of individual blueprints which are worked on using traditional content tools. Only when the prototype runs will the blueprints be connected together, at which point it no longer needs to be modified.
Let’s start with the obvious question – why not use an existing portal asset on the Unity Asset Store? When I started looking into portals, I went through every portal asset I could find and evaluated them all, and none are actually suitable for what we wanted to do. Lots of them don’t work in VR for a start, or produce incorrect results like no stereo. Interestingly there wasn’t an obvious ‘best’ asset. They all seemed to have different focuses, so do certain things well but not others. Eg. Vive Stereo Rendering Toolkit mostly focuses on mirrors and teleporters.
So as tempting as it is to tell you which is the ‘best’ portal asset, I don’t think that would be fair. Every game is going to have different requirements and there’s no ‘best’ existing asset that would always be the right choice.
After going through all of them (with a big spreadsheet of features and compatibility) it becomes much more obvious that ‘portals’ is a catch-all term which covers many individual features, like:
- Spatial discontinuity (teleportation portals like Portal)
- Seamless rendering (no ‘seams’ between portals)
- Seamless movement (no glitching when traversing portals)
- Sometimes physics should be isolated (ie. overlapping rooms)
- Sometimes physics should propagate (ie. boxes half-in doorways)
- Lighting and shadows (casting shadows through portals)
- Sound effects and sound propagation
- AI vision and pathfinding
- Recursive portals (looking at portals through other portals)
Unsurprisingly, doing all of these is super hard, doubly so when trying to crowbar it into an existing engine like Unity! So with no handy asset available, I started our own portal tech. Here’s a few of the techniques behind it that make it work.
What about render textures?
One big snag that lots of existing assets have is they use Render Textures for the actual portals. Render textures basically put a second camera in the world, and instead of drawing it to the screen, draw it into a texture. That texture can be used when you do your actual drawing later. This is often used for in-game security cameras or other behind-the-scenes effects like reflections.
For portals, this seems like a great fit – position another camera for the portal contents, draw it into a texture, then put that texture on the portal for the player to see. There’s some faffing about with maths to get the cameras to line up right, but this works nicely.
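To give a feel for the ‘faffing about with maths’, here’s the position part of that camera line-up, reduced to 2D. The names are illustrative, not Unity API – the real thing multiplies full transforms (rotation included) and also wants an oblique near-clip plane so the portal camera can’t see anything sitting between it and the far portal:

```python
# Place the second camera behind the far portal at the same relative
# offset the player's camera has from the near portal, so the texture
# lines up with what you'd see through a real hole in the wall.

def portal_camera_pos(player_cam, near_portal, far_portal):
    offset = (player_cam[0] - near_portal[0],
              player_cam[1] - near_portal[1])
    return (far_portal[0] + offset[0], far_portal[1] + offset[1])

# Player stands 1 unit right of and 2 units back from a portal at the
# origin; its far end lives at (50, 0).
print(portal_camera_pos((1.0, -2.0), (0.0, 0.0), (50.0, 0.0)))
```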
However this kinda sucks for VR, because it kills any stereo vision and the portal starts looking like a flat TV screen showing what’s behind it. This can be improved by using two cameras (one per eye), but that needs a lot of fill rate, which isn’t good for performance, and it’s super tricky to match the pixel density exactly, so portals look blurry or fuzzy.
Stencil Buffer to the rescue
Stencil buffer based portals are an older technique, but have some nice properties compared to the render texture method. They’re more complicated to code, but they always have 1-to-1 pixel density, are much better for fill rate, and can also be adapted for VR more easily. Because they’re a screen-based technique, seamless transitions through portals are possible, which is why they’re the method used in Portal.
What is a stencil buffer anyway?
The stencil buffer is basically another hidden screen, except instead of coloured pixels, each pixel can have a number from 0 to 255. Materials can read or write to this value and either draw different things or skip drawing altogether. There’s no particular meaning to the values, it’s up to the game to decide what means what.
For portals, that means we can draw each portal as an invisible square that sets the stencil to the id of the portal. Then later drawing can check against it to use it as a mask so only things that should be visible through the portal are shown. As a debug tool I have a post-processing step that will show the stencil values as different colours, you can see here how two nested portals mask out the correct screen areas:
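Here’s a tiny software model of that masking step, with the screen squashed to a single row of eight pixels (all names illustrative – in the engine the portal quad and the stencil test live in shaders, not Python):

```python
# Stencil value 0 = the sector the player stands in. Portal quads write
# their id into the stencil; sector geometry only draws where the
# stencil matches its id.

W = 8
stencil = [0] * W
color = ["."] * W

def draw_portal_quad(x0, x1, portal_id):
    # Invisible quad: writes stencil only, no colour.
    for x in range(x0, x1):
        stencil[x] = portal_id

def draw_sector(glyph, sector_id):
    # Sector geometry: stencil-tested, so it can't leak outside its mask.
    for x in range(W):
        if stencil[x] == sector_id:
            color[x] = glyph

draw_portal_quad(3, 6, 1)   # portal to sector 1 covers pixels 3..5
draw_sector("A", 0)         # near room fills everything except the portal
draw_sector("B", 1)         # far room shows only through the portal
print("".join(color))       # AAABBBAA
```

Nesting falls out naturally: a second portal quad drawn inside sector 1’s mask writes a third id, and the sector behind it tests against that.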
At this point, it’s just a case of setting each sector’s materials to test against the stencil buffer, using the read mask, so that they only draw where their id is set. The exact details are a bit tedious, but if you’re interested then Ronja Böhringer has a pretty good description of the various bits of shader syntax in action.
That’s the basics, and you’ll see lots of implementations that only need to go this far. Now the fun starts with the really weird stuff!
Intersections and Clip Planes
Objects that are sticking through a portal need special handling. Even though their origin puts them in one sector, really they’re in both. Otherwise, we get this:
I tried a few methods of fixing this; the most robust is to really treat it as two objects, one in each sector. Clip planes are a neat trick where, with a bit of shader maths, an object can be cut in half along a flat plane. They’re super useful for making things rez in or out, but here we can clip the object along the portal surface with different stencil masks.
So first it’s drawn as if it’s in the near sector:
And then again as if it’s in the far sector:
When both are done at once, the object ‘disappears’ properly into the portal.
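The two passes can be sketched like this, with the shader’s per-fragment discard modelled as per-point filtering (the plane representation and numbers are illustrative):

```python
# A clip plane (normal n, offset d) cuts an object in two: the
# near-sector pass keeps points on the player's side of the portal,
# the far-sector pass keeps the rest by flipping the same test.

def signed_dist(p, n, d):
    return p[0] * n[0] + p[1] * n[1] + p[2] * n[2] - d

def clip(points, n, d, flip=False):
    keep = (lambda s: s < 0) if flip else (lambda s: s >= 0)
    return [p for p in points if keep(signed_dist(p, n, d))]

# Portal in the plane x = 2, normal facing back towards the player.
n, d = (-1.0, 0.0, 0.0), -2.0
box = [(1, 0, 0), (1.5, 1, 0), (2.5, 0, 0), (3, 1, 0)]
near = clip(box, n, d)               # half on the player's side
far = clip(box, n, d, flip=True)     # half through the portal
print(near, far)
```

Every point of the box ends up in exactly one pass, so when the near half is drawn with the near sector’s stencil id and the far half with the far sector’s, the two halves butt up seamlessly at the portal surface.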
Here’s a nice bonus Unity tip – if you have a MeshRenderer and you assign it more materials than it needs, it’ll draw twice, once with each material. This is a super cheap and effective way of drawing twice without having to duplicate the object. (It’s also nice if you want to apply temporary effects like highlights on top of an existing object).
You’ll get this warning message because you’re drawing the object twice, but that’s exactly what we’re trying to do, so it’s all good.
Of course drawing an object twice to handle portal intersections has a performance overhead, which is why this behaviour is only triggered when necessary.
VR and Portals
Moving objects are a pain because they can be in two sectors at once. Cameras are ok because they’re really a single point so don’t have this problem… until you add VR and have separate left and right cameras. Put your head halfway through a portal and look sideways, and it’ll all go horribly wrong.
Much like moving objects, the solution is to detect when this happens and set up the stencil masks correctly to handle it, with separate ids for left and right, which looks like this:
One interesting snag is that this has to be linked to the player’s IPD, and for headsets that let you change the physical IPD on the fly then this must be kept in sync otherwise your physical eyes will be on either side of a portal but the code won’t have noticed.
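The detection itself boils down to classifying each eye against the portal plane and only splitting the stencil ids when the eyes disagree (the names and plane maths here are illustrative, not our actual code):

```python
# Classify each eye against the portal plane; only when the eyes land
# on different sides does each eye need its own stencil id.

def side(eye, n, d):
    s = eye[0] * n[0] + eye[1] * n[1] + eye[2] * n[2] - d
    return "near" if s >= 0 else "far"

def eye_portal_state(left_eye, right_eye, n, d):
    l, r = side(left_eye, n, d), side(right_eye, n, d)
    return l if l == r else "straddling"   # straddling -> per-eye masks

# ~64mm IPD, head sideways-on to a portal in the plane x = 0: one eye
# ends up on each side, so the masks must be set up per-eye.
n, d = (1.0, 0.0, 0.0), 0.0
print(eye_portal_state((-0.032, 0.0, 1.6), (0.032, 0.0, 1.6), n, d))
```

This is also where the IPD sync matters: the eye positions fed in here have to track the headset’s actual lens separation, or the classification will disagree with what the player’s eyes really see.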
Single pass rendering
Unity has had single-pass stereo rendering (under various names) for a while now, and it’s an awesome optimisation. It basically draws both eyes at once, which costs a bit more, but means the draw calls don’t double like they would normally. Since lots of VR games are limited by draw calls, this is hugely important. When we made Smash Hit Plunder on PSVR we basically couldn’t have hit our performance targets without this optimisation. Plus it’s super easy, you just turn it on here in your player settings:
So what’s the catch? Well, it tends to break very easily. Shaders, materials and post-processing must be updated to work with it, and while everything built into Unity is being steadily updated, there are lots of old shaders or post-processing effects that won’t work. Doubly so if you’re using assets from the store.
My biggest tip for anyone starting a VR project would be – turn this option on and leave it on! If something breaks with it (usually objects don’t look right or only show in one eye) track down the problem and fix it or use something different. It’s very tempting to turn it off temporarily, but in my experience once you’ve done that it’s very difficult to turn it back on again.
For portalling, this means updating our stencil id code to handle both cameras at once and sharing the ids between them.
The end result
Throw all of that together, and we get some portals which correctly mask their contents, show correct stereo and depth, and allow us to seamlessly walk through them.
There’s plenty more to consider, but that about wraps it up for now! There’s lots left to do, like making physics and lighting work properly. We’re still working on our portal tech so those features (and blog post) will come later.
Don’t forget we have our Discord channel if you want to come and chat about any of this and hang out.