<h1 id="matt-starks-game-development-portfolio">Matt Stark’s Game Development Portfolio</h1>
<p>I'm Matt Stark, game developer and creator of Viewfinder. This is my portfolio blog.</p>
<h1 id="wile-e-coyote-effect">Wile E. Coyote Effect</h1>
<p><em>2019-11-06</em></p>
<p>A tweet showing a weird effect I’d created in Unity got rather popular so I thought I’d go into some detail about how the effect was created.</p>
<p>Here’s a video of the effect:</p>
<iframe width="560" height="315" src="https://www.youtube.com/embed/Z_RQenPprUc" frameborder="0" allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen=""></iframe>
<p>Each doorway has a hidden wall which becomes visible when you enter a trigger. At the same time, a texture is generated by a camera and is applied to the wall. Here’s what it looks like from another angle:</p>
<p><img src="/assets/wile%20e%20coyote/Appear.gif" alt="Wall appearing" /></p>
<p>The effect can be broken down into two parts: generating the texture and applying it to the object.</p>
<h3 id="generating-the-texture">Generating the texture</h3>
<p>The effect must work regardless of what the player is looking at. If the object is only partially on screen or behind the player then using a section of the player’s view would be insufficient. This is resolved by creating a temporary second camera at the same position as the player camera but facing towards the object.</p>
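<p>As a rough sketch, creating and aiming the temporary camera could look something like this (the helper name and structure here are illustrative, not the exact code from the project):</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>// Create a temporary camera at the player camera's position, aimed at the target object
Camera CreateTemporaryCamera(Camera playerCamera, Renderer target)
{
    GameObject cameraObject = new GameObject("Texture Capture Camera");
    Camera cam = cameraObject.AddComponent<Camera>();
    cam.transform.position = playerCamera.transform.position;
    // Face the centre of the object's bounds rather than copying the player's rotation
    cam.transform.LookAt(target.bounds.center);
    // The camera is rendered manually later, so it doesn't need to be enabled
    cam.enabled = false;
    return cam;
}
</code></pre></div></div>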
<p>The camera’s field of view should be as small as possible while still containing the whole object. If the field of view was 60 degrees but the object was far in the distance, most of the texture would be wasted. Conversely, if the field of view was 60 degrees but the object was a wall right in front of the player, it might not fit in the texture. An appropriate field of view is found by iterating through the corners of the object’s bounding box to find the largest horizontal or vertical field of view that any of them require.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>float maxAngle = 0;
Vector3 min = meshFilter.sharedMesh.bounds.min;
Vector3 max = meshFilter.sharedMesh.bounds.max;
// Iterate through each of the 8 corners of the bounding box
foreach (float bx in new float[] { min.x, max.x })
{
foreach (float by in new float[] { min.y, max.y })
{
foreach (float bz in new float[] { min.z, max.z })
{
// Get the corner's position in camera space
Vector3 cornerInCameraSpace = cam.transform.InverseTransformPoint(transform.TransformPoint(new Vector3(bx, by, bz)));
// Find the horizontal and vertical angles between the camera's forward vector and the corner's position
float horizontalAngle = Mathf.Abs(Mathf.Atan(cornerInCameraSpace.x / cornerInCameraSpace.z));
float verticalAngle = Mathf.Abs(Mathf.Atan(cornerInCameraSpace.y / cornerInCameraSpace.z));
// If either angle is greater than the stored value, replace it
maxAngle = Mathf.Max(maxAngle, horizontalAngle, verticalAngle);
}
}
}
// Set the camera's field of view based on maxAngle. MaxAngle is in radians so must be converted to degrees. Maxangle also only represents the angle between forward and the edge of the view, but field of view is the angle between the top and bottom of the view, so it must be multiplied by two.
cam.fieldOfView = maxAngle * Mathf.Rad2Deg * 2;
</code></pre></div></div>
<p>The camera then renders the image using a render texture as the target. I used a texture with 1024 x 1024 pixels. The texture it produces looks like this:</p>
<p><img src="/assets/wile%20e%20coyote/GeneratedTexture.png" alt="Rendered texture" /></p>
<p>The following image shows the region of the texture that is occupied by the object’s bounding box. The centre of the bounding box (the intersection of the red lines) is at the centre of the texture (the intersection of the blue lines). The largest field of view required for any of the corners was the vertical field of view for the bottom right corner. As a result, the bottom right corner touches the bottom of the texture.</p>
<p><img src="/assets/wile%20e%20coyote/GeneratedTextureDiagram.png" alt="Rendered texture diagram" /></p>
<h3 id="applying-the-texture-to-the-object">Applying the texture to the object</h3>
<p>The next challenge is correctly mapping the texture onto the object. If we just applied the texture using the object’s UV coordinates it would look like this (I’ve enabled wireframe to make the shape clearer):</p>
<p><img src="/assets/wile%20e%20coyote/WrongProjection.png" alt="Incorrectly mapped texture" /></p>
<p>In order to correctly map the texture onto the object, the object’s vertex positions are projected into the screen space of the temporary camera. The projected positions are used to sample the texture.</p>
<p>Before the temporary camera is deleted, the matrix which converts world space to the camera’s clip space is calculated. This is found by multiplying the projection matrix by the world to camera matrix:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>camMatrix = cam.projectionMatrix * cam.worldToCameraMatrix;
</code></pre></div></div>
<p>This matrix is passed to a shader along with the texture. In the vert shader the vertex position is projected first into world space then into the temporary camera’s clip space (clipPos is a float4 added to the v2f struct and _WorldToCam is the matrix that was passed to the shader).</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>// Project the vertex into world space
float3 worldPos = mul(unity_ObjectToWorld, v.vertex);
// Project the world space position into the temporary camera's clip space
o.clipPos = mul(_WorldToCam, float4(worldPos, 1));
</code></pre></div></div>
<p>In the frag shader the clip space position is converted into UV coordinates that are used to sample the texture:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>// Get the UV coordinate
float2 uv = i.clipPos.xy / i.clipPos.w;
uv = (uv + float2(1, 1)) / 2; // Convert it from the range -1 to 1 to the range 0 to 1
// Sample the texture
fixed4 col = tex2D(_CamTex, uv);
</code></pre></div></div>
<p>The texture is now correctly mapped onto the object!</p>
<p><img src="/assets/wile%20e%20coyote/CorrectProjection.png" alt="Correctly mapped texture" /></p>
<p>An alternative approach would be to project the texture just once and store it in another texture. This would be faster but it would be challenging to implement it in such a way that it could handle objects more complex than planes.</p>
<h1 id="smoky-aura-effect">Smoky Aura Effect</h1>
<p><em>2019-10-11</em></p>
<p>I recently created an effect that makes an object emit rippling smoke and thought I’d try explaining my approach.</p>
<p>Here’s a clip of the effect in action:</p>
<iframe width="560" height="315" src="https://www.youtube.com/embed/FmKtzEKQUMA" frameborder="0" allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen=""></iframe>
<p>The effect consists of a partially transparent texture on a billboard (a quad that rotates so it always faces the camera). It’s roughly centered on the dragon and is large enough to contain the smoke.</p>
<p><img src="/assets/smoky%20aura%20effect/Billboard.png" alt="Billboard" /></p>
<p>First I render a copy of the camera’s view containing only a silhouette of the dragon. I use a camera that has a black background and only renders one layer of the scene. You could alternatively use the stencil buffer, but accessing that after a scene has been rendered is tricky in Unity.</p>
<p><img src="/assets/smoky%20aura%20effect/Mask.png" alt="Mask" /></p>
<p>I then find the screen co-ordinates of the corners of the billboard so I can sample the corresponding region of the texture.</p>
<p><img src="/assets/smoky%20aura%20effect/Mask%20Clipped.png" alt="Cropped Mask" /></p>
<p>I blend this region of the silhouette texture onto the existing billboard texture. This simulates smoke being emitted from the source at a constant rate and the rest of the smoke gradually fading away. I then distort the billboard’s texture using velocity vectors based on a flow map. This is a texture where red and green values represent x and y components of velocity. I also add a constant upwards velocity. Progressively distorting the texture simulates currents in the air pushing it around.</p>
<p><img src="/assets/smoky%20aura%20effect/Flow.png" alt="Billboard Texture" /></p>
<p><img src="/assets/smoky%20aura%20effect/RG%20Perlin.png" alt="Flow Map" /></p>
<p>Finally I use the texture in a material applied to the billboard. The material’s shader maps the texture’s black areas to full transparency and its white areas to opaque black smoke. It fades out where the billboard is close to the surface behind it, so you don’t see it clipping through the dragon, and it also fades out near the edges of the billboard to prevent any sharp edges being visible.</p>
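<p>A sketch of how such a billboard shader could work (assumed property names; i.screenPos is assumed to be filled in the vertex shader with ComputeScreenPos, with COMPUTE_EYEDEPTH written into its z component):</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>fixed4 frag (v2f i) : SV_Target
{
    // The smoke texture is white on black; use it as the alpha of a black colour
    fixed4 col = fixed4(0, 0, 0, tex2D(_SmokeTex, i.uv).r);
    // Fade out where the billboard is close to the surface behind it (soft-particle style)
    float sceneDepth = LinearEyeDepth(SAMPLE_DEPTH_TEXTURE_PROJ(_CameraDepthTexture, UNITY_PROJ_COORD(i.screenPos)));
    col.a *= saturate((sceneDepth - i.screenPos.z) * _DepthFade);
    // Fade out towards the edges of the quad so the billboard's border is never visible
    float2 fromCentre = abs(i.uv - 0.5) * 2;
    col.a *= saturate((1 - max(fromCentre.x, fromCentre.y)) * _EdgeFade);
    return col;
}
</code></pre></div></div>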
<p>And that’s it! I’m really happy with how this effect looks and how low the performance impact is. It would be possible to create a simpler version for something like a flaming torch that just reads a silhouette from a texture, but in that case it might be better to simply use a flipbook texture.</p>
<h1 id="overlock">Overlock</h1>
<p><em>2019-07-06</em></p>
<p><img src="/assets/overlock/Banner2.png" alt="Play Store banner" /></p>
<p>In June I released another Android game on the Play Store, called Overlock. It’s a minimalist puzzle game about overlapping shapes, with 32 puzzles and a variety of mechanics. I created it because I wanted a slick, accessible game I could show people as an example of my work. It was made using Unity and the project took around 3 weeks.</p>
<p>I created several custom editor tools for designing the puzzles, which made the process much faster and easier. For example, I could modify any puzzle shape by simply clicking and dragging over tiles it should cover.</p>
<p><a href="https://play.google.com/store/apps/details?id=com.MattsGames.Overlock">Link to the game on the Play Store</a></p>Matt StarkVolumetric Clouds2019-02-27T11:00:00+00:002019-02-27T11:00:00+00:00/2019/02/27/volumetric-clouds<p>The video below shows a Unity shader I made which renders volumetric clouds using ray marching. Ray marching is a method of rendering volumetric data where a ray is cast from each pixel and the volumetric data is sampled at evenly spaced points along the ray.</p>
<p><a href="https://github.com/mattstark256/volumetric-clouds">Link to the project on Github</a></p>
<iframe width="560" height="315" src="https://www.youtube-nocookie.com/embed/8TF5hiHlf7w?rel=0" frameborder="0" allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen=""></iframe>
<p>The shape of the clouds is defined by a tiling perlin noise texture. This texture is used as a heightmap for the bottom and top surfaces of the clouds. The heightmap is inverted for the bottom surface. Only the volume above the bottom and below the top is rendered, as shown in the cross section below. A small amount of moving noise is also added to both surfaces to simulate turbulence.</p>
<p><img src="/assets/volumetric%20clouds/CloudDiagram.png" alt="Clouds diagram" /></p>
<p>The ray marching samples all have the same vertical spacing, regardless of the direction of the ray, as shown below. This ensures the clouds can always be rendered using a fixed maximum number of samples. It also means that clouds closer to the viewer are rendered with higher fidelity than clouds further away. The opacity of each ray’s samples is scaled according to the distance between samples. Once a ray has been almost entirely blocked by cloud, it doesn’t proceed any further.</p>
<p><img src="/assets/volumetric%20clouds/CloudDiagram2.png" alt="Clouds diagram 2" /></p>
<p>The colour of each sample is based on the distance from the top surface. This is to make the undersides of the clouds look like they are in shadow. The opacity of each sample is based on the distance to the nearest of the two surfaces, which helps to smooth out sharp edges caused by the ray marching.</p>
<h1 id="firelight">Firelight</h1>
<p><em>2019-02-15</em></p>
<p>Firelight is a game I made as part of a team of 2 for Global Game Jam 2019. It’s about searching for warmth in the cold dark night. The theme of the jam was “What home means to you”.</p>
<p><a href="https://globalgamejam.org/2019/games/firelight">Link to the Global Game Jam submission</a></p>
<iframe width="560" height="315" src="https://www.youtube-nocookie.com/embed/LCdt8rgqWDE?rel=0" frameborder="0" allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen=""></iframe>
<h2 id="cutscenes">Cutscenes</h2>
<p>We wanted the game to have cutscenes with a dialogue system where the player’s choice of response could affect how the conversation progressed. We decided the cutscenes should be in scripts, because this would give us the flexibility and control to make each one unique. It allowed us to use conditional statements, for loops, variables etc. We used coroutines so we could pause execution while waiting for the player’s response or waiting for an animation to finish.</p>
<p>Here’s an example of a section where the camera focuses on an NPC who then asks the player a question:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>// Smoothly move the camera to the NPC and wait until that is complete
cutscene.Transition(transform.position);
while (cutscene.IsTransitionInProgress()) { yield return null; }
// Show a speech bubble with two dialogue options and wait until the player has chosen one
cutscene.DialogueDecision("Are you the rescue team? I called to be rescued.", "Yes... I'm part of the rescue team...", "I'm afraid not.", true);
while (cutscene.IsDialogueInProgress()) { yield return null; }
if (cutscene.GetDecisionOutcome())
{
// This is where the response to the first option would be
}
else
{
// This is where the response to the second option would be
}
</code></pre></div></div>
<p>And here’s what that question looks like in game:</p>
<p><img src="/assets/firelight/decision.png" alt="Question screenshot" /></p>Matt StarkFirelight is a game I made as part of a team of 2 for Global Game Jam 2019. It’s about searching for warmth in the cold dark night. The theme of the jam was “What home means to you”.Window Shaders2019-02-15T11:00:00+00:002019-02-15T11:00:00+00:00/2019/02/15/window-shaders<p>Some games use shaders for windows to make it look like there is a three-dimensional interior behind the window without needing any additional geometry. This can have a much lower performance impact than modelled interiors, which is good for situations like city scenes where there are hundreds of windows.</p>
<p>I created some window shaders in Unity using several different techniques to create the appearance of depth.</p>
<h3 id="shader-1-cubemap">Shader 1: Cubemap</h3>
<p>One basic approach is to sample a cubemap using the view direction. Because the result depends only on direction, the interior has no parallax relative to the window surface, so this is only convincing for very large interiors.</p>
<p><img src="/assets/window%20shaders/WindowCubemap.gif" alt="Cubemap Interior" /></p>
<h3 id="shader-2-back-wall">Shader 2: Back wall</h3>
<p>A more convincing effect can be created by sampling a 2D texture as though it were positioned some distance behind the window’s surface. This can be very efficient, because almost all of the calculations can be done in the vertex shader. I’ve included the code for this shader at the bottom of the page.</p>
<p><img src="/assets/window%20shaders/WindowPlane.gif" alt="Plane Interior" /></p>
<h3 id="shader-3-walls-ceiling-and-floor">Shader 3: Walls, ceiling and floor</h3>
<p>I tried recreating the window shader used in Forza Horizon 4, which is discussed in <a href="https://www.gamasutra.com/view/news/332409/Game_Tech_Deep_Dive_A_window_into_Playground_Games_latest_shader_development.php">this Gamasutra article</a>. The shader takes a square 2D texture and projects it so it appears to be a cuboid-shaped interior.</p>
<p>The gif below shows my shader in action. I’ve also included the 2D texture, which I made using Blender. I’ve used a bay window to demonstrate that the effect works regardless of the shape of the window’s mesh.</p>
<p><img src="/assets/window%20shaders/WindowForza.gif" alt="Forza Interior" /></p>
<p><img src="/assets/window%20shaders/Room%20Thirds.png" alt="Interior Texture" /></p>
<h3 id="shader-2-code">Shader 2 code</h3>
<p>This is the code for shader 2, described above. The alpha channel of the albedo texture (_MainTex) determines where the interior should be visible. The _IndoorOffset 4D vector represents two 2D vectors, one for the position of the interior texture and one for its scale.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>Shader "Custom/Window Shader"
{
Properties
{
_MainTex("Albedo (RGB)", 2D) = "white" {}
_SpecularTex("Specular", 2D) = "black" {}
_NormalTex("Normal", 2D) = "bump" {}
_IndoorTex("Indoor texture", 2D) = "white" {}
_Depth("Depth", Range(0,5)) = 1
_IndoorOffset("Indoor position and scale", Vector) = (0,0,1,1)
_IndoorTint("Indoor tint", Color) = (1,1,1,1)
_BackgroundColor("Background Color", Color) = (0,0,0,1)
}
SubShader
{
Tags { "RenderType"="Opaque" }
LOD 200
CGPROGRAM
// Physically based Standard lighting model, and enable shadows on all light types
#pragma surface surf Standard fullforwardshadows vertex:vert
// Use shader model 3.0 target, to get nicer looking lighting
#pragma target 3.0
struct Input
{
float2 uv_MainTex;
float2 uv_SpecularTex;
float2 uv_NormalTex;
float2 indoorUV;
};
sampler2D _MainTex;
sampler2D _SpecularTex;
sampler2D _NormalTex;
sampler2D _IndoorTex;
half _Depth;
half4 _IndoorOffset;
fixed4 _IndoorTint;
fixed4 _BackgroundColor;
// Add instancing support for this shader. You need to check 'Enable Instancing' on materials that use the shader.
// See https://docs.unity3d.com/Manual/GPUInstancing.html for more information about instancing.
// #pragma instancing_options assumeuniformscaling
UNITY_INSTANCING_BUFFER_START(Props)
// put more per-instance properties here
UNITY_INSTANCING_BUFFER_END(Props)
void vert (inout appdata_full v, out Input o)
{
UNITY_INITIALIZE_OUTPUT(Input, o);
// Find the point on the indoor wall that is being viewed
half3 viewVectorLocal = ObjSpaceViewDir(v.vertex);
half2 indoorPoint = v.vertex + viewVectorLocal * (_Depth - v.vertex.z) / viewVectorLocal.z;
// Transform it using "Indoor position and scale" values
indoorPoint -= half2 (_IndoorOffset.x, _IndoorOffset.y);
indoorPoint /= half2 (_IndoorOffset.z, _IndoorOffset.w);
indoorPoint += half2 (0.5, 0.5);
o.indoorUV = indoorPoint;
}
void surf (Input IN, inout SurfaceOutputStandard o)
{
fixed4 c = tex2D (_MainTex, IN.uv_MainTex);
o.Albedo = c.rgb * c.a;
o.Smoothness = tex2D(_SpecularTex, IN.uv_SpecularTex).r;
o.Normal = UnpackNormal(tex2D(_NormalTex, IN.uv_NormalTex));
if (IN.indoorUV.x > 0 &&
IN.indoorUV.x < 1 &&
IN.indoorUV.y > 0 &&
IN.indoorUV.y < 1)
{
o.Emission = tex2D(_IndoorTex, IN.indoorUV) * _IndoorTint * (1 - c.a);
}
else
{
o.Emission = _BackgroundColor * (1 - c.a);
}
}
ENDCG
}
FallBack "Diffuse"
}
</code></pre></div></div>
<h1 id="flux-rush">Flux Rush</h1>
<p><em>2019-01-18</em></p>
<p>In November 2018 I released a free Android game called Flux Rush. It’s a puzzle-inspired endless runner where you re-arrange the track.</p>
<p>The track is procedurally generated so there is always a possible route. The title animation uses a shader which takes a texture with colour gradients and clips it by colour values which change with time.</p>
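<p>As a sketch, that kind of clip-by-gradient animation can be done with just a few lines in a fragment shader (this is an illustrative version, not the actual Flux Rush shader):</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>fixed4 frag (v2f i) : SV_Target
{
    // The texture's red channel holds a colour gradient
    float gradient = tex2D(_MainTex, i.uv).r;
    // Discard pixels below a threshold that animates over time, so the visible region sweeps across the texture
    float threshold = frac(_Time.y * _Speed);
    clip(gradient - threshold);
    return _Colour;
}
</code></pre></div></div>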
<p><a href="https://play.google.com/store/apps/details?id=com.MattsGames.FluxRush">Link to the game on the Play Store</a></p>
<iframe width="560" height="315" src="https://www.youtube.com/embed/SRIlt_k2z8s" frameborder="0" allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen=""></iframe>Matt StarkIn November 2018 I released a free Android game called Flux Rush. It’s a puzzle-inspired endless runner where you re-arrange the track.Topology Flip2018-10-16T15:39:00+00:002018-10-16T15:39:00+00:00/2018/10/16/topology-flip<p>Topology Flip is a puzzle game I made for Android using Unity. It’s a variation of Lights Out, where instead of using a 5 x 5 grid it uses various 3D surfaces. Different shapes require different strategies so the challenge of the game is to figure out how to solve progressively more difficult shapes.</p>
<p>The player can also visually customize the puzzle by choosing colours and images for the tiles.</p>
<iframe width="560" height="315" src="https://www.youtube.com/embed/aFS0Aj5OVh0?rel=0" frameborder="0" allow="autoplay; encrypted-media" allowfullscreen=""></iframe>Matt StarkTopology Flip is a puzzle game I made for Android using Unity. It’s a variation of Lights Out, where instead of using a 5 x 5 grid it uses various 3D surfaces. Different shapes require different strategies so the challenge of the game is to figure out how to solve progressively more difficult shapes. The player can also visually customize the puzzle by choosing colours and images for the tiles.Unity Minecraft Clone2018-10-15T08:47:00+00:002018-10-15T08:47:00+00:00/2018/10/15/unity-minecraft-clone<p>In August 2018 I made a Minecraft clone using Unity. I primarily focused on recreating the world generation. The world generates endlessly in all directions and only loads the chunks within a fixed radius of the player. The chunks are persistent, so if you build something then walk until your building is outside the loaded area then return, it will still be there.</p>
<p>Chunks are made up of two types of data: blocks, which are stored as a 3D array of integers, and objects, which are stored as prefab references along with some additional data such as rotation. This means chunks can contain objects with components, for example torches that emit light.</p>
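<p>To illustrate, the chunk data could be organised roughly like this (an illustrative sketch, not the project's actual classes; the chunk dimensions are just an example):</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>public class Chunk
{
    // Blocks are a dense 3D array of integer block IDs (example dimensions)
    public int[,,] blocks = new int[16, 128, 16];
    // Objects are prefab references plus any extra per-object data, such as rotation
    public List<ObjectData> objects = new List<ObjectData>();
}

public class ObjectData
{
    public GameObject prefab;
    public Vector3 localPosition;
    public Quaternion rotation;
}
</code></pre></div></div>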
<p>One of the main challenges of the project was trying to get a stable framerate. For example, the process of generating a chunk can have an impact on performance, so the chunks that need to be generated get added to a queue to prevent multiple chunks being generated in the same update. Another challenge was generating objects which span multiple chunks. A tree that is generated in one chunk might overlap a chunk which has not yet been generated. I dealt with this using the same method that Minecraft uses: the chunks that have had trees generated are surrounded by an unloaded strip of chunks whose terrain exists but which have not yet had trees generated, so a tree near a chunk boundary always has existing chunks to overlap into.</p>
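<p>Returning to the chunk queue, as a sketch it could be as simple as this (assumed names; GenerateChunk is a hypothetical method that builds a chunk's blocks and mesh):</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>private Queue<Vector2Int> chunksToGenerate = new Queue<Vector2Int>();

void Update()
{
    // Generate at most one chunk per update so the cost is spread over several frames
    if (chunksToGenerate.Count > 0)
    {
        Vector2Int coord = chunksToGenerate.Dequeue();
        GenerateChunk(coord);
    }
}
</code></pre></div></div>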
<iframe width="560" height="315" src="https://www.youtube.com/embed/emh_jlQ7LcE?rel=0" frameborder="0" allow="autoplay; encrypted-media" allowfullscreen=""></iframe>Matt StarkIn August 2018 I made a Minecraft clone using Unity. I primarily focused on recreating the world generation. The world generates endlessly in all directions and only loads the chunks within a fixed radius of the player. The chunks are persistent, so if you build something then walk until your building is outside the loaded area then return, it will still be there. Chunks are made up of two types of data: blocks, which are stored as a 3D array of integers, and objects, which are stored as prefab references along with some additional data such as rotation. This means chunks can contain objects with components, for example torches that emit light. One of the main challenges of the project was trying to get a stable framerate. For example, the process of generating a chunk can have an impact on performance, so the chunks that need to be generated get added to a queue to prevent multiple chunks being generated in the same update. Another challenge was generating objects which span multiple chunks. A tree that is generated in one chunk might overlap a chunk which has not yet been generated. I dealt with this using the same method that Minecraft uses, by generating an unloaded strip of chunks that have not yet had trees generated surrounding the chunks that have.Twisty Slidey Cube2018-10-13T19:43:00+00:002018-10-13T19:43:00+00:00/2018/10/13/twisty-slidey-cube<p>Twisty Slidey Cube is a puzzle I made for Android using Unity which combines the mechanics of a Rubik’s Cube and a sliding puzzle. One of the main challenges was finding a way that the player could rotate the view, rotate sections of the puzzle and slide sections of the puzzle using only touch inputs. The program supports puzzles of any size and each puzzle section can have a unique model.</p>
<iframe width="560" height="315" src="https://www.youtube.com/embed/Ehe1i8v09hI?rel=0" frameborder="0" allow="autoplay; encrypted-media" allowfullscreen=""></iframe>Matt StarkTwisty Slidey Cube is a puzzle I made for Android using Unity which combines the mechanics of a Rubik’s Cube and a sliding puzzle. One of the main challenges was finding a way that the player could rotate the view, rotate sections of the puzzle and slide sections of the puzzle using only touch inputs. The program supports puzzles of any size and each puzzle section can have a unique model.