cactus flower logo

Maarten van Beek


TL;DR

To experiment with shaders (programs that run on the GPU and exploit its parallel power), I wrote a shader which renders the cactus flower logo. No premade pictures are used; the entire picture, as well as the animations, is computed in real time. It utilizes signed distance fields (SDFs) as well as the rotational symmetry of the logo to achieve the result. In the post below I elaborate on how it was done. The following is the result; the code is available on Shadertoy.
Render of shader for cactus flower logo

Shaders

A shader is a program which runs on the GPU, commonly to render some visual. Shaders utilize the GPU's capability of running a high number of computations in parallel. This particular fragment shader runs once for each pixel of the visual, outputting the color that pixel should have for the current frame. After the color has been determined for each pixel, the frame is rendered and can be shown. Then the process repeats for the next frame.
These shaders cannot have a dependency on the computations done for another pixel, as this would break the possibility of running them in parallel. Each invocation receives only a limited amount of input. For example, the shader knows the coordinates of the pixel it is executing for, and it knows the time since it started running; however, it doesn't know the output of a computation done for a different pixel. These limitations must be taken into account when writing a shader, and techniques must be applied to achieve the desired result despite them. One such technique is SDFs.

SDFs

Consider a shape you'd like to draw on your canvas. Say you would like to draw a circle. A signed distance field (SDF) is a mathematical function which describes the shortest distance to this shape for any point on your canvas. On the border of your shape this distance is zero; further away from the shape this value grows. If the point is within your shape, the distance will be a negative number. Inigo Quilez has some amazing articles on the topic.
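For a circle, this function is remarkably simple. The following is a minimal GLSL sketch (the function name sdCircle follows a common naming convention and is not taken from the logo shader):

```glsl
// Signed distance from point p to a circle of radius r centered at the origin:
// negative inside the circle, zero on the border, positive outside.
float sdCircle(vec2 p, float r) {
    return length(p) - r;
}
```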
These functions are useful, as they can be evaluated for each pixel independently and in parallel. Perfect for computations done on the GPU. We can now run the following shader for each pixel of the visual. First, we evaluate the SDF at the current pixel. If the value is below zero (within the shape) we color the pixel black. If the value is equal to or above zero, we color the pixel white. Without any knowledge of neighbouring pixels we know exactly what color to give our pixel in order to render a circle on the screen.
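Put together, a minimal Shadertoy-style fragment shader along these lines might look as follows (a sketch of mine, not code from the logo shader; the circle SDF is inlined as length(uv) - 0.5):

```glsl
// Renders a black circle of radius 0.5 on a white background.
void mainImage(out vec4 fragColor, in vec2 fragCoord) {
    // Map pixel coordinates so that (0,0) lies at the center of the screen.
    vec2 uv = (2.0*fragCoord - iResolution.xy) / min(iResolution.x, iResolution.y);
    // SDF of a circle of radius 0.5: negative inside, positive outside.
    float d = length(uv) - 0.5;
    // Inside the shape: black; on or outside the border: white.
    vec3 col = d < 0.0 ? vec3(0.0) : vec3(1.0);
    fragColor = vec4(col, 1.0);
}
```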
Another favourable characteristic of SDFs is that they're easy to combine. If you compute the SDFs for two shapes and take the highest of the two values, you now have an SDF for the space in which these shapes overlap (their intersection); taking the lowest value instead gives you the union of the two shapes. This characteristic can be used to assemble more complex shapes from primitive shapes (such as circles, squares, lines, etc).
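As a sketch, combining two circle SDFs could look like this (the circle positions and radii are arbitrary illustrations, not values from the logo shader):

```glsl
// Signed distance to a circle of radius r centered at c.
float sdCircleAt(vec2 p, vec2 c, float r) {
    return length(p - c) - r;
}

// Intersection: max of the distances is negative only where both are negative.
float sdIntersection(vec2 p) {
    return max(sdCircleAt(p, vec2(-0.2, 0.0), 0.5),
               sdCircleAt(p, vec2( 0.2, 0.0), 0.5));
}

// Union: min of the distances is negative wherever either one is negative.
float sdUnion(vec2 p) {
    return min(sdCircleAt(p, vec2(-0.2, 0.0), 0.5),
               sdCircleAt(p, vec2( 0.2, 0.0), 0.5));
}
```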

This particular shader

The shader as seen above utilizes a few techniques. It uses SDFs to construct the design from primitive shapes.
It uses the fact that the design is the same pattern repeated six times rotationally to reduce the number of computations: its rotational symmetry. To achieve this, it determines in which of the six rotated segments a point lies, and then maps this point to the corresponding point in the non-rotated segment. It then renders for this pixel whatever it renders for that point in the non-rotated segment.
It warps the coordinate space in order to achieve the shockwave/water effect. Within the range where the effect is applied, the coordinates of a point are mapped to a point either closer to or further from the origin. This creates a radial distortion at a given distance from the origin. That distance is a function of the runtime of the shader, which makes the effect move radially outward.

A more in-depth look

The entire code for this shader can be found on Shadertoy. As reading this code without any prior knowledge can be challenging, I will elaborate on some of the techniques used in the shader.

Normalizing the coordinate space

By default a pixel's coordinates depend on the resolution of the render. (0,0) lies in the bottom left corner. The x coordinate is then an integer >= 0 and < horizontal_resolution. The y coordinate is the same, but considering the vertical resolution. It's practical for these coordinates to be resolution independent, in order to render at any given resolution. Therefore the coordinates are remapped so that the origin lies at the center of the screen and the shorter dimension runs from -1 to 1 (the longer dimension extends beyond that range, preserving the aspect ratio). This is all achieved by the following code
#define shortestWidth min(iResolution.x,iResolution.y)

// fragCoord is the (x,y) coordinate of the current pixel relative to the resolution
vec2 uv = (2.*fragCoord-iResolution.xy)/shortestWidth;

Achieving rotational symmetry

length(uv) * cos((mod(atan(uv.y/uv.x)-PI/3.,PI/3.)+PI/3.)+vec2(0., -0.5*PI));
This requires some dissecting. If you consider the coordinates of the current pixel as a vector from the origin, atan(uv.y/uv.x) computes the angle of this vector. We want the segment facing upward to be our default segment, as in this sector the x and y dimensions are running in their usual direction. Taking the modulo PI/3. of this angle makes sure that whenever we exceed 1/6th of the full rotation, we wrap back to the initial segment. The cos(... + vec2(0., -0.5*PI)) evaluates the cosine of the angle and of the angle minus half pi; since cos(θ - π/2) = sin(θ), this yields (cos θ, sin θ), converting the angle back to coordinates. We have lost the length information of the original vector, so these coordinates will now always be at exactly distance 1 from the origin. Therefore we multiply the result by length(uv), the length of the original vector. The result is that any point will be mapped to this 1/6th segment of the canvas, pointing upwards. The consequence is that any rendered pixel is repeated 6 times rotationally.
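The same mapping can be written out step by step. This is my own expanded sketch of the one-liner above, not code from the original shader (it also uses the two-argument atan(y, x), which handles the x = 0 case and preserves the quadrant):

```glsl
const float PI = 3.14159265;

// Map a point into the upward-facing 1/6th segment, so that anything
// rendered there is repeated six times around the origin.
vec2 foldIntoSegment(vec2 uv) {
    float angle = atan(uv.y, uv.x);            // angle of the vector from the origin
    angle = mod(angle - PI/3., PI/3.) + PI/3.; // wrap into the upward 60-degree segment
    vec2 dir = vec2(cos(angle), sin(angle));   // unit vector at that angle
    return length(uv) * dir;                   // restore the original distance
}
```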

Shockwave / water effect

float width = 0.2;
float strength = .2;
float progress = min(max(length(uv)-startVectorLength, 0.), width)/width; 
float deformationFactor = 1. + strength*sin(PI*progress+PI); 
return uv*deformationFactor;
This is written less compactly, but isn't more complex than anything that has come before. width and strength are constants that tune the effect to look good. The progress variable takes the distance from the origin to the current point, and determines whether it lies within the effect by considering startVectorLength (the distance at which the effect begins) and width (the distance over which the effect lasts). If it does, it determines the deformation strength. This strength is small at the start and end distances of the effect, and peaks in the middle. It is then applied to the coordinates of the current pixel, so that the shader instead renders a point that lies further out or further in.
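Putting the pieces together, the overall structure of the shader roughly follows this outline. This is a simplified sketch of my own, not the exact Shadertoy code; warpShockwave, foldIntoSegment and sdLogoSegment are hypothetical names standing in for the techniques described above and for the SDF of one logo segment:

```glsl
void mainImage(out vec4 fragColor, in vec2 fragCoord) {
    // Normalize coordinates: origin at the center, shorter axis from -1 to 1.
    vec2 uv = (2.*fragCoord - iResolution.xy) / min(iResolution.x, iResolution.y);

    uv = warpShockwave(uv, iTime);  // radial distortion moving outward over time
    uv = foldIntoSegment(uv);       // exploit the 6-fold rotational symmetry

    float d = sdLogoSegment(uv);    // SDF of one segment of the logo
    vec3 col = d < 0. ? vec3(0.) : vec3(1.);
    fragColor = vec4(col, 1.);
}
```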

Conclusion

It requires some unusual thinking to get a shader right. Most programs written for a CPU iterate over the given space, maintaining state as they go. Writing a shader is like writing a loop in which each iteration is executed instantaneously, and no state can be maintained. While this might be challenging initially, the result typically performs far better than an iterative approach would. Shadertoy is full of creative ways people have used shaders to create beautiful visuals. This, this and this are great examples of people using shaders to get great real-time results; give them a look!