In computer graphics (and other signal-processing disciplines), aliasing is an artifact where patterns appear in a sampled signal that don’t exist in the original, because the spacing between samples is too wide to capture the original pattern.
In our field, this mainly happens when sampling repeating textures on faraway objects: suddenly the step between two adjacent pixels on your screen covers too much of the texture to capture its true nature, and a different pattern emerges [4].
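The classic way to see this is in one dimension: sample a sine wave at a rate below twice its frequency, and the samples become indistinguishable from a much lower-frequency wave. A minimal sketch (the specific frequencies are just illustrative choices):

```python
import numpy as np

fs = 10.0      # sample rate: 10 samples per unit of time
f_true = 9.0   # signal frequency, well above the Nyquist limit (fs / 2 = 5)

t = np.arange(0, 2, 1 / fs)
samples = np.sin(2 * np.pi * f_true * t)

# The samples exactly match a 1-cycle-per-unit wave: the alias frequency
# is f_true - fs = -1, a pattern that doesn't exist in the original signal.
alias = np.sin(2 * np.pi * (f_true - fs) * t)
```

Here `samples` and `alias` are identical point-for-point, which is exactly the "different pattern emerges" problem: the sampling grid cannot tell the two signals apart.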

Generally, to fix this issue, we can use a variety of techniques, from mipmaps to MSAA (multisample anti-aliasing). Unfortunately I have tried neither, but let’s explore them conceptually.
Multisampling addresses the issue by simply sampling more. If you don’t get the correct signal because your samples are too far apart, take more samples and average the results. This would probably work, but at the cost of blurring our image (and a loss of performance), which goes against the spirit of dithering.
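The "sample more and average" idea can be sketched with a toy 1-D renderer. Everything here is a simplified illustration, not real MSAA hardware behavior: `texture` is a hypothetical hard-edged stripe pattern, and `render` takes evenly spaced sub-samples across each pixel’s footprint:

```python
import numpy as np

def texture(u):
    """A harsh repeating 'texture': 1 on even-numbered stripes, 0 on odd ones."""
    return np.floor(u).astype(int) % 2

def render(width, stride, subsamples=1):
    """Sample `width` pixels spaced `stride` texture units apart,
    averaging `subsamples` evenly spaced sub-samples per pixel."""
    px = np.arange(width) * stride
    # sub-sample offsets spread across the pixel's footprint of `stride` units
    offsets = (np.arange(subsamples) + 0.5) / subsamples * stride
    acc = np.zeros(width)
    for off in offsets:
        acc += texture(px + off)
    return acc / subsamples

# One sample per pixel, with the sample step matching the stripe period:
# every sample lands on the same stripe parity, so the pattern collapses
# into a flat (and wrong) image -- aliasing.
one = render(64, stride=2.0, subsamples=1)

# Four samples per pixel: each pixel averages out to the texture's true
# mean of 0.5 -- correct on average, but the stripes are blurred into gray.
many = render(64, stride=2.0, subsamples=4)
```

This shows both halves of the trade-off from the paragraph above: the single-sample image invents a false flat pattern, and the multisampled image is right on average but blurred.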
Mipmaps are like the opposite of multisampling. Instead of blurring the samples, how about blurring the signal itself? Rather than inventing patterns that aren’t there, the output will hopefully capture the general shape. This could work, although the mipmap would have to be applied to the 3D dithering texture itself, and the output would certainly not look like dots either.
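The "blur the signal itself" idea is, at its core, a chain of pre-filtered, progressively smaller copies of the texture. A minimal 1-D sketch of building such a chain with a 2-tap box filter (real GPU mipmapping works on 2-D textures, but the principle is the same):

```python
import numpy as np

def build_mips(tex):
    """Build a 1-D mip chain: each level halves the resolution by
    averaging adjacent pairs of texels (a 2-tap box filter)."""
    levels = [tex]
    while len(levels[-1]) > 1:
        prev = levels[-1]
        levels.append((prev[0::2] + prev[1::2]) / 2)
    return levels

stripes = np.tile([0.0, 1.0], 8)  # a 16-texel stripe texture
mips = build_mips(stripes)

# The very first downsampled level averages each 0/1 pair to 0.5:
# the stripe pattern is gone, replaced by its mean. Distant sampling
# then reads this pre-blurred level and cannot invent a false pattern.
```

Note the same consequence the paragraph above points out: the blur is baked into the texture, so a mipmapped dither pattern stops looking like dots well before it stops aliasing.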
<aside> ☝
At the end of the day, both of these solutions could be interesting, but neither fixes the chief problem: your dots should not be so small that they are at the mercy of aliasing. Our goal is to make these dots a consistent size, and aliasing is a sign of failure long before we get to a solution.
The chief solution: don’t make your dots too small.
If you don’t, you might still experience aliasing issues at sharper angles, which we discuss here.
</aside>