How does the Commodore 64 demo “A Mind is Born” generate an excellent music video using only 256 bytes of code?


# [Link to Video of Demo](https://youtu.be/sWblpsLZ-O8)

No, I don’t mean megabytes. The entire program is 256 **bytes**, roughly 0.00000026 gigabytes, and it generates both the video and the audio. Call of Duty: Warzone, before they optimized the install size, was about **1 billion times** the install size of A Mind is Born. This seems completely impossible to me.

There’s an explanation of the code in the video description, but it’s the *opposite* of ELI5. I was hoping someone here could write up something a bit more accessible.

In: Technology

Because that program is just establishing an algorithm to generate patterns.

The base COD game engine is only a small part of the overall game file size. The bulk of the game file is made up of texture images used to provide the detailed graphics. The more detailed and realistic looking the surfaces are, the higher resolution (and therefore larger file size) the images used to cover the model frames are. The game isn’t generating the maps and images progressively, so it has to already contain all the maps, character models, items, and the corresponding images to place on them.

Take what I write with a grain of salt, as I am far from a computer expert, but I suppose the code is extremely optimized and compressed as much as possible, and the program seems to reuse patterns by running the same code several times, thus saving the space that would otherwise go to writing new code for other parts of the program.

There is one hell of a cool shader that is all generated by code, and it looks like terrain with incredible detail. Can’t remember the name though, but it was 2 MB or less if I’m not mistaken!

Based on the description, it uses a lot of the Commodore’s hardware abilities.

The sound is generated with a sound chip that only requires a couple of bytes to produce a note.

The sequence for the melody is randomly generated with a hardware register: basically, you only need to send one command to get the next note, so the melody isn’t stored anywhere.
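To make that concrete, here is a minimal Python sketch of pulling notes out of a linear-feedback shift register (LFSR), the kind of circuit behind a hardware noise register. The tap positions and scale here are a textbook choice for illustration, not the SID chip’s actual wiring:

```python
# Sketch: sample a Fibonacci LFSR for pseudo-random melody notes.
# Taps (8, 6, 5, 4) give a maximal-length 8-bit sequence; the real
# SID noise register uses its own (different) feedback taps.

SCALE = [0, 2, 3, 5, 7, 8, 10, 12]  # semitone offsets in a minor scale

def lfsr_step(state, taps=(8, 6, 5, 4), width=8):
    """Advance the LFSR one step: XOR the tapped bits into the input."""
    bit = 0
    for t in taps:
        bit ^= (state >> (t - 1)) & 1
    return ((state << 1) | bit) & ((1 << width) - 1)

state = 0xB5  # any nonzero seed works
melody = []
for _ in range(16):
    state = lfsr_step(state)
    melody.append(SCALE[state & 0x07])  # low 3 bits pick a scale degree

print(melody)
```

The point is that no note list is stored anywhere: the same seed always regenerates the same “melody”, so the data cost is just the seed and the stepping code.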

And the video pattern is mostly a cellular automaton, where the notes of the melody define a few initial values and everything else is computed from them row by row with a very simple algorithm.
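Here is a rough Python sketch of that row-by-row idea, using the classic elementary automaton Rule 90 as a stand-in (the demo’s actual rule is different and is seeded from the music):

```python
# Sketch: a 1-D cellular automaton. Each new row is computed from the
# row above by looking at every cell's left/center/right neighbors and
# consulting a fixed rule table. Rule 90 is used here for illustration.

def step(row, rule=90):
    n = len(row)
    new = []
    for i in range(n):
        left, center, right = row[(i - 1) % n], row[i], row[(i + 1) % n]
        idx = (left << 2) | (center << 1) | right  # 3 neighbor bits -> 0..7
        new.append((rule >> idx) & 1)              # rule byte is the table
    return new

row = [0] * 31
row[15] = 1  # a single seed cell, standing in for the melody-derived seed
for _ in range(5):
    print("".join(".#"[c] for c in row))
    row = step(row)
```

From one seed cell and one byte of rule, a whole triangle of structure grows downward, which is the same trick as generating a screen full of pattern from a few initial values.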

Overall, it’s a really technical thing that manages first to squeeze as much random data out of the hardware as possible with only a few commands, and then to use a simple algorithm to “grow” everything else from it.

They can utilize the built-in sound synthesizer in the computer to generate the tones. From there the track is just a series of nested loops, with some variables external to the loops and conditions to control when additional sound layers get mixed in. The visuals only take a couple of words (32 bits) worth of data to project individually, and they seem to mostly repeat aside from some effects applied to the overall view. So the data running the actual application could be just a couple dozen lines of code, where the music is stored in one 2D array and the graphics in a 3D array, with pretty basic info about what to display and how to affect that display with some sort of bitmask.

It’s all algorithms. You don’t code in the whole sequence of sights and sounds; you write a program that will produce them. You can hear how long the looped beat is, so you need just a few bytes of program looping to keep sending those values to the audio processor. Later more is added to the loops, so you need a few more bytes for counters to tell it when to start introducing the other audio. The video is fairly random signals in random geometric shapes sent to the video processor, and it’s driven by the same loop as the audio.
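A toy Python version of that structure, with the layer names and frame thresholds invented purely for illustration (the demo’s real scheme lives in a few bytes of 6502 code):

```python
# Sketch: one master loop drives playback, and simple counters decide
# when extra layers get introduced. All names and numbers here are
# made up to show the shape of the idea.

def layers_at(frame):
    layers = ["beat"]                 # the base loop, always playing
    if frame >= 64:
        layers.append("melody")       # a counter lets this in later
    if frame >= 128 and frame % 2 == 0:
        layers.append("arpeggio")     # later still, on even frames only
    return layers

for frame in (0, 64, 130):
    print(frame, layers_at(frame))
```

The whole arrangement is then just a handful of comparisons against a frame counter, which is why “when each layer comes in” costs bytes instead of kilobytes.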

The old game Ballblazer did something similar for the audio. It’s a really good jazzy soundtrack, but it’s all generated randomly (but with a set of rules) so it takes little memory and CPU to accomplish. [Here’s the audio](https://youtu.be/sqMs2ybpVEE) showing exactly what’s happening.

Note that at least with the Atari you don’t have to keep sending values to the sound chip to keep getting audio. I think the C64 is similar. You send a value for a certain tone with various properties to the sound chip, and it plays that tone until you tell it to do something else. So if you’re not changing a tone more than, say, eight times a second, you don’t need to give the sound chip instructions more than eight times a second, which is really slow for a computer running at one million cycles a second.
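Some quick arithmetic with the comment’s own numbers shows how generous that budget is:

```python
# Using the figures from the comment above: a ~1 MHz CPU updating a
# latched sound chip eight times per second. The chip holds its tone
# between writes, so the CPU is free the rest of the time.

CPU_HZ = 1_000_000
UPDATES_PER_SEC = 8

cycles_between_updates = CPU_HZ // UPDATES_PER_SEC
print(cycles_between_updates)  # -> 125000 cycles free between writes
```

So even a very busy soundtrack leaves the CPU over a hundred thousand cycles per update to spend on the visuals.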