Abstract: MelGen is a collection of music-generation algorithms designed after an analysis of both the technical and emotive (heuristic) qualities of the music-generative process. They follow a recursive model that considers both micro- and macro-compositional elements. They are implemented in C and deliver the generated music in MIDI format.  One of the algorithms from this collection can be perused and downloaded from my GitHub at the following link: . As its categorization indicates, it outputs music with the following qualities: C minor (based), melancholic, 4/4, mid-tempo.

Classification: Algorithm Design, C Programming


Problem structure

The question I set out to address was the following: can I attempt to ‘humanize’ computer-generated music? 

The plan of attack I devised for this problem comprised the following individual procedures: 

  1. Model contrapuntal rules for melody construction as a set of arithmetic rules for a sequence of integers;
  2. Compile sets of harmonic relationships and categorize by emotive qualities;
  3. Generate a collection of rhythmic-generation procedures, each for a different tempo.

With the work derived from these processes I was able to build a collection of algorithms, categorized and suited to different music-generation purposes.

They all follow a very similar design: they differ from one another only in the addition or omission of a few specific routines, or in the values of some variables. One of the algorithms is illustrated in detail in the following section.

Algorithm Design

The presented algorithm is classified as “Cminor, 4/4, melancholic, mid-tempo”.

Rhythms are stored in two-dimensional arrays. Each row represents a musical bar and is quantized in this example into 16 subdivisions (the algorithm is in 4/4 time). The total dimension of the array is thus n x m, with n = total bars and m = 16.

Rhythmic values are picked from an array containing all permitted values: each one has been assigned a different probability of being picked. In this specific case, the array contains the values for 16th, 8th, quarter, and whole notes. Respectively, they have the following probabilities of being chosen: 1/10, 3/10, 1/2, and 1/10. The procedure runs until the whole matrix is filled with rhythmic values.
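As an illustration, here is a minimal C sketch of that fill procedure. The bar count, the names, and the onset/duration encoding in the matrix are my own illustrative assumptions, not the exact implementation from the repository:

```c
#include <stdlib.h>

#define BARS   8    /* n: total bars (assumed value for the sketch) */
#define SUBDIV 16   /* m: 16th-note subdivisions per 4/4 bar        */

/* Durations in subdivisions: 16th, 8th, quarter, whole. */
static const int durations[]  = { 1, 2, 4, 16 };
/* Matching probabilities 1/10, 3/10, 1/2, 1/10, as cumulative counts out of 10. */
static const int cumulative[] = { 1, 4, 9, 10 };

/* Weighted pick of one rhythmic value. */
static int pick_duration(void)
{
    int r = rand() % 10;              /* uniform in 0..9 */
    for (int i = 0; i < 4; i++)
        if (r < cumulative[i])
            return durations[i];
    return durations[3];              /* unreachable */
}

/* Fill the whole n x 16 matrix: a note of duration d occupies d slots. */
static void fill_rhythm(int rhythm[BARS][SUBDIV])
{
    for (int bar = 0; bar < BARS; bar++) {
        int pos = 0;
        while (pos < SUBDIV) {
            int d = pick_duration();
            if (pos + d > SUBDIV)     /* reject picks that overflow the bar */
                continue;
            rhythm[bar][pos] = d;     /* onset slot stores the duration */
            for (int k = 1; k < d; k++)
                rhythm[bar][pos + k] = 0;   /* occupied, no onset */
            pos += d;
        }
    }
}
```

Storing a note's duration at its onset slot (with zeros in the occupied slots) is just one possible encoding; rejecting picks that would overflow the bar keeps every row summing to exactly 16 subdivisions.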

Melody generation starts with the initialization of a 15-note musical scale. Random elements from this scale are chosen to form a sequence and made to comply with specific rules, with regard to both micro- and macro-compositional considerations. Rules are codified as simple arithmetic relationships among the pitches; if a relationship is not met, a new random pick is chosen until all relationships are met for all the elements.
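The generate-and-test loop can be sketched as follows. The scale layout in MIDI pitches and the two rules shown are simplified placeholders of my own for the full rule set:

```c
#include <stdlib.h>

#define SCALE_LEN  15
#define MELODY_LEN 32

/* C natural minor over roughly two octaves, as MIDI pitches (assumed layout). */
static const int scale[SCALE_LEN] = {
    48, 50, 51, 53, 55, 56, 58,
    60, 62, 63, 65, 67, 68, 70, 72
};

/* Placeholder for the full rule set: returns 1 if appending `candidate`
   at position `len` keeps every arithmetic rule satisfied. */
static int rules_ok(const int *melody, int len, int candidate)
{
    if (len == 0)
        return candidate == scale[7];      /* start with Do (middle C) */
    if (candidate == melody[len - 1])
        return 0;                          /* no immediate repeat */
    if (abs(candidate - melody[len - 1]) > 8)
        return 0;                          /* no leap wider than 8 semitones */
    return 1;
}

/* Generate-and-test: re-pick random scale notes until the rules pass. */
static void generate_melody(int *melody, int len)
{
    for (int i = 0; i < len; i++) {
        int pitch;
        do {
            pitch = scale[rand() % SCALE_LEN];
        } while (!rules_ok(melody, i, pitch));
        melody[i] = pitch;
    }
}
```

Each rejected pick simply triggers a new random draw, so the loop terminates as long as every partial melody has at least one legal continuation.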

Here are some of these rules:

  • start with Do and follow with a jump (do not repeat the same pitch);
  • do not allow three notes in sequence with the same pitch;
  • do not allow the notes of a ‘medium-length sequence’ (defined as a sequence of 4 notes) to stay within a pitch range of 5 semitones (this can be seen as follows: do not remain in the same narrow pitch range for too long);
  • do not allow a leap in one direction that is greater than 3 semitones to be followed by another leap in the same direction (this can be seen as ‘large leaps in pitch must be followed by a note in the opposite direction’);
  • do not allow ‘too-wide gaps’ in medium-length sequences (the pitch range of any 4 consecutive notes must not exceed 10 semitones);
  • do not allow ‘too-wide single leaps’ (greater than 8 semitones).

…and other more obscure rules that allow for a very musical melody construction (the complete list can be seen by perusing the code on GitHub). Some of the listed rules have been treated as dependent on the ‘emotive’ classification: for example, ‘happy’ melodies allow for more frequent leaps, different ranges, and a different base scale.
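To illustrate how such rules reduce to simple arithmetic, here are three of the listed rules written as C predicates. The names and exact threshold readings are my own illustrative interpretation of the rules above:

```c
#include <stdlib.h>

/* Rule: no three consecutive notes with the same pitch.
   `m` is the melody so far, `i` the index of the newest note. */
static int rule_no_triple(const int *m, int i)
{
    if (i < 2)
        return 1;
    return !(m[i] == m[i - 1] && m[i - 1] == m[i - 2]);
}

/* Rule: a leap greater than 3 semitones must be followed by
   motion in the opposite direction. */
static int rule_leap_recovery(const int *m, int i)
{
    if (i < 2)
        return 1;
    int prev = m[i - 1] - m[i - 2];
    int cur  = m[i] - m[i - 1];
    if (abs(prev) > 3 && cur != 0 && (prev > 0) == (cur > 0))
        return 0;
    return 1;
}

/* Rule: any 4-note window must span more than 5 semitones
   (do not linger in a narrow pitch range). */
static int rule_window_range(const int *m, int i)
{
    if (i < 3)
        return 1;
    int lo = m[i], hi = m[i];
    for (int k = i - 3; k < i; k++) {
        if (m[k] < lo) lo = m[k];
        if (m[k] > hi) hi = m[k];
    }
    return hi - lo > 5;
}
```

Each predicate only inspects the tail of the sequence built so far, which is what makes the re-pick loop cheap: a failed candidate is discarded without touching earlier notes.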


Accompanying harmonies are chosen conditioned on the first pitch of each bar - with different choices possible for each pick. The set of all possible harmonies for this classification was chosen following a general heuristic analysis of what ‘melancholic’ harmonies should be.
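A hypothetical sketch of this conditional choice in C (the chord table shown is illustrative, not the actual heuristic set from the code, and the real algorithm picks among several candidates per pitch):

```c
/* A chord as up to 4 MIDI pitches (hypothetical representation). */
typedef struct { int notes[4]; int count; } Chord;

/* Pick a harmony from the first-of-bar melody pitch, keyed by its
   pitch class relative to C (illustrative choices only). */
static Chord choose_harmony(int first_pitch)
{
    switch (first_pitch % 12) {
    case 0:  /* C  -> C minor  */ return (Chord){ {48, 51, 55}, 3 };
    case 3:  /* Eb -> Eb major */ return (Chord){ {51, 55, 58}, 3 };
    case 7:  /* G  -> G minor  */ return (Chord){ {55, 58, 62}, 3 };
    case 8:  /* Ab -> Ab major */ return (Chord){ {56, 60, 63}, 3 };
    default: /* fall back to the tonic chord */
             return (Chord){ {48, 51, 55}, 3 };
    }
}
```

In the actual algorithm each case would hold several candidate chords and draw one at random, so the same melody can be re-harmonized differently on each run.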

I’ve used the MIDI specification to output melody and accompanying harmonies as .mid files.
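For reference, here is a minimal sketch of writing a melody as a format-0 .mid file, following the Standard MIDI File layout (one track, 480 ticks per quarter note; the fixed velocity and uniform quarter-note durations are simplifying assumptions of this sketch):

```c
#include <stdio.h>

/* MIDI chunk fields are big-endian. */
static void w32(FILE *f, unsigned v) { for (int s = 24; s >= 0; s -= 8) fputc((v >> s) & 0xff, f); }
static void w16(FILE *f, unsigned v) { fputc((v >> 8) & 0xff, f); fputc(v & 0xff, f); }

/* Variable-length delta time: 7 bits per byte, MSB set on all but the last. */
static void wvar(FILE *f, unsigned v)
{
    unsigned char buf[4];
    int n = 0;
    buf[n++] = v & 0x7f;
    while (v >>= 7)
        buf[n++] = (v & 0x7f) | 0x80;
    while (n--)
        fputc(buf[n], f);
}

static void write_midi(const char *path, const int *melody, int len)
{
    FILE *f = fopen(path, "wb");
    if (!f) return;
    /* Header chunk: format 0, one track, 480 ticks per quarter note. */
    fwrite("MThd", 1, 4, f); w32(f, 6); w16(f, 0); w16(f, 1); w16(f, 480);
    /* Track chunk: each note takes 9 bytes (on + off), plus end-of-track. */
    fwrite("MTrk", 1, 4, f); w32(f, 9u * (unsigned)len + 4);
    for (int i = 0; i < len; i++) {
        wvar(f, 0);   fputc(0x90, f); fputc(melody[i], f); fputc(100, f); /* note on  */
        wvar(f, 480); fputc(0x80, f); fputc(melody[i], f); fputc(0, f);   /* note off */
    }
    wvar(f, 0); fputc(0xFF, f); fputc(0x2F, f); fputc(0x00, f);           /* end of track */
    fclose(f);
}
```

A real export would also emit tempo and program-change meta events and interleave the harmony notes on a second channel; the chunk layout above is the part dictated by the specification.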

Below is a video containing an export example: the music it generated was used for the introductory one-minute clip at the top of this page. To generate as many melodies as you like, you can compile and run the GitHub file with any C compiler.