Alphabet Project, Part 4
Step 4 - adding musicality
Before we move out of the purely generative phase of this project, there is one more dataset that needs to be taken into account. A friend responded to my first blog post asking about factoring in overall letter frequency. The original graphs show only letter distribution, with the color of the curve mapping to an overall frequency legend at the bottom of the chart.
It’s a good question, and one I’d thought about without coming to any specific conclusion. I couldn’t find anywhere to fit in those frequencies in the original polyrhythm generation, and I had hoped an answer would make itself known once I had a score to look at.
And, lo and behold, an answer did present itself after working through the various polyrhythm generators (see Part 3 of the series).
Looking at the generated base score
there are just a lot of note events. This is likely to prove exhausting for the singers, and quite probably for the audience as well. My first thought is to use the overall letter frequencies to thin each part, so that the part’s final note density reflects the overall frequency of its letter in the English language.
Let’s take a look at those frequencies:
Here’s my first pass for a verbal explanation of how I want thinning to work:
- if `frequency` is >= 1, round `frequency` to `f` and convert every `f`th note in the part to a rest
- if `frequency` is < 1, round `1 / frequency` to `f` and convert every note at an index where `index % f != 0` to a rest
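Written out in Python for illustration (the project’s actual implementation isn’t shown here, and the function and variable names are mine), the rule might look like this:

```python
def thin(notes, frequency, rest="r"):
    """Apply frequency thinning to a flat list of note events.

    `frequency` is the letter's overall frequency as a percentage
    (e.g. ~12.7 for "e", ~0.15 for "j"). This is a hypothetical
    sketch of the rule described above, not the project's code.
    """
    if frequency >= 1:
        # round frequency to f and convert every f-th note to a rest
        f = round(frequency)
        return [rest if (i + 1) % f == 0 else n for i, n in enumerate(notes)]
    # round 1/frequency to f and rest every note whose index % f != 0
    f = round(1 / frequency)
    return [n if i % f == 0 else rest for i, n in enumerate(notes)]
```

For `e` (roughly 12.7) this rests one note in every thirteen; for `j` (roughly 0.15) it keeps only one note in every seven.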
Let’s see what those results look like:
Ok, for the most part that looks good. The only iffy parts are the values of `{1, 1}` for `k` and `v`. Because those frequency floats are so close to 1, the rounding makes it so that every note for these should sound, despite their having lower frequencies than `e`, where only 12 out of 13 notes sound. I played around with adding extra conditions to the `if` clause to generate better values, but in the end the easiest solution was just to hard-code those values as `{1, 3}`, which I picked because it’s close to `{1, 2}`, the ratio for frequencies just above 1.0, while being a bit thinner, but still a greater frequency than `j`’s `{1, 7}`.
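As a sketch, that fix can be a simple override on top of the computed ratios. The frequency values below are assumed standard English percentages, and the `(sounding, of)` tuple representation is my own reading of the ratios above, not the original code’s:

```python
# Partial table of English letter frequencies, in percent (assumed values).
FREQUENCIES = {"e": 12.7, "j": 0.15, "k": 0.77, "v": 0.98}

def thinning_ratios(frequencies):
    """Map each letter to a (sounding, of) ratio per the rules above,
    then hard-code k and v as described. Hypothetical sketch."""
    ratios = {}
    for letter, freq in frequencies.items():
        if freq >= 1:
            f = round(freq)
            ratios[letter] = (f - 1, f)   # every f-th note rests
        else:
            f = round(1 / freq)
            ratios[letter] = (1, f)       # only every f-th note sounds
    # k and v round to (1, 1), which would leave every note sounding,
    # so pin them to (1, 3) instead.
    ratios["k"] = (1, 3)
    ratios["v"] = (1, 3)
    return ratios
```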
Implementing frequency thinning
Up until now the code has been generating each measure as a string containing code for a LilyPond tuplet.
This has been fine when every part is a constant stream of notes, but it becomes insufficient once we want to start introducing rests. Since we are replacing notes with rests in a repeating modulo pattern, we need to start treating each note as its own entity, and we need to be able to construct a full list of every note in a part so that we can insert rests at the calculated indices. This requires us to be able to handle the notes both as a flat list of events and as groups of measure-length tuplets.
Here, instead of returning a LilyPond string, I’m returning a list of tuples of the form
Now we can loop through that list, along with the correct modulo, keeping a running index, and return a list of processed tuplets, which can then be collected into strings.
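A Python sketch of that loop (the real code’s tuple shape isn’t shown here, so assume each note is a `(pitch, duration)` tuple and each measure is a list of them):

```python
REST = ("r", 8)  # assumed rest marker: pitch "r", eighth-note duration

def thin_part(measures, f):
    """Walk a part's measure-grouped note tuples with a single running
    index, replacing every note whose index % f != 0 with a rest.
    Hypothetical sketch for a letter with frequency < 1."""
    index = 0
    thinned = []
    for measure in measures:
        new_measure = []
        for note in measure:
            new_measure.append(note if index % f == 0 else REST)
            index += 1
        thinned.append(new_measure)
    return thinned
```

Because the index runs across measure boundaries, the rest pattern stays continuous even as the notes remain grouped into measures.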
For now I’m not applying this to the pulse part, because I do want there to be a constant pulse, even as I disrupt the constancy of the other parts.
Now we need a way to bring this back to LilyPond. With some more pattern matching this is easy enough:
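The original conversion isn’t reproduced here, but the idea can be sketched in a few lines of Python: each `(pitch, duration)` tuple renders directly as a LilyPond event (with rests using the pitch `"r"`), and the measure is wrapped in a `\tuplet` block. The ratio argument is a placeholder:

```python
def measure_to_lily(measure, ratio):
    """Render one measure of (pitch, duration) tuples as a LilyPond
    tuplet string; rests are tuples whose pitch is "r".
    Hypothetical sketch, not the project's actual code."""
    body = " ".join(f"{pitch}{duration}" for pitch, duration in measure)
    return f"\\tuplet {ratio} {{ {body} }}"
```

For example, `measure_to_lily([("c'", 8), ("r", 8), ("e'", 8)], "3/2")` produces `\tuplet 3/2 { c'8 r8 e'8 }`.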
And this gives us:
Hey! I’m starting to really like this. We’ve still got the pulse and the fun polyrhythms, but there’s a bit of breath to break up the heavy downbeats and provide more variation in the texture of the parts. If you look closely there’s definitely some cleanup to be done in terms of how the tuplets are drawn, which will require additional LilyPond massaging, but for now, with our rhythms turned from constant pulses into something with a little more variation, it’s time to move on to pitch considerations.
Thanks for reading! If you liked this post, and want to know when the next one is coming out, follow me on Twitter (link below)!