Contour Grammar vs. Fractal Cutting
Finally, having completed a basic implementation of Contour Grammar, I am getting to hear what the idea sounds like in musical form. Indeed, some very interesting effects have emerged thanks to the functional phrase language, as well as the use of coherent random sources (which I will discuss at some point in another post). However, even this idea lacks the coherence of Fraccut, and it bothers me enormously that an idea as elaborate as Contour Grammar can't beat a relatively simple implementation of random fractal cutting, a rudimentary mathematical process.
Of course, it's obvious why Fraccut still holds the record for best melodies. Since the cutting engine essentially starts with the chord progression as a seed, coherence is almost guaranteed. It's a powerful effect. Yet the reality is that Fraccut is hardly aware of what it is doing. In a sense, the coherence generated by Fraccut is nothing more than a cheap trick stemming from a clever choice of seed. Nonetheless, it undeniably works better than anything else. In the end, I am concerned only with the audible quality of the results, not with the route taken to achieve them.
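For readers unfamiliar with the technique, the seeding idea can be sketched in a few lines. This is not the actual Fraccut engine; every name, parameter, and musical choice below is my own invention, and the "cut" is reduced to its simplest possible form: each note is recursively split into two shorter notes, one of which drifts by a small random interval. Because the recursion starts from the chord roots themselves, the output melody inherits the progression's contour for free, which is exactly the cheap-but-effective coherence described above.

```python
import random

def fractal_cut(note, rng):
    """Recursively split a (pitch, duration) pair into two shorter
    notes; the second pitch drifts by a small random interval."""
    pitch, dur = note
    # Stop cutting at sixteenth-note length, or randomly, so that
    # the melody mixes long and short notes.
    if dur <= 0.25 or rng.random() < 0.3:
        return [note]
    offset = rng.choice([-2, -1, 0, 1, 2])  # small random step in semitones
    left = (pitch, dur / 2)
    right = (pitch + offset, dur / 2)
    return fractal_cut(left, rng) + fractal_cut(right, rng)

def melody_from_progression(chord_roots, rng=None):
    """Seed the cut with one whole note per chord root, so the
    result stays anchored to the underlying progression."""
    rng = rng or random.Random(0)
    melody = []
    for root in chord_roots:
        melody += fractal_cut((root, 4.0), rng)
    return melody

# A C-Am-F-G progression as MIDI root pitches.
print(melody_from_progression([60, 57, 53, 55]))
```

Note that the cutting never consults the chord progression after the seed is planted; all of the harmonic awareness lives in that one initial choice, which is precisely why the coherence feels like a trick.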
That being said, the big question now is: how can Contour Grammar pay closer attention to context in order to achieve greater contextual coherence? It would be nice to somehow combine the internal consistency of Contour Grammar with the external, contextual coherence of Fraccut.
Grammatical fractal cutting, anyone?