
Computers have replaced humans in many mundane activities. Can they also replace humans when it comes to creativity?

“Certainly not!” is the answer people usually give. Music is something emotional; how can a computer compose? Music is emotional on many levels — composition, performance, perception. The level most susceptible to formalization is composition.
Composers usually follow many rules in order to produce pleasurable music — harmony, rhythm, chord progressions, beats, sound effects, etc. If computers are taught these rules, they can generate melodies that are potentially pleasant to listen to.
It won’t come “from the heart”, but even music arising from the feelings and thoughts of the composer follows the above rules. In a way, it is shaped by a musical framework. And it is that framework that a computer can understand.
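As a toy illustration of such a rule-based framework, a program could pick notes from a scale while limiting melodic leaps so the line stays singable. This is a hypothetical sketch, not code from any of the projects mentioned below; the names and parameters are made up for illustration:

```python
import random

# Notes of the C-major scale, the "rule" our melody must stay within.
C_MAJOR = ["C", "D", "E", "F", "G", "A", "B"]

def generate_melody(length=8, max_leap=2, seed=None):
    """Random walk over scale degrees; max_leap caps the interval
    between consecutive notes so the melody does not jump wildly."""
    rng = random.Random(seed)
    degree = rng.randrange(len(C_MAJOR))
    melody = [C_MAJOR[degree]]
    for _ in range(length - 1):
        step = rng.randint(-max_leap, max_leap)
        # Clamp to the scale's range instead of wrapping octaves.
        degree = min(max(degree + step, 0), len(C_MAJOR) - 1)
        melody.append(C_MAJOR[degree])
    return melody

print(generate_melody(seed=42))
```

Even a trivially simple rule set like this already produces output that sounds "musical" rather than random; real systems layer on harmony, rhythm, and form in the same spirit.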
Obviously, a computer cannot be better than Mozart.

Not that it’s impossible for an algorithm to generate Eine Kleine Nachtmusik — it is possible, but it would generate many far worse pieces along the way, without knowing they are worse. Even with self-learning algorithms, the process of creativity combined with gradual improvement can hardly be mimicked.
But not all human composers are Mozart either — music is sometimes just a simple, non-memorable background tune. Take elevator or stock music, for example — the point there is to provide a pleasant alternative to silence.
Pop music, on the other hand, aims to be catchy, but it often does that by using beats, lyrics and vocals, rather than the melody itself. In fact, many composers are already using computer software to assist them in writing their pieces.
Algorithmic music composition has been attempted even by Mozart himself, and computers have been used for the task since the sixties, including recent efforts such as an online service, the research projects Iamus and WolframTones, and the SoundHelix programmable framework. As expected, the task turned out to be very hard, and the results are rarely satisfactory.
What might an endless “musical well” mean? The end of human-composed stock music? Composers turned into arrangers of what the computer has already composed?

Maybe there will be live performances of algorithmic music. Imagine that the algorithms improve sufficiently and, together with synthesized singers (an actual project at Microsoft), provide an alternative to the multi-billion-dollar music industry and its copyrighted music.
But let’s take a step back from the science-fiction landscape where computer-drawn holograms perform computer-generated music on synthesized instruments and voices. For all of that to work, the music must appeal to humans. It should trigger emotions and affection. And, as neuroscientists have discovered, it does so by activating centers in our brains.
All the rules of harmony and rhythm are aimed at activating our brains, and we must agree that a computer can follow them. It can’t play emotionally or perceive the music, but it can compose it for humans to play and perceive.
People are extremely complex computers, and the more we learn about our internal functioning, the more aspects of it we can automate. Music is just one example.
