Call for artists, programmers and visionary people: Go to music

From what I remember, there’s a built-in library in Python that plays a beep at various notes. Obviously for a musician that’d be like having your intestines dragged out by an octopus, but there are sure to be better external libraries, or maybe in Ruby, Perl etc.
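For what it’s worth, the beep route doesn’t strictly need anything external. The built-in module I suspect is meant here is winsound.Beep, which is Windows-only; as a portable alternative, here’s a minimal sketch that synthesizes a few notes into a WAV file using only the Python standard library (the note numbers and durations are just examples):

```python
# Minimal sketch: synthesize notes as a WAV using only the stdlib
# (winsound.Beep is Windows-only; this works anywhere).
import math
import struct
import wave

SAMPLE_RATE = 44100

def note_samples(freq_hz, duration_s, volume=0.5):
    """Yield 16-bit PCM samples of a sine tone at the given frequency."""
    n = int(SAMPLE_RATE * duration_s)
    for i in range(n):
        yield int(volume * 32767 * math.sin(2 * math.pi * freq_hz * i / SAMPLE_RATE))

def midi_to_hz(note):
    """Equal temperament: MIDI note 69 is A440."""
    return 440.0 * 2 ** ((note - 69) / 12)

def write_melody(path, freqs, note_len=0.3):
    frames = bytearray()
    for f in freqs:
        for s in note_samples(f, note_len):
            frames += struct.pack("<h", s)
    with wave.open(path, "wb") as w:
        w.setnchannels(1)           # mono
        w.setsampwidth(2)           # 16-bit samples
        w.setframerate(SAMPLE_RATE)
        w.writeframes(bytes(frames))

# Five notes of a C major scale.
write_melody("scale.wav", [midi_to_hz(n) for n in (60, 62, 64, 65, 67)])
```

Any media player (or a DAW) can then play the resulting `scale.wav`.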

1 Like

I agree!:laughing:
With MIDI one can choose a decent imitation of an instrument. With a good sequencer or samples, the results could be very nice.

Today I discovered MidicaPL that could be a nice solution. Must dig deeper!

1 Like

Wow, I’m so excited someone brought up Frank Zappa…

This discussion sort of reminds me of composers like Ryuichi Sakamoto or Nik Bärtsch (maybe even Steve Reich)…

Sakamoto’s discography is expansive, from making ring tones for cell phones to scoring movies. I’m not an authority on his music, but I remember reading he used graph theory and mathematics to come up with chord structures and unique sounds. I’m thinking of his albums Vrioon and Insen. I’m also thinking you would find some ease in separating percussion out of a section of the board, to lighten the load of tones you may have. https://www.youtube.com/watch?v=nCk79z0Uzfc

Nik Bärtsch, I don’t even know where to begin with this guy, but his music, much like Reich’s, is so simplistic in nature; if you can give the redundancies time to grow, they develop into a complex machine, ebbing and flowing, where before you know it you’re lost in a soundscape.
Bärtsch: https://www.youtube.com/watch?v=-63k4qLOvYM
Reich: https://www.youtube.com/watch?v=YgX85tZf1ts

5 Likes

Ryuichi Sakamoto

my man

1 Like

This is a pretty interesting idea. I might try putting something simple together and see if I can get a halfway listenable MIDI track generated from an SGF. I don’t know if anybody here is a fan of Brian Eno, but he’s done some really interesting work with generative music.

2 Likes

These were posts I made a long time ago, which recorded my mood at that time. Thank you also for your response. The music is good.

That’s amazing! :grinning:
Please, let me know the process!

Since the image of a Go game is composed of individual stones, the rhythm of the corresponding music should be made of short notes.

3 Likes

The “hands talk”

2 Likes

I finally got around to trying this out. I started with the simplest thing possible: notes from either a chord or a scale, selected based on the position of each individual move. Since the board is symmetric, I used positions relative to the nearest corner, so that it doesn’t matter which quadrant a stone is in. The result is that tengen is the highest possible note, and 1-1 would be the lowest.
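For anyone wanting to try the same mapping, here is a reconstruction of the idea as described (my own hypothetical sketch, not the actual code): fold each coordinate into one quadrant so symmetry doesn’t matter, then index into a scale by distance from the corner.

```python
# Hypothetical reconstruction of the mapping described above:
# fold moves into the lower-left quadrant, then pick a scale note
# by distance from the corner (1-1 lowest, tengen highest).
C_MAJOR = [60, 62, 64, 65, 67, 69, 71]  # MIDI numbers, one octave of C major

def move_to_pitch(col, row, size=19):
    # Fold coordinates into one quadrant (0..9 on a 19x19 board).
    x = min(col, size - 1 - col)
    y = min(row, size - 1 - row)
    # Distance from the corner: 1-1 point -> 0, tengen -> maximum.
    idx = x + y
    octave, degree = divmod(idx, len(C_MAJOR))
    return C_MAJOR[degree] + 12 * octave

assert move_to_pitch(0, 0) < move_to_pitch(9, 9)     # 1-1 lowest, tengen highest
assert move_to_pitch(3, 3) == move_to_pitch(15, 15)  # quadrants don't matter
```

Feeding each game move through `move_to_pitch` yields one note per move, which is essentially the single-voice approach described here.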

Here’s a game that sounds kinda ok https://online-go.com/game/21856603

Here are a couple of examples based on a recent game https://online-go.com/game/21856599

I put together a decent framework for trying out more implementations of converting an SGF to music. I have a little CLI app that works on local SGFs or OGS URLs and can do live playback via Java’s built-in MIDI system, or render to a MIDI file that can then be played by a proper MIDI synth. I mainly focused on getting this together and trying out picocli, since that is a pretty cool CLI framework. I definitely plan on trying out more approaches soon. I need to do some cleanup and add a couple more music-generator implementations that do more interesting things, then I’ll publish the program.

I just did a single voice for this so far, since it’s harder to make two voices sound listenable.

I think the ultimate approach to converting a game of Go to music would be to treat the game as a seed for a somewhat formulaic song following sonata form or something like that. So, rather than trying to map individual moves to pitches or anything like that, it would be something along the lines of analyzing the fuseki and using that to select a chord progression for the exposition, then using minor variations of the fuseki to select which inversions to use. Maybe try to detect when the big moves mostly stop and the game goes from the opening into the midgame, use the midgame to influence the development section, and then try to detect the transition into yose and use that plus the opening for the recapitulation section of the song. Not 100% sure how feasible all that is, but worth a shot.
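The "game as a seed" idea can start very simply. Here’s a toy illustration (entirely hypothetical, not anyone’s actual implementation): hash the fuseki to deterministically pick a chord progression for the exposition, so the same game always yields the same song skeleton.

```python
# Toy sketch of the "game as a seed" idea: the fuseki deterministically
# selects a chord progression for the exposition. All names and the
# progression list are illustrative placeholders.
import hashlib

PROGRESSIONS = [
    ["I", "V", "vi", "IV"],
    ["I", "vi", "IV", "V"],
    ["ii", "V", "I", "I"],
    ["I", "IV", "V", "I"],
]

def exposition_progression(fuseki_moves):
    """Pick a chord progression from the first moves of the game."""
    key = ",".join(f"{c}-{r}" for c, r in fuseki_moves).encode()
    digest = hashlib.sha256(key).digest()
    return PROGRESSIONS[digest[0] % len(PROGRESSIONS)]

# Same fuseki -> same progression; a different opening may change it.
prog = exposition_progression([(3, 3), (15, 15), (15, 3), (3, 15)])
assert prog in PROGRESSIONS
```

Minor fuseki variations could then perturb only part of the hash input (say, to choose inversions) while the overall progression stays stable.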

7 Likes

That’s amazing! :smiley:

Listening to your samples is both satisfying and teasing. :smile:

I had to Google some terms that I didn’t know (CLI, picocli), but I’m still missing the core of your work: how do you write a MIDI file?
Is there some sort of language that can be used, or is it just the hard work of writing a binary MIDI file directly?
Are there libraries available?

1 Like

That’s really cool. I bet it would sound cool to use a sine or square synth and speed it up. Like the end of Let Down by Radiohead.

1 Like

It’s a Java program that uses JFugue for generating the MIDI files: http://www.jfugue.org/ It can also do live playback, so you run it and sound comes out of the speakers directly instead of writing out a MIDI file.
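To answer the "writing binary MIDI directly" question above: the Standard MIDI File format is simple enough to write by hand, though libraries like JFugue do it for you. Here’s a minimal sketch in Python (stdlib only) that writes a format-0 file with one note-on/note-off pair per note:

```python
# Minimal Standard MIDI File (format 0) written byte by byte,
# just to show what's inside a .mid file. Stdlib only.
import struct

def vlq(n):
    """Encode an int as a MIDI variable-length quantity."""
    out = [n & 0x7F]
    n >>= 7
    while n:
        out.append((n & 0x7F) | 0x80)
        n >>= 7
    return bytes(reversed(out))

def write_midi(path, notes, ticks_per_beat=480, note_ticks=240):
    track = bytearray()
    for note in notes:
        track += vlq(0) + bytes([0x90, note, 100])         # note on, velocity 100
        track += vlq(note_ticks) + bytes([0x80, note, 0])  # note off after a delay
    track += vlq(0) + bytes([0xFF, 0x2F, 0x00])            # end-of-track meta event
    with open(path, "wb") as f:
        # Header chunk: length 6, format 0, 1 track, ticks per quarter note.
        f.write(b"MThd" + struct.pack(">IHHH", 6, 0, 1, ticks_per_beat))
        f.write(b"MTrk" + struct.pack(">I", len(track)) + bytes(track))

write_midi("melody.mid", [60, 64, 67, 72])  # C major arpeggio
```

The resulting `melody.mid` plays in any MIDI synth (TiMidity, a DAW, etc.); real libraries add tempo/instrument meta events on top of this skeleton.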

I need to do some cleanup and add some instructions on how to build and run it, and then I’ll publish the code so people can try it out. I am also planning on writing some logic to analyze the opening of the game and trying to come up with a music generator based on that, this weekend or later this week depending on how busy I am with work.

This is all fairly configurable. For playback it can use any of the few dozen instruments included as samples in the JRE as part of javax.sound.midi, and the BPM can be specified as a config setting, so it would be easy to speed up one of these. Since I’m using a third-party MIDI synth (TiMidity) to generate the actual output, because that sounds way better than the MIDI support built into the JVM, it wouldn’t be too hard to override the instrument with something else. I’m sure there are soundfonts available for sine- or square-wave synths, or I could hook things up to play the MIDI through a VST synth; that would be the most flexible thing.

1 Like

4 posts were merged into an existing topic: Music sharing thread. Links only. No chit chat

AGA writeup:

EJ reader Geoff Pippin found this piece by a small Australian classical ensemble called “Nonsemble” on YouTube, about the famous 1953 game between Go Seigen and Fujisawa Kuranosuke. “The best part is that it is really an excellent piece!” says Pippin.

From the Nonsemble website: “A 30 minute work for chamber septet, using the moves of 1953 championship game of Go as stimulus for harmonic, rhythmic and melodic material. It’s an experiment in extracting musical ideas from abstract patterns and sequences, and allowing these ideas to develop intuitively into a large-scale work.”

1 Like

There’s some non-artificial intelligence involved! :slightly_smiling_face:

Wow, that’s something I didn’t consider in my first post: what about an AI trying to translate Go into music? How would one train an AI for that kind of task?

I don’t know anything about AI training.

1 Like

OMG!
Less than two years have passed and this seems to be so at hand now!
I must re-evaluate all that stuff!

I actually wrote a thesis about generating music from game board positions (specifically local shapes). Subjective perceptions gathered through questionnaires were used to train an interpreter model that outputs sentiments (heavy, light, fast, slow, sharp, simple, etc.), and then a translator module bridges it to a music generator that can be controlled via control vectors (a conditional-GAN-type model, with a start vector and an end vector that can create a short piece of melody, linking pieces together similarly to infinite drawing generation). The beauty is that musical sentiments correlate quite well with Go sentiments, to a degree.
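The start-vector/end-vector mechanism can be pictured as interpolating a sentiment control vector across a phrase and conditioning each generated chunk on an intermediate vector; chaining phrases end-to-start then gives the continuous, "infinite drawing"-style output. A toy illustration in plain Python (illustrative only, not the thesis code; the sentiment dimensions are made up):

```python
# Toy illustration of start/end control vectors (not the thesis code):
# interpolate between two sentiment vectors across a phrase, and start
# the next phrase where this one ended so the music stays continuous.
def lerp(start, end, t):
    """Pointwise linear interpolation between two vectors, t in [0, 1]."""
    return [a + (b - a) * t for a, b in zip(start, end)]

def phrase_controls(start, end, steps):
    """One control vector per generated chunk of a phrase."""
    return [lerp(start, end, i / (steps - 1)) for i in range(steps)]

# Hypothetical 3-dim sentiment vector: (heavy, fast, sharp), each in [0, 1].
heavy_slow = [0.9, 0.1, 0.2]
light_fast = [0.1, 0.9, 0.7]
phrase1 = phrase_controls(heavy_slow, light_fast, steps=5)
phrase2 = phrase_controls(light_fast, [0.5, 0.5, 0.5], steps=5)

# Phrase 2 begins exactly where phrase 1 was heading.
assert phrase2[0] == light_fast
assert all(abs(a - b) < 1e-9 for a, b in zip(phrase1[-1], light_fast))
```

In the actual setup each vector would condition the generator model for one short melody chunk; here the vectors themselves are all the sketch produces.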

The hardest part isn’t actually the generation, but supervised training with somewhat unstable human subjective sentiments across different skill ranks and styles. (For example, a player who likes thickness would rate an extension as normal/neutral, while moyo-building players would rate it slow, etc.)

4 Likes

Hi!

Very interesting. Can this thesis and questionnaire be found somewhere?

1 Like

You can find it in our university’s library. There is a digital version, but you need to be able to read a little Chinese to pass the captcha verification on the “fulltext” download page.

The questionnaires contain personal information covered by an NDA, so they cannot be publicly released in their raw form (I am still working out how to screen them, or scale them up in the future, but that is still in process). And I have to say the raw data is also not so useful without “data cleaning”: there are contradictions, and I had to make some selections manually. The data only reflects a select few participants who, after some supervised training, were determined to be more stable in their perceptions over a larger number of records (those still learning are often unstable in their perceptions over time; their opinions differed noticeably between the two rounds of data gathering). So overall the perceptions are heavily skewed toward high-SDK and low-dan players (the easiest amateur ranks to find who have already played for years; strangely, high-dan players often have too many context-based opinions, making their results very hard to use).

4 Likes