I’ve been thinking about putting together a physical interface like the one mentioned at the end of this thread. My idea is to plug into OGS or KGS and have voice recognition software translate voice input into moves. Lord knows I spend too much time on the computer already, and I’d rather be transfixed on a stump than a screen. I’m honestly surprised no one has done this already. I wish they had, because mine is going to be a hack job.
I see it as being as simple as A-T, 1-19 and getting proficient with my coordinates.
Anyone else inspired by this “physical interface” thread? Does anyone know of someone who has tried a VUI with any success?
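Just to make the voice part concrete, here’s a rough Python sketch of the recognition-to-coordinate step. I’m assuming the third-party SpeechRecognition package and its Google recognizer purely as an example, and the word aliases are guesses that would need tuning against a real microphone:

```python
# Minimal sketch: turn a spoken coordinate like "dee four" into a GTP-style
# vertex like "D4". Assumes the third-party SpeechRecognition package
# (pip install SpeechRecognition pyaudio); any recognizer would do.
import re
import speech_recognition as sr

# Go boards skip the letter "I", so A-T gives exactly 19 columns.
COLUMNS = "ABCDEFGHJKLMNOPQRST"

# Words a recognizer tends to return instead of bare letters/digits
# (these substitutions are guesses and would need tuning in practice).
ALIASES = {"bee": "b", "sea": "c", "see": "c", "dee": "d", "gee": "g",
           "jay": "j", "kay": "k", "are": "r", "tee": "t",
           "one": "1", "two": "2", "to": "2", "three": "3", "four": "4",
           "for": "4", "five": "5", "six": "6", "seven": "7",
           "eight": "8", "nine": "9", "ten": "10"}

def parse_vertex(phrase: str) -> str | None:
    """Return e.g. 'D4' for 'dee four', or None if it doesn't parse."""
    words = [ALIASES.get(w, w) for w in phrase.lower().split()]
    text = "".join(words)
    m = re.fullmatch(r"([a-hj-t])(\d{1,2})", text)
    if not m:
        return None
    col, row = m.group(1).upper(), int(m.group(2))
    if col in COLUMNS and 1 <= row <= 19:
        return f"{col}{row}"
    return None

def listen_for_move() -> str | None:
    recognizer = sr.Recognizer()
    with sr.Microphone() as source:
        audio = recognizer.listen(source)
    try:
        return parse_vertex(recognizer.recognize_google(audio))
    except sr.UnknownValueError:
        return None
```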
These ideas come up a lot, but it always ends up that too few people care about the interface for anything to happen.
For a while I was trying to build support for an intermediate piece of software that would act as an OGS API to GTP bridge. I am not sure that GTP is robust enough to handle timing, but if needed, something else could perhaps be added to cover time control…
The idea being that we have had so many individuals with different ideas for projects, and all of them would be well served by a project that just speaks GTP, with a separate program covering the whole OGS API aspect.
This would mean that as soon as the bridge program was done, you could use Sabaki, glGo or any other GTP client to play on OGS, and any project made would support the generic GTP format for easy offline AI integration.
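For what it’s worth, here’s roughly how I picture the GTP-facing half of such a bridge, sketched in Python. To Sabaki or glGo it would look like an engine: `play` pushes your move up to OGS and `genmove` blocks until the opponent’s move comes back. The two OGS functions are deliberately left as stubs, since filling them in is the actual work, and I’m glossing over GTP command IDs and time handling:

```python
# Rough skeleton of the bridge, seen from the GTP side. To a GTP client it
# looks like an engine: "play" forwards your move to OGS, "genmove" blocks
# until the OGS opponent moves. The two OGS functions are stubs here --
# filling them in with the OGS API is exactly the part this thread is about.
import sys

def submit_move_to_ogs(color: str, vertex: str) -> None:
    raise NotImplementedError  # would call the OGS API / realtime socket

def wait_for_ogs_move(color: str) -> str:
    raise NotImplementedError  # would block until the opponent's move arrives

def reply(text: str = "") -> None:
    # GTP success responses start with "=" and end with a blank line.
    sys.stdout.write(f"= {text}\n\n")
    sys.stdout.flush()

def main() -> None:
    for line in sys.stdin:
        parts = line.strip().split()
        if not parts:
            continue
        cmd, args = parts[0], parts[1:]
        if cmd == "protocol_version":
            reply("2")
        elif cmd == "name":
            reply("ogs-gtp-bridge")
        elif cmd in ("boardsize", "komi", "clear_board", "time_settings"):
            reply()  # accept setup commands; the game is configured on OGS
        elif cmd == "play":
            submit_move_to_ogs(args[0], args[1])
            reply()
        elif cmd == "genmove":
            reply(wait_for_ogs_move(args[0]))
        elif cmd == "quit":
            reply()
            break
        else:
            sys.stdout.write("? unknown command\n\n")
            sys.stdout.flush()

if __name__ == "__main__":
    main()
```

Registered in a GTP client as an “engine”, a skeleton like this would already be enough to put stones on an OGS board once the stubs are filled in.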
Think of glGo, and the way its game browser essentially handles browsing games and establishing a connection, then hands that off to the third-party client.
No. Yes… Kinda.
Much of that code could be stolen for it, I would imagine. That bridge is for AI scripts to communicate with OGS; I think we should make one for humans. It’s kind of completely different while being as close as possible: both are programs that read the state of a Go game from the OGS server and convey it to something else in a standard protocol.
However, humans do different things.
Namely, they look for games and choose whether or not to accept challenges.
My thought is, over and over, people put out these ideas for alternative interfaces… but how do you actually play with them?
I see two options:
-Use a computer and web browser to set everything up, copy down the game number, and input it into your new client
-OR-
-Every single one of these projects implements its own external game browser that allows one to sign in, view a list of available game challenges, send a game invitation to a specific user, or make an open challenge.
It seems to me that instead of each of these random projects tackling that second option independently, a single game browser could be made that then launches any client and communicates with it via the Go Text Protocol.
This is what the thread yebellz linked to is about. I figure if we take care of the heavy lifting of interfacing with OGS, then when these threads about creating some new interface come up, half the work will already be done.
Basically, I picture the game interface being similar in implementation to glGo for IGS. https://www.pandanet.co.jp/English/glgo/index.html
You use a program to log in and do whatever non-playing stuff… then when you choose a game, it launches a new program/window and handles the back-end communication. Except instead of choosing the 2D or 3D board client as in glGo, you would specify the path to your own client; any Go client that supports AI integration via GTP could then be used to play opponents on OGS.
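As a very rough illustration of that launch step (reading the description as the browser being the GTP controller, which may or may not match how a given client actually works), the browser could just spawn whatever path you configured and talk GTP to it over pipes. The client path and game settings below are placeholders:

```python
# Sketch of the "launch your own client" step: the game browser starts
# whatever program you configured and talks GTP to it over stdin/stdout.
# The client path and game details are placeholders, and in practice the
# browser would keep relaying moves both ways for the whole game.
import subprocess

def launch_client(client_path: str, board_size: int, komi: float) -> subprocess.Popen:
    proc = subprocess.Popen(
        [client_path],
        stdin=subprocess.PIPE,
        stdout=subprocess.PIPE,
        text=True,
    )
    # Standard GTP setup commands any compliant GTP program understands.
    for command in (f"boardsize {board_size}", "clear_board", f"komi {komi}"):
        proc.stdin.write(command + "\n")
        proc.stdin.flush()
        # GTP responses end with a blank line; read and discard the ack.
        while proc.stdout.readline().strip():
            pass
    return proc

# Hypothetical usage once a game has been accepted in the browser:
# proc = launch_client("/path/to/my-voice-client", 19, 6.5)
```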