AI can talk, not just show us your progress

Check my reply (and the linked post about a Chinese model I tested).

People have been trying for decades, even before LLMs became the favorite model. There is still a wide gap between low-level features and high-level concepts that current models can't bridge. A picture, a board, and a string of text are very hard to accommodate in a single multimodal model, let alone to run inference across all of them.