A neural net is an AI model that shovels calculations through many small nodes, step by step, instead of doing a single big calculation (the technique was originally inspired by an abstract guess about how brains work, hence the name).
Deep learning is an architecture for neural nets where there are many “layers” of nodes. You can think of it as building neural nets on top of neural nets on top of neural nets.
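To make the “layers of nodes” idea concrete, here is a minimal sketch in plain Python. The network, weights, and the ReLU nonlinearity are all illustrative choices, not anyone's real model: each layer is just a set of nodes that take weighted sums of the previous layer's outputs, and the whole net is layers stacked on layers.

```python
# Toy two-layer neural net, pure Python. Each "layer" is a set of
# nodes; each node computes a weighted sum of the previous layer's
# outputs, adds a bias, and applies a simple nonlinearity (ReLU).
# All weight values here are arbitrary, for illustration only.

def relu(x):
    return max(0.0, x)

def layer(inputs, weights, biases):
    # One layer: every node sums over all inputs with its own weights.
    return [relu(sum(w * x for w, x in zip(row, inputs)) + b)
            for row, b in zip(weights, biases)]

def forward(x, net):
    # Shovel the calculation through each layer in turn,
    # instead of doing one big calculation.
    for weights, biases in net:
        x = layer(x, weights, biases)
    return x

net = [
    ([[1.0, -1.0], [0.5, 0.5]], [0.0, 0.1]),  # layer 1: 2 nodes
    ([[2.0, 1.0]], [-0.5]),                   # layer 2: 1 node
]

out = forward([1.0, 2.0], net)  # → [1.1] (approximately)
```

“Deeper” just means more entries in `net`: each added layer feeds on the outputs of the one before it.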
Transformers are a twist on deep learning that made LLMs possible.
AlphaGo was a neural net, and a deep learning model, but not a transformer model. It may have been important to the dominance of deep learning inside Google’s research programs, but the Google Brain team that invented transformers was (at that time) distinct from the DeepMind team that built AlphaGo.