Alright, we may be getting to a part of the discussion where we'll never agree:
everything I quoted above is, in my opinion, also true about humans.
As many people “know” and almost everyone else stubbornly refuses to accept, humans don’t actually have free will; or rather, the concept of “free will” doesn’t make sense scientifically. Everything in the universe is a combination of deterministic and stochastic behaviour, and the common intuition of “free will” has the arrogance to imagine it’s neither, yadda yadda yadda – it’s old news by now, so I’m not going to talk about it too much. You either agree or you don’t – and if you don’t, you’re almost surely wrong.
Almost surely, humans are actually organic robots that generate/exhibit consciousness (whatever that means) and are “programmed” in a way that gives them the illusion of this “free will”, but their behaviour is as deterministic as everything else in the physical universe. I think the only way for this to not be true is to resort to some kind of “spiritualistic” theory, like postulating the existence of a “soul” that somehow “controls” the human body and doesn’t follow the laws of physics as we currently understand them. So you know, it could be, it just seems very very unlikely.
So humans don’t have a “source code” as usually intended in computer science, but they do have an “algorithm” based on physics, with an input and an essentially deterministic output (the “choice” of behaviour).
To be clear, while “free will” doesn’t exist, the phenomenon of “choice” does. Humans do choose things… the same way that any algorithm does. You give a very basic chess-playing bot an input, the algorithm will run through the options of the legal moves and choose one according to the algorithm. It’s a deterministic choice, but it’s useful to be able to describe it as a choice. Just like it’s useful to talk about evolution as if it was consciously making living beings evolve in a certain way with the intention of making them survive. It’s too complicated to talk about reality without using these words, so we just use these words.
“Feeling” is something that depends on consciousness, because it’s something you “experience”. As far as we know, no piece of code so far has achieved consciousness, but some day they might, and that day they might also develop something similar to “feelings”.
Then there’s this next part, which is more my own original idea:
“Understanding” is just an illusory feeling that humans get when they have a very good heuristic about something they’re very familiar with.
Anyone who’s studied logic knows that “it’s turtles all the way down”, i.e. you can’t actually prove anything to be true unless you start from some arbitrary point where you say “alright, this is true and these rules for deduction are also true, let’s go from here”; an analogy to how it works in practice is that there are some ideas about which humans just go “this must be true, there’s no way it isn’t true” (though they often don’t agree with each other about it).
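The "pick a starting point and some deduction rules, then go from there" picture can itself be sketched as a tiny program. This is a deliberately simplistic toy (the facts and rules are invented for illustration): facts are strings, rules are premise→conclusion pairs, and "deduction" is just repeated modus ponens.

```python
# Everything "proved" here is true only relative to the axioms and
# rules we arbitrarily chose to start from.
axioms = {"raining"}
rules = [("raining", "wet_streets"), ("wet_streets", "slippery")]

def deduce(facts, rules):
    """Forward-chain: keep applying rules until no new facts appear."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premise, conclusion in rules:
            if premise in facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(sorted(deduce(axioms, rules)))
# ['raining', 'slippery', 'wet_streets']
```

Nothing in the program justifies the axioms themselves – they're the point where we said "alright, this is true, let's go from here".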
For example, if I say, “If it’s raining then I deduce it’s raining”, you’ll say “obviously the deduction is true”. If I say “If it’s raining then I deduce it’s not raining”, you’ll say “obviously the deduction is false”. And you probably think you understand what you’re talking about.
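Those two example deductions can even be checked mechanically, with no "understanding" involved at all – just a truth table for material implication:

```python
# Material implication: "if p then q" is false only when p is true and q is false.
def implies(p, q):
    return (not p) or q

# "If it's raining then it's raining": true in every case (a tautology).
deduction_1 = all(implies(r, r) for r in (True, False))
# "If it's raining then it's not raining": fails when it actually rains.
deduction_2 = all(implies(r, not r) for r in (True, False))

print(deduction_1, deduction_2)  # True False
```

The program gets the same answers your sense of "obviousness" does, without experiencing anything.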
But you’re basing that sense of what’s “obviously true” on your experience of how the world works, and of how the phenomenon of “things being true” works. In other words, your sense of understanding what’s obviously true is just a heuristic. It might be a very good or a very bad heuristic depending on what experience you have, but it’s still just a heuristic, and as anyone who’s studied, say, mathematical set theory knows, that heuristic is bad more often than we think.
There may or may not be an “actual objective truth” in the universe, but the best humans can do is build a heuristic mental model of it. And most of the time, when a human feels like they “understand” something, they’re actually stupendously wrong.
So, yes, I agree that machines don’t have “understanding”, because “understanding” is a feeling/experience, and like all feelings/experiences it requires consciousness.
But since understanding is an illusion (in my opinion), then I don’t actually think it matters much at all.
This is why I find AI research so fascinating. It’s an awesome intersection of Mathematics, Computer Science and Philosophy, and it forces us to challenge our assumptions and intuitions about what “intelligence” or “consciousness” or “feelings” or “understanding” mean, much more concretely than philosophy alone has managed for centuries.