Here is a guy I linked to before, because of his interesting research:
In order for a machine intelligence to perform in the real world, it needs to maintain an internal model of the external world. This can be as simple as the model of the chessboard that a chess-playing algo maintains. As information flows in from the senses, that model is updated; the current model is used to create future plans (e.g. the next move, for a chess-playing computer).
Another important part of an effective machine algo is “attentional focus”: for a chess-playing computer, this means focusing compute resources on exploring the chess-board positions that seem most likely to improve the score, rather than anywhere else. Insert favorite score-maximizing algo here.
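That “attentional focus” idea can be sketched as a best-first search: spend a fixed compute budget expanding whichever candidate state currently scores best. Everything below (the binary-branching successor function, the toy scoring rule) is a made-up stand-in for illustration, not anyone's actual chess engine.

```python
import heapq

def expand(state):
    # Hypothetical successor function: each state branches into two children.
    return [state + (0,), state + (1,)]

def score(state):
    # Hypothetical evaluation: prefer states containing more 1s
    # (a stand-in for a chess engine's position evaluation).
    return sum(state)

def best_first_search(start, budget):
    """Explore up to `budget` states, always expanding the best-scoring one."""
    frontier = [(-score(start), start)]  # max-heap via negated scores
    best = start
    explored = 0
    while frontier and explored < budget:
        _, state = heapq.heappop(frontier)
        explored += 1
        if score(state) > score(best):
            best = state
        if len(state) < 6:  # depth limit on the toy search
            for child in expand(state):
                heapq.heappush(frontier, (-score(child), child))
    return best

print(best_first_search((), budget=20))  # → (1, 1, 1, 1, 1, 1)
```

The point of the sketch is the priority queue: compute goes to the most promising positions first, so the all-ones path is found well inside the budget even though the full tree has far more states.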
Self-aware systems are those that have an internal model of self. Conscious systems are those that have an internal model of attentional focus. I’m conscious because I maintain an internal model of what I am thinking about, and I can think about that, if I so choose.
All of the above is pretty standard cognitive theory, it seems. I’m not building a machine intelligence, but I am trying to design a specialized CMS, so these ideas are helpful.
However, this is the quote that stopped me in my tracks:
I believe that if someone builds such a device, they will have the fabled conscious, self-aware system of sci-fi. It’s likely to be flawed, stupid, and psychotic: common-sense reasoning algorithms are in a very primitive state (among (many) other technical issues). But I figure that we will notice, and agree that it’s self-aware, long before it’s intelligent enough to self-augment itself out of its pathetic state: I’m thinking it will behave a bit like a rabid talking dog: not a charming personality, but certainly “conscious”, self-aware, intelligent, unpredictable, and dangerous.
Whoa! I’m starting to wonder if, someday, I could pass a Turing test . . . .