
Epistemology (theory of knowledge) and AI safety

👉AI breakthroughs in recent years have brought the concept of AGI (Artificial General Intelligence) back from science fiction.

👉Some people resigned over it (Hinton, Ilya), and others fear this AGI taking over.

👉Long before AGI, humans grappled with the concept of consciousness, since HGI started, well, earlier. The problem we are now facing with AI is what epistemology calls the "Problem of Other Minds".

👉In general, it is our inability to KNOW what another person is feeling. Is the red that I see the same color that you see? I believe so, but belief is not knowledge.

👉Moving to clinical terms: when a patient feels pain, the surgeon has no way of knowing that pain. Even if the patient describes it on a pain scale, and even if the surgeon has undergone the same operation, she is still a different person than him, so the pain may not be the same. Our feelings and thoughts are a closed box.

👉Artificial Neural Networks are a bit different: the physical model of the network can be introspected by mapping the state of the weights and activations, since every feature in an AI model is made by combining neurons, and every internal state is made by combining features. All this without needing the approval of an ethics committee.
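The contrast with the "closed box" of a human mind can be made concrete. Below is a minimal sketch (a hypothetical toy two-layer network, nothing like a real model such as Claude): every internal state of an ANN can simply be read out after a forward pass.

```python
def relu(x):
    return max(0.0, x)

def dot(w, x):
    return sum(wi * xi for wi, xi in zip(w, x))

class TinyNet:
    """Toy two-layer network whose hidden activations we can inspect."""
    def __init__(self, w_hidden, w_out):
        self.w_hidden = w_hidden  # one weight vector per hidden neuron
        self.w_out = w_out        # output weights over hidden features
        self.last_hidden = None   # recorded internal state

    def forward(self, x):
        # Each hidden "feature" is a weighted combination of input neurons.
        self.last_hidden = [relu(dot(w, x)) for w in self.w_hidden]
        # The output state is a combination of those features.
        return dot(self.w_out, self.last_hidden)

# Hypothetical weights, for illustration only.
net = TinyNet(w_hidden=[[1.0, -1.0], [0.5, 0.5]], w_out=[1.0, 2.0])
y = net.forward([2.0, 1.0])
# Unlike another person's pain, nothing here is hidden:
print(net.last_hidden)  # the full internal state is visible
```

Real interpretability work operates on billions of such activations, but the epistemic point is the same: the physical state is fully observable, even if its meaning is not.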

👉But even if you map the physical structure of the network, as Anthropic is doing as part of its AI safety effort on Claude, I wonder whether they will be able to measure a "self", or qualia, in an ANN. I think this will have to involve (or evolve) an introspective component in the network.

https://lnkd.in/eynYQtEQ