No system can know itself. A brick, a palm tree or a poem cannot know themselves for obvious reasons. The only system of which it is at all meaningful to speak in terms of active knowledge is, of course, we ourselves.
First, I cannot fully know myself on simple empirical grounds. I can learn several general facts of varying complexity about myself via observation: I have five fingers, I get cranky when I’m hungry, I like Western music. If I wish to learn more about the way my internal organs work – that would be trouble. Were I even to conduct a misguided attempt at auto-vivisection, as soon as I began slashing my brain into pieces to see what it’s made of, I would already be defeating the purpose – by destroying the only thing in me that can perform the “knowing”. Although this is a rather silly example, I want to immediately address two objections: one, I cannot learn about myself by dissecting other people – that could at best be a way to learn generalities about human bodies, not about myself; and two, what I actually learn by using tools such as x-ray machines and fMRIs is a big question in itself, which I will address below.
Second, I cannot know myself simply because of my own physiology. Knowing where the light switch in my bathroom is, to say nothing of actually operating it, involves an enormous amount of context-dependent information about spatial relations, visual representations and so forth. In other words, it takes a vastly complex neural network in my brain to know even the simplest thing. Were I to go truly micro and try to gain knowledge of the structure of my brain, I could in no way do that, even if the technology were available. To hold a representation of a single neuron in my brain would take more than one neuron because, well, neurons don’t work alone; and even if they did – even if they were, say, simple transistors with just two binary states – all such a neuron could possibly contain by itself is a 1 or a 0, and nothing about its own structure. In short, to represent every neuron of my brain would simply take more than one brain. And one, of course, need not stop at neurons as the basic unit of what there is to know; one can go smaller and smaller.
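The counting version of this point can be sketched in a few lines. Assume, purely as a toy model and not as neuroscience, a network of n binary units whose wiring is an n-by-n adjacency matrix: the network’s state can hold n bits, but merely listing its own wiring already takes n² bits, so for any n greater than one the state cannot encode the structure that produces it.

```python
# Toy counting argument (a hypothetical model, not a claim about real neurons):
# a network of n binary units, wired together by an n x n adjacency matrix.

def state_capacity_bits(n: int) -> int:
    """Bits the network's state can hold: one per binary unit."""
    return n

def wiring_description_bits(n: int) -> int:
    """Bits needed merely to list which unit connects to which."""
    return n * n

for n in (2, 10, 100_000):
    shortfall = wiring_description_bits(n) - state_capacity_bits(n)
    print(f"n={n}: state holds {state_capacity_bits(n)} bits, "
          f"wiring needs {wiring_description_bits(n)} bits "
          f"(shortfall: {shortfall})")
```

The gap only widens as n grows, which is the whole point: the representational deficit is not a matter of technology but of arithmetic.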
A brain, I think for this reason, but maybe for some other reasons too, is essentially a machine of approximation and generalization. One can know a certain pattern of one’s own behavior or of nature’s, one can even fine-tune that knowledge rather substantially, but there is simply no way to reach the ultimate level of representation where the correspondence between the phenomenon and the representation is one to one. And if a system cannot know itself, it obviously cannot know the system it is a part of – the world.
There are many ways of externalizing knowledge: books, fMRI pictures – all kinds of ways of encoding and storing vast amounts of information. But that doesn’t solve the problem – it just pushes it to another level. All these tools can do is give us more ways of approximating in the best case, and more ways of distorting data in the worst. Approximations are fine, but this means that there is no access to the “real” world out there that scientific realists, for example, want to believe in. And it at least puts a limit on the kinds of things we can know – certain “laws” of interactions between objects, maybe, but certainly not things in themselves.