I, abstract

Over evolutionary history a given trait may be selected for if it increases the chances of its own transmission (a tautology at the heart of evolutionary theory; my summary of that theory is 'what happens happens and what doesn't doesn't'). As increasingly sophisticated mental processes develop, one might tie all the processes together and give the whole a name. I claim that this I is then likely to be selected for.

A set of mental processes which can all relate themselves to an abstract summation of the totality of mental processes will be more likely to survive than disparate mental processes. Hence the illusory I might be selected for.

More interestingly, once the I has been transferred from the abstract notion of the individual as seen by an objective observer to the individual's internal self-measure, it will align itself so closely with the individual's self-perception as to appear to precede the individual. Indeed, in some abstract sense it does precede the individual, since it may be formally described as a symbolic representation of a set of mental processes which could be made to exist at any instant.

Could one set of rules which essentially encode the I appear to another set of rules to be the I, despite that set necessarily having a relation to the I? That might explain how a large set of instructions could describe a whole that self-describes as a single object, in the form of an abstract I, despite the fact that the I itself cannot be found anywhere within the code.
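The point about self-description can be sketched in code. In this hypothetical Python sketch (every name here is invented for illustration, not taken from any real system), a handful of separate "processes" each describe the collection they belong to under the single symbolic name "I", yet inspecting the object finds no attribute or value called I anywhere:

```python
# A toy illustration: disparate "mental processes" (plain methods)
# that each refer to the whole collection under one symbolic name.
# No field called "I" exists anywhere; the name is only a summary.

class Mind:
    def __init__(self):
        self.wants = ["food"]
        self.memory = ["yesterday"]
        # the set of processes, each a function of the whole
        self.processes = [self.report_wants, self.report_memory]

    def report_wants(self):
        # the process describes the totality it belongs to as "I"
        return f"I want {self.wants[0]}"

    def report_memory(self):
        return f"I remember {self.memory[0]}"

    def self_describe(self):
        # every process speaks as the same "I", yet searching the
        # object's attributes turns up no "I" at all
        return [p() for p in self.processes]

m = Mind()
print(m.self_describe())   # each rule speaks as one "I"
print("I" in vars(m))      # False: the I is nowhere in the code
```

The I here is exactly the abstract summation described above: a name each rule uses for the totality, not a component of the machinery.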

To summarise: could any attempt to create AI begin with the abstract notion of the I as seen by an external observer, an I which the AI will then reference in almost every piece of code? For instance, the I has an abstract totality of wants, and at every stage its processes must aim to maximise utility. In human evolution the I had to extend to offspring, for obvious reasons. We may be forced to choose the exact scope of a new I. Might we define the I in order to preference ourselves? The central want might be to maximise some relation to humanity. In order to design these wants we may need to look closely at our own desires and at what I they might serve.
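The claim that processes must maximise utility relative to a chosen scope of the I can be illustrated with a minimal sketch (all names and payoff numbers below are invented for illustration; this is not a proposal for an actual agent design). The same maximiser picks different actions depending purely on how widely the I has been drawn:

```python
# Hypothetical sketch: the "scope" of the I decides whose welfare the
# utility function sums over; the choice of scope, not the maximiser,
# determines the behaviour.

def utility(payoffs, scope):
    # total payoff over everyone the I is defined to include
    return sum(payoffs.get(member, 0) for member in scope)

def choose(actions, scope):
    # maximise utility with respect to the chosen scope of the I
    return max(actions, key=lambda a: utility(actions[a], scope))

# payoffs per party for two candidate actions (illustrative numbers)
actions = {
    "hoard": {"self": 3, "offspring": 0, "humanity": -2},
    "share": {"self": 1, "offspring": 2, "humanity": 4},
}

print(choose(actions, scope=["self"]))                          # hoard
print(choose(actions, scope=["self", "offspring", "humanity"]))  # share
```

Extending the I to offspring, or to humanity, changes the chosen action without changing a line of the maximising machinery, which is the sense in which we would be "choosing the exact scope of a new I".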

I call the central I "I1". There may be subsequent Is, such as the physical body which houses the I, but they must all be subservient to the one abstract I.

This conception of the abstract I is consistent with the internal perception of lying outside the I. The appearance of free will can be assigned to the abstract I; likewise consciousness can be ascribed to it, but neither can be seen from the inside. They all appear to the individual in a form which can be described to the external observer. This abstract I necessarily leads to the concept of God, since its description includes the possibility of being observed internally without permitting any aspect of consciousness to observe it.

This whole framework can lead to a new atheism in which the abstract I and the abstract God form an abstract duality in the form of a symbolic couple. The misery of humanity is then to be given the abstract symbolic couple along with an external appreciation of logic which denies the possibility of a non-abstracted I or a non-abstracted God. One is faced with a forced choice: either deny their non-abstractedness and thereby be forced to create two realms, or deny their observability to all observers and take a step towards appreciating their non-material reality.

The reason I claim this is an advance is that the vulgar materialist might deny the possibility of the abstract simple square, or of the complicated I (nevertheless a type of shape). This idealist-atheism is essentially a way of stating the primacy of ideas. The strange tautologies and contradictions that result define a conception of the past philosophical struggles which I believe might have relevance to building actual thinking machines. They must first be given the delusion of existing, a state which must be passed through in order to exist.

Evolution has had to do philosophy heuristically over the eons. We are attempting to outthink a billion years of testing ideas, the sole measure of which has been: do they make you fitter? A model of Euclidean space and time can therefore be seen to precede experience in a Kantian sense, regardless of its truthfulness. When Einstein denied the solidity of Euclidean geometry he made a new, mind-boggling science. Might a similar leap be possible with AI?

Ha ha.

S’all jokes