Semantron 20 Summer 2020

Utilitarianism

Determinism

When Nagel proposes different forms of luck (constitutive, circumstantial, causal, resultant), 69 he is simply outlining different facets of determinism, or referring to varying degrees of determinism, where in most cases our choices have less effect on the outcome than we might hope. This I can address by reiterating the distinction between ‘good’ intentions and ‘good’ outcomes. Accountability and judgement apply to intentions, and to the degree to which the decision-maker considered the probability of their intention actually affecting the consequence. They would also be affected by the degree to which determinism is true. When talking about determinism, I think we should first assume that it is true, and then ask whether there is any possibility that it could not be. When we consider the factors that led to even trivial decisions, the answer always lies in a mixture of our past memories, upbringing and external factors (external factors themselves influenced by the logical cause and effect of physics). But for the decision-making process in our minds to be rational and logical, 70 there must be one singular end goal which we are trying to maximize. If we take my premise of utility as being composed of multiple end-desires, all usefully formed through evolution (happiness, knowledge, experiences etc.), then there is no one desire to aim for. Artificial intelligence can be taught to devise methods of reaching an objective (a desire), but it cannot rationally find a way to consider multiple end objectives or desires. 71 It cannot ‘choose’ between them; it must be told how much to weigh each separate objective, which to prefer and by how much to prefer one objective over another. Humans must make our own considerations (choices) of how to weigh each factor in our own ‘utility calculus’. 72 When given choices, we must decide how to weigh each end-desire.
This does not discount molecular-level determinism, but if we think that our stream of consciousness operates rationally in pursuit of an objective, much like a computer, this would not be possible if we have multiple end-desires. If it is true that there is more than one end-desire, it would seem necessary that we do indeed make at least some choices. In addition, if we were to isolate ourselves from our senses in a neutral room, we would be able to logically trace our thoughts backwards (this is why we are inclined to believe in determinism), but we would be unable to trace our thoughts forward any faster than real time. Despite knowing that there will be no major external input of sense data, I cannot fast-forward my understanding or predict where my thoughts will go. This leads me again to think that there must be some randomness, some element of choice, in partnership with determinism.
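The claim above, that a machine can only maximize a single scalar and so must be told from outside how much to weigh each separate objective, can be sketched in a few lines of code. The end-desires, scores and weights below are entirely hypothetical illustrations, not part of the original argument:

```python
# Sketch of collapsing several end-desires into one scalar utility.
# The weights cannot be derived by the optimizer itself; they must be
# supplied externally (by a human), which is the essay's point.

def utility(outcomes, weights):
    """Combine several end-desire scores into one scalar via weights."""
    return sum(weights[d] * score for d, score in outcomes.items())

# Hypothetical scores for two candidate actions, on three end-desires.
actions = {
    "read":   {"happiness": 0.4, "knowledge": 0.9, "experience": 0.3},
    "travel": {"happiness": 0.8, "knowledge": 0.3, "experience": 0.9},
}

# Externally chosen weights: the 'choice' the machine cannot make.
weights = {"happiness": 1.0, "knowledge": 0.5, "experience": 0.7}

# Once the weights are fixed, maximization is purely mechanical.
best = max(actions, key=lambda a: utility(actions[a], weights))
```

With a different set of weights, a different action wins: the ranking of actions is determined not by the optimizer but by the prior weighting of end-desires, which is where choice enters.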

69 Nagel 1979.
70 ‘Logical’/‘rational’ as in: ‘influenced by x amount of emotion, y amount of past experience, and z external factors, all of them subconsciously (but rationally) weighed up to form a conclusive action.’
71 If you were to program into the AI an overarching objective which balanced the multiple sub-objectives beneath it, then it could balance them. But to rationally balance multiple objectives without an overarching objective is not possible, unless it were done randomly.
72 A replacement for the Hedonic Calculus.