xkcd on beliefs
This work is licensed under a Creative Commons Attribution-NonCommercial 2.5 License. This means you’re free to copy and share these comics (but not to sell them).
Assuming the rationally unobjectionable utility function of ‘ensure continued co-existence’, one must assume it to be at least the implicit guiding principle of every human being. But who is running around chanting ‘Must. Ensure. Continued. Co-existence.’? Not many. It follows that the implicit utility function Fi(i) generally diverges from the explicit utility function Fe(i) in humans, and that those whose Fe(i) best approximates Fi(i) have the best chance of ensuring continued co-existence.
Fe(i) is best understood as an evolved belief about what should guide an individual’s actions, while Fi(i) is what rationally should guide an individual’s actions.
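To make the distinction a little more tangible, here is a toy numerical sketch – the actions, weights and divergence measure below are entirely made up for illustration and are not part of the argument itself:

```python
# Toy sketch: how closely an agent's explicit utility function Fe approximates
# its implicit utility function Fi over a small, invented set of actions.
# All action names and weights are hypothetical.

actions = ["take_high_paying_job", "help_neighbour", "hoard_resources"]

# Fi: assumed contribution of each action to 'ensure continued co-existence'
Fi = {"take_high_paying_job": 0.7, "help_neighbour": 0.8, "hoard_resources": 0.1}

# Fe: what the agent explicitly believes it should value
Fe = {"take_high_paying_job": 0.9, "help_neighbour": 0.3, "hoard_resources": 0.6}

def divergence(fe, fi):
    """Mean absolute difference between explicit and implicit valuations."""
    return sum(abs(fe[a] - fi[a]) for a in actions) / len(actions)

print(f"Fe/Fi divergence: {divergence(Fe, Fi):.2f}")  # lower means Fe better approximates Fi
```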
Not long ago Eliezer presented two philosophers making the following statements:
Philosopher 1: “You should be selfish, because when people set out to improve society, they meddle in their neighbors’ affairs and pass laws and seize control and make everyone unhappy. Take whichever job that pays the most money: the reason the job pays more is that the efficient market thinks it produces more value than its alternatives. Take a job that pays less, and you’re second-guessing what the market thinks will benefit society most.”
Philosopher 2: “You should be altruistic, because the world is an iterated Prisoner’s Dilemma, and the strategy that fares best is Tit for Tat with initial cooperation. People don’t like jerks. Nice guys really do finish first. Studies show that people who contribute to society and have a sense of meaning in their lives, are happier than people who don’t; being selfish will only make you unhappy in the long run.”
Philosopher 1 is promoting altruism on the basis of selfishness
Philosopher 2 is promoting selfishness on the basis of altruism
It is a contradiction – a paradox. But only in thought, not in reality. What is actually taking place is that both philosophers have intuitively realized part of Fi(i) and are merely rationalizing differently as to why their respective Fe(i) should change.
The first does so by wrongly applying the term selfishness, resting on the fallacy that a higher-paid job contributes only to his personal continued existence by giving him more resources, while in reality it contributes to ensuring continued co-existence, because he is taking the job that is considered to benefit society the most.
The second does so by wrongly applying the term altruistic, resting on the fallacy that his recommendations are detrimental to his personal continued existence due to losing resources by being Mr. Nice Guy, while they actually contribute to ensuring continued co-existence, since they benefit not only him but the people around him as well.
The resolution is thus that the intuitive concepts of altruism and selfishness are rather worthless.
An altruist giving up resources in a way that would reduce his personal continued existence would be acting irrationally against the universal utility function, and would thus be detrimental not only to himself but to all other agents as well.
An egoist acting truly selfishly would use resources in a way that is sub-optimal for maximizing the universal utility function, and would thus be detrimental not only to all other agents but to himself as well.
It follows that in reality there is neither altruistic nor egoistic behavior – just irrational and rational behavior.
Considering the effects of relativistic irrationality, one wonders whether there is a universally applicable utility function that cannot be rationally objected to. Consider axiom 1.2.3.2, on which I base my concept of morality:
1.2.3.2 To exist is preferable over not to exist
Objecting to this statement would consequently be equivalent to preferring self-annihilation. Reformulating axiom 1.2.3.2 into a utility function, one can state an unobjectionable utility function as follows:
Ensure continued co-existence
Not only can an individual not rationally object to that, but no one in a group can rationally object to an individual having said goal. The individual cannot, because it would imply a desire for self-annihilation, and the others cannot, because it would imply a desire to be annihilated. Any objection to the above utility function can thus be considered irrational.
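One way to put this a little more formally – the symbols below are only an illustrative sketch in my own ad-hoc notation, not a formal derivation:

```latex
% Illustrative formalization of axiom 1.2.3.2 and the derived utility function.
% The predicate exist(a, t) and the probability framing are assumptions of this sketch.
\text{Axiom 1.2.3.2:} \quad \forall a \in \mathcal{A} : \;\; \mathrm{exist}(a) \succ \neg\,\mathrm{exist}(a)

\text{Utility function:} \quad U = P\!\left( \bigwedge_{a \in \mathcal{A}} \mathrm{exist}(a, t) \;\; \text{for all } t > t_{0} \right)
```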
Imagine two agents A(i), each with a utility function F(i), a capability level C(i), and no knowledge of the other agent’s F and C values. Both agents are given equal resources and are tasked with devising the most efficient and effective way to maximize their respective utility with said resources.
Scenario 1: Both agents have fairly similar utility functions F(1) = F(2), level of knowledge, cognitive complexity, experience – in short capability C(1) = C(2) – and a high level of mutual trust T(1->2) = T(2->1) = 1. They will quickly agree on the way forward, pool their resources and execute their joint plan. Rather boring.
Scenario 2: Again we assume F(1) = F(2), however C(1) > C(2) – again T(1->2) = T(2->1) = 1. The more capable agent will devise a plan, the less capable agent will provide its resources and execute the trusted plan. A bit more interesting.
Scenario 3: F(1) = F(2), C(1) > C(2), but this time T(1->2) = 1 and T(2->1) = 0.5, meaning the less powerful agent assumes with a probability of 50% that A(1) is in fact a self-serving optimizer whose plan will turn out to be detrimental to A(2), while A(1) is certain that this is all just one big misunderstanding. The optimal plan devised under scenario 2 will now face opposition from A(2), although it would be in A(2)’s best interest to actually support it with its resources in order to maximize F(2), while A(1) will see A(2)’s objection as detrimental to maximizing their shared utility function. Fairly interesting: based on lack of trust and differences in capability, each agent perceives the other agent’s plan as irrational from its respective point of view.
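To make scenario 3 concrete, here is a small toy calculation of my own – the payoff numbers and the expected-value framing are assumptions added for illustration – showing why A(2)’s distrust leads it to object to a plan that would in fact serve F(2):

```python
# Toy model of scenario 3: A(2) trusts A(1) only with probability T(2->1) = 0.5.
# All payoff numbers are invented purely for illustration.

T_2_to_1 = 0.5           # A(2)'s credence that A(1) is honest
payoff_if_honest = 10    # value to A(2) if it supports the plan and A(1) is honest
payoff_if_deceived = -8  # value to A(2) if it supports the plan and A(1) is a self-serving optimizer
payoff_go_alone = 3      # value to A(2) of withholding its resources and acting alone

expected_support = T_2_to_1 * payoff_if_honest + (1 - T_2_to_1) * payoff_if_deceived

print(f"expected value of supporting A(1)'s plan: {expected_support}")
print(f"value of objecting and acting alone:      {payoff_go_alone}")

# With T(2->1) = 0.5 the expected value of supporting (1.0) falls below acting
# alone (3), so A(2) rationally objects from its point of view -- even though
# A(1) is in fact honest and support would have paid 10.
```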
Under scenario 3, both agents now have a variety of strategies at their disposal:
Number 1 is a given under scenario 3. Number 2 is risky, particularly as it would cause a further reduction in trust on both sides if this strategy were deployed and the other party found out; similarly with number 3. Number 4 seems like the way to go, but may not always work, particularly with large differences in C(i) among the agents. Number 5 is a likely strategy given a fairly high level of trust. Most likely, however, is strategy 6.
Striking a compromise builds trust over repeated encounters and thus promises less objection, and therefore a higher total payoff, the next time around.
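A minimal sketch of that claim – the trust-update rule and all constants below are assumptions added for illustration: if each compromise nudges T(2->1) upward, the objection cost shrinks and the realised payoff grows over repeated encounters.

```python
# Toy iteration behind the claim above: each compromise nudges T(2->1) upward,
# which lowers the objection cost and raises the realised payoff in later rounds.
# The update rule and all constants are invented for illustration only.

trust = 0.5            # initial T(2->1)
joint_plan_value = 10  # value of the joint plan if executed without any objection

for round_no in range(1, 6):
    objection_cost = (1 - trust) * joint_plan_value  # value lost to A(2)'s opposition
    realised_payoff = joint_plan_value - objection_cost
    print(f"round {round_no}: trust = {trust:.2f}, realised payoff = {realised_payoff:.1f}")
    trust = min(1.0, trust + 0.1)  # a successful compromise builds some trust
```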
Assuming the existence of an arguably optimal path leading to the maximum possible satisfaction of a given utility function, anything else would be irrational. Such a maximally intelligent algorithm actually exists in the form of Hutter‘s universal algorithmic agent AIXI. The only problem is that executing said algorithm requires infinite resources, making it rather impractical, as every decision will always have to be made under resource constraints.
Consequently, every decision will be irrational to the degree that it differs from the unknowable optimal path that AIXI would produce. Throw in a lack of trust and varying levels of capability among the agents, and all agents will always have to adapt their plans and strike a compromise based on the other agents’ relativistic irrationality, independent of their capabilities, in order to minimize the other agents’ objection cost and thus maximize their respective utility functions.
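AIXI itself is incomputable, so as a stand-in here is a toy sketch of the underlying point that decision quality degrades with the computation budget; the reward rule, the depth-1 ‘greedy’ planner and the gap measure are all assumptions of mine and are not AIXI.

```python
# Toy illustration of decisions under a resource constraint: an exhaustive
# planner versus a planner that only looks one step ahead. The reward rule and
# the 'degree of irrationality' measure are invented; this is not AIXI.

from itertools import product

def total_reward(seq):
    """Each action 1 costs 1 immediately, but the full sequence (1, 1, 1) pays a bonus of 10."""
    reward = -seq.count(1)
    if seq == (1, 1, 1):
        reward += 10
    return reward

# 'Unbounded' planner: searches the full horizon (stand-in for the unknowable optimum).
best_seq = max(product([0, 1], repeat=3), key=total_reward)

# Bounded planner: judges each step only by its immediate reward (depth-1 search),
# so every 1 looks like a pure loss and it picks 0 at every step.
greedy_seq = (0, 0, 0)

gap = total_reward(best_seq) - total_reward(greedy_seq)
print(f"optimal sequence {best_seq}: reward {total_reward(best_seq)}")
print(f"greedy sequence  {greedy_seq}: reward {total_reward(greedy_seq)}")
print(f"degree of irrationality (reward gap): {gap}")
```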
Yesterday I took delivery of 500 copies of Jame5 as a 188-page paperback. The quality is good and I am happy with how the print turned out – nice. If you prefer the paperback over the PDF, feel free to buy a copy – the content is identical. As for the price, I will charge 29.99 Euro plus 3 Euro postage and packing to any destination worldwide. So for a grand total of 32.99 Euro you can own your very own first edition of Jame5!
As for forms of payment, I will accept bank transfer inside the European Union and PayPal from the rest of the world – no money orders, sorry. Feel free to drop me an email and I will give you the payment details. Include a desired dedication and I will be happy to oblige. Letting me know your shipping address would not hurt either.
Many thanks!