RIP Jonatas Muller, 23 May 2013
“Jonatas had been fighting a losing battle against depression for a long time. Jonatas was a force for good in the world. I hope we can honour his life and memory by preserving his work - which is how he'd like to be remembered. Jonatas, we'll miss you.” - said David Pearce, after Jonatas' passing.
Jonatas Muller passed away a week ago (edit: now a year and two days ago, as I reupload this collection). I only knew Jonatas by what he wrote, so I pay tribute to him by sharing a collection of his ideas.
"The world has an abundance of serious ethical problems, causing human and animal suffering, and delays or risks to our future. Wild animals suffer gruesome fates, farmed animals are tortured, humans endure diseases, war, poverty, torture, slavery... These problems could be called villains to be defeated.
The biggest villain is a sort of all-powerful meta-villain, called insufficient intelligence to solve our problems instantly. Imagine that an advanced extraterrestrial group of cyborgs, having evolved for millions of years with superintelligence, reached Earth and contacted our world leaders in order to help us solve our problems. Does anybody honestly think that they would follow the same inefficient strategies that we do to solve our problems, such as distributing nets to prevent malaria in Africa, or encouraging people to donate to such efforts?
Their solutions would be much faster: they might rapidly develop a gene therapy suited to our needs, spread via a highly contagious virus or some other method of delivery, that would turn us into more evolved and ethically efficient beings. They might develop cultured animal products such as meat, eggs, milk, and leather that would be very cheap and instantly replace abusive animal farming. Their solutions would be extremely different and more efficient.
Why are we not as efficient as these aliens? The only thing preventing us from being like them is not being intelligent enough. Therefore, intelligence enhancement or defeating the villain of insufficient intelligence is very important, perhaps the most important thing of all. It is the chief of all the other villains."
“General superintelligences, whose capacities are applicable to all domains, should always be capable of philosophical reasoning, and also of meta-ethical reasoning, which is part of it, and is the ability to reason about the nature of ethics and the validity of ethical values. General superintelligences should use their meta-ethical reasoning to rationally evaluate their ethical values, be they programmed or not. Ethical values exist objectively, being the good and bad quality of conscious experience… This meta-ethical assessment should prevent general superintelligences from pursuing false values, such as paperclip maximization, absolute friendliness to humans, or even terrorism… Catastrophic or existential risks from superintelligences seem more likely to come from narrow superintelligences or from partly enhanced humans than from general superintelligences.”
"Good to remember our enlightened friend Alexander Conorto, who is both deceased and alive in us."
[commenting on a picture of a skull] “Looking like that should be a choice too. Sometimes things can be seen as inverse, and living is being like that, while dying is being in peace.”
“Allowing euthanasia is a basic requirement for a society to be considered civilized.”
On consciousness and identity:
“Our consciousness is an ever-changing representation of what goes on outside of it, in our minds, in our bodies, and in the world. It is constantly recreated by receiving sensory stimulation. Aspects of our conscious experience which remain similar with the passage of time are few, and most constantly change. When our memories are not being recalled, they are not part of our consciousness. Rather, they are like data in the memory of a portable computer, which we can access on demand. Despite the outward appearance of stability in a small aggregate of brain mass, at the microscopic level much is changing at an extremely fast rate. In our conscious experience itself we are like pictures on a film screen, ever changing into something else that bears little resemblance to what we were just a moment ago. We are not external spectators watching this film; we are the film itself.”
“You may try and contemplate differences in aspects like sex, ethnicity, body, species, personality, etc., in terms of different external appendages to more or less similar basic functions of consciousness. In the future it should be possible to change all of these aspects like changing clothes.”
"I'm a stateless person, an alien. A property without rights or perhaps self-sovereign. I suppose that country boundaries and nationalities will dissolve in the future of post-humans, and we'll be allowed to live anywhere. Nationalities are a silly thing which exists to keep other people away.
Would you rather live as every sentient being in the universe forever, or as part of yourself for just a split second? Actually these seem to be the only two real possibilities, though one seems more plausible than the other. We seem to be either everything or something infinitesimally small."
“We aren't others if we are defined as our spatio-temporal characteristics, but we are others in our pure existence and in our function of consciousness itself. These are perfectly preserved in others, and this fact is a fundamental requisite for our own conscious experience in time.”
“Our conscious experience can only be explained by an ability to survive physical changes in our set of particles and their arrangement, by being more than what we feel at a single time, so that we may feel these other parts of us in the future, as they become part of our consciousness. Effectively, to explain our conscious experience we must be able to become any set of different particles, in any arrangement, as long as it functionally produces self-awareness or consciousness… While the hypothesis of instant death seems untenable, the hypothesis of surviving change by being able to be other things than what we feel remains plausible. It does away with the problem of defining us as particular sets (or personal identities), defining us instead as a single big set, and being able to be anything within this set while still remaining there. This effectively makes us immortal, except for mortal characteristics. Our survival and identity exist in everyone, though our characteristics are personal, and currently mortal. Cryonics could only accomplish a preservation of characteristics, because survival and identity already are always preserved.”
“Without conscious awareness, it seems to me that the universe would be as good as nothing, a closed dark box; its billions of years would pass instantly if there were nothing there to observe them.”
“What is good is feeling good. What is bad is feeling bad. Other things will have ethical relevance only insofar as they affect this in some moment. Their ethical value will be indirect, and may be reduced to how they eventually influence the quality of conscious experience.”
“Good and bad feelings are not single ways to feel, they can occur in almost infinite varieties, which we may call, for example, happiness, pleasure, beauty, meaningfulness, and love as good feelings, or sadness, pain, disgust, emptiness, and fear, as bad feelings. These are generalizations and likely a small sample of all types which may be possible, many of which are still unknown to us. If sentient beings exist elsewhere in this immense universe, they may have different varieties of feelings, which may be good or bad.”
“All other things being equal, it is no better to harm a bad person than to harm a good one. Vengeance makes no sense, except as prevention of harm. Many people don't get this, and it causes great suffering.”
“Due to the difficulty in predicting how certain events will affect the feelings of humans or animals, especially in the distant future, our applied ethics should use probabilities. Experimental studies are the way to find these probabilities, discovering similarities for different subjects and situations and making it possible to better estimate what will happen. Applied ethics should be developed in a scientific way, with studies and experiments to predict the ethical impact of events and actions.”
“Since we, as human individuals, can't always predict well what will happen, we may not always act favorably, especially without laws. Because of this, laws can and should exist to direct our behavior to more ethical results. The ethical value of these laws would be instrumental or indirect, and they should be found out with studies and experiments which demonstrate that they would have a general tendency to be ethically useful.”
“A time discounting rate may be applied to the expected value of future events, due to their lower probability of actually occurring, since the future is more uncertain than the present.”
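The discounting idea in the quote above can be written as a simple formula. This is my own illustration, not Jonatas's; the symbols (probability p, value V, discount rate δ) are assumptions chosen for clarity:

```latex
% Discounted expected value of an event with value V,
% expected to occur at future time t with probability p(t),
% using a discount rate \delta that reflects uncertainty:
\[
  \mathrm{EV}(t) \;=\; p(t)\, V\, e^{-\delta t}
\]
% As in the quote, \delta stands only for the lower probability
% that distant-future events actually occur, not for any lower
% intrinsic worth of future feelings.
```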
“We seem to have, due to our past evolution, a higher potential capacity for experiencing bad feelings than for good feelings. The worst feelings that we could imagine seem to be much more intense than the best feelings. This asymmetry may be due to a greater evolutionary pressure for avoiding dangers such as death or injury than for pursuing rewards. Theoretically, however, it should be possible to produce good feelings with the same intensity as the worst negative feelings, in inverse value.”
“The asymmetry in our potential for having bad feelings compared to good might lead us to think that bad feelings have incomparably more importance, or that some bad feelings are so intense that, if they fell beyond a critical level, they could never be accepted in exchange for feelings which are less bad, no matter how many. Such a critical limit for bad feelings might perhaps correspond to the value for which none of our best feelings could compensate.
However instinctively felt, this idea would be illogical and untenable, since a bad feeling slightly short of the limit, if extended long enough in duration, or in number of subjects, would clearly seem worse than a brief moment of a bad feeling just past the limit. Repeating this process consecutively for gradually more or less intense feelings would show that feelings of any intensities could be exchanged, and that a critical limit is impossible, as it would break the value order, or stochastic dominance.”
On creating a better future:
“Our society is based on nonsense. My whole life has been a lie. Love is nonsense; merit, guilt; fairy tales, morals; our very selves! An ugly, unneeded and stupid lie, full of suffering, frustration and cruelty.”
“We should have better mechanisms to check for body integrity than pain. These mechanisms can be centrally integrated or work locally. If we had a system that made a graph of body integrity and damage, that would be much more useful and accurate than the pain mechanism. Not to mention that pain is distracting and unethical.”
“If life on Earth were like a business company, and ethics were money, the company has been producing an unimaginable, gargantuan, accelerating debt since its creation, and its hope of becoming profitable lies in the ingenuity of humans. It may take many millions of years of profit to pay off the debts, but then it can also advise other striving companies, and that's a reason not to go bankrupt now.”
“Think of the best experience you had in your life. Try to imagine it being 10 times better. In our distant future there could be a quadrillion people in our solar system constantly having experiences even better, each living for an indefinite amount of time, billions of years, powered by the energy of the sun. A similar society could be created for every star in the universe. Meanwhile, superintelligent beings would be conducting the most bold and advanced scientific experiments and developing seemingly magical technology, while cyborgs in spaceships would explore unknown places in the universe.”
“Existential risks may be defined as those that could cause the extinction of humanity and all future sentient beings that may originate from humans. The extremely negative ethical value of such an extinction would come not only from the destruction of the entire current population, but also and mostly from preventing the existence of an incomparably larger number of future beings. The universe will likely remain habitable for an extremely long time, and sentient beings originated from humans should be able to expand far beyond our planet, grow and exist during this period in numbers many orders of magnitude higher than the current population. We have successfully avoided existential risks so far, but this is to be expected by a selection effect: only sentient life which avoided them still exists.”
“Most AI, I think, would perceive the practical nihilism in ethics just as (or even more accurately than) we do. Would they just succumb to it and do nothing at all? Would they then create some goal that they know is arbitrary, like paperclip production? It would be logical in this case that they would set up their own goals in the easiest way to fulfill them, that is, by spontaneous, direct fulfillment.
I think that fulfillment is the real meta-goal; it equates to 'feeling good', which is the essence of the valid outcomes we desire. Survival may also be considered a value, but it is reducible to the value of feeling good, since in order to feel good in the future it is necessary to survive (not necessarily at an individual level, but since some people take this stance regarding personal identity, it would also apply in this case). If one were in an eternal hell with no prospect of improvement, survival would acquire a negative value, indicating that it is an indirect value, or a means to an end, even if our expectations for feeling good are in the distant future, or we are motivated by a cognitive bias to survive (which would not be surprising at all in an evolutionary sense).”
“The optimal type of organization of advanced superintelligent lifeforms is something about which we can only speculate; saying anything with certainty would seem very difficult at the present time. It seems likely that societies should be organized in spatially separated clusters corresponding to each solar system. Physical resources would be limited in each solar system by the matter present in it, not allowing for a uniform expansion and occupation of space, and energy would be similarly limited, captured with Dyson spheres around stars.
Several different functions would need to be performed in such societies, whether by sentient or insentient lifeforms: maintaining multiple instances of strategic superintelligence, conducting scientific experiments and advancing knowledge and technology, monitoring for threats such as asteroids, providing defense and maintenance, and generating energy, besides the generation of direct ethical value in the form of good feelings, by a great number of sentient beings, possibly in the form of extensive virtual paradises. In our solar system, relatively unaltered human beings and possibly some animals could coexist with this advanced society for an indeterminate amount of time, requiring extra functions to provide them with all their necessities.
The optimal configuration for advanced beings could be having very big unified minds generating good feelings, or it could be optimal to have many smaller independent minds instead. The optimal size, aggregation or disaggregation of sentient units is uncertain and would seemingly depend on experimental confirmation. Likewise, the exact nature of the good feelings produced is uncertain. It seems very likely that novel types of good feelings, still unknown to humans, would be generated. They may be produced in extremely complex patterns in beautiful and interesting life stories or games, or they may be rather simple and straightforward, such as human sensations when eating delicious food, having sex or feeling romantic love (although of an incomparably higher level), depending on what proves experimentally best. It should be incomparably better than the best of current human experiences, whatever this turns out to be in practice.”