Utility & Utilitarianism
Utilitarianism is an ethical system based upon maximising or minimising some utility function. A utility function, in this sense, is an operation on a set of information that produces a value. In other words, it is a function that makes statements like "state A of the universe is less desirable than state B". In conversations around human ethics, this utilitarian utility function is often vaguely expressed in terms of "well-being" or "happiness". On occasion, I've been called a utilitarian because of my discussions of heuristics, of ethical systems as any systems which rank the future, and of "maximising" or "minimising" things, but I'm not quite sure I understand what it means to be one.
It seems to me that all ethical systems contain utility functions, by their very nature of being information systems which rank future states of the universe. Someone driven by a religious ethical system might take their utility function from an ancient holy text. Someone driven by egoism centers their utility function entirely on their own pleasures and desires. A nihilist may choose to implement an entirely random utility function. All of these ethical choices are driven by some concept of utility. What then distinguishes these systems from the label of utilitarianism?
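To make this concrete, here is a minimal sketch (my own illustration; the states, features, and numbers are all invented) of how the three ethical stances above each induce a utility function, and how every one of them produces a ranking over future states:

```python
import random

# Hypothetical future states, each described by a few invented features.
states = {
    "A": {"aggregate_happiness": 0.9, "own_pleasure": 0.2},
    "B": {"aggregate_happiness": 0.4, "own_pleasure": 0.8},
}

# A classically "utilitarian" function: aggregate well-being.
def wellbeing_utility(state):
    return state["aggregate_happiness"]

# An egoist function: only the agent's own pleasures count.
def egoist_utility(state):
    return state["own_pleasure"]

# A nihilist function: an entirely random valuation.
def nihilist_utility(state):
    return random.random()

# Each utility function induces a ranking over future states,
# i.e. statements like "state A is less desirable than state B".
def rank(states, utility):
    return sorted(states, key=lambda name: utility(states[name]), reverse=True)

print(rank(states, wellbeing_utility))  # ['A', 'B']
print(rank(states, egoist_utility))     # ['B', 'A']
```

Note that nothing structural separates the three functions: each takes a description of a state and returns a value, which is the point of the question that follows.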
Occasionally, I read LessWrong. People on LessWrong often describe themselves as "rationalists". It's a little tricky to pin the term down beyond "users of LW", though many have tried. I won't try to define a community I don't identify with, and that isn't the point of this post. I wouldn't describe myself as a rationalist, for reasons that should be obvious if you read more of my writing; to be honest, if you asked me what intellectual labels I identify with, I'd probably just say "dumbass". Rather, I'd like to examine what I think is the community's overuse of parable as a deceptively irrational rhetorical tool for explaining epistemic and empirical concepts.
A parable is a story meant to convey some sort of lesson to the reader. Many religions and schools of philosophical thought are packed with parables meant to convey moral or logical ideas. It is, quite evidently, a fantastic way to get people to think about a certain thing and to retain their interest. It's also, unfortunately, a deeply flawed one if our intention is truly for the reader to rationally and objectively evaluate our ideas.
Egos Are Bandwidth Membranes
What do I mean when I say the word "ego"? Previously I've said that I mean an intelligent entity who possesses continuous agency within the world, a stateful sense of self, and preferences for the future. Human beings normatively value this form of intelligence highly, because we are egos ourselves. However, these three properties are only measurable by inferring internal state from the entity's external actions. They are subjective experiences which manifest externally in measurable ways, but this definition alone gives us no understanding of the ego's material structure or physical limitations.
Can we find another, more physical lens through which to examine this sublime object? Undoubtedly. But which one is most interesting? I've been thinking about the lens of informational bandwidth. Can we perceive an ego by the relative bandwidth of information flow within a system, as a bandwidth membrane separating an area of high bandwidth from an area of lower bandwidth? What possibilities and limitations does such a lens suggest? Is your ego truly dependent on being enclosed from the outside world through these membranes?
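One toy way to operationalise this lens (entirely my own sketch; the component names and bandwidth figures are invented) is to model a system as a set of information flows between components, and ask whether some subset of components exchanges far more information internally than it does across its boundary:

```python
# Toy model: pairs of components and the information bandwidth
# (arbitrary units) flowing between them. All values are invented.
flows = {
    ("memory", "attention"): 500.0,
    ("attention", "planning"): 450.0,
    ("planning", "memory"): 400.0,
    ("attention", "senses"): 5.0,   # crosses the candidate membrane
    ("planning", "muscles"): 3.0,   # crosses the candidate membrane
}

def membrane_ratio(inside, flows):
    """Ratio of bandwidth inside a candidate ego to bandwidth crossing its boundary."""
    internal = sum(bw for (a, b), bw in flows.items()
                   if a in inside and b in inside)
    crossing = sum(bw for (a, b), bw in flows.items()
                   if (a in inside) != (b in inside))
    return internal / crossing

candidate_ego = {"memory", "attention", "planning"}
print(membrane_ratio(candidate_ego, flows))  # 168.75
```

A high ratio marks a sharp drop in bandwidth at the boundary, which is one crude way of saying "there is a membrane here". Whether anything ego-like actually corresponds to such partitions is exactly the open question.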
Egos Are Fractally Complex
"All models are wrong, but some are useful" -- George Box
As our ability to model and simulate systems grows, we exert more and more computation on simulating ourselves - human agents, society, the systems which make up our existence. We simulate economic systems, weather systems, transport systems, power grids, ocean currents, the movements of crowds, and a million other models of our real world that attempt to predict some spread of possibilities from a given state.[^1] However, a model can only simulate the abstractable, and there is one object which remains resolutely unabstractable - the agency of egos.[^2] We may find ourselves immensely frustrated by this property in the future; I believe abstracting that agency is an insurmountable task. Here's why.
Ethics & Arbitrary Objects
There are many abstract, mental systems that we build throughout our lives. We build systems firstly in order to interpret our memories and sensory input into an internal simulation of "reality", our experience up to that moment. Secondly, we then run simulations of the future and predict their probability. These systems are abstract objects within your mind that define your experience of the world.
However, at some point, your systems will come to the final part of this process of existence: selecting which predicted future you should pursue. Eventually you have to answer questions like "Why should we reduce human suffering?" This involves using some form of heuristic to estimate a future's value, and then balancing that value against its probability.
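That last step - a heuristic value estimate weighed against a probability - is, in its simplest form, an expected-value calculation. A minimal sketch (the candidate futures and all numbers are invented for illustration):

```python
# Candidate futures, each with a heuristic value estimate and a
# predicted probability of coming about. All numbers are invented.
futures = [
    {"name": "reduce suffering", "value": 10.0, "probability": 0.3},
    {"name": "status quo",       "value": 2.0,  "probability": 0.6},
    {"name": "make it worse",    "value": -5.0, "probability": 0.1},
]

def expected_value(future):
    # Balance the heuristic value against the chance it comes about.
    return future["value"] * future["probability"]

# Select the predicted future worth pursuing under this heuristic.
best = max(futures, key=expected_value)
print(best["name"])  # "reduce suffering": 10.0 * 0.3 = 3.0 beats 2.0 * 0.6 = 1.2
```

Real ethical deliberation is of course not this tidy - the hard part is where the value estimates and probabilities come from - but the arithmetic at the end takes roughly this shape.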