Note: Many noders have ethical opinions, but they are tucked away in various nodes. Those of us looking for views on such things have trouble finding them. I will probably be making quite a few Ethics - Xyz nodes. Feel free to add to any of them. My views are by no means correct.

One of the strong ethical theories of our time is utilitarianism. The basic idea is that whatever produces the most utility is the morally best thing to do. Utility is defined as happiness, pleasure, or something else of that sort, depending on who you ask.

It seems certain, from an objective standpoint, that utility is the best goal. You want to cause the most of what is most important, and it seems that to humans (and other creatures), some sort of happiness is most important. We all work towards happiness, although we might have different ways of achieving it (money, friends, health, children).

Utility might also be defined in terms of well-being for those who wish to take the environment into account, but for humans, happiness is the most important part of well-being. You could also apply utility only to yourself (egoism). But that would not be objective, and I think that ethics, if not morality, must be objective.

I would also say that utility is something any ethical system should take into account.

But I do not agree with ethical utilitarianism.

The problem here is that utilitarianism holds that the morally correct thing to do is the one that produces the most utility, and it is morally incorrect to do something that reduces utility. So here's an example of a utilitarian-incorrect but morally-correct action.

Ed cannot see the future. He can only make good guesses as to what might result from his actions. He must make all moral decisions based on these guesses, and he is aware of this.

Ed is walking through the woods one day, and he comes across a lake in which a man is drowning. He must make a moral decision--should he save the man or let him drown?

Ed decides that the moral thing to do is to save the drowning man. He throws him a lifesaver, and pulls him to shore. The man thanks Ed and leaves.

The man assassinates ten world leaders in the next seven days, starts a nuclear war, and commits suicide by tying a hydrogen bomb to his back and walking into the United Nations.

By the utilitarian system, Ed did a bad thing (morally). By my system (a mix of duty and virtue ethics), Ed did the right thing (morally). Ed did something that was not in the best interests of humanity in either case, but one system damns him, the other praises him.

There is much debate on this, but it seems to me that an ethical system that judges you based on things you have no control over is no ethical system at all. As you can see, I am defining an ethical system as one that judges you on your intentions. You should not be given positive or negative moral 'scores' based on something you cannot foresee. It would then be conceivable that Hitler was more moral than I, which I would like to think is not the case.

The reason I have for defining an ethical system in this way is that it seems that a moral system is set into place to judge humans qua free agents. And to judge them in this way, you judge them on what they try to do. If you wish to judge humans on what they actually cause to happen, you will certainly produce an interesting statistic. But it doesn't tell me how 'good' a person is, or whether I should like or trust the person.

Note though, that it is my duty to try to do that which causes the most utility. I am judged (morally) on my intent to produce good utility. Ed intended to do good. He is morally good, although instrumentally bad.

I mention all of this because I think it is an important distinction that is very often overlooked. This is only a brief overview -- I could also add bits about immorality through ignoring instrumental effects, instrumentality through encouraging morality, etc. But I think that this gives you a basic idea of my views.

Please feel free to stick your own views here. Please stick to the topic. If you have other issues to discuss, make a new node. It costs nothing, and if you make it in the Ethics - Xyz format I and others will see it.

I will rework this node as I see fit, so direct replies may not be a good idea. If you do wish to reply to a specific point I've made, you should probably quote it or a paraphrase of it in your WU.

General Wesc: The 'goodness' that people are is different from the 'good' that things are. (Remember, all this is my own personal opinion). Ed is a good person, because he does the best possible action (from the view of foresight -- which is all he has). The social system he is working within can be good in that it makes people happy, or bad in that it does not. It cannot be 'bad' in a moral sense. (Likewise, a coffee cup may be a bad cup -- it spills coffee. It is not a morally bad cup).

The 'bad' that I apply to people just slides off of systems and things. The weather can be bad, but not morally bad. Windows can be bad, but only the programmers can be morally bad. Our laws may have been created with malice and hate (or not...), but it just doesn't make sense to call them bad in the same way that the man Ed saved was bad.

So in other words, I agree with "Ed has indirectly caused nuclear war and all sorts of other Bad Things. Let us praise him." Except that I would prefer to say that "Ed saved a life, and indirectly caused nuclear war and all sorts of other Bad Things. Let us praise him, and learn from his lesson." Although frankly, I'm not certain what we would learn from this specific lesson...

Ed has indirectly caused nuclear war and all sorts of other Bad Things. Let us praise him.

Wait, that doesn't sound right...

Ed has indirectly caused millions of deaths and various Bad Things, albeit accidentally. What a rotter!

Hmmm...that doesn't sound too nice either.

Ed did the wrong thing. He did a Bad thing. He started a global thermonuclear war. Nothing Ed has ever done resulted in anything good, but Ed tries to be good, thus Ed is good.

Question: Doesn't a moral theory have to result in Good if followed properly? A population of Eds could result in daily catastrophic disasters. Sounds like the deontological-character theory thinks this is good. If I support that theory, then I support all these Bad Happenings. (I seem to be stuck in a teleological mind-set.)

"Gee, that was a good action."
"He just killed 50,000,000 innocent people!"
"Yeah, but he didn't mean to, so it's okay."

I'd like to say that everything Ed does is Bad, but Ed is still a Good Person. Is that permissible?

"Morally good, although instrumentally bad": Aren't we simply instruments for causing goodness? ("moral agents")
