"Playing God": When does the line begin?
The entire "Are we screwing ourselves?" topic has led my mind to percolate another question entirely: when does "playing God" come in as a factor? Where do we draw the line? I'm an atheist myself, so I speak metaphorically. But when is it okay to intervene to save sick or dying men, women, and children? And when should we draw the line and not pursue certain areas of research (like cloning or sickle cell research, both of which come very close to altering a human being almost entirely to our own design)?
I'll use the "screwing ourselves" topic as an example of how we could, possibly, have already crossed a line. Simply put: we found cures for polio, malaria, smallpox, etc., which allowed people to live on where they would have died. They then reproduced, passing on genetic defects for other diseases we're unable to cure. If that sets a far-future generation on the road to near-extinction because certain diseases evolved to resist our medicine (and since we're full of genetic weaknesses now, our immune systems are too weak to fight them off on their own), did we not just alter the natural course of humanity by "playing God" and screw our entire species over? If you save a man from falling off a cliff, and that man later goes and kills a family of four, did you really do the right thing? Or were you "playing God"? Discuss. |
Quote:
This also applies to the more serious argument. You can't know what will happen in the future. Yes, we can make predictions, but ultimately we can't see the future. We can only try to do what we think is right in the present. |
You guys are arguing a bit too specifically. Those were merely samples to get your minds thinking. The main point is, if people are able to, say, research stem cells, or even go down the road of genetic alteration to improve humans as a whole... if we're able to, should we? If so, where is the line then? And if not, do we draw the line there, or could we roll it back to, say, anytime we save someone from dying, especially if it seems like their time to go? Or even to the point where we've begun polluting the air... or advanced our technologies to a certain point.
A truly philosophical question: it's not about right or wrong, but about the many ways to look at it. |
My my, this reeks of Kantianism vs Utilitarianism! My favourite! =D
Ok, ok, so here's how it goes: Kant argues that we cannot predict the consequences of our actions. He says that consequences should never be the basis on which we act, since even seemingly clear consequences are not completely reliable. Conversely, if you do the right thing (which is determined via two maxims that require your action be universalisable under his categorical imperative... I'll go into that if someone's actually interested), then you did the right thing at that moment, which is something we can control. Our duty to these morals is entirely reliable, as we control it, unlike the future. There's more to it of course, but there's your support for why we should save every person we possibly can right now.

Act utilitarians? They might disagree. It depends. In judging this, you have to carefully try to decide what the consequences might be. Anything is justified if it brings the greatest happiness to the greatest number. (No, that doesn't mean you get to slay 49% of people for the good of the 51%; it means getting the greatest happiness for the absolute maximum you can, ideally 100%.) So perhaps an act utilitarian would argue that we should hold off from rushing into every single development and instead try to work out reliable cures, since even though the majority might be deprived temporarily, it would bring the greatest happiness to everyone if we could go down a path of medicine that wouldn't lead to the "Screwed ourselves?" topic's scenario. Mind you, being the tricky beasts they are, other act utilitarians might be able to argue it the other way, which is why I can't be totally concrete here.

But regardless, just so you all know, you're heavily referencing ideas found in these two moral philosophies. Do we attempt to predict the future and control the consequences? To what extent are we responsible for an action's impact on the future?
How far into the future should we look, and when can we blame someone for acting in the now and ignoring the future? These are important questions that arise between the two moral systems, and they would have to be answered to settle the matter here. Sadly, it's not that easy to decide ^^; (For way more info than you ever wanted to know about Kant/Utilitarianism, PM me. On the other hand, anyone who's formally studied philosophy longer than I have should feel free to show up and tell me how I'm talking crap and oversimplifying ^_^) |
I think "playing god" is the wrong term to use in this situation.
Playing God would be more along the lines of using powers beyond our capacity to fully understand, for our own petty amusements.

My opinion: if knowledge is standing at the edge of a cliff, and saving it might one day cost us our families, we should still pull it away from the ledge. If there's one thing I've learned from Star Trek, it's that technology (or knowledge) sometimes gets ahead of us, and when it does, it's better to have known it and lost everything because of it, just so that we can go back and use that knowledge more effectively. Of course, leaps and bounds in technology are packed with ethical questions (but this thread doesn't appear to be about addressing those). It's up to us to pick and choose which technologies we put our faith in.

Personally, I'd rather support stem cell research than nuclear weapons. Both are essentially methods of 'playing god': one is meant to preserve life (and may bite us in the butt in the future), and one is meant to extinguish life, and will quite likely lead to the annihilation of most life on this planet (unless something survives nuclear winter). Anyhoo, I'd rather take a road that sounds positive, and could turn out either positive or negative, than a road that sounds negative and has more potential to bring negative results.

So, to sum up: let's 'play god' and see where it takes us. If we're not ready, we can learn from our mistakes, and if we are, we can continue on to greater things.

edit for fifthfiend's sake: If a potential law or ethical rule can work when EVERYONE follows it, then it is universalizable, and valid. For example: if EVERYONE in the world followed the rule "it is okay to lie if it gets you what you want," then everyone would realize that others are lying, and in order to lie, one would have to tell the truth... sounds a little convoluted, but eventually we conclude that such a rule is logically impossible to execute.

...I've forgotten the second maxim... I think it had to do with the preservation of life, or the greater good... I wish I could find my philosophy notebook.

edit again: thanks pictish, now I remember. If you treat others as ends, then you are essentially making the ends of all your means "other people," which basically adds up to: don't do anything that would make someone else a means to some end other than the greater good of others. (He also has a bunch of stuff about how you wouldn't put some starving guy's needs ahead of your family's needs, but I don't think that's really on-topic.) |
There are of course questionable areas in the following, but there are problems in all areas, surely? (And I might explain them semi-badly.)
Righto, so Quote:
Likewise, you wouldn't kill, steal, etc., since you wouldn't want to justify anyone doing that. The most important bit of this might be that it's the categorical imperative, not a hypothetical imperative. This means you can't say "It is not right to kill. Unless..." or attach ifs or any kind of added condition to a rule. The rule must stand on its own unconditionally. This leads to loads of stuff being universalised: no lies, nothing like that. (This leads to the problem of a clear murderer coming into your house and asking the whereabouts of your friend, who is hiding in the next room. You cannot lie. However, you also cannot allow him to murder your friend, so you must then heroically save your friend without acting immorally toward the killer... extreme case. Think Vash from Trigun, really.) Second up, much simpler - Quote:
The basic idea of this is: "When you act on another person, treat them as their own human being whose end result of an act matters. It's not ok to use someone to fulfill your end while disregarding what happens to them." So, no slavery, no killing for gain, no paying people so little that they can't survive. Mind, it's 3:30 am here. This probably isn't the best time for me to be posting this, but I think that mostly makes sense of it. Any questions, I could probably dive deeper. There are problems with it, but many of them do have some kind of response. I tried living under Kantian ethics for a month. My god, it was hard. I failed miserably, no matter how hard I tried. |
I don't see the problem with doing those things. That is to say, it doesn't seem there's any inherent problem. Playing god is only an issue if you believe in god.
As for trying to predict the results of our actions: yes, I totally believe that we should attempt to do that. Frankly, it seems ridiculous not to. A goal should be set for the future, and that goal should be worked towards. We can't know the future, but we can make educated guesses. This doesn't necessarily equate to taking crazy risks, either: if something could be of benefit but is very risky, that doesn't mean it has to be done. Meanwhile, if we take the alternate route and only pursue immediate goals, what are we to do with our foresight? Are we to ignore the obvious repercussions of our actions, even when they're negative, for the sake of the actions themselves?

Let's take an example. We could put antibiotics into the drinking supply. A fair number of people would stop being sick, or would be prevented from becoming sick. Immediate and widespread good. Great, right? Except it would accelerate bacterial evolution toward immunity to our best ways of fighting it, leaving us totally boned in the future. But, again, there was immediate good, so it's cool, right? |
Sticking the water full of meds would have more immediate problems than people getting diseases in the future. And anyway, we have no real idea whether the strains of viruses, diseases, et cetera would evolve too fast for us to counter them.
|