Why the future doesn't need us
Just thought you might enjoy a little something to keep you up all night.
I'm sorry the article is so old; I just found it. But it creeped me the fuck out. It just seems so completely plausible and fits with our self-destructive nature. Though I would much rather see a Ghost in the Shell future than a Matrix or Terminator one. Sure, I could comment, but I'm interested in the assessment of others on the boards, mostly because none of my comments would do this thing justice. Will any of this come to pass? Or should I just pour myself a glass of JD and go to bed? Quote:
It's never so much the technology itself, but more the way we propose to use it.
Mankind has a fairly large group of genuinely intelligent people who lead in this sort of thing. Sure, the masses are scientifically ignorant to a certain degree, but we do, in fact, have a reliable group of people leading the next wave of technology. To that extent, I would trust them not to make mankind completely obsolete.

You could say, for instance, that when the auto industry became mechanized and humans no longer had to work on the factory lines to put a car together, those jobs would all be replaced by machines and never return to human hands... but that's just not the case. Even MORE people began working in the auto industry as it expanded; they just had to learn the new set of skills that were needed.

With that in mind, I don't really fear technology and science. The burden is not on their shoulders. The burden is on the masses who elect the leaders who select and fund WHAT KIND of technology gets produced, and HOW. That's the real danger. It worries me because we have a lot of people (worldwide) who think the world will end in 2012, and not enough Neil deGrasse Tysons to counter THAT logic.

A real danger we face isn't really about technology, even though it's a by-product of it, and that's birth control, or the lack of it. It's inevitable. Eventually we will have more people alive than our global society can support, resources will run thin, and the next step is "guess".
Whole hell of a lot of postulating there.
Of course, it's pretty hard to predict what could happen in the next 90 years. I imagine the world will be quite different by then, and barring some cyborg, organ-transplant, or stem-cell procedure, I won't be around to see it. I preferred Asimov's conclusions on the fate of the world plus robots.
I will simply say that humans seem to be very hit-and-miss when predicting what scientific advances will occur and when. It's quite possible that given an eternity, humans might never perfect a self-conscious AI because it is simply not possible. Or they might produce one tomorrow.
In any case, these scenarios seem incredibly specific--I'd be more afraid of what I can't think up. Reading the speculative fiction of the past, people back then were making all kinds of predictions and conjectures about what could happen by now--and most of it hasn't come to pass, let alone the mass of things that will never come to pass because they're actually impossible. And then there's the mass of things that no one ever saw coming at all.

To actually fear the scenarios outlined here as a real threat would logically require a person to become like the Unabomber. Therefore, even if such a threat were real, most people, including the author of the article, are probably perfectly comfortable writing an article like this while secretly thinking, "Yeah, right, like that'll ever happen." Either they're self-deceiving, or they're not truly sincere in believing these are real threats. Or they're like the Unabomber.

So I will say that I don't think this is anything to get worked up about. For one thing, say you had super-intelligent computers making decisions for you--what if those computers happened to not have access to weaponry, and you did? I'm pretty sure you could then choose not to follow their advice at your leisure. Or what if the decisions they were making were actually the best ones, so there never came a time when their decisions weren't beneficial to you? All of the scenarios outlined in this article require very specific factors in order to occur--it's basically impossible to accurately predict them. These are very broad speculations; they're missing all the little predictions that would make them worthy of consideration.

EDIT: Let me clarify with a metaphor (which will probably just make my opinion more confusing).
Let's say someone gave you the box that houses a 24-inch x 24-inch jigsaw puzzle and said that if you put together all the pieces inside the box, you'll get the picture on the lid. However, there are no pieces inside the box, and the person hands you three random jigsaw pieces. He has no proof that these pieces actually belong to this picture, and he's missing all the other pieces that would give them context and connect them to the greater whole. In fact, it's quite possible he just grabbed them at random from a bin of jigsaw-puzzle pieces. But he expects you to believe that he KNOWS those three pieces, in combination with the unknowns, will create the picture on that box perfectly. People who write articles like this are like that guy.
There's a reference here to Ray Kurzweil, and I think it's the same person as the Ray Kurzweil of The Singularity Is Near. Kurzweil's confidence in accelerating technological advancement rests on shaky foundations.
Completely speculative, not really based on anything, and there have been better speculations. Even from 100 years ago you can read, say, Marx or Heidegger, and they give a much richer analysis of man's relationship with technology; there is probably even newer, more sophisticated analysis that I haven't read yet. This is just the kind of rambling I can find in pretty much standard-grade sci-fi.
And then, of course, respected people have been predicting this kind of stuff for a long time, and we're no closer than we ever were.
I see a whole lot of speculation and very few actual facts. Then again, it is impossible to accurately predict the future, our future.
We are in far greater danger from the pollutants and other by-products of technology than from some kill-crazy AI. Plus, I never understood why an AI would want to kill humans. You could say it's because humans are killing the planet, or are imperfect, or other sci-fi plot lines. But why the hell would it care? It has no reason to care whether the planet is habitable or not, no reason to want to take over since it's not driven to expand, and if it's really so logical, it should realize it's not perfect either. There might be some "robot rights" movement, but I see no reason for it to lead to genocidal war.
Besides, what kind of idiot leaves weapons of mass destruction in the hands of an automated system? Even during the Cold War they had the foresight not to use a dead man's switch.
Quote:
Regarding the article: although it may sound plausible, that doesn't necessarily mean it's what will happen.
Just watched an episode of Star Trek: TNG that in part dealt with the question of living in an artificial world where everything is provided without struggle, and how it would be pointless to live in such a world. I'm not sure I agree with that, but in a world of such advanced technology that there is virtually no need for 'work' by humans, nothing to strive for, is there a point to existence?