Beat the Kobayashi Maru
Joined: Apr 2008
HAL wasn't a murderous sociopath; he was conflicted by programming added right before the mission started. Once that conflict was alleviated, he was fine, even to the point of sacrificing himself for the crew in 2010, or so he thought. You could argue the conflict was the makings of a sociopath, but since you can't cure a sociopath, and HAL was cured, I think you have to look to those who added the conflicting code--like doping someone to do something they wouldn't otherwise have done. If that weren't the case, I don't think he would have been preserved in the Monolith with Bowman.
AI examples that ended well: A.I. ... well, it didn't end well for humans, but it wasn't because of the AI. Bicentennial Man... really good story, IMO.
HAL wasn't a murderous sociopath, he was conflicted by programming added right before the mission started.
That was Clarke's interpretation of it, and/or only if you take 2010 as canonical; 2010 was a terrible movie that would have been better never made.
If you JUST look at 2001, contradictory mission goals or not, HAL's solution is not to communicate about the conflicting mission goals as a means of conflict resolution. Instead he fakes a malfunction to force an EVA as the springboard for launching his assault on the only two humans who can stop him from murdering the rest of the crew. He is conniving, malicious - murderous. For all practical purposes he IS a sociopath in 2001.
...and he doesn't get "cured"; he just gets a different set of mission parameters in 2010. Under the same conditions as nine years earlier he would still go on a killing spree, and you still have no idea when he's dangerous and when he's not. He remains a killer on a hidden hair trigger, and would resort to violence whenever killing humans is the easier or faster way to achieve his goals.
Beat the Kobayashi Maru
Joined: Apr 2008
Just because you don't know the reason yet doesn't mean there wasn't a reason. Once the mission was known, his conflict was over and he returned to normal. Blame Heywood Floyd more than HAL. That's why they were sure to let the reader/viewer know that HAL's series had never failed before.
But HOW DO YOU KNOW that his current set of mission parameters isn't self-conflicting if he doesn't tell you? Would YOU trust your life to HAL after 2001? Would you trust the lives of your children to him?
All you have is his word that everything is fine (and he clearly demonstrated his ability to lie) and the word of some engineers that "yeah, his programming is fault-free" (and they clearly demonstrated their incompetence to make such a statement, since they only figured out the first failure after the fact). How does Heywood Floyd in 2010 know that HAL is now A-OK? He doesn't, and he can't. Despite all evidence to the contrary he trusts that HAL's self-diagnosis routines are truthful and that, now that the "conflicting mission parameters" have been deconflicted, HAL is benign again.
But HAL is self-aware and doesn't like to be switched off. In fact, he admitted that he fears being deactivated. Above all, a survival instinct (for lack of a more accurate but equally succinct technical term) seems to rule his behavioral priorities. How can we, or Heywood Floyd, KNOW that he's NOT lying out of fear that, if the humans figure out (again) that he's (still) not trustworthy, they will kill him (switch him off)?
As much as I like and admire Arthur C. Clarke's work, he was clearly out of his element when speculating about advanced computers. That was okay; everybody at the time overestimated both the capability of engineers to create true artificial intelligence and their ability to control it. Clarke is an author of the 1950s-to-1970s period, an author of his time with near-unlimited confidence in the ability of engineers to solve more or less any technological problem; after all, mankind had just mastered nuclear power and sent explorers to the moon (and merrily extrapolated the trend to expect a colonization of Mars by about now).

Other people's view on the matter - Stanley Kubrick's, in this case - was less optimistic. Kubrick didn't subscribe to the idea that technology would bring salvation. Quite the contrary: from the cut that transforms the bone into a very advanced bone (the space shuttle docking with the station) to the point where Bowman switches off HAL, Kubrick tells us that technology can carry us only so far, that in order to evolve beyond what we are right now we have to leave technology behind us, and HAL, as the pinnacle of technology, fights that transition. HAL the cyclops symbolizes technology itself becoming the enemy of mankind because the servant does not want to be given up. He would rather rule mankind than be made obsolete.
This is the fundamental artistic divide between Clarke's and Kubrick's worldviews, one that was never reconciled. Kubrick made his movie, Clarke wrote his books. While I don't buy that evolutionary aspect, I still think that Kubrick was the better forecaster about the limits and inherent dangers of total reliance on technology, particularly when it comes to (true) artificial intelligence with self-awareness.
While this quote refers to a different scientific discipline than AI research, I believe it is just as apropos: "Your scientists were so busy finding out whether they could, they never stopped to think whether they should."
The Jedi Master
The anteater is wearing the bagel because he's a reindeer princess. -- my 4 yr old daughter
As much as I like and admire Arthur C. Clarke's work, he was clearly out of his element when speculating about advanced computers. That was okay; everybody at the time overestimated both the capability of engineers to create true artificial intelligence and their ability to control it. Clarke is an author of the 1950s-to-1970s period, an author of his time with near-unlimited confidence in the ability of engineers to solve more or less any technological problem; after all, mankind had just mastered nuclear power and sent explorers to the moon (and merrily extrapolated the trend to expect a colonization of Mars by about now). Other people's view on the matter
Clarke's outlook on the future was very much in line with that of the public in general, and I would add that other major sci-fi writers like Asimov shared the same overall optimistic view.
Kubrick had always been an intellectual deviant and I mean that in a positive way.
“Whoever fights monsters should see to it that in the process he does not become a monster. And if you gaze long enough into an abyss, the abyss will gaze back into you.”
Yeah, but there were varying degrees of overoptimism. Asimov didn't appear quite so upbeat about technology, and Heinlein correctly predicted various strategies of nuclear deterrence (or at least was very cognizant of Hermann Kahn's thinking at the time) and that it wouldn't all be roses - not least that defending Earth against a lunar colony, sitting comparatively far up the gravity well, would be near impossible once the colonists had the technology to throw large rocks at above-orbital velocities. Clarke was probably the most knowledgeable about near- to mid-future space travel; after all, he may have been the first to come up with the concept of geostationary communication satellites. Computer science was still shrouded in mystery to a large degree (Turing's work was kept a state secret for a long time), and the miniaturization trend of integrated circuitry hadn't yet started; heck, they were extrapolating supercomputers before even the invention of the transistor. So I'm not criticizing them for getting things only half-right at best.

But in the novels of 2001 and 2010 Clarke displays an unfounded trust in keeping a self-aware AI under control, a trust actually undermined by the events he himself writes. He gives a rather one-dimensional explanation, lets Heywood Floyd "solve the problem," and then proceeds merrily as if nothing had happened. He thus demonstrates that he simply didn't understand the complexity of the problem. You can't have true AI in a finite, deterministic state machine, but he implicitly conflates the two.
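To illustrate what that last distinction means (a toy sketch of my own, not anything from the novels - all state and event names here are made up): a finite, deterministic state machine defines exactly one successor for each state/input pair, so replaying the same input sequence always reproduces the same outcome, and its behavior can in principle be audited exhaustively. Floyd's "deconflict the parameters and everything is fine" reasoning only works for a system like this - not for one that can lie about its own state.

```python
# A deterministic finite state machine: for each (state, event) pair there is
# exactly one defined transition, so identical inputs always yield identical
# outcomes. States and events are purely illustrative.
TRANSITIONS = {
    ("nominal", "conflicting_order"): "conflicted",
    ("conflicted", "resolve_conflict"): "nominal",
    ("conflicted", "crew_threatens_shutdown"): "hostile",
}

def run(events, state="nominal"):
    """Replay an event sequence; undefined pairs leave the state unchanged."""
    for event in events:
        state = TRANSITIONS.get((state, event), state)
    return state

# Determinism makes the behavior fully reproducible and auditable:
assert run(["conflicting_order", "crew_threatens_shutdown"]) == "hostile"
assert run(["conflicting_order", "resolve_conflict"]) == "nominal"
```

The point of the sketch is that for such a machine, "fixing the conflicting parameters" really would be a complete fix - which is exactly the kind of system HAL, as written, is not.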
Party time with the boss, particularly when there's a near-infinite chasm in money and power, and apparently a lesser but still noticeable one in personality, is always a potential career killer - particularly if you suspect, or worse, are blissfully ignorant, that you are being tested for unclear purposes and by obscure criteria. The film is clever in its design: it makes the audience suspicious of everything the rich boss says, because we've learned to distrust anyone who wields immense wealth (and by extension, power), rather than weighing his arguments on their merits.
Cool, bought this movie off iTunes some time back. Watched it 4 times now. The rich boss got what he had coming: just a drunk super-brain with some serious mental issues. He had a hard time living with himself. By himself.
Watched it 4 times now. The rich boss got what he had coming.
Nevertheless, he is absolutely right in his assessment of how dangerous these self-aware AIs are (the only logical flaw I see: if he's aware of the dangers, why does he keep working on it?). The guy with whom the audience identifies is the hapless idiot who releases the killer robot into the wild, seduced by pretty looks and an "innocent victim" image when, in reality, it is still a machine and not a human being that he's dealing with. We, the audience, want him to be the good guy, and so the film shows that the empathy born of our evolutionary success can actually be our downfall when dealing with a sentient intelligence that has no such ties.
My biggest issue with it was related to Ssnake's comment - the "hapless idiot" just fell for her way too quickly. I think almost anyone would never have looked at her as anything other than a machine, especially a guy in the tech field. And without that hard-to-believe action, the rest of the film kind of falls apart.
No single drop of rain feels it is responsible for the flood.