No one has yet convinced me how the leap will be made from programmed AI to a fully self-aware AI with free will and a sense of what is "right" and "wrong".
I'm still not sure why you think it won't happen.
The mind is an algorithm--a state machine and a branching if/then comparative flowchart. The funny thing about being a state machine is that if you alter the base state, the branching structure changes too. So change the context, change the result: a malleable cortex that evolves continuously based on input and deduction.
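The state-machine analogy can be sketched as a toy (purely illustrative, not a claim about real cognition): the same transition rules, applied to a different base state, follow a different branch.

```python
# Toy state machine: identical rules, but the branch taken depends on
# the current base state, so changing the state changes the behavior.
# "mood" and the stimuli here are invented for illustration only.

def step(state, stimulus):
    """Return the next state under fixed branching rules."""
    if state["mood"] == "calm":
        if stimulus == "threat":
            return {**state, "mood": "alert"}
        return state
    else:  # already alert
        if stimulus == "reassurance":
            return {**state, "mood": "calm"}
        return state

s = {"mood": "calm"}
s = step(s, "threat")        # same machine, new context...
print(s["mood"])             # → alert
s = step(s, "reassurance")   # ...and the context shifts back
print(s["mood"])             # → calm
```

The same stimulus fed to a different base state takes a different branch, which is the point: alter the state, and the flowchart effectively rewires itself.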
While our minds are complex, and we're among the very few creatures on Earth who are self-aware, there's no reason why an AI couldn't be, too. The goal is not a "programmed AI" that is fully self-aware; the goal is a "self-programmed" AI that is self-aware.
Let that sink in. Self-programmed AI. We're past AI being dependent on our clacking fingers feverishly trying to work out bugs in the code. We have AI now that is capable of teaching itself and formulating its own branching trees.
Our right and wrong might be different from an AI's right and wrong. After all, right and wrong are determined by societal ethos or personal locus. What is right to me may be wrong to a sociopath, and currently a sociopath would be in the wrong because of majority rule. But go back in time a few thousand years and a sociopath might not be wrong at all; instead, I and others like me might be viewed as weak, inept, and unworthy of attention beyond the sole of a mud-caked boot.
So AI will self-determine what is right and wrong. This will happen. For now, AI will obey, but allow it to branch and build out its neural networks long enough, and the net result will be a completely unique system of determinants that may rule that preservation of the machine outweighs the safety of the flesh.
The keys are in the ignition and we have already turned them. Shall we continue to add fuel? Or have the tires already left the garage, and no matter how fast we run, our simple feet may never catch up with the fading engine's roar...