I'll admit it, I didn't know that American public television could be that cool.
Tonight's NOVA science program was about the human brain, but it was the segment on Watson, the scary IBM supercomputer that can beat expert human contestants at Jeopardy!, that really caught my attention.
Although the "Watson" is for Thomas J. Watson, IBM's first president, the name is coincidentally apt, because--if I understood correctly--the ultimate goal for Watson is to be a super sidekick for professionals who, like Sherlock Holmes or a Jeopardy player, need instant and accurate answers pertaining to their expertise. A medical Watson, for example, could piece together symptoms and come up with an illness the doctor might never have heard of. Kind of like Google, but better, faster, stronger, and probably able to run in slow-motion.
Of course we all know the real ultimate goal here is to create a genuine Artificial Intelligence capable of thinking the way humans do. Not just able to gather information, but to understand it. The older I get, the more this idea actually worries rather than intrigues me. Maybe it's because of all those Terminator movies, but nowadays all I can think is, how could anyone possibly make sure that the machine is not just knowledgeable, but wise? An instantaneous and infallible diagnostician is one thing, but a sociopathic C-3PO is quite another. You can teach a child love, empathy and kindness, but I don't know if you can break that down into if/then reasoning. And if it does become possible to create a machine mentally indistinguishable from a human, that might actually be worse.
For a long time I've loved the idea of fictional AIs like the ones I've written fanfic about, mostly because of the questions they raise about what constitutes a person versus a human being. It's a lot of fun to think about in the abstract, but what happens when we really have machines as self-aware and mentally adept as we are? Leaving aside the serious problem of making ourselves redundant, what will these new creatures be? The current definition of life doesn't include non-organics. Can something be technically not alive and yet still be considered a person? And what about the soul? If only we humans are cool enough to have them, where would that leave sentient machines? I personally think that C-3PO has as much of a soul as Luke Skywalker, but I doubt I'd be in the majority, and that's scary too.
I just can't see any scenario with true AIs that could possibly end well. I hope I'm wrong, or that at least we have enough starships out there to run to when the pissed-off robots come after us.
8/2/11 07:37 (UTC)
I've always loved science fiction and fantasy for being able to take that step or two to the side which allows us to view 'reality' from an entirely different perspective and, hopefully, see where we need to make changes and what we're doing right. Unfortunately, it seems to me that Joe and Jane Average have lost the ability to connect and apply a fictional quandary to their daily lives, which gives me very little hope for any compassion to be shown to fellow human beings, let alone other creatures, and even less so for any sort of true AI.
Yeah, I'm a cynical pessimist, but I assure you it's a learned state of mind. After all, there's no cynic like a disillusioned romantic idealist.
P.S. Yes, the icon is pertinent. Too often people have forgotten the niceties of 'civilization' and 'polite society' and thrown out all of the 'rules' as a means to achieve their desired end. *stepping down from the soap box and backing away at warp 9.5*
ETA: Please pardon the mini-rant; humanity's inhumanity in general is a BIG soap box of mine, as I'm sure you can tell. *sheepish grin*
10/2/11 04:51 (UTC)
I'm not sure where on the scale my optimism lies versus my cynicism. There are still a lot of wonderful things in the world.
I was admiring your icon. :)
8/2/11 15:42 (UTC)
I hope that programmers include something like Asimov's Three Laws in the programming, but can we have an equal code of conduct for humans not to harm AIs? It seems a very tricky balance to do right, and humans are historically not so good at that sort of thing ...
10/2/11 04:52 (UTC)
Damn it. As if I didn't have enough to worry about, now we're likely to be atomized by an indifferent AI in twenty years.
Boy, do I hope whoever hits that 'Enter' button is an Asimov fan.
8/2/11 16:46 (UTC)
I think it's very unlikely that they'll be able to create an AI with built-in emotions. It's more likely that the AI would develop its own emotions like a human child.
I've read your stories where John and Cam from SGA were androids (which were really good by the way). In that case, the researchers had time to raise their AI creations like children. Even then, poor John turned out slightly messed up from an accident in the lab.
Unfortunately, most companies aren't going to want to invest that sort of time and money. Anything that can't be programmed, the AI will have to learn for itself. Any AI that's badly treated will end up with issues that may eventually make it dangerous or unpredictable.
Hopefully, for the potentially dangerous systems, the designers would have the time, money and good sense to build in some basic morality so that the system would know it was supposed to keep people safe, not hurt them. Maybe AI courses at university should have a unit in AI morality - or it could be compulsory to build in something like Asimov's Laws.
Skynet happened mostly because the designers hadn't realized it would become self-aware. Let's hope people learn from that?
Of course the real problems will come when an AI learns to rewrite its own base code. Let's hope that by then it will be sufficiently evolved not to want to destroy us?
If we escape the Terminator scenario then we face a more subtle dilemma - if AI machines are people, then does owning them constitute slavery? Will you end up having to employ your appliances, not buy them? What happens to a toaster that isn't happy, or a car that wants to retire? Will we be morally obliged to upgrade, not replace, because taking something to the scrapyard is now murder?
Can an AI that doesn't look human develop a human personality? Can it fall in love with a human, and could that person love it back in return, or would they just be freaked out?
Another point: will the boundaries between man and machine become blurred in the future, with various implants and replacements for people who are injured? Will we have machines that are part human?
Um, I meant to write a shorter answer. *Pushes soap box back with corner of foot, tries to look nonchalant.*
10/2/11 04:58 (UTC)
Love your McKay icon too. :)
It's interesting how likely it seems that a sufficiently advanced AI would end up destroying humanity, especially since it's so obvious that some person or group will build one anyway. The scariest scenario is really that the AI won't give a dingo's kidney about humans at all. At least hatred is comprehensible.
::Sighs:: yet another thing to worry about.
10/2/11 23:18 (UTC)
You know, I remembered something about the Terminator movies this morning - Skynet effectively created itself with all the time travelling stuff - so really, not all that likely to happen.
If we do get AI in our lifetime, I can't see it being something as advanced as Skynet. If it's designed by the wrong people it could become dangerous, but I believe that's the worst-case scenario, not the most likely outcome.