I'll admit it, I didn't know that American public television could be that cool.
Tonight's NOVA science program was about the human brain, but it was the segment on Watson, the scary IBM supercomputer that can beat expert human contestants at Jeopardy!, that really caught my attention.
Although the "Watson" is for IBM's founder Thomas J. Watson, the name is ironically apt, because--if I understood correctly--the ultimate goal for Watson is as a super sidekick for professionals who, like Sherlock Holmes or a Jeopardy player, need instant and accurate answers pertaining to their expertise. Like a medical Watson could piece together symptoms and come up with an illness the doctor might never have heard of. Kind of like Google, but better, faster, stronger, and probably able to run in slow-motion.
Of course, we all know the real ultimate goal here is to create a genuine Artificial Intelligence capable of thinking the way humans do. Not just able to gather information, but to understand it. The older I get, the more this idea worries rather than intrigues me. Maybe it's because of all those Terminator movies, but nowadays all I can think is, how could anyone possibly make sure that the machine is not just knowledgeable, but wise? An instantaneous and infallible diagnostician is one thing, but a sociopathic C-3PO is quite another. You can teach a child love, empathy, and kindness, but I don't know if you can break those down into if/then reasoning. And if it does become possible to create a machine mentally indistinguishable from a human, that might actually be worse.
I've loved the idea of fictional AIs like the ones I've written fanfic about for a long time, mostly because of the questions they raise about what constitutes a person versus a human being. It's a lot of fun to think about in the abstract, but what happens when we really have machines as self-aware and mentally adept as we are? Leaving aside the serious problem of making ourselves redundant, what will these new creatures be? The current definition of life doesn't include non-organics. Can something be technically not alive and yet still be considered a person? And what about the soul? If only we humans are cool enough to have them, where would that leave sentient machines? I personally think that C-3PO has as much of a soul as Luke Skywalker, but I doubt I'd be in the majority, and that's scary too.
I just can't see any scenario with true AIs that could possibly end well. I hope I'm wrong, or that at least we have enough starships out there to run to when the pissed-off robots come after us.