Saturday, April 12

computers, human self-awareness, and the decisions our kids will have to make

I came across this today [thank you stumbleupon] and it really poses some excellent questions about what we may someday be capable of creating with regard to artificial intelligence.

It goes through the basic premise that we as humans don't really comprehend the elements that make up the mechanics of common sense in any way sophisticated enough to program into a computer. Right now the vast majority of programs written by people are designed to do very specific things at an expert level, and they're incapable of doing anything else. This blogging software does an expert job of collecting words and images and posting them to a web server, but it's thoroughly incapable of discerning for itself whether or not I'm making any sense in what I post. It can check grammar, but not content.

It argues (as artificial intelligence studies have done for years) that programming which relies on if-then style rules will never really understand anything, for the basic reason that there are just too many contingencies to put into place. This is "logic" based programming, and it's what drives this blog's software, every videogame out there, the Windows operating system, and almost all other software we've ever come into contact with. Instead, programmers must aspire to write programs that can perform an action, analyze the result, and deduce what went wrong in a conceptual, and therefore abstract, way. By abandoning logic for experience, programs can potentially gain self-awareness.
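
Just to make that contrast concrete, here's a quick toy sketch in Python. It's my own illustration, not anything from the paper, and every name and scenario in it is made up. The first function is the "logic" approach: nothing happens unless the author wrote a rule for it. The second tries actions, observes the results, and gradually prefers whatever worked, without any rule ever naming the right answer.

import random

# Toy sketch (mine, not the paper's). Approach 1: "logic" programming.
# Every situation must be anticipated by the author as an if-then rule.
def rule_based_reply(message):
    if "hello" in message.lower():
        return "Hi there!"
    if "bye" in message.lower():
        return "Goodbye!"
    # Any contingency the programmer didn't foresee falls through here.
    return "I don't understand."

# Approach 2: trial and error. The program acts, scores the outcome,
# and keeps a running average of how well each action has gone.
def learn_by_experience(actions, reward_fn, trials=1000):
    averages = {a: 0.0 for a in actions}
    counts = {a: 0 for a in actions}
    for _ in range(trials):
        action = random.choice(actions)   # act
        reward = reward_fn(action)        # observe the result
        counts[action] += 1
        averages[action] += (reward - averages[action]) / counts[action]
    return max(averages, key=averages.get)

print(rule_based_reply("hello world"))      # handled: a rule matched
print(rule_based_reply("what is sense?"))   # unhandled: no rule exists
best = learn_by_experience(["a", "b", "c"],
                           lambda a: 1.0 if a == "b" else 0.0)
print("learned best action:", best)

It's obviously a cartoon of what the paper means, but notice that the second function never contains the answer anywhere in its code; it only contains a way of finding out.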

Of course, it qualifies self-awareness like this:

 ================== ARE HUMANS SELF-AWARE? ==================

Most people assume that computers can't be conscious, or self-aware; at
best they can only simulate the appearance of this. Of course, this
assumes that we, as humans, are self-aware. But are we? I think not. I
know that sounds ridiculous, so let me explain.

If by awareness we mean knowing what is in our minds, then, as every
clinical psychologist knows, people are only very slightly self-aware, and
most of what they think about themselves is guess-work. We seem to build
up networks of theories about what is in our minds, and we mistake these
apparent visions for what's really going on. To put it bluntly, most of
what our "consciousness" reveals to us is just "made up". Now, I don't
mean that we're not aware of sounds and sights, or even of some parts of
thoughts. I'm only saying that we're not aware of much of what goes on
inside our minds.

The paper also offers a pretty cool way to look at the question of whether computers have the capacity to actually pull this off:

It is too easy to say things like, "Computers can't do (xxx), because they
have no feelings, or thoughts". But here's a way to turn such sayings into
foolishness. Change them to read like this: "Computers can't do (xxx),
because all they can do is execute incredibly intricate processes, perhaps
millions at a time". Now, such objections seem less convincing -- yet all
we did was face one simple, complicated fact: we really don't yet know
what the limits of computers are.

This paper was published in 1982, so it's been around forever, but this is the first time I've seen it, and I think it was written in a way that reads as timeless even in this computer age.
