That's an interesting thought; in some ways, all you would need given that concept is a machine that learns.
Certainly processing power is getting closer and closer to mimicking the power of an organic brain. What do our techie gurus have to say on this matter?
I was kinda ashamed to ask but that's exactly what I wanted to say, I'd like a tech wiz's opinion
but personally I think they aren't really getting that close, since we humans can store years of visual information (imagine how much hard-drive space it'd take to store all the memories of a 100-year-old man; a 24-episode anime with 20-minute episodes is only about 2-4 gigabytes, and that's only like 8 hours lol + it's low quality compared to how we can "see", I mean we do notice the difference between normal TV and HD)
I don't think it's storage capacity that's the problem, rather how you connect the information together. Learning something isn't about storing it but rather connecting it to the things you already know so that you can follow trains of thought and do reasoning. Machine learning is something that has been researched for at least 50 years but even so I think it's still not much beyond its infancy.
Look at subjects like machine translation for instance: except where languages are very similar, you tend to get hit-and-miss results when trying to translate stuff with Babelfish etc. The problem is that words have so many shades of meaning, which vary by context, and it's hard to get a computer to come to a real understanding of a sentence so that it can express it in another language. The meaning of any given word in a sentence may depend on an adjacent word. But the meaning of the adjacent word in turn depends on the first word - catch-22!
Here's an example. I just looked up the word "run" in the Concise Oxford English Dictionary. It lists 35 meanings of the word as a verb, 21 meanings as a noun, and over 100 idiomatic meanings (i.e. phrases containing "run" that have meanings distinct from the 56 normal meanings for this word). Multiply this by, say, a 20,000-word vocabulary and it's staggering that we ever learn language at all! It's one thing to be able to store all this information, quite another to apply it to real-world situations!
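Just to put a rough number on that combinatorial explosion, here's a quick back-of-the-envelope sketch in Python. The sense counts for "run" are the dictionary figures quoted above; the "5 senses per average word" figure is an invented assumption for illustration only:

```python
# Why word-for-word lookup fails for translation: one word maps to
# many candidate senses, and the right choice depends on its neighbours.

# Sense counts for "run" from the Concise Oxford English Dictionary,
# as quoted above: 35 verb senses + 21 noun senses + 100+ idioms.
senses_of_run = 35 + 21 + 100   # 156 candidate meanings for one word

# If an average word had, say, 5 senses (an invented assumption),
# a naive translator of a 10-word sentence would face this many
# combinations of sense choices to pick between:
combinations = 5 ** 10
print(senses_of_run)   # 156
print(combinations)    # 9765625
```

Nearly ten million combinations for a ten-word sentence, before you even start on grammar.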
lol Hiro-san you just reminded me of my favourite line from the film "Hero" starring Jet Li.
It has the Chinese Emperor saying "27 different words for sword, no wonder we have trouble communicating!"
I understand what you mean about languages and your definition of learning is exactly right as far as I know.
Another thing that differs between computers and organic brains is that organic brains are continuously monitoring, adjusting, and performing several million functions at once, even whilst we're asleep.
QUOTE (Hiroyuki @ Oct 17 2009, 11:38 PM)
I don't think it's storage capacity that's the problem, rather how you connect the information together. Learning something isn't about storing it but rather connecting it to the things you already know so that you can follow trains of thought and do reasoning. Machine learning is something that has been researched for at least 50 years but even so I think it's still not much beyond its infancy.
This is most definitely true. A lot of the programming I did during my summer internship involved machine learning, so I got to see first-hand most of the techniques involved, and dang, it's complicated stuff, especially the more general-purpose you go. Getting a machine to tell whether a video feed has a human being in it is very doable. But trying to get that machine to actually act human, to be able to adapt, to change the way it thinks, to change its own motives, is an entirely different problem.
Although, I've gotta say, storage, right now, is a problem. The issue of capacity should disappear soon enough (capacity doubles every 2 years, which is why we can store so much high-def anime on our computers today). To model a human brain, we would need many terabytes, maybe even petabytes (I haven't checked recently...) to store all that information. Getting to that point isn't the issue; being able to access the right information quickly enough to make your android/robot/whatever usable is the problem. The only people who have dealt with this problem today are companies like eBay and Google, or the government/military, but they have football-field-sized supercomputers and data farms to deal with it. We'd need to get something on that scale to function in something brain-sized, and we just don't have the technology yet. There's no doubt in my mind that we'll get there within our lifetimes, but we just don't have it right now.
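The "doubles every 2 years" rule of thumb is easy to play with. Here's a tiny sketch of how long it would take consumer storage to reach a petabyte under that rule; the 2 TB starting drive is an assumed figure, not a measurement:

```python
# Back-of-the-envelope: years until a consumer drive holds a petabyte,
# assuming capacity doubles every 2 years (the rule of thumb above).
start_tb = 2        # assumed typical drive size today, in terabytes
target_tb = 1000    # 1 petabyte = 1000 TB

years = 0
capacity = start_tb
while capacity < target_tb:
    capacity *= 2   # one doubling...
    years += 2      # ...every two years

print(years, capacity)   # 18 1024
```

So under that rule of thumb, petabyte-scale storage on a desk is only a couple of decades out, which fits the "within our lifetimes" guess.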
I think that what we have right now is a fantastic proof-of-concept. We can make machines learn over time, we have/will have the hardware we need to handle all the computations we'd need, and we've made amazing progress over the past decade in understanding the human brain. I think once we've finished unraveling the human brain (once we can model it), the technology will fall into place.
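For anyone curious what "machines learning over time" looks like at its absolute simplest, here's the classic textbook perceptron (not one of the internship techniques mentioned above, just the standard toy example): it starts with blank weights, nudges them a little after every mistake, and ends up computing logical AND from examples alone:

```python
# Minimal perceptron: learns the logical AND function from examples
# by adjusting its weights a little whenever it answers wrongly.
samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

w = [0.0, 0.0]   # weights, initially "knowing nothing"
b = 0.0          # bias
lr = 0.1         # learning rate: how big each nudge is

for _ in range(20):              # a few passes over the examples
    for (x1, x2), target in samples:
        out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
        err = target - out       # 0 when the guess was right
        w[0] += lr * err * x1    # nudge weights toward the answer
        w[1] += lr * err * x2
        b += lr * err

preds = [1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
         for (x1, x2), _ in samples]
print(preds)   # [0, 0, 0, 1] -- it has "learned" AND
```

Of course, the gap between this and "changing its own motives" is exactly the point of the post above.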
Re. storage, I think a difference between humans and computers is that the brain has what we might call active storage, a meld of processing power and memory: every neurone not only stores information but processes it, whereas with computers the memory is passive. Brains are massively parallel, with billions of operations taking place simultaneously and independently, whereas a computer has only a few processing threads even now. A computer CPU core can do a few billion operations a second, but the brain, working at maybe 100 cycles per second, still has an advantage since there are billions of neurones working in parallel. The difference is narrowing but is still big, especially given that the brain knows what to do whilst people are still trying to come up with a computer program equivalent to what the brain does.
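That "slow but massively parallel vs fast but serial" trade-off can be put in very rough numbers. The figures below (86 billion neurons, ~100 Hz firing, an 8-core 3 GHz CPU) are common ballpark estimates I'm assuming for illustration, not precise measurements:

```python
# Crude comparison of raw "operations per second":
# slow-but-parallel brain vs fast-but-serial CPU.
neurons = 86_000_000_000          # ballpark neuron count (assumption)
firing_rate_hz = 100              # ~100 "cycles"/second, per the post above

cpu_cores = 8                     # a typical desktop CPU (assumption)
ops_per_core_hz = 3_000_000_000   # ~3 GHz per core (assumption)

brain_ops = neurons * firing_rate_hz      # 8.6e12 "operations"/second
cpu_ops = cpu_cores * ops_per_core_hz     # 2.4e10 operations/second

print(round(brain_ops / cpu_ops))   # 358 -- roughly 358x in the brain's favour
```

Even granting that a neuron firing and a CPU instruction aren't remotely comparable units of work, the sheer width of the brain's parallelism is what keeps it ahead.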
Brains definitely operate on a whole different level than computers do. The brain is massively parallel (can do many things simultaneously), while most computers today are... less parallel (can do up to 8 things simultaneously... unless you get into supercomputers or graphics chips or nonsense like that, where you can have thousands or even millions of cores working together. Still nowhere close to simulating a brain).
What I see as the key difference, though, is that the brain is organic. It doesn't just become a beautiful thinking machine; it needs to teach itself how to think, how to remember things. It doesn't really have set rules for how to analyze information; it grows and evolves as it gets older and as it receives new information. Computers, on the other hand, cannot do that. They're a series of complex circuits blasted into silicon, and that's it.
And what's more, a vast majority of the brain is devoted to dealing with social interactions (as opposed to thinking, or running your organs). A computer, on the other hand, is 100% devoted to 'logical thinking'. It's very difficult to get a machine like that to 'logically simulate' something like a complex human interaction (which is why it's so difficult to hold a meaningful conversation with your microwave).
Still, the main problem with simulating the human brain is actually knowing how it works. We've made progress, but we still need more of that sweet, sweet neuroscience. I'm sure we'll be able to make life from scratch long before we can make machines as "intelligent" as we are.