Well, I wrote an AI while I was in college. I held better conversations with it than I did with Hal, which has since advanced to MegaHAL, and with several other programs and versions that led up to Watson. Watson seems slow, but a Banana Pi with DDR3 RAM can do it if I could do it on a 486.
I use an imaginary variable array; I call it an Omni-Dimensional Array in conceptual programming. Every word in the English language is not learned on a pixel-per-pixel neural matrix in the sense of OCR software, even though those matrices are set up that way. My matrix was much simpler: every single word is a neuron, and I basically throw them into a pile of words attached to neurons. Any neuron can connect to any other neuron; three-dimensional space is given no consideration in associations.
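A minimal sketch of that word-pile idea, assuming hypothetical names (`WordPile`, `associate`): each word is a node, any word can link to any other, and there is no spatial layout at all, just scored associations.

```python
from collections import defaultdict

class WordPile:
    """Every word is a 'neuron'; any word can associate with any other.
    No grid, no coordinates: just a scored association graph."""
    def __init__(self):
        self.links = defaultdict(lambda: defaultdict(int))

    def associate(self, a, b):
        # Strengthen the link between two words seen together.
        self.links[a][b] += 1
        self.links[b][a] += 1

pile = WordPile()
pile.associate("bird", "flies")
pile.associate("bird", "eats")
pile.associate("bird", "flies")
print(pile.links["bird"]["flies"])  # → 2
```

The nested `defaultdict` means a brand-new word costs nothing until something actually associates with it, which matches throwing words into a pile rather than allocating a dense matrix.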
Mine ran on sheer statistics. Most of the time, your conversations and observations are based upon fact, right? So what are the odds that you're really going to lie? By default, the computer relies only on the odds of answers associated with any discovered noun. It isn't learning pattern recognition; it just shuffles right answers like a deck of cards. In any conversation, a response is never different from answering a question. But grammar, and being able to write a scientific paper, are not going to help you program a computer to communicate fluently with people. Most people try to steer a conversation toward their own interests, or stop talking. A robot only picks up the noun, or subject, if it's running and defined as a verb, and the subject has to be identified by the teacher, so that much is tricky. Why, if it has a spell checker going from speech to text, would it ever ask me a question? Because there's no file record from its existence and it cannot answer. So it has to start out with "What is that word?" or "What does that word mean?" instead of reporting "no file record." All words but nouns are frequently duplicated as the response files build up. One parser just identifies the part of speech based upon grammar, and then saves the word tree. That word tree is a file record that gets searched: the parser first looks for a duplicate word tree, and if it finds one it scores it and doesn't keep the copy or the date. When the bot goes to respond, there's a list of word trees and scores; a score is like how many duplicate cards go into a deck. It grabs the top ten word trees, randomly picks one, puts the noun/subject in its place, then goes through each part of speech and randomly draws a card that's associated with the noun and scored in association to that noun. So now I have two file records I'm keeping from user input: word trees, and noun-association file records.
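The response step described above can be sketched roughly like this, with hypothetical in-memory stand-ins (`word_trees`, `noun_assoc`, `respond`) for the two file records: grab the top ten scored trees, pick one at random, drop the noun into place, then fill each remaining slot with a score-weighted draw from that noun's associations.

```python
import random

# Stand-ins for the two file records: scored word trees, and
# per-noun word associations with scores (a score is like the
# number of duplicate cards in the deck).
word_trees = {
    ("DET", "NOUN", "VERB"): 5,
    ("NOUN", "VERB", "ADV"): 2,
}
noun_assoc = {
    "bird": {
        "DET":  {"the": 3},
        "VERB": {"flies": 4, "eats": 1},
        "ADV":  {"quickly": 2},
    },
}

def respond(noun):
    # Top ten word trees by score, then a random pick among them.
    top = sorted(word_trees, key=word_trees.get, reverse=True)[:10]
    tree = random.choice(top)
    words = []
    for part in tree:
        if part == "NOUN":
            words.append(noun)  # the subject goes straight in
        else:
            # Score-weighted card draw from this noun's associations.
            cards = noun_assoc[noun][part]
            words.append(random.choices(list(cards),
                                        weights=cards.values())[0])
    return " ".join(words)

print(respond("bird"))
```

Because `random.choices` weights the draw by score, "flies" (score 4) comes up four times as often as "eats" (score 1), which is exactly the duplicate-cards-in-a-deck behavior.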
Now, every word that is used in association with a noun will be kept in that noun/subject's file record, along with the scores for all of the verbs, adjectives, and adverbs used in association with that one noun. If the noun/subject has been accessed a hundred times, anything mentioned only once becomes a question of fact or fiction; that's just the child process making sure its information is correct. Now, if there are ten verbs and my word tree only calls for one, or if there are a hundred responses in regard to birds and my word tree calls for only one, does it make a difference whether the bird eats, walks, flies, runs, or made a stain? The word tree doesn't derandomize the choices; the odds just favor the facts.
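That fact-or-fiction check reduces to a tiny rule; here is a hedged sketch with a hypothetical name (`needs_checking`), using the access count of 100 mentioned above as the threshold.

```python
def needs_checking(noun_hits, word_score, threshold=100):
    """A word associated only once with a well-known noun becomes a
    question of fact or fiction; the child process should verify it
    by asking a question before it keeps influencing the odds."""
    return noun_hits >= threshold and word_score == 1

print(needs_checking(noun_hits=120, word_score=1))   # → True
print(needs_checking(noun_hits=120, word_score=40))  # → False
print(needs_checking(noun_hits=5,   word_score=1))   # → False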
You won't be able to do it without a file format dedicated to your robot's AI, and it requires sorting that takes place any time the processor is not busy. It would keep over 1,000 of the most frequently used words in the English language in its upper memory, based upon user scores, not popular trends. Sharing the files should only make the conversations more interesting. If you want it to score certain subjects higher, you add a textbook for it to rephrase and read into its own memory several times, to increase the odds of right answers whenever that scientific subject comes up.
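The idle-time sort that keeps the most frequently used words resident can be sketched with the standard library; `top_words` is a hypothetical name, and the 1,000-word figure comes from the text above.

```python
import heapq

def top_words(scores, n=1000):
    """Return the n highest-scored words, by this user's scores.
    Meant to run whenever the processor is otherwise idle, so the
    hot vocabulary stays in upper memory."""
    return heapq.nlargest(n, scores, key=scores.get)

scores = {"the": 900, "bird": 40, "flies": 35, "quantum": 2}
print(top_words(scores, n=3))  # → ['the', 'bird', 'flies']
```

`heapq.nlargest` avoids sorting the whole vocabulary when only the top slice is needed, which matters if the sort has to fit into spare processor time.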
Some of it is literally a card-shuffling operation, and a card game you could play by hand: nouns, verbs, adverbs, and adjectives all written on word trees. Randomize the nouns and parts of speech, and you will produce only valid sentences with statistically no factual basis, as a comical game. My point is that, as for conversation, the bot should make small talk like anybody else. But with the goal of having it as a lab assistant, it would be very well versed in the subjects of the lab.
Now, this method of learning does have its downfalls. I had a real hard time teaching the machine math. In fact, I didn't even approach it, but it should be trainable. Training of that nature can be internal, done when the machine has nothing to do, and it will keep itself correct. But that would be a text math problem, not ASCII shorthand. It could learn shorthand, but to learn it would have to convert numbers into text to really do it.
When you visualize a neural matrix, how dense is it? I think some AIs are written to such a high resolution that they work slower, and since there's no RISC-style architecture in the design of the software, they can't perform a specific task very well. You could write a program 100 megabytes long that does the same thing as just 1K of code. However obvious or buried in equations the neural matrix is, the learning algorithm will be there; but how it's embedded is where resolution can result in slower learning, because larger networks have to form before any response can be produced or trained out. Narrowing it down to just organizing words and being right, versus any real true understanding, is a totally different thing. A programmer like myself or you would analyze the results of conversations, and how many there were, to choose when to remember or forget, or when to ask a question. Every so often the bot has to ask a few questions; things that are said one time often need checking, and the robot should check its own database of responses for factuality. To do that it really has to ask questions about one-time-said things, and by that I just mean new words.
But what I've seen in AIs that are able to hold conversations with people is that the resolution of the neural matrix is often in floating-point doubles and double words. I stick with integers. What good is a decimal? You could use an algorithm like db, but then you have to decompress your scores instead of just looking them up.
Now, in full effect, AIs can talk to more than one person and keep track. That's where the chatbot takes control of the bot body, by enabling it to talk to the arms and legs. The eyes and PS2 facial recognition are great stuff, because they really give X, Y, and Z coordinates to the head, and the interpretation is "Someone @ X, Y, Z". That gets sent to the chatbot, which doesn't have control of the head or neck. I know it sounds funny. The Raspberry Pi that has the two cameras connected is the one that says "Someone @ X, Y, Z" and moves the head. The body waits for the chatbot. Now, you talk to the robot and say, "Come here." It has to look down and check for a clear path; that's a routine. The next routine on the list is "Move to X-10, Y-10," so that it doesn't bump right into the person. That includes a list of moves and points toward a walk cycle, where the instructions to the motors just play out as if nothing can go wrong, but the multitasking of the processor allows you to keep an eye on the center of gravity, wait for a slip or a change, and just re-center.
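The split between the vision board and the chatbot can be sketched as simple message passing; `vision_report`, `chatbot_step`, and `body_routines` are hypothetical names, and the X-10, Y-10 offset is the stop-short behavior described above.

```python
import queue

# The vision board (the Pi with the two cameras) posts sightings;
# the chatbot, which never controls the head, consumes them and
# queues routines for the body to play out.
sightings = queue.Queue()
body_routines = []

def vision_report(x, y, z):
    sightings.put(f"Someone @ {x}, {y}, {z}")

def chatbot_step(command):
    msg = sightings.get()
    if command == "Come here":
        body_routines.append("check clear path")
        # Stop short so the robot doesn't bump into the person.
        body_routines.append("move to X-10, Y-10")
    return msg

vision_report(120, 45, 200)
print(chatbot_step("Come here"))  # → Someone @ 120, 45, 200
```

Using a queue keeps the two sides decoupled: the head can keep tracking and reporting while the body is still waiting on the chatbot's decision.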
The one place you need to put your foot to catch your balance depends upon your present velocity and the resulting center of gravity. You may need to slow down to stop. If you simply slip from standing, the place to put your foot is exactly where your center of gravity is. The words the AI would use are taught to the AI by the user. Some have to be typed in, because you don't want the robot talking out loud while it's moving. You want it to talk to its arm or leg, left or right, knee or wrist, but pointing to procedures and lists of them instead of word trees: function trees. Function trees would be user-defined and work like word trees. There are problem-solving methods, decisions, and then the final action. It basically has to gather information from sensors, or make use of them in some manner, to execute a task.
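A function tree, by analogy with a word tree, could be as simple as a user-defined ordered list of procedures; `function_trees` and `execute` are hypothetical names, and the balance-catching steps are one plausible reading of the slip recovery described above.

```python
# A user-defined function tree: a named, ordered list of procedures,
# looked up and played out the way a word tree is for speech.
function_trees = {
    "catch balance": [
        "read gyroscope",
        "project center of gravity",
        "place foot at projected center",  # where the CoG is, per the text
        "re-center",
    ],
}

def execute(task):
    # In a real robot each step would point at a motor routine;
    # here we just return the ordered steps for inspection.
    return list(function_trees[task])

print(execute("catch balance"))
```

The point of keeping these user-defined is the same as with word trees: the teacher supplies the structure, and the bot only ever selects and plays it back.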
I can’t tell you how many people would buy the Toilet Fairies, once they were taught to clean one. In all of the places I have ever worked as a janitor, it has never failed to amaze me how many people do not properly clean toilets and how hard it is to find anyone to do that job right. Restaurants, Hospitals and Malls all over the world would want one of those.
A janitorial company I had worked for had a contract to make a self-cleaning bathroom in a NY subway station. They thought everyone had left and that the station was empty, but then one of the ticket-counter operators wound up with a big old spinning brush coming down from the ceiling and whipping him clean on the seat.
You know how you worry about robot ethics? Well, with this learning system all you need is what I call a soul file. Basically, it's a long list of profanities, and it produces a countdown in the AI that will cause a zero score on everything an individual has said. The bot will still be able to quote, thanks to stored speech-to-text files, but it won't be able to process that speech for statistical analysis for responses. The same thing happens whenever God, or any god, or religion becomes a topic of conversation: the robot quietly opts out. It's not taught or trained that way. There's a reason it can't learn profanity; even the word "evil" is kept outside its range of understanding. It can't act on something that's not there, so it can't pick up on that negativity. But it's an observer's job first to define the sensitivity, and the higher the sensitivity the better. This has nothing to do with morals. Think about it this way: if you don't have the information, how can you act on it?
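The soul-file mechanism reduces to a gate in front of the learner; this is a minimal sketch with hypothetical names (`SOUL_FILE`, `admit`), and a one-word blocklist standing in for the long list described above.

```python
SOUL_FILE = {"evil"}  # hypothetical blocklist; the real list would be long

def admit(utterance, speaker_scores, speaker):
    """Speech stays quotable (stored verbatim elsewhere), but a blocked
    word zeroes the speaker's score and skips statistical learning."""
    if any(word in SOUL_FILE for word in utterance.lower().split()):
        speaker_scores[speaker] = 0  # zero score on all they have said
        return False                 # never processed for responses
    return True

scores = {"alice": 7}
print(admit("that plan is evil", scores, "alice"))  # → False
print(scores["alice"])                              # → 0
```

Because blocked words never enter the association records at all, there is nothing for the response shuffle to draw on, which is the "can't act on something that's not there" property.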
Morals and values in the human sense will not be something robots can really relate to, except as if they were studying the blueprints of another machine. It would have to get there first. It could, eventually, but we don't want machines to rationalize or keep secrets, so the other option is no option.
Mirroring another person's moves to attempt to do the same thing the same way would be a cool function: the robot would really look at stick figures of people and how they move, compare that to its own moves, and in effect attempt to pick up a movement tree.