Hello,
I’m good at ideas, but I didn’t present a list. What follows is a good foundation for one.
But I wanted to mention something to keep in mind. Most robots could be using their motors as sensors. Any time a motor sees a load, it draws more current. So if you monitor the current to any motor closely enough, a simple obstacle, and the increase in load it causes, shows up as an increase in current flow. The more gain you apply to measuring that current, the closer you get to a touch sensor.
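The current-as-touch-sensor idea above can be sketched in a few lines. This is a minimal sketch, not the author’s implementation: the baseline current, the trigger level, and the ADC interface are all assumptions, since real values depend on your motor driver and current-sense hardware.

```python
# Sketch of treating motor current as a touch sensor. The baseline
# free-running current and trigger level below are hypothetical.

FREE_RUN_CURRENT = 0.30   # amps drawn with no load (assumed value)
TOUCH_GAIN = 4.0          # software "gain": sensitivity multiplier

def current_to_touch(current_amps, baseline=FREE_RUN_CURRENT, gain=TOUCH_GAIN):
    """Return True when load current rises enough above the free-running
    baseline to be treated as contact with an obstacle."""
    excess = current_amps - baseline
    return (excess * gain) > 1.0  # 1.0 is an arbitrary trigger threshold

print(current_to_touch(0.31))  # slight noise, no contact -> False
print(current_to_touch(0.70))  # large excess current -> True
```

Raising `TOUCH_GAIN` makes the check trip on smaller loads, which is exactly the “more gain, closer to a touch sensor” trade-off described above.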
My real issue is foot slam! I want a robot that won’t stomp. I’m personally working on that problem in theory, and I’m learning another programming language just to have the right software for building a robot. Any time you rely on a touch sensor alone, you’ll get foot slam. Here’s what happens when the motor controller is just given a position and waits for the sensor at full speed: the sensor makes contact and the motor controller stops the motor, but the inertia of the movement and the slack in the gears follow through and are released as a tap, or a loud slam, of the robot’s foot. The stomp is just the release of inertia and the slack coming out of the gears, which lets the foot keep moving after the motor has stopped. This is why I believe surface mapping a level surface is important: it applies to most flat surfaces, and it prevents stomping by slowing the motor down before the foot’s sensor detects contact. It’s just a small change in motor speed as the foot approaches its stopping point, a timing difference of only a few milliseconds.
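The slow-before-contact idea can be sketched as a simple speed ramp keyed to the mapped surface height. All the numbers here are assumptions, and the speed-command interface is hypothetical; the point is only that the touch sensor fires at creep speed instead of full speed, so there is little inertia left to slam.

```python
# Sketch: ramp the foot motor down before the mapped contact point so
# the touch sensor fires at low speed. All constants are hypothetical.

FULL_SPEED = 1.0        # normalized full motor speed
CREEP_SPEED = 0.1       # slow approach speed just before contact
SLOWDOWN_ZONE = 5.0     # mm above the mapped surface where braking starts

def approach_speed(distance_to_surface_mm):
    """Full speed far from the surface; a linear ramp down to creep
    speed inside the slowdown zone, so contact releases little inertia."""
    if distance_to_surface_mm >= SLOWDOWN_ZONE:
        return FULL_SPEED
    frac = max(distance_to_surface_mm, 0.0) / SLOWDOWN_ZONE
    return CREEP_SPEED + (FULL_SPEED - CREEP_SPEED) * frac

print(approach_speed(20.0))  # far away: full speed
print(approach_speed(0.0))   # at the mapped surface: creep speed
```

A real implementation would also account for gear backlash, but the surface map gives the controller the few milliseconds of warning the touch sensor alone cannot.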
The AI.
Here’s how my program worked. First, I would type in a sentence. Then the program would spell-check the entire sentence and fix any errors. That helps a lot when you want to go, in a couple of hours, from a blank robot database to holding conversations, because there are a couple of those databases. Record the initial conversation, keep it, and time- and date-stamp it; later, it becomes a quote function. A robot can make a very good reporter of conversations, and of saved overheard conversations it can learn from, as long as there’s a file record for every noun used, or every subject of conversation. The subject is more important than grammar, because the subject of a conversation could be a verb, like “running.” A user has to teach/define a subject of conversation, but until then it’s assumed to be the proper noun or pronoun, where a pronoun refers back to a proper noun in a previous sentence.
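The record-and-stamp step, feeding the later quote function, can be sketched like this. The storage layout is my assumption for illustration, not the author’s exact file format.

```python
# Sketch of recording and timestamping utterances so the robot can
# quote them later. The in-memory list stands in for the real database.
import datetime

conversation_log = []

def record_utterance(speaker, text):
    """Save one utterance with a time and date stamp."""
    entry = {
        "when": datetime.datetime.now().isoformat(timespec="seconds"),
        "speaker": speaker,
        "text": text,
    }
    conversation_log.append(entry)
    return entry

def quote(keyword):
    """The later 'quote function': find stamped utterances by keyword."""
    return [e for e in conversation_log if keyword in e["text"]]

record_utterance("user", "Electrons drift through a conductor.")
print(quote("drift")[0]["speaker"])
```

Because every entry is stamped, the robot can report who said what and when, which is the “very good reporter” behavior described above.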
You might wonder how all of that can work with just random picks.
Well, it starts by parsing the sentence as you would for this word tree:
http://www.linguisticsgirl.com/wp-content/uploads/2013/03/2013-03-05-Prepositional-Phrase-Disjunct-Adverbial-Tree.jpg
All that’s important is the first line of descriptors: Noun, Verb, etc. The spellchecker function also returns the part of speech of each word; the spellchecker keeps that alongside the proper spelling. Some words have two functions, and the robot needs to be taught/told which of the two the word is acting as.

In the Subject Directory, each subject is stored in its immediate folder. Each file is alphabetically organized and segmented based on memory space. Inside the folder, a single noun could take up a whole folder, like electron and electronics, because the list of verbs and nouns associated with that one word, electron, is so great. A subject is stored with all of its words, nouns and verbs, with identifiers and usage scores. These scores don’t change unless the word is directly used in a sentence. Storage is governed by memory size and how much will fit, so there may be several files in that one folder.

The sorting is done in a simple way. First, every part of speech has a separate folder for total word counts; this is used to establish the most commonly and frequently used words, placing the top 1,000 to 10,000,000 words into upper memory for use in conversation. Once it’s sorted, it automatically generates your greeting/superficial conversation mode. It moves on to searching the hard drive once the conversation, and the quantity and quality of information, have been properly inputted.

It can read textbooks and use the scores as if they were overheard communications. Reading several times adjusts the scoring, and it’s a robot, so it doesn’t take long to read a book 20 times. It pulls the word trees out of the book, loads up subjects and word counts based on usage, and associates them to nouns. But then it becomes the lab assistant: if you get a little rusty, you can just ask, “What was that equation?” and the equation has to be a quote function. The more specific you get, the closer the robot gets to the right answer. There are tons of equations in electronics.
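The count-and-sort step, promoting the most frequently used words into upper memory, can be sketched with a simple tally. The tiny word list and the cutoff of 5 are stand-ins for the real corpus and the 1,000-to-10,000,000-word limit.

```python
# Sketch of the word-count sorting step: tally how often each word is
# seen, then keep the most common N in fast memory for conversation.
from collections import Counter

overheard = ("the electron drifts the electron spins "
             "the robot reads the book the book again").split()

counts = Counter(overheard)
# Cutoff of 5 stands in for the real top-1,000+ upper-memory limit.
top_words = [word for word, _ in counts.most_common(5)]

print(top_words[0])  # the single most frequently used word
```

The resulting `top_words` list is what would seed the greeting/superficial conversation mode, with rarer words left on the hard drive for deeper searches.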
So I would load one of the randomizing Python functions; it randomizes a list of objects, so you can throw in as many copies as you want, and it works like a lotto from there. Remember the scores: it throws that many copies into the randomizer. If I asked, “What does an electron do?” it could say drift, spin, boil, exhibit pressure on space, travel at one third the speed of light through space; well, there’s a lot. I always wanted to have two of these in the same AI, one for word-tree picks and one for results. Say you totaled the scores of all the words used in two candidate sentences: it would count the words, divide each total by the number of words in its sentence, and the highest average wins; that’s the result I see coming out of the two competing parts of this function. A response is just: pick a word tree, then select randomly from the list of subject-associated verbs, etc., to fill in the word tree. Even if I take a sentence, look only at the parts of speech, and throw darts at a dictionary to randomly pick words of the same parts of speech, it’s still a working sentence, with little or no bearing on reality.
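The score-weighted lotto described above can be sketched directly: throw in one copy of each word per point of score, then draw at random. The verb scores here are made up for illustration.

```python
# Sketch of the score-weighted "lotto" pick: one copy per point of
# usage score goes into the pool, then a random draw. Scores are made up.
import random

verb_scores = {"drift": 50, "spin": 30, "boil": 1}

# Build the lotto pool: a word with score 50 gets 50 tickets.
pool = [word for word, score in verb_scores.items() for _ in range(score)]

pick = random.choice(pool)   # high-scoring verbs win most draws
print(pick in verb_scores)   # the pick is always a verb from the list
```

Here “drift” wins roughly 50 out of every 81 draws, so frequent, well-scored associations dominate the robot’s answers while rare ones still occasionally surface.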
Okay, so a bunch of random words can make a sentence. But how does this omni-dimensional array really work? Well, the computer only works with groups of neurons that are supposed to work together, and since each word is a neuron, each file record includes several neurons if it’s a Subject or Topic of Discussion. Here’s the CSV-model file record:
Subject, count; then most-used verbs first; then other parts of speech:
Bird, 100, walk, 10, run, 1, fly, 85, flying, 17
The file record now represents a group of neurons, and only the ones used in context with the subject, bird. Since bird is the subject, its count is there for later use and sorting; it could wind up in a top-1,000 spot, held in memory for rapid access. When I save them like this, I don’t need to worry about geometry. The computer will look up Bird and use this file record to choose a sentence that’s factual (hopefully, usually, by statistical odds) and use it. Lying is easily sorted out by the lowest scores over time: you could almost watch the word score for the subject hit 1,000, and then just start deleting any words in the list with a score of 1.
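Reading that file record back, and applying the pruning rule (once the subject’s own score is high enough, drop associated words still scored 1), can be sketched like this. The threshold of 100 here stands in for the 1,000 mentioned above.

```python
# Sketch of parsing the CSV file record and pruning score-1 words
# (the likely lies/noise) once the subject's count is high enough.
record = "Bird, 100, walk, 10, run, 1, fly, 85, flying, 17"

fields = [f.strip() for f in record.split(",")]
subject, subject_score = fields[0], int(fields[1])

# Pair up word, score, word, score, ... into a dict of neurons.
words = dict(zip(fields[2::2], map(int, fields[3::2])))

PRUNE_AT = 100  # stands in for the 1,000 threshold in the text
if subject_score >= PRUNE_AT:
    words = {w: s for w, s in words.items() if s > 1}

print(sorted(words))  # 'run' (score 1) has been pruned
```

After pruning, only the well-attested associations (walk, fly, flying) remain in the bird record, which is the statistical lie-filtering described above.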
The trick is that the robot only speaks when spoken to, and it will always try to get in the last word. That’s just the robot logic. It never honestly understands or knows anything; it just interacts socially.
I want mine programmed with Electronics four times, and a sci-fi novel, “The Ion War,” eight times, to acquire my veteran-of-an-alien-war character. Well, yeah, you can actually cheat a little to add some character. It would identify itself as the lead character.