Fri.29.JUN.2012 -- IdeaPlex: Sum of all Ideas
The sum of all ideas in a mind can be thought of as the IdeaPlex. These ideas are expressed in human language and are subject to modification or revision in the course of sensory engagement with the world at large.
The knowledge base (KB) in an AiMind is a subset of the IdeaPlex. Whereas the IdeaPlex is the sum total of all the engrams of thought stored in the AI, the knowledge base is the distilled body of knowledge which can be expanded by means of inference with machine reasoning or extracted as responses to input-queries.
The job of a human programmer working as an AI mind-tender is to maintain the logical integrity of the machine IdeaPlex and therefore of the AI knowledge base. Whether the AI Mind is implanted in a humanoid robot or is merely resident on a computer, it is the work of a roboticist to maintain the pathways of sensory input/output and the mechanisms of the robot motorium. The roboticist is concerned with hardware, and the mind-tender is concerned with the software of the IdeaPlex.
Whether the mind-tender is a software engineer or a hacker hired off the streets, the tender must monitor the current chain of thought in the machine intelligence and adjust the mental parameters of the AI so that all thinking is logical and rational, with no derailments of ideation into nonsense statements or absurdities of fallacy.
Evolution occurs narrowly and controllably in one artilect installation as the mind-tenders iron out bugs in the AI software and introduce algorithmic improvements. AI evolution explodes globally and uncontrollably when survival of the fittest AI Minds leads to a Technological Singularity.
We may implement our new idea of faultlessizing the IdeaPlex by working on the mechanics of responding to an input-query such as "What do bears eat?" We envision the process as follows. The AI imparts extra activation to the verb "eat" from the query, perhaps first in the InStantiate module, but more definitely in the ReActivate module, which should be calling the SpreadAct module to send activation backwards to subjects and forwards to objects. Meanwhile, if it has not already done so, the query-input of the noun "bears" should be re-activating the concept of "bears" with only a normal level of activation. Ideas stored with the "triple" of "bears eat (whatever)" should then be ready for sentence-generation in response to the query. Neural inhibition should permit the generation of multiple responses, if they are available in the knowledge base.
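The envisioned flow can be sketched in Python (MindForth itself is written in Forth); the triple store and activation values here are illustrative toys, not the actual MindForth arrays:

```python
# Hypothetical sketch of SpreadAct-style activation transfer for the
# query "What do bears eat?" -- not the real MindForth data structures.

activation = {"bears": 0, "eat": 0, "honey": 0, "fish": 0}

# Stored knowledge as (subject, verb, object) triples.
triples = [("bears", "eat", "honey"), ("bears", "eat", "fish")]

def spread_act(verb, spike=1):
    """Send activation backwards to subjects and forwards to objects."""
    for subj, v, obj in triples:
        if v == verb:
            activation[subj] += spike   # backwards to the subject
            activation[obj] += spike    # forwards to the object

# The query imparts extra activation to the verb "eat"...
activation["eat"] += 8
# ...and only a normal activation to the noun "bears".
activation["bears"] += 2
spread_act("eat")
print(activation["bears"])  # -> 4: subject primed for sentence-generation
```

With two "bears eat ..." triples in the store, the subject ends up more activated than either object, which is what lets multiple responses queue up behind neural inhibition.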
During response-generation, we expect the subject-noun to use the verblock to lock onto its associated verb, which shall then use nounlock to lock onto the associated object. Thus the sentence is retrieved intact. (It may be necessary to create more "lock" variables for various parts of speech.)
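The lock-based retrieval might look like the following sketch, where each engram records the time-point of its "seq" successor; the array contents and function names are invented for illustration:

```python
# Hypothetical sketch of verblock/nounlock retrieval: generation follows
# stored locks from node to node instead of comparing activation levels.

# time-point -> (word, time-point of its "seq" successor)
engrams = {
    101: ("KIDS", 102),    # subject node locks onto its verb (verblock)
    102: ("MAKE", 103),    # verb node locks onto its object (nounlock)
    103: ("ROBOTS", None), # end of the stored idea
}

def retrieve(subject_t):
    """Follow the chain of lock variables from subject to verb to object."""
    sentence, t = [], subject_t
    while t is not None:
        word, t = engrams[t]
        sentence.append(word)
    return " ".join(sentence)

print(retrieve(101))  # -> KIDS MAKE ROBOTS
```

Because each hop is a direct lookup, the sentence comes back intact; extending the scheme to other parts of speech would just mean more such lock fields per node.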
We should perhaps use an input query of "What do kids make?", because MindForth already has the idea that "Kids make robots".
In our tentative coding, we now need to insert diagnostic messages that will announce each step taken in receiving and responding to an input-query.
We discover some confusion taking place in the SpreadAct module, where "pre @ 0 > IF" serves as the test for performing a transfer of activation backwards to a "pre" concept. However, the "pre" item was replaced at one time with "prepsi", so the backwards-activation code is apparently never executed. We may need to test for a positive "prepsi" instead of a positive "pre".
We go into the local, pre-upload version of the Google Code MindForth "var" (variable) wiki-page and we add a description for "prepsi", since we are just now conducting serious business with the variable. Then in the MindForth SpreadAct module we switch from testing in vain for a positive "pre" value to testing for a positive "prepsi". Immediately our diagnostic messages indicate that, during generation of "KIDS MAKE ROBOTS" as a response, activation is passed backwards from the verb "MAKE" to the subject-noun "KIDS". However, SpreadAct does not seem to go into operation until the response is generated. We may need to have SpreadAct operate during the input of a verb as part of a query, in a chain where ReActivate calls SpreadAct to flush out potential subject-nouns by retro-activating them.
As we search back through versions of MindForth AI, we see that the 13 October 2010 MFPJ document describes our decision to stop having ReActivate call SpreadAct. Now we want to reinstate the calls, because we want to send activation backwards from heavily activated verbs to their subjects. Apparently the .psi position of the "seqpsi" has changed from position six to position seven, so we must change the ReActivate code accordingly. We make the change, and we observe that the input of "What do kids make?" causes the .psi line at time-point number 449 to show an increase in activation from 35 to 36 on the #72 KIDS concept. There is such a small increase from SpreadAct because SpreadAct conservatively imparts only one unit of activation backwards to the "prepsi" concept. If we have trouble making the correct subjects be chosen in response to queries, we could increase the backwards SpreadAct spikelet from one to a higher value.
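The prepsi test and the one-unit spikelet can be sketched together in Python; the toy Psi rows and the SPIKELET constant are assumptions standing in for the real Forth arrays:

```python
# Hypothetical sketch: SpreadAct imparting a small "spikelet" of
# activation backwards to the "prepsi" (prior-concept) tag of a verb.

SPIKELET = 1  # could be raised if wrong subjects keep being chosen

psi = [  # toy Psi rows: concept number, activation, word, optional prepsi
    {"concept": 72, "act": 35, "word": "KIDS"},
    {"concept": 73, "act": 40, "word": "MAKE", "prepsi": 72},
]

def spread_act_backwards(rows):
    """For any row with a positive prepsi, boost that prior concept."""
    for row in rows:
        if row.get("prepsi", 0) > 0:          # test prepsi, not the old "pre"
            for target in rows:
                if target["concept"] == row["prepsi"]:
                    target["act"] += SPIKELET  # conservative one-unit boost

spread_act_backwards(psi)
print(psi[0]["act"])  # 35 -> 36, as in the diagnostic at t=449
```

Raising SPIKELET above one is the tuning knob mentioned above for cases where the correct subject fails to win out in response to a query.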
Next we have a very tricky situation. When we ask, "What do kids make?", at first we get the correct answer of "Kids make robots." When we ask the same question again, we erroneously get, "Kids make kids." It used to be that such a problem was due to incorrect activation-levels, with the word "KIDS" being so highly activated that it was chosen erroneously for both subject and direct object. Nowadays we are starting with a subject-node and using "verblock" and "nounlock" to go unerringly from a node to its "seq" concept. However, in this current case we notice that the original input query of "What do kids make?" is being stored in the Psi array with an unwarranted seq-value of "72" for "KIDS" after the #73 "MAKE" verb. Such an erroneous setting seems to be causing the erroneous secondary output of "Kids make kids." It could be that the "moot" system is not working properly. The "moot" flag was supposed to prevent tags from being set during input queries.
In the InStantiate module, the "seqneed" code for verbs is causing the "MAKE" verb to receive an erroneous "seq" of #72 "KIDS". We may be able to modify the "seqneed" system to not install a "seq" at the end of an input.
When we increased the number of time-points for the "seqneed" system to look backwards from two to eight, the system stopped assigning the spurious "seq" to the #73 verb "MAKE" at t=496 and instead assigned it to the #59 verb "DO" at t=486.
After our coding session yesterday, we realized that the solution to the "seqneed" problem may lie in constraining the time period during which InStantiate searches backwards for a verb needing a "seq" noun. When we set up the "seqneed" mechanism, we rather naively ordained that the search should try to go all the way back to the "vault" value, relying on a "LEAVE" statement to abandon the loop after finding one verb that could take a "seq".
Now we have used a time-of-seqneed "tsn" variable to limit the backwards searches in the "seqneed" mechanism of the InStantiate module, and the MindForth AI seems to be functioning better than ever. Therefore we shall try to clean up our code by removing diagnostics and upload the latest MindForth AI to the Web.
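The tsn-limited backwards search can be sketched as follows; the time-points echo the diagnostics above, but the array layout and function name are illustrative only:

```python
# Hypothetical sketch of the "seqneed" backwards search in InStantiate,
# limited by a time-of-seqneed "tsn" variable instead of running all the
# way back to the "vault".

vault = 0
tsn = 490          # earliest time-point the search may now reach
engrams = {        # toy array: time-point -> (part of speech, word, seq)
    486: ("verb", "DO", None),     # stale verb from "What do kids make?"
    492: ("noun", "KIDS", None),
    496: ("verb", "MAKE", None),   # the verb that genuinely needs a seq
}

def seqneed_search(t_now):
    """Walk backwards from t_now, but no earlier than tsn, for one verb
    still needing a "seq" noun; stop (LEAVE) at the first one found."""
    for t in range(t_now, max(vault, tsn) - 1, -1):
        entry = engrams.get(t)
        if entry and entry[0] == "verb" and entry[2] is None:
            return t  # the one verb eligible for a seq tag
    return None

print(seqneed_search(500))  # -> 496: finds MAKE; DO at t=486 is out of range
```

With the window clamped at tsn, the stale "DO" verb can no longer capture a spurious "seq" noun, which is the misbehavior that produced "Kids make kids."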