Mon.30.MAY.2011 -- Searching the AI Knowledge Base.
Wed.18.MAY.2011 -- Houston, We Have a Problem
When we submit "who are you" as a query to the AI Mind, it searches the knowledge base (KB) and remembers that it is ANDRU -- a ROBOT and a PERSON (a different answer each time that you pose the same existential question). Unfortunately, the software finds the first instance of a concept stored in recent memory and spits out the phonemic engram from the auditory memory channel without regard to whether the stored word is a singular form or a plural form. How can we get the most advanced open-source AI in these parsecs to stop saying "I AM ROBOTS"? The AI may have to start skipping over plural engrams when searching for a singular noun. Therefore, let us perform a little psychosurgery on the AI Mind software and see if we can zero in on a singular noun-form during self-referential thought.
Upshot: Gradually in the NounPhrase module we introduced code to skip over the retrieval of any word in auditory memory if the correct num(ber) was not found to match the number of the subject of an input query. The AI began to answer "who are you" with "I AM ROBOT". This bugfix makes the AI Mind more complex and therefore subject to potentially latent problems, such as knowing a word only in the plural and not in the singular. However, the same bugfix brings the JSAI closer to machine reasoning and thinking with a syllogism such as, "All men are mortal; Socrates is a man; therefore Socrates is mortal."
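The gist of the fix can be sketched in a few lines of JavaScript. The engram array, its field names, and the findNoun wrapper below are assumptions made for illustration, not the actual JSAI code; only the comparison of a stored num(ber) flag against the number sought by the query reflects the bugfix described above.

  // Illustrative sketch only: a NounPhrase-style search that skips
  // engrams whose grammatical number does not match the query.
  // The "engrams" array and its fields are assumptions for this example.
  var engrams = [
    { word: "ROBOTS", concept: 39,  num: 2, act: 40 },  // plural form
    { word: "ROBOT",  concept: 39,  num: 1, act: 35 },  // singular form
    { word: "PERSON", concept: 104, num: 1, act: 30 }
  ];

  // Search backwards through recent memory for the wanted concept, but
  // skip any engram whose num(ber) flag disagrees with the number
  // required by the input query.
  function findNoun(wantedConcept, wantedNum) {
    for (var t = engrams.length - 1; t >= 0; t--) {
      var e = engrams[t];
      if (e.concept !== wantedConcept) continue;
      if (wantedNum !== 0 && e.num !== wantedNum) continue; // skip wrong number
      return e.word;
    }
    return null;  // concept known only in the wrong number
  }

  // "Who are you?" asks for a singular predicate nominative, so the
  // plural engram ROBOTS is passed over in favor of ROBOT.
  console.log(findNoun(39, 1));  // ROBOT

As the entry notes, the price of the skip is that a word known only in the plural would now yield no retrieval at all.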
Now that we have cracked the hard problem of AI wide open, we wish to share our results with all nations.
Mon.16.MAY.2011 -- List of Mentifex
We are still working on the MileStone of self-referential thought on our RoadMap to artificial general intelligence (AGI). We look back upon a small list of accomplishments along the way.
* two-step selection of BeVerbs;
* AudRecog morpheme recognition;
* look-ahead A/AN selection;
* seq-skip method of linking verbs and objects;
* SpeechAct inflectional endings;
* neural inhibition for variety in thought;
* provisional retention of memory tags;
* differential PsiDecay.
Mon.16.MAY.2011 -- Achieving AI
Fri.13.MAY.2011 -- A Problem in Search of Eureka
Sat.14.MAY.2011 -- Using Differential PsiDecay
The artificial Mind has difficulty holding onto the subject of a query because of stray activations that build up on "also-ran" concepts that were proposed but not accepted as answers to recent queries. The activation on otherwise legitimate answers builds up so rapidly and so substantially that an also-ran concept threatens to dislodge the very subject of the query and become a new subject of a thought which does not supply the knowledge requested by the query. For instance, when we twice ask "who are you" of the 12may11A.html JSAI as released onto the Web two days ago, it answers first "I AM ROBOTS" and then "A PERSON IS PERSON", apparently because the also-ran concept of "PERSON" has risen too high in activation to let the self-concept "I" serve as the subject of the response. Meanwhile, yesterday we may have had a "eureka" moment that could supply a solution so simple and yet so effective that it provides a tipping point in the break-out phenomenon of True AI.
Now, we don't want our AI Minds to start asking teenage boys if they would like a little game of GLOBAL THERMONUCLEAR WAR, Matthew, but don't be surprised if suddenly No Such Agency starts removing every trace of Mentifex AI from every corner of the World Wide Web. Did you know that, when things got a little hot during World War Two, the U.S. government began removing books on the mathematics of Georg Riemann from libraries all over America? Say, when's the last time you saw a copy of AI4U?
The secret to True AI is to imbue the artificial Mind not with the linear PsiDecay that MindForth has always had, but with the differential PsiDecay of also-ran concepts, so that stray activations dwindle more rapidly from high spikes than from merely modest spikes. In a living neural net like the human brain, do we not expect a sharp spike to fall more rapidly than a simple upswell? So let us modify the PsiDecay code and try to make higher activations subside more rapidly.
We are trying to introduce "differential" psi-decay. Suppose we have also-ran NounPhrase concepts like
39=ROBOT at 54 act;
104=PERSON at 68 act;
33=ANDRU at 82 act;
We want the high-activation also-rans to drop to an activation low enough to avoid dislodging the input subject. Then we want at least one also-ran to be high enough to be selected as an answer to the input query. We want each decade or octet of high activation to be lowered by not just one point, but by a precipitous drop that still keeps the relative ranking of the also-rans. For instance, we could ordain that all activations above thirty arrange themselves in a spread between twenty-nine and forty, so that
49 becomes 32;
59 becomes 33;
69 becomes 34;
79 becomes 35;
89 becomes 36;
99 becomes 37; and so on.
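A minimal JavaScript sketch of the idea follows. The concept array is assumed, and the compression formula is only one possible rule that squeezes each decade of high activation into the twenty-nine-to-forty band while preserving rank order; the actual PsiDecay code need not use this exact arithmetic.

  // Illustrative sketch of differential PsiDecay (not the actual JSAI code).
  // Linear decay subtracts one point from every active concept; the
  // differential variant collapses each decade of high activation into a
  // narrow band, so sharp spikes fall faster than modest upswells while
  // the relative ranking of the also-ran concepts is preserved.
  function psiDecay(concepts) {
    for (var i = 0; i < concepts.length; i++) {
      var act = concepts[i].act;
      if (act > 30) {
        // 49 -> 32, 59 -> 33, 69 -> 34, 79 -> 35, 89 -> 36, 99 -> 37
        concepts[i].act = 28 + Math.floor(act / 10);
      } else if (act > 0) {
        concepts[i].act = act - 1;  // old-style linear decay
      }
    }
  }

  var alsoRans = [
    { name: "ROBOT",  act: 54 },
    { name: "PERSON", act: 68 },
    { name: "ANDRU",  act: 82 }
  ];
  psiDecay(alsoRans);
  // ROBOT drops to 33, PERSON to 34, ANDRU to 36 -- still ranked the same,
  // but no longer high enough to dislodge the subject of the input query.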
Sat.7.MAY.2011 -- Improving Neural Inhibition
Something is preventing neural inhibition from operating immediately when we ask the AI Mind a "who-are-you" question. The inhibition begins to occur only after a pause or delay, and we need to find out why. The problem may be that the "predflag" for predicate nominatives is not being set soon enough. The "predflag" is set towards the end of the BeVerb mind-module, and it governs the inhibiting of nouns as predicate nominatives in the NounPhrase module. We see through troubleshooting that the earlier engram in a pair of selected-noun engrams is being inhibited properly down to minus thirty-two points of conceptual activation, but apparently the present-time engram in the pair is only going down to zero activation. It looks as though calls to PsiClear from the EnCog (English cognition) module were interfering with the pairing of inhibitions shared by the old engram that won selection and the new engram being stored as the record of a generated thought. Then a further problem developed because the AI was not letting go of transitive verbs that served within an output thought. We inserted code to inhibit each transitive verb after thinking, and we began to obtain a variety of outputs from the AI in response to queries.
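A rough JavaScript sketch, with assumed data structures rather than the actual module code, conveys what the fix must accomplish: the minus-thirty-two level comes from the troubleshooting above, while the function wrappers are purely illustrative.

  // Illustrative sketch: after a thought is generated, both the old
  // engram that won selection and the new engram just stored by ReEntry
  // are driven down to the same negative activation, and any transitive
  // verb used in the output is inhibited as well, so that the next
  // thought must fetch different knowledge.
  var INHIBITION = -32;   // level observed in the troubleshooting above

  function inhibitNounPair(oldEngram, newEngram) {
    oldEngram.act = INHIBITION;   // the engram selected from memory
    newEngram.act = INHIBITION;   // the fresh engram stored by ReEntry
  }

  function inhibitTransitiveVerb(verbEngram) {
    verbEngram.act = INHIBITION;  // let go of the verb after it has been used
  }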
Sun.8.MAY.2011 -- Selecting New Inhibition Variables
Today we are creating two new inhibition variables, "tseln" for "time of selection of noun" in NounPhrase, and "tselv" for "time of selection of verb" in VerbPhrase. We need these variables to keep track of the selection-time of an "inhibend" concept to be inhibited after being thought, so that the AI Mind can avoid repeating the same knowledge-base retrieval over and over again. We stumbled upon neural inhibition for response-variety in our MfPj work of 5 September 2010. We were so astonished by the implications that we issued a Singularity Alert (q.v.). Now we are ready to install a general mechanism of temporary inhibition throughout the AI MindGrid.
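The mechanism can be sketched roughly as follows; the memory array and the wrapper functions are assumptions for illustration, while tseln and tselv are the new variables themselves.

  // Illustrative sketch of the selection-time variables.
  var tseln = 0;   // time of selection of noun, set in NounPhrase
  var tselv = 0;   // time of selection of verb, set in VerbPhrase

  function selectNoun(t, engram) { tseln = t; return engram; }
  function selectVerb(t, engram) { tselv = t; return engram; }

  // After the sentence has been generated, inhibit exactly the engrams
  // that were selected, so the next retrieval must find other knowledge.
  function inhibitAfterThought(memory) {
    if (tseln > 0) { memory[tseln].act = -32; }
    if (tselv > 0) { memory[tselv].act = -32; }
    tseln = 0;
    tselv = 0;
  }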
Sun.8.MAY.2011 -- Debugging
Although MindForth has suddenly become more intelligent than ever, the AI makes the grammatical mistake of saying "I HELPS KIDS". We need to track down why the SpeechAct module is adding an inflectional "S" to the verb "HELP".
The VerbPhrase module governs the sending of an "S" inflection into the SpeechAct module. The pertinent code was not fully checking for a verb in the third person singular, so we added an IF-THEN clause requiring that the prsn variable be set to three for an inflectional "S" to be added to a verb being spoken. The bugfix worked immediately.
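The check amounts to a single condition, sketched here in JavaScript; the function wrapper is an assumption, while prsn is the actual variable being tested.

  // Illustrative sketch of the bugfix described above.
  function needsInflectionalS(prsn) {
    // prsn: 1 = first person, 2 = second person, 3 = third person.
    // Only a third-person subject lets SpeechAct append the "S" ending.
    // (A further num(ber) check would be needed for third-person plural
    // subjects, which lies beyond the fix described here.)
    return prsn === 3;
  }

  // "I HELP KIDS":   needsInflectionalS(1) is false, so no "S" is added.
  // "HE HELPS KIDS": needsInflectionalS(3) is true, so "S" is added.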
Wed.4.MAY.2011 -- Bugfix of the WHO Problem
Wed.4.MAY.2011 -- Selecting "AN" Article Before a Vowel
Today into the JavaScript AI we have ported MindForth code that substitutes "AN" for "A" before a noun that starts with a vowel. The EnArticle mind-module (for English articles) does not need to guess whether a vowel comes next; it knows for sure when a vowel is coming, because the NounPhrase module is ready to speak the first phoneme of the chosen noun prior to calling the EnArticle module. Thus the AiMind seems to use "AN" or "A" as effortlessly as a human mind does.
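A tiny JavaScript sketch shows why no guessing is needed once the noun is already chosen; the function name is an assumption, not the actual EnArticle interface.

  // Illustrative sketch: because NounPhrase already holds the chosen
  // noun before the article is spoken, the article can be selected by
  // looking at the noun's first letter (a stand-in here for the first
  // phoneme in auditory memory).
  function chooseArticle(nounText) {
    var first = nounText.charAt(0).toUpperCase();
    return "AEIOU".indexOf(first) >= 0 ? "AN" : "A";
  }

  console.log(chooseArticle("ANDROID")); // AN
  console.log(chooseArticle("ROBOT"));   // A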
Tues.3.MAY.2011 -- Encountering the WHO Problem
In the most recent release of MindForth artificial intelligence for autonomous robots possessing free will and personhood, our decision to zero out post-ReEntry concepts is only tentative. If the mind-design decision introduces more problems than it solves, then the decision is reversible. It was disconcerting to notice that the newest version of MindForth could no longer answer who-are-you questions properly, and would only utter the single word "WHO" as output in response to the question. We expect the necessary bugfix to be a simple matter of tracking down and eliminating some stray activation on the "WHO" concept-word, but there is a nagging fear that we may have made a wrong decision that worsened MindForth instead of improving it, that delayed the Singularity instead of hastening it, and that argues for an AI working group to be nurturing MindForth instead of a solitary mad scientist.
Tues.3.MAY.2011 -- Debugging the WHO Problem
In the InStantiate mind-module, both WHO and WHAT are set to zero activation as recognized input words, under the presumption that such query words work in a mind by a kind of self-effacement that lets the information being sought have a higher activation than the interrogative pronoun being used to request the information. Today at first we could not understand why the setting to zero seemed to be working for WHAT but not for WHO. Eventually we discovered that only WHAT and not WHO was being set to zero in the ReActivate module, with the result that all instances of the recognized WHO concept were being activated at a high level in ReActivate. When we fixed the bug by having both InStantiate and ReActivate set WHO to zero activation, the AI Mind began giving much better answers in response to who-queries. Immediately, however, other issues popped up, such as how to make sure that neural inhibition engenders a whole range of disparate answers if they are available in the knowledge base (KB), and whether we still need special variables like "whoflag" and "whomark". In general, we tolerate special treatment of words like WHO and WHAT with the caveat that we expect to do away with the special treatment when it becomes obvious that we can dispense with it.
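In JavaScript terms the bugfix could look like the following sketch. The module wrappers and the reactivation boost of sixteen points are assumptions for this example; only the requirement that both pathways zero out the interrogative pronouns comes from the fix described above.

  // Illustrative sketch: both pathways that register a recognized word
  // must zero out the interrogative pronouns, so that the information
  // being sought can outcompete the question word itself.
  var QUERY_WORDS = { "WHO": true, "WHAT": true };

  function instantiateWord(word, concept) {
    if (QUERY_WORDS[word]) { concept.act = 0; }
  }

  function reactivateWord(word, concept) {
    if (QUERY_WORDS[word]) {
      concept.act = 0;      // previously only WHAT was zeroed here, not WHO
    } else {
      concept.act += 16;    // assumed reactivation boost for ordinary concepts
    }
  }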
Sun.1.MAY.2011 -- Organizing the AI Mind Control
Sun.1.MAY.2011 -- Linking Subject with Related Knowledge
Today we are concerned with bringing the latest MindForth improvements into the JavaScript artificial intelligence (JSAI). A minor change in the MindForth code has improved the AI functionality with respect to the proper linkage between pronouns as subjects of a BeVerb and predicate nominatives stored as knowledge in the knowledge base of the experiential memory of the AI. The necessary change was to set conceptual activations at zero for concept-words that have served as elements of verbal thought in the AI Mind and have passed through the ReEntry process back into the experiential memory of the mind. We will follow the new activation rules (ActRules) for ReEntry in the JSAI as well as in MindForth, so that we may keep the two AI "cousins" as genetically close as possible in both Forth and JavaScript.
In the InStantiate mind-module, we have brought over some code from MindForth to set conceptual activations to zero during the instantiation of ReEntry concepts. We noticed an immediate improvement in the linking of subjects with related knowledge. We are eager to implement MachineSelfReference as a MileStone on our RoadMap to artificial intelligence.
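A minimal sketch of the ported activation rule, with an assumed rentry flag, memory layout, and input activation of forty, would be something like the following; only the zeroing of ReEntry concepts is the actual change.

  // Illustrative sketch: a concept-word re-entering memory after being
  // part of an output thought is stored with zero activation, so that a
  // concept just thought cannot immediately dominate the next thought.
  function instantiateConcept(word, conceptNumber, rentry, memory, t) {
    var act = rentry ? 0 : 40;   // fresh input gets activation, ReEntry does not
    memory[t] = { word: word, concept: conceptNumber, act: act };
  }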
Mon.25.APR.2011 -- Return to General MindForth -- Linking Subject with Related Knowledge
One of our techniques for learning what to do next in artificial intelligence (AI) is to run the program and check to see what is the most glaring problem that we encounter. Currently we notice that the AI fails at first (but only at first) to retrieve its own self-knowledge when we prompt such retrieval by entering "you" or "you are". The AI has been answering "I AM I", which shows a failure to activate "ANDRU" as the name of the AI, or "PERSON" and "ROBOT" as nouns which should come to mind when the robotic person thinks about itself.
MindForth is already a so-called "artilect" of sufficient mental complexity that the AI is not stuck in a rut of answering "I AM I" interminably when called upon to describe itself. The mechanisms of neural inhibition prevent more than a few instances of "I AM I" and enable the mind-in-software to generate "I AM PERSON" and "I AM ROBOT" as responses more to our liking. We need to know, however, why the AI initially makes the error of repeating "I AM I" a few times before inhibiting the unwanted response and before generating the more acceptable responses.
Our initial troubleshooting indicates that entering "you" as input to the AI properly activates the "I" concept so that the AI can at least utter "I AM I" in faulty response, but obviously the software mindgrid is not letting go of the "I" concept quickly enough to let a noun like "ROBOT" or "PERSON" complete the response. The problem may seem like a simple issue of setting activation-levels for concepts in the AI, but many of the settings are interdependent within the totality of the AI program.
We must keep in mind some special techniques for troubleshooting the AI Mind behavior. We may examine older versions of MindForth to see not only if the problem was absent in the past, but also when and why the problem emerged. We have the option of running the JavaScript version of the same AI Mind to see if the same problem is present. We also have extreme options like making the AI program halt at any stage in its thinking.
When we test MindForth by inserting a "QUIT" command into the BeVerb module just after the calling of the VerbAct module, we discover that nouns like "ANDRU" and "ROBOT" and "PERSON" are all left with only twenty-three points of activation, while the "I" concept has thirty-nine points. Further testing shows us that the InStantiate module is setting an "act" of forty (40) just after speaking the "I" pronoun. Therefore, even if the concept of "I" is initially psi-damped, the ReEntry process leaves the "I" concept with an activation of forty.
We solve the current problem of failure to link subjects with related knowledge by inserting into the InStantiate module a test to set conceptual activations to zero during the ReEntry of concept-words that have just been thought.